Yue-Xian Li
April 3, 2019
Contents
1 Introduction to Differential Equations 4
1.1 What is a differential equation (DE)? . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Ordinary DE (ODE) versus partial DE (PDE) . . . . . . . . . . . . . . . . . . . . . 6
1.3 Order of a DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Linear versus nonlinear ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Why should we learn ODEs? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6 Solutions and solution curves of an ODE . . . . . . . . . . . . . . . . . . . . . . . . 8
1.7 Slope fields and solution curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2 First-Order ODEs 10
2.1 Vanishing derivative or direct integration/antiderivation . . . . . . . . . . . . . . . . 10
2.2 Mathematical modeling using differential equations . . . . . . . . . . . . . . . . . . 12
2.3 Brief summary of what we have learned . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Separable Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5 First-order linear ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6 Existence and uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.7 Numerical methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.8 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.9 Autonomous equations and phase space analysis . . . . . . . . . . . . . . . . . . . . 30
2.10 Applications of first-order ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3.6 One more example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.4 Applications: mechanical and electrical vibrations . . . . . . . . . . . . . . . . . . . 53
3.4.1 Harmonic vibration of a spring-mass system in frictionless medium . . . . . 53
3.4.2 Spring-mass-damper system: a simple model of suspension systems . . . . . 57
3.4.3 Oscillations in a RLC circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.5 Linear independence, Wronskian, and fundamental solutions . . . . . . . . . . . . . 65
3.6 Nonhomogeneous, 2nd-order, linear ODEs . . . . . . . . . . . . . . . . . . . . . . . 76
3.6.1 Method 1: Undetermined coefficients . . . . . . . . . . . . . . . . . . . . . . 79
3.6.2 Method 2: Variation of parameters (Lagrange) . . . . . . . . . . . . . . . . . 85
3.7 Applications to forced vibrations: beats and resonance . . . . . . . . . . . . . . . . 93
3.7.1 Forced vibration in the absence of damping: γ = 0 . . . . . . . . . . . . . . . 93
3.7.2 Forced vibration in the presence of damping: γ ≠ 0 . . . . . . . . . . . . . . 96
6.11 Convolution product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
1 Introduction to Differential Equations
1.1 What is a differential equation (DE)?
A Differential Equation (DE): An equation that relates one derivative of an unknown function, y(t), to other quantities that often include y itself and/or its other derivatives:
y^(n) = F(t, y, y', ..., y^(n−1)). (1)
• In (1), we call y^(n) the LHS of the equation and F(t, y, y', ..., y^(n−1)) the RHS of the equation.
• A solution of the DE (1) is a function y(t) that satisfies the DE. In other words, when y(t) is plugged into the DE, it makes LHS = RHS.
• Solving a differential equation is to find all possible functions y(t) that satisfy the equation.
Example 1.1.1:
N'(t) = kN(t), (N'(t) stands for dN(t)/dt), (2)
models the change of the size of a population N(t) when the average net growth rate is k (typically a known constant).
Example 1.1.2:
my'' + cy' + ky = F(t), (y', y'' stand for dy/dt, d²y/dt²), (3)
models the displacement y(t) of a mass suspended by a spring, subject to an external force F(t). m, c, k are known constants.
Example 1.1.3:
ct = D(cxx + cyy + czz), also expressed as ∂c/∂t = D(∂²c/∂x² + ∂²c/∂y² + ∂²c/∂z²), (4)
models the diffusion of a pollutant in the atmosphere, in which c(t, x, y, z) (a multivariable function) describes the concentration of the pollutant at time t and space location (x, y, z). D (> 0) is the diffusion constant of the pollutant.
Example 1.1.4:
cx + cxx = c, also expressed as ∂c/∂x + ∂²c/∂x² = c, (5)
where c(t, x, y, z) is a multivariable function.
Example 1.1.5:
ty^(4) − y² = 0, (y^(4) stands for d⁴y/dt⁴), (6)
does not necessarily model anything.
1.2 Ordinary DE (ODE) versus partial DE (PDE)
Among the previous examples, (2), (3), (5), and (6) are ODEs.
1.3 Order of a DE
Order of a DE: the order of the highest derivative of the unknown that appears in the equation.
Among the previous examples, (2) is 1st order, (3), (4), and (5) are 2nd order, (6) is 4th order.
Remark: An nth (n > 1) order ODE can always be reduced to a system of n 1st order ODEs.
Example 1.3.1:
my''(t) = −mg or y''(t) = −g (7)
models the displacement of a free-falling mass in a frictionless medium on the surface of the earth.
y(t) is the vertical displacement of the mass at time t. It is a 2nd order ODE.
By introducing a second unknown function v(t) = y 0 (t) (i.e. its velocity), we turn the above ODE
into the following system of two 1st order ODEs involving two unknowns y(t) and v(t):
y'(t) = v(t),
v'(t) = −g. (8)
1.4 Linear versus nonlinear ODEs
Linear vs nonlinear: An ODE is linear if it can be expressed in the following form
y^(n) + a_{n−1}(t)y^(n−1) + ··· + a1(t)y' + a0(t)y = f(t), (9)
where a0, a1, ..., a_{n−1}, f(t) are known functions of t (often simply known constants), i.e. if only linear combinations of the unknown and its derivatives appear in the equation. Otherwise, it is nonlinear.
Remarks:
• Linear ODEs can often be solved in closed form. We shall devote more than two-thirds of the time in this course to solving linear ODEs.
• Nonlinear ODEs are often too difficult to solve in closed form. Numerical, qualitative, and approximation methods are often required in solving nonlinear ODEs. We shall only touch briefly on nonlinear ODEs.
1.5 Why should we learn ODEs?
The answers to many social, economic, scientific, and engineering problems are obtained by solving ODEs that correctly model/describe the corresponding problems.
1.6 Solutions and solution curves of an ODE
Solution of an ODE: A function y(t) is a solution of an ODE if y and its derivatives are continuous
and satisfy the ODE on an interval a < t < b.
Solution curve: The graph of a solution y(t) in the ty-plane is called the solution curve.
At this point, a few questions should be asked (but not answered). First, does every ODE have a
solution? Second, if so, is it the only solution? Last, what is the interval in which the solution is
valid?
Existence and uniqueness: In eq.(1), if F and its partial derivatives w.r.t. its dependent variables are continuous in the closed ty-domain of our interest, the existence and uniqueness of a solution can be guaranteed for any initial condition (t0, y(t0)) located in the interior of the domain. We shall not go deeper into this in the lectures; read Ch.1.2.2 of the online text and/or Ch.2.8 of the Boyce-DiPrima textbook for more details.
Remark: Before we learn any of the techniques for solving ODEs, we can always try to guess what the solution could be. Fortunately, we can always verify whether a guessed solution is indeed a solution by substituting it into the ODE.
Example 1.6.1: Find one solution and then all possible solutions of the following ODE.
y 0 (t) = y(t) (10)
Answer:
Notice that we are looking for a function whose derivative is equal to itself. Only exponential functions have this property. Thus, we realize y(t) = e^t is one non-trivial solution.
A more careful inspection reveals that y(t) = 2e^t is also a solution; furthermore,
y(t) = Ce^t
for an arbitrary constant C is a solution. This form actually represents all possible solutions of the ODE, including the above two special solutions (for C = 0 and C = 1, respectively).
Special solution vs general solution: Any function that satisfies the ODE is one special solu-
tion. The expression that represents all possible solutions of an ODE is called the general solution.
It must contain at least one arbitrary constant.
Remark: An ODE usually has a family of infinitely many solutions that differ by one or more constants. The number of arbitrary constants is equal to the order of the ODE; they are often referred to as integration constants since they arise each time we integrate.
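The verification trick in the Remark above is easy to automate. The sketch below (using the third-party sympy library; the code is illustrative, not part of the original notes) checks symbolically that the guessed general solution y = Ce^t of Example 1.6.1 satisfies y' = y:

```python
import sympy as sp

t, C = sp.symbols('t C')
y = C * sp.exp(t)                         # the guessed general solution of y' = y

# Substitute into the ODE: the residual y' - y must vanish identically.
residual = sp.simplify(sp.diff(y, t) - y)
print(residual)  # 0
```

The same pattern (form the residual, simplify, check it is zero) works for any guessed solution of any ODE.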
1.7 Slope fields and solution curves
For a first-order ODE y' = f(t, y), the right-hand side (rhs) is known, so we can always explicitly calculate the slope of y(t) (i.e. y'(t)) for any given pair of values (t, y). In other words, the slope of the solution can always be predetermined.
The slope field of the ODE is obtained by plotting the slope of the solution at regular intervals in the ty-plane. Since the solution curve must be tangent to the slope field, it can be obtained by tracing out a curve while making sure that it is tangent to the slope field at every point of its passage.
Example 1.7.1: Plot the slope field of the following ODE and trace out the solution curve for y(0) = 0.5.
y' = y (12)
Figure 1: Sketch of the slope field and one solution curve of y' = y.
2 First-Order ODEs
All first-order ODEs can be expressed in the following 'normal' form:
y' = f(t, y).

2.1 Vanishing derivative or direct integration/antiderivation
Part 1. If y'(t) = 0 for all t in an interval I, then y is constant on I:
y(t) = y(t0) = C.
Part 2. Let F(t) be an antiderivative of f on I, i.e. F'(t) = f(t). Rewrite the ODE y' = f into the following form:
y' − f = (y − F)' = 0.
Applying the 1st part of the theorem, we see that for all t in I,
y(t) − F(t) = C, i.e. y(t) = F(t) + C.
For example, to solve y'(t) = cos t, note that (sin t)' = cos t, so (y − sin t)' = 0; we see that
y(t) − sin t = C ⇒ y(t) = sin t + C.
Alternatively,
y'(t) = cos t ⇒ y(t) = ∫ cos t dt = sin t + C.
N' + kN = 0
Example: Solve
3y²y' = e^t. (nonlinear!) (16)
Since (y³)' = 3y²y',
(y³)' = e^t ⇒ y³ = ∫ e^t dt = e^t + C ⇒ y(t) = (e^t + C)^(1/3).
2.2 Mathematical modeling using differential equations
Modeling is a process loosely described as recasting a problem from its natural environment into a form, called a model, that can be analyzed via techniques we understand and trust (here, ODEs). It often involves the steps listed in the following very simple example.
Motivations: Questions to answer: (1) If it takes 3 seconds for a piece of stone to drop from the top of a building, what is the height of the building? (2) If a cat falls from a roof 4.9 meters high, what is her speed when she hits the ground? Assume that we can ignore air resistance.
Determine system variables: One single independent variable is time t. The unknown variables
(functions) are the vertical position of the free-falling object y(t) and its speed v(t).
Determine the domain of variation for all variables: For example, t varies between 0 and 3
seconds for question (1), y(t) varies between 0 and 4.9 meters in question (2).
Natural law: Newton's second law of motion: a free-falling object near the earth's surface moves with constant acceleration g if air resistance can be ignored, where g is the earth's gravitational constant.
Determine system parameters: The earth's gravitational constant g = 9.8 m/s², the total time T = 3 s, the initial height h = 4.9 m.
Solve the equation: Remember, by introducing a new function v(t) = y'(t) (i.e. the speed), we turn this 2nd-order ODE into a system of two 1st order ODEs:
y'(t) = v(t),
v'(t) = −g, (19)
with initial conditions y(0) = h and v(0) = 0. Note that −gt is an antiderivative of −g:
v'(t) = −g ⇒ v(t) = −gt + C1.
Since v(0) = 0, C1 = 0. Thus, v(t) = −gt (unique).
Integrating y'(t) = v(t) = −gt gives y(t) = −gt²/2 + C2. Since y(0) = h, C2 = h. Therefore, y(t) = h − gt²/2 (unique).
The answer to our first question about the height of the building is found by substituting t = 3 s, g = 9.8 m/s², and y(t) = 0 (the object is on the ground) into the solution: h = 44.1 m! If you feel any doubt about this result, you should blame the assumption that air resistance can be neglected in the model but NOT the math. As to the cat's impact speed, we first find how long it takes her to reach the ground (1 s for h = 4.9 m), then find that the speed is v(1) = −9.8 m/s (the minus sign indicates the downward direction).
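The arithmetic above is easy to double-check. This small illustrative snippet (not part of the original notes) reproduces both answers from the solution y(t) = h − gt²/2 and v(t) = −gt:

```python
import math

g = 9.8          # m/s^2, gravitational acceleration
T = 3.0          # s, fall time in question (1)

# Question (1): height of the building, from y(T) = h - g*T^2/2 = 0.
h = g * T**2 / 2
print(h)  # 44.1 (meters)

# Question (2): impact speed of the cat falling from h2 = 4.9 m.
h2 = 4.9
t_impact = math.sqrt(2 * h2 / g)   # time to reach the ground
v_impact = -g * t_impact           # v(t) = -g*t; minus sign = downward
print(t_impact, v_impact)          # 1.0 (s), -9.8 (m/s)
```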
2.3 Brief summary of what we have learned
1. Solving ODEs basically involves finding a function given a constraint on its derivative(s).
3. There are usually infinitely many solutions to an ODE that differ by one or more integration
constants.
4. An expression of all possible solutions to an ODE is called the general solution. Any other
solution is called a special/particular solution.
5. A unique solution is obtained only when appropriate initial conditions are imposed.
2.4 Separable Equations
Example 2.4.3
Solve y' = 2xe^y (1 + x²)^(−1).
Step 1. Separate the variables and express the equation in the following form:
e^(−y) dy = 2x dx/(1 + x²)
Step 2. Integrate both sides:
∫ e^(−y) dy = ∫ 2x dx/(1 + x²)
Thus (calculating the integrals ∫ e^(−y) dy = −e^(−y) and ∫ 2x dx/(1 + x²) = ln(1 + x²)),
−e^(−y) = ln(1 + x²) + C ⇒ y(x) = −ln(−ln(1 + x²) − C).
This solution is defined only for −ln(1 + x²) − C > 0, i.e. C < −ln(1 + x²). For this problem, y can be expressed explicitly in terms of x.
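As a sanity check of Example 2.4.3, one can verify symbolically (a sketch using the third-party sympy library, not part of the notes) that the explicit solution satisfies the original ODE:

```python
import sympy as sp

x, C = sp.symbols('x C')
y = -sp.log(-sp.log(1 + x**2) - C)     # explicit solution y(x) found above

# Residual of the ODE y' = 2x e^y (1 + x^2)^(-1); must vanish identically.
residual = sp.simplify(sp.diff(y, x) - 2*x*sp.exp(y)/(1 + x**2))
print(residual)  # 0
```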
Example 2.4.4
Solve y' = xy/((1 + x)(1 + y)), with y(0) = 1 and x > 0. (A problem from the final exam in 1995.)
This is an initial value problem (IVP). First find the general solution, then determine the constant.
Step 1. Separate the variables and express the equation as
((1 + y)/y) dy = (x/(1 + x)) dx
Step 2. Integrate both sides:
∫ (1 + 1/y) dy = ∫ (1 − 1/(1 + x)) dx
Thus,
y + ln y = x − ln(1 + x) + C0 or ye^y = C1 e^x/(1 + x),
where C1 = e^(C0). This general solution is always valid for x > 0 and C1 > 0.
Step 3. Substitute the initial condition into the general solution to determine the value of C0 or C1:
C0 = 1 or C1 = e.
Thus,
y + ln y = x − ln(1 + x) + 1 or ye^y = e^(1+x)/(1 + x).
In this problem, y cannot be solved explicitly in terms of x.
Example 2.4.5
Solve y' + p(t)y = 0.
This is a homogeneous, first-order, linear ODE. We can see that it is also separable:
(1/y) dy = −p(t) dt.
Integrating both sides, we get
ln y = −P(t) + C0,
where P(t) is an antiderivative of p(t). Taking the exponential of both sides,
y(t) = Ce^(−P(t)), where C = e^(C0).
Summary: Separable equations can be either linear or nonlinear. They form a special class of ODEs that can be solved by the method called separation of variables. However, linear ODEs can also readily be solved in closed form using other methods, as discussed below.
2.5 First-order linear ODEs
All first-order linear ODEs can be expressed in the following 'normal' form:
y' + p(t)y = q(t). (22)
Method of integrating factor: The main idea is to turn the lhs of eq.(22) into the derivative of a single function F(t, y), such that eq.(22) is turned into the form
F' = f(t),
which can then be solved by direct integration.
Integrating factor: the idea expressed above is always achievable by multiplying both sides of eq.(22) by its integrating factor
e^(P(t)), with P(t) = ∫ p(t)dt (i.e. P'(t) = p(t)),
which yields
e^(P(t))y' + p(t)e^(P(t))y = q(t)e^(P(t)) ⇒ (e^(P(t))y)' = q(t)e^(P(t)) ⇒ e^(P(t))y(t) = R(t) + C,
where R(t) = ∫ q(t)e^(P(t))dt. Dividing both sides by e^(P(t)), we obtain
y(t) = e^(−P(t))[R(t) + C].
Example 2.5.1
y 0 + y = t.
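The worked solution of Example 2.5.1 is y(t) = t − 1 + Ce^(−t) (apply the formula above with P(t) = t, R(t) = ∫ te^t dt = (t − 1)e^t). As a hedged sketch (using the third-party sympy library, not part of the notes), dsolve recovers the same general solution:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t) + y(t), t)    # Example 2.5.1: y' + y = t
sol = sp.dsolve(ode, y(t))             # general solution y(t) = C1*exp(-t) + t - 1
rhs = sol.rhs
print(rhs)
```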
Theorem 2.5.1 (General Solution Theorem): The general solution of the linear ODE
y' + p(t)y = q(t),
where p(t) and q(t) are continuous on a t-interval I, has the form
y(t) = e^(−P(t))[R(t) + C], with P(t) = ∫ p(t)dt and R(t) = ∫ q(t)e^(P(t))dt.
Proof. Multiply both sides of the equation by the integrating factor e^(P(t)) and integrate; then multiply both sides by the nonzero function e^(−P(t)). End of proof.
Remarks: This theorem tells us that to solve a linear ODE, we only need to calculate the antiderivatives (indefinite integrals) of the two functions p(t) and e^(P(t))q(t).
Example 2.5.3: Solve ty' + 2y = (sin t)/t. (A problem from the 1995 final exam!)
Answer:
Step 1: Transform the equation into the normal form given by eq.(22) and identify p(t) and q(t) for this problem. In this case, we have to divide both sides by t. (Be careful: we can do this only for t ≠ 0! At t = 0 the equation reduces to 2y(0) = 1 (remember lim_(t→0) (sin t)/t = 1).)
y' + (2/t)y = (sin t)/t²
Thus, p(t) = 2/t and q(t) = (sin t)/t² for this equation.
Step 3: Verify the result by substituting it back into the original ODE.
Note that we can always check whether our calculation is correct! The special concern here is not to forget to discuss special situations like t = 0.
2.6 Existence and uniqueness
Theorem 2.6.1 (for linear 1st-order ODEs): If p(t), q(t) are continuous on an open interval α < t < β containing the point t = t0, then there exists a unique solution y = φ(t) that satisfies the following initial value problem (IVP):
y' + p(t)y = q(t),
y(t0) = y0. (24)
q(t) = 4t
Example 2.6.2: Find a rectangle in the xy-plane in which the following IVP has a unique solution.
y' = (3x² + 4x + 2)/(2(y − 1)),
y(0) = −1. (27)
Here,
f(x, y) = (3x² + 4x + 2)/(2(y − 1)).
2.7 Numerical methods
In many practical scientific, engineering, and socio-economic applications, the ODEs are often too hard or impossible to solve exactly in closed form. Fortunately, one can always solve an ODE using numerical methods on computers: the goal is to solve approximately for the values of y(t) at discretized points of t (i.e. a ≤ t0, t1, ..., tN ≤ b) with the desired accuracy and explicit error estimates.
Standard procedure:
• Step 1: Discretize the interval [a, b] into N (usually N >> 1, a very big integer) subintervals, often of equal length
h = (tN − t0)/N = (b − a)/N,
where h (usually 0 < h << 1, a very small number) is called the step-size.
[Figure 2: the discretized grid t0 = a, t1, t2, ..., tN = b on the t-axis and the corresponding values y0 = y(t0), y(t1), ..., y(tN) on the solution curve y(t).]
• Step 2: Select one iteration scheme (the actual numerical method) from a collection of known schemes to calculate yi (i = 1, 2, ..., N) sequentially,
y0 (given) → y1 → y2 → ··· → yN,
such that
yi ≈ y(ti), (i = 1, 2, ..., N).
• Step 3: Determine the upper bound of truncation errors based on the step-size and the
specific iteration method used. When the iteration method is fixed, usually the smaller the
step-size h the better the sequence of numbers yi (i = 1, 2, ... , N ) approximates the exact
solution y(t) itself in [a, b].
Now we focus on how to calculate the value yi based on the previous value yi−1 .
Basic idea behind Euler's method: If the step-size is very small, the solution curve is segmented into N very small pieces by the red dashed vertical lines (see Fig. 3) drawn at t = ti (i = 1, 2, ..., N). We can approximate each segment by a straight line tangent to the curve (see Fig. 3 for a schematic diagram).
Figure 3: Euler’s/tangent-line method demonstrated. Green solid lines are the tangent lines while
dash-dotted green lines represent the values of y1 and y2 .
Formula for Euler's method: the scheme can be derived from the discretized version of the ODE:
(y_{i+1} − y_i)/(t_{i+1} − t_i) ≈ f(ti, yi) ⇒ (y_{i+1} − y_i)/h ≈ f(ti, yi) ⇒ y_{i+1} − y_i ≈ hf(ti, yi).
Therefore,
y_{i+1} = y_i + hf(ti, yi)
is Euler's iteration scheme (Euler's method). Given the value of yi we can use it to calculate the value of y_{i+1}. Thus, given the initial value y0, we can calculate y1, y2, ..., yN sequentially.
Sources of error: There are two major sources of error in such numerical methods:
1. Truncation error, which arises when we approximate the curve by its tangent line; it propagates as we iterate further from the initial point (see Fig. 3). This error can be limited by choosing a better iteration scheme and/or reducing the step-size h.
2. Round-off error, which occurs because real numbers with infinitely many decimal digits are represented by computer numbers with finitely many digits. This error can be reduced by using higher-precision floating-point numbers.
1. Local truncation error is the error that occurs in a single iteration step. For Euler's method,
E_local = Kh².
2. Global truncation error is the error accumulated after N steps of iteration. For Euler's method,
E_global = Mh.
Euler's method is first-order because its E_global is proportional to the first power of the step-size h. For an nth-order method, usually E_global = Mh^n, so E_global is reduced by n orders of magnitude when h is reduced 10-fold.
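A minimal implementation of Euler's scheme makes the first-order convergence concrete. The test problem y' = −2y, y(0) = 1 (chosen here for illustration, with exact solution e^(−2t)) shows the global error roughly halving when h is halved:

```python
import math

def euler(f, t0, y0, t_end, n):
    """Euler's method with n equal steps: y_{i+1} = y_i + h*f(t_i, y_i)."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Test problem: y' = -2y, y(0) = 1, exact solution y(t) = exp(-2t).
f = lambda t, y: -2.0 * y
exact = math.exp(-2.0)

err_h  = abs(euler(f, 0.0, 1.0, 1.0, 10) - exact)   # h = 0.1
err_h2 = abs(euler(f, 0.0, 1.0, 1.0, 20) - exact)   # h = 0.05
print(err_h / err_h2)   # close to 2: halving h roughly halves the global error
```

The same `euler` routine works for any right-hand side f(t, y), including the forced equation of Example 2.7.1 below.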
Example 2.7.1: Use Euler's method with step-size h = 0.1 to solve the following IVP on the interval 0 ≤ t ≤ 1.
y' + 2y = t³e^(−2t),
y(0) = 1. (30)
Therefore,
y1 = (1 − 2h)y0 = (1 − 2h),
y2 = (1 − 2h)y1 = (1 − 2h)²,
y3 = (1 − 2h)y2 = (1 − 2h)³,
··· ,
yN = (1 − 2h)y_{N−1} = (1 − 2h)^N.
Thus,
y(t) ≈ yN = (1 − 2h)^N = (1 − 2t/N)^N → e^(−2t) as N → ∞ (h = t/N → 0),
where the famous limit
lim_(x→∞) (1 + a/x)^x = e^a
was used.
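The limit above is easy to observe numerically (an illustrative check, not part of the notes): evaluate the Euler iterate (1 − 2t/N)^N at t = 1 for growing N and compare with e^(−2) ≈ 0.1353.

```python
import math

# Euler iterate y_N = (1 - 2t/N)^N for y' = -2y at time t, with step-size h = t/N.
t = 1.0
for N in (10, 100, 10_000):
    print(N, (1 - 2*t/N)**N)   # approaches exp(-2) as N grows
```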
2.8 Exact Equations
To understand this subsection, one needs to know the chain rule for differentiating a multivariable function:
dz(x(t), y(t))/dt = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt).
Example 2.8.1
Solve 2x + y² + 2xyy' = 0.
This equation is neither linear nor separable, so the methods for solving those types of ODEs do not apply here. Yet, as we will show in the following, it can also be solved in closed form. Why? Because it belongs to another special class of equations that happens to be solvable: exact equations.
The equation has the form
M(x, y) + N(x, y)y' = 0, (33)
with M = 2x + y² and N = 2xy, and we observe the special feature
My = Nx (= 2y). (34)
This is actually the definition of an exact equation. According to this definition, separable equations are also exact, because for a separable equation M = M(x) and N = N(y), so My = Nx = 0.
What is good about this feature? It allows us to find a function Ψ(x, y) whose total derivative with respect to x equals the lhs of eq.(33) and therefore vanishes: by the chain rule,
dΨ/dx = Ψx + Ψy (dy/dx) = Ψx + Ψy y' = M + N y' = 0 ⇒ Ψ(x, y(x)) = C. (35)
The problem is how to find the function Ψ. According to eq.(35),
Ψx = M, Ψy = N.
To solve the particular problem in this example, we can first solve Ψy = N by integrating both sides with respect to y (treating x as a constant while doing this):
∫ Ψy dy = ∫ N dy = ∫ 2xy dy ⇒ Ψ = xy² + h(x),
where h(x) is an unknown function of x. Now substitute this expression for Ψ into Ψx = M; we obtain
y² + h'(x) = 2x + y² ⇒ h'(x) = 2x.
Step 3. Integrate the equation for h'(x). Before that, we have to check that its rhs is independent of y, by differentiating with respect to y:
∂/∂y [M(x, y) − ∫ Nx(x, y) dy] = My − Nx = 0.
Therefore, the special feature My = Nx guarantees that h(x) can be determined by solving h'(x) = M(x, y) − ∫ Nx(x, y) dy. Here, h'(x) = 2x gives h(x) = x², so the solution of Example 2.8.1 is Ψ = xy² + x² = C. We have proved the following theorem.
Theorem 2.8.1 Let M, N, My, Nx be continuous in a rectangular region R of the xy-plane. Then there exists a function Ψ(x, y) with Ψx = M and Ψy = N if and only if
My(x, y) = Nx(x, y).
Since dΨ/dx = M(x, y) + N(x, y)y' = 0, the general solution of M(x, y) + N(x, y)y' = 0 is implicitly defined by Ψ(x, y) = C (C is any constant).
Example 2.8.2
Solve (y cos x + 2xe^y) + (sin x + x²e^y − 1)y' = 0.
Step 2. Start calculating the function Ψ(x, y) from Ψx = M or Ψy = N, whichever is simpler.
Integrating Ψx = M = y cos x + 2xe^y with respect to x, we obtain
Ψ = y sin x + x²e^y + h(y).
Thus, Ψy = sin x + x²e^y + h'(y) = N gives
h'(y) = −1 ⇒ h(y) = −y + C0.
Step 4. Write down the solution in implicit form:
y sin x + x²e^y − y = C.
Note that the constant in h(y) can always be absorbed into the constant on the rhs of the final solution, so we do not have to include a constant when solving the ODE for h.
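The two-step construction of Ψ can be mirrored symbolically. The sketch below (using the third-party sympy library, not part of the notes) uses the same M and N as Example 2.8.2:

```python
import sympy as sp

x, y = sp.symbols('x y')
M = y*sp.cos(x) + 2*x*sp.exp(y)
N = sp.sin(x) + x**2*sp.exp(y) - 1

# Exactness check: M_y = N_x.
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Build Psi: integrate M w.r.t. x, then fix the y-dependent remainder h(y).
Psi0 = sp.integrate(M, x)                 # y*sin(x) + x**2*exp(y)
hp = sp.simplify(N - sp.diff(Psi0, y))    # h'(y); exactness guarantees no x appears
assert not hp.has(x)
Psi = Psi0 + sp.integrate(hp, y)
print(Psi)   # implicit general solution: Psi(x, y) = C
```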
Example 2.8.3
Solve (3xy + y²) + (x² + xy)y' = 0.
Here
My(x, y) = 3x + 2y ≠ Nx(x, y) = 2x + y,
so the equation is not exact. However, some equations that are not exact (including this particular one) can be transformed into an exact equation by multiplying both sides by a function µ(x, y) (also called an integrating factor). Unfortunately, finding the integrating factor is often as difficult as solving the original ODE; a detailed discussion is beyond the scope of this course.
For the above equation, however, it is not hard to find that x is an integrating factor: the ODE (3x²y + xy²) + (x³ + x²y)y' = 0 is exact, since
My = 3x² + 2xy = Nx.
Thus,
Ψy = x³ + x²y ⇒ Ψ = x³y + x²y²/2 + h(x),
and
3x²y + xy² + h'(x) = 3x²y + xy² ⇒ h'(x) = 0 ⇒ h(x) = C0.
Therefore the solution is x³y + x²y²/2 = C.
Notice that the same equation can be solved using other integrating factors. For example, µ(x, y) = 1/(xy(2x + y)) is also an integrating factor.
2.9 Autonomous equations and phase space analysis
A 1st-order ODE y' = f(t, y) is autonomous if
f(t, y) = f(y),
i.e. if the RHS does not explicitly depend on the independent variable t. The space (a line here) spanned by the unknown function (i.e. the y-axis) is called the phase space or state space.
• The slope/vector field of an autonomous 1st-order ODE does not change as a function of time (i.e. on each line parallel to the t-axis, the vectors are all parallel).
• The direction of phase flow is fixed at every point in phase space (i.e. it does not change with t).
• Non-autonomous ODEs can be transformed into autonomous ODEs by introducing one additional variable.
Example 2.9.1: Solve the logistic equation y' = y(1 − y) with initial condition y(0) = y0.
Separating variables and integrating (using partial fractions), we find
ln((y − 1)/y) = −t + C1 ⇒ (y − 1)/y = Ce^(−t) ⇒ y(t) = 1/(1 − Ce^(−t)).
Substituting the initial condition y(0) = y0 into the solution, one finds
y(t) = y0/(y0 − (y0 − 1)e^(−t)).
One finds two special solutions: (i) for y0 = 0, y(t) = 0 for all t ≥ 0; (ii) for y0 = 1, y(t) = 1 for all t ≥ 0. One needs a graph plotter to find out what the solution curves look like for other initial conditions (see figure).
[Figure: solution curves y(t) of the logistic equation for several initial values y(0) (left), projected onto the y-axis to form the phase space picture (right).]
The image of the solutions projected onto the phase space (on the right) is called the phase portrait of the equation. In the phase portrait, the direction of time evolution is represented by arrows, which are often referred to as the flow direction or phase trajectories.
Phase space analysis is a qualitative method we often employ in the study of autonomous ODEs
so that we can achieve the same results presented in the figure above without having to solve the
equation and/or use a graph plotter to find the solution curves and the phase portrait.
Step 1. Finding the steady states (also called fixed points, critical points, equilibria, ...)
Def: A steady state of the autonomous equation y 0 = f (y) is a state at which y 0 = 0, i.e. it is a
special solution of the equation that remains constant (steady) for all t ≥ 0. All steady states are
found by solving
f (y) = 0.
Step 2. Sketch the curve of f (y) and determine the phase flow directions.
[Figure: graph of y' = f(y) = y(1 − y) with the zeros at y = 0 and y = 1 and arrows along the y-axis indicating the phase flow directions.]
Notice that if f'(y) > 0 at a steady state, the trajectories flow away from it in both directions; however, if f'(y) < 0, the trajectories flow toward it from both directions.
Step 3. Determine the stability of the steady states using the graph of f (y).
Def: Stability of a steady state. A steady state ys is stable if trajectories flow toward it from all possible directions; it is unstable if trajectories flow away from it in at least one direction.
In the real world, only stable steady states can be observed experimentally, due to the ever-present noise.
For the logistic model y 0 = y(1 − y), plot of f (y) vs y shows clearly that ys = 0 is unstable because
the trajectories point away from it. However, ys = 1 is stable because trajectories from all possible
directions point toward it.
Step 4. Sketch the solution curves y(t) using the graph of f (y).
For an autonomous equation, the plot of f (y) uniquely defines the slope of y(t), i.e. y 0 (t), at each
value of y. Therefore, one can always trace out the concavity of the solution y(t) and determine
the point(s) of inflection in y(t). This allows us to sketch the shape of the solution curve for each
initial condition we choose. The result is the following qualitative graph of the solutions for a few
representative initial conditions.
[Figure: qualitative solution curves y(t) of the logistic equation for representative initial values y(0).]
For y(0) > 0 close to zero (blue curve in the figure above), the solution is "sigmoidal": the curve is initially concave up as the slope increases, reaches its maximum slope at y = 0.5, and after that the slope decreases as the curve becomes concave down. The point of inflection occurs at the local maximum of f(y).
Example 2.9.2:
Find all steady states of the autonomous ODE y' = y(y − 0.25)(1 − y) and determine their stability. Sketch the solution curve for each representative choice of the initial value y(0).
Ans: It is clear that ys = 0, 0.25, 1 are the values that make f(y) = y(y − 0.25)(1 − y) = 0. Thus, there are 3 steady states. A sketch of f(y) is given below, which clearly shows the 3 zeros/steady states. The flow directions show that ys = 0 and 1 are two stable steady states but ys = 0.25 is unstable. This is a system that shows the phenomenon of bistability.
Based on the sketch of f(y), one can sketch the solution curves for 6 representative initial values of y (see the figure below).
[Figure: sketch of f(y) = y(y − 0.25)(1 − y) showing its three zeros at y = 0, 1/4, 1 (top), and the corresponding solution curves y(t) for 6 representative initial values (bottom).]
Therefore, phase space analysis allows us to qualitatively find the solutions of the nonlinear ODE
without having to solve the equation analytically in closed form or numerically using a computer.
Linear stability analysis.
Often f(y) is a function whose curve is not easy to sketch. In this case, we can still determine the stability of the steady states by an analytical method.
Consider an ODE y' = f(y). Let ys be a steady state, i.e. f(ys) = 0. The stability of ys is determined by the behaviour of the system near ys. Let δ(t) (|δ(t)| << 1) be a small difference between y(t) and ys; then
d(ys + δ)/dt = f(ys + δ) ⇒ δ' = f(ys) + f'(ys)δ + O(δ²).
Since f(ys) = 0 and |δ|² << |δ|, we can ignore the higher order terms in δ and obtain the following linearized ODE:
δ' = f'(ys)δ ⇒ δ(t) = δ0 e^(f'(ys)t).
Therefore,
• If f'(ys) > 0, δ(t) → ∞ as t → ∞: ys is unstable.
• If ys is unstable, phase trajectories (flows) move away from it in at least one direction.
• If f'(ys) < 0, δ(t) → 0 as t → ∞: ys is stable.
• If ys is stable, phase trajectories (flows) converge toward it from all possible directions.
• If f'(ys) = 0, the steady state is neutral, or its stability is determined by higher order terms (if they exist).
• Unstable steady states, under normal conditions, cannot be detected in experimental settings or numerical simulations.
Application to Example 2.9.1: For the logistic equation, f(y) = y(1 − y) and ys = 0, 1 are the steady states. Since f'(y) = 1 − 2y: f'(0) = 1 > 0, so ys = 0 is unstable; f'(1) = −1 < 0, so ys = 1 is stable.
Application to Example 2.9.2: For this example, f(y) = y(y − 0.25)(1 − y) and ys = 0, 0.25, 1 are the steady states. Here f'(0) = −0.25 < 0 (stable), f'(0.25) = 0.1875 > 0 (unstable), and f'(1) = −0.75 < 0 (stable), confirming the bistability found graphically.
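The sign checks of linear stability analysis can also be done numerically. The sketch below (illustrative, not part of the notes; it uses a centered finite difference for f') classifies every steady state of Examples 2.9.1 and 2.9.2:

```python
# Numerical linear stability check: the sign of f'(ys) at each steady state
# decides stability (negative => stable, positive => unstable).

def fprime(f, ys, eps=1e-6):
    """Centered finite-difference approximation of f'(ys)."""
    return (f(ys + eps) - f(ys - eps)) / (2 * eps)

logistic = lambda y: y * (1 - y)                   # Example 2.9.1
bistable = lambda y: y * (y - 0.25) * (1 - y)      # Example 2.9.2

for f, states in ((logistic, (0.0, 1.0)), (bistable, (0.0, 0.25, 1.0))):
    for ys in states:
        label = "stable" if fprime(f, ys) < 0 else "unstable"
        print(ys, label)
```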
2.10 Applications of first-order ODEs
Example 2.10.1: Falling object with air resistance. For an object falling with speed v(t) under gravity and an air resistance proportional to the speed, Newton's second law gives mv' = −mg − mkv, i.e.
v' + kv = −g,
where the "−" of the first term reflects that the positive direction of the y coordinate is upward, and the "−" of the second term means the air resistance is always opposite to the direction of the speed v. The initial condition is v(0) = 0.
This is a first-order, linear ODE with p(t) = k and q(t) = −g. Thus, the integrating factor e^(P(t)) and R(t) are
e^(∫p(t)dt) = e^(kt), R(t) = ∫(−g)e^(kt)dt = −(g/k)e^(kt).
Therefore the general solution is
v(t) = e^(−kt)[R(t) + C] = Ce^(−kt) − g/k.
Using the initial condition v(0) = 0, we obtain C = g/k. Thus
v(t) = (g/k)[e^(−kt) − 1] → −g/k as t → ∞.
The terminal speed is v(∞) = −g/k, where the "−" sign implies it is downward.
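A quick symbolic verification (a sketch using the third-party sympy library, not part of the notes) confirms that v(t) = (g/k)(e^(−kt) − 1) satisfies v' + kv = −g with v(0) = 0 and tends to the terminal speed −g/k:

```python
import sympy as sp

t, k, g = sp.symbols('t k g', positive=True)
v = (g/k) * (sp.exp(-k*t) - 1)     # solution found above

# Check the ODE v' + k*v = -g and the initial condition v(0) = 0.
assert sp.simplify(v.diff(t) + k*v + g) == 0
assert v.subs(t, 0) == 0

print(sp.limit(v, t, sp.oo))  # the terminal velocity -g/k
```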
Example 2.10.2: Mixing problem.
Salt solution of concentration s ([kg]/[L]) flows continuously into a mixing tank at a rate r ([L]/[min]). The solution in the tank is continuously stirred and well mixed, and the outflow rate of the well-mixed solution is also r. Initially (i.e., at t = 0), the volume of the solution inside the tank is V ([L]) and the total amount of salt is Q0 ([kg]). Find:
(a) the total amount of salt in the tank as a function of time, Q(t);
(b) the amount of salt a very long time after the initiation of the experiment, Q(∞).
Answer: Because the inflow and outflow rates are identical, the volume remains unchanged during the experiment: V = const.
dQ/dt = rate in − rate out = rs − rQ/V ⇒ Q' + (r/V)Q = rs.
This is a first-order, linear ODE with p(t) = r/V and q(t) = rs. Thus, the integrating factor e^(P(t)) and R(t) are
e^((r/V)t), R(t) = ∫ rs e^((r/V)t)dt = sV e^((r/V)t).
Therefore,
Q(t) = e^(−(r/V)t)[R(t) + C] = sV + Ce^(−(r/V)t).
Using Q(0) = Q0, we find C = Q0 − sV, so Q(t) = sV + (Q0 − sV)e^(−(r/V)t); after a very long time the total amount of salt in the tank is Q(∞) = sV.
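Likewise, the mixing-tank solution Q(t) = sV + (Q0 − sV)e^(−(r/V)t) can be verified symbolically (a sketch using the third-party sympy library, not part of the notes):

```python
import sympy as sp

t, r, V, s, Q0 = sp.symbols('t r V s Q0', positive=True)
Q = s*V + (Q0 - s*V) * sp.exp(-r*t/V)    # candidate solution of the mixing ODE

assert sp.simplify(Q.diff(t) + (r/V)*Q - r*s) == 0   # satisfies Q' + (r/V)Q = rs
assert sp.simplify(Q.subs(t, 0) - Q0) == 0           # initial amount Q(0) = Q0
print(sp.limit(Q, t, sp.oo))  # the long-time salt content equals s*V
```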
Example 2.10.3: Escape speed.
A bullet is shot vertically into the sky. Assume that air resistance can be ignored. Answer the
following questions:
1. If the initial speed is v0, at what height, hmax, does it reverse direction and start to fall?
2. What should the value of v0 be in order to make hmax = ∞? (i.e., the escape speed ve .)
[Figure: a bullet shot vertically from the surface of the earth; y(t) is its height above the surface, v(t) its speed, and Re the radius of the earth.]
Note that the ODE as it stands, dv/dt = −MeG/(Re + y)², contains two unknown functions y(t) and v(t). Using the chain rule,
dv/dt = (dv/dy)(dy/dt) = v(dv/dy).
This allows us to solve for v as a function of y, since t is not explicitly involved in the equation. Now, we have
v(dv/dy) = −MeG/(Re + y)².
This is a separable equation. Separation of variables yields
v dv = −[MeG/(Re + y)²] dy ⇒ ∫v dv = −∫ [MeG/(Re + y)²] dy,
which leads to
(1/2)v² = MeG/(Re + y) + C = (MeG/Re²)·Re²/(Re + y) + C = gRe²/(Re + y) + C,
where g = MeG/Re² is the gravitational acceleration at the Earth's surface. Using the initial condition v(0) = v0, we obtain C = (1/2)v0² − gRe. Thus,
v² = 2gRe/(1 + y/Re) + v0² − 2gRe.
1. To find hmax, set v(hmax) = 0. We obtain hmax = Re v0²/(2gRe − v0²), or equivalently v0² = 2gRe hmax/(Re + hmax).
2. To find the escape speed ve, let hmax → ∞. We obtain ve² = 2gRe ⇒ ve = √(2gRe) ≈ 11.2 km/s.
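The escape-speed estimate is easy to reproduce numerically (using g = 9.8 m/s² and a mean Earth radius Re = 6.371 × 10⁶ m):

```python
import math

# Escape speed v_e = sqrt(2 g Re).
g, Re = 9.8, 6.371e6      # m/s^2, m
ve = math.sqrt(2 * g * Re)  # m/s
ve_km_s = ve / 1000.0       # ~ 11.2 km/s
```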
3 Second-Order Linear ODEs
3.1 Introduction
A second-order ODE can be generally expressed as
y'' = f(t, y, y').   (36)
If f(t, y, y') = f(y, y') (i.e. its rhs is explicitly independent of t), it is called an autonomous ODE.
Otherwise, it is non-autonomous.
The general second-order linear ODE has the standard form
y'' + p(t)y' + q(t)y = g(t),   (37)
where p(t), q(t), g(t) are known functions of t. Basically, if these three functions are continuous in an interval containing t0, a unique solution exists for proper initial conditions y(t0) = y0, y'(t0) = y1 in the neighbourhood of t0 (read Boyce and DiPrima for more details).
We define the linear differential operator
L ≡ d²/dt² + p(t)(d/dt) + q(t)   (38)
such that
L[y] = d²y/dt² + p(t)(dy/dt) + q(t)y = y'' + p(t)y' + q(t)y.   (39)
Thus, the 2nd-order linear ODE in eq.(37) is shortened to L[y] = g(t).
Theorem 3.1.1 Principle of superposition: If y1(t) and y2(t) are both solutions of the homogeneous equation
L[y] = 0,
then a linear combination (superposition) of the two,
y(t) = C1y1(t) + C2y2(t),
is also a solution for any constants C1, C2.
Proof:
y1 is a solution ⇒ L[y1 ] = 0.
y2 is a solution ⇒ L[y2 ] = 0.
Now, substituting in and reorganizing,
L[C1y1 + C2y2] = C1L[y1] + C2L[y2] = C1 × 0 + C2 × 0 = 0;
therefore, y(t) = C1y1 + C2y2 is also a solution.
Definition (linear dependence/independence): Two functions y1(t), y2(t) are linearly dependent (LD) in an interval if the identity
k1y1(t) + k2y2(t) = 0 (for all t in the interval)
can be satisfied by two constants k1 and k2 that are not both zero (i.e., one is a constant multiple of the other). Otherwise, if it is satisfied only when k1 = k2 = 0 (i.e., one is not a constant multiple of the other), the two are linearly independent (LI).
Example 3.1.1: Determine whether y1(t) = e^{2t} and y2(t) = e^{−t} are LI.
Answer:
k1e^{2t} + k2e^{−t} = e^{−t}[k1e^{3t} + k2] = 0 for all t if and only if (iff) k1 = k2 = 0. So, they are LI.
Later, we shall introduce a more straightforward method for verifying the linear independence of two functions.
Theorem (general solution of L[y] = 0): The general solution of the homogeneous equation L[y] = 0 is
y(t) = C1y1(t) + C2y2(t),
where C1, C2 are arbitrary constants and y1(t), y2(t) are two LI solutions of L[y] = 0, referred to as a fundamental set of solutions.
Remark: Based on this theorem, finding the general solution of L[y] = 0 is reduced to finding two
LI solutions.
3.2 Homogeneous, 2nd-order, linear ODEs with constant coefficients
If in the homogeneous 2nd-order linear ODE L[y] = 0, both p(t) and q(t) are constants, the equation
becomes an ODE with constant coefficients
ay'' + by' + cy = 0,   (42)
Conclusion: For eq.(42), we always look for solutions of the form y(t) = ert .
Example 3.2.1 Find the general solution of
y 00 + 5y 0 + 6y = 0.
Answer: Let y(t) = e^{rt}. Plugging into the ODE, we obtain the characteristic equation
r² + 5r + 6 = 0 ⇒ (r + 3)(r + 2) = 0,
which yields r1 = −2, r2 = −3. Therefore, y1(t) = e^{−2t} and y2(t) = e^{−3t} are solutions to the ODE and are LI of each other (because r1 ≠ r2). Thus, they form a fundamental set. The general solution is
y(t) = c1e^{−2t} + c2e^{−3t}.
Example: Solve the initial value problem
4y'' − 8y' + 3y = 0, y(0) = 2, y'(0) = 1/2.
Answer: The characteristic equation is
4r² − 8r + 3 = 0,
which yields
r_{1,2} = [8 ± √(64 − 48)]/8 = 1 ± 1/2 = 3/2, 1/2.
Thus, the general solution is
y(t) = c1e^{(3/2)t} + c2e^{(1/2)t}.
y(0) = 2 ⇒ c1 + c2 = 2;
y'(0) = 1/2 ⇒ (3/2)c1 + (1/2)c2 = 1/2 ⇒ 3c1 + c2 = 1.
Solving these, c1 = −1/2, c2 = 5/2. Therefore,
y(t) = −(1/2)e^{(3/2)t} + (5/2)e^{(1/2)t}.
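The solution of this IVP can be verified directly, e.g. in Python (derivatives of the candidate solution computed by hand):

```python
import math

# Verify y(t) = -(1/2) e^{3t/2} + (5/2) e^{t/2} solves 4y'' - 8y' + 3y = 0
# with y(0) = 2 and y'(0) = 1/2.
def y(t):   return -0.5 * math.exp(1.5 * t) + 2.5 * math.exp(0.5 * t)
def yp(t):  return -0.75 * math.exp(1.5 * t) + 1.25 * math.exp(0.5 * t)
def ypp(t): return -1.125 * math.exp(1.5 * t) + 0.625 * math.exp(0.5 * t)

residual = 4 * ypp(1.0) - 8 * yp(1.0) + 3 * y(1.0)   # should vanish
```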
Everything seems easy and moves smoothly until we encounter some surprises.
Example: Find the general solution of
y'' − 4y' + 4y = 0.
Answer: The characteristic equation
r² − 4r + 4 = 0 ⇒ (r − 2)² = 0
which yields
r = r1 = r2 = 2,
This means that there is only one value r = 2 that allows y1 (t) = ert = e2t to satisfy the ODE.
Without a second LI solution, we cannot write down the general solution!
Question: In this case, how can we find a 2nd LI solution y2 (t)? How to find the general solution?
Answer: Reduction of order (due to D'Alembert). The general solution can be expressed as
y(t) = v(t)e^{2t} (an educated guess!).
To determine the function v(t), we substitute the guessed solution into the ODE:
y'' − 4y' + 4y = [v'' + 4v' + 4v − 4(v' + 2v) + 4v]e^{2t} = v''e^{2t} = 0 ⇒ v'' = 0.
Thus, v(t) = c1 + c2t, and the general solution is y(t) = (c1 + c2t)e^{2t}.
We notice that this general solution is a linear combination of two terms y1 (t) = e2t and y2 (t) = te2t .
Indeed, we can easily verify that y2 (t) is another solution of the ODE that is LI of y1 (t).
Theorem 3.2.1: If the characteristic equation ar² + br + c = 0 has a repeated root r = r1 = r2, then y1(t) = e^{rt} and y2(t) = te^{rt} are two LI solutions of ay'' + by' + cy = 0.
Proof:
We know y1(t) = e^{rt} is a solution since r is a root of the characteristic equation ar² + br + c = 0.
The two roots are given by
r_{1,2} = [−b ± √(b² − 4ac)]/(2a).
r1 = r2 only occurs when b² − 4ac = 0, which yields
r = r1 = r2 = −b/(2a) ⇒ 2ar + b = 0.
Notice that:
y2'(t) = (te^{rt})' = e^{rt} + rte^{rt} = (1 + rt)e^{rt},
y2''(t) = ((1 + rt)e^{rt})' = re^{rt} + r(1 + rt)e^{rt} = (2r + r²t)e^{rt}.
Plug into the ODE:
a(2r + r²t)e^{rt} + b(1 + rt)e^{rt} + cte^{rt} = 0,
which simplifies to
(ar² + br + c)te^{rt} + (2ar + b)e^{rt} = 0.
Both coefficients vanish (ar² + br + c = 0 and 2ar + b = 0), so the equation holds identically. Therefore, y2(t) = te^{rt} is a solution. We shall learn a technique later to verify that it is LI of y1(t) = e^{rt}.
Based on Theorem 3.2.1, y1 (t) = e−3t and y2 (t) = te−3t form a fundamental set.
Thus, the general solution is
y(t) = c1 e−3t + c2 te−3t .
Its derivative is
y'(t) = −3c1e^{−3t} + c2e^{−3t} − 3c2te^{−3t}.
Using the initial conditions, the constants c1 and c2 can then be determined.
Example 3.2.5: Find the general solution of
y'' + 2y' + 10y = 0.
Answer: The characteristic equation r² + 2r + 10 = 0 yields
r_{1,2} = [−2 ± √(4 − 40)]/2 = −1 ± √(1 − 10) = −1 ± √(−9).
No real-valued root!!!
3.3 Introduction to complex numbers and Euler’s formula
A complex number:
z = a + bi, (46)
where a, b are real, is the sum of a real and an imaginary number.
A complex number z = a + bi represents a point (a, b) in a 2D plane, called the complex plane.
Figure 7: A complex number z and its conjugate z̄ in complex space. Horizontal axis contains all
real numbers, vertical axis contains all imaginary numbers.
The complex conjugate of z=a+bi: is z̄ = a − bi (i.e., reversing the sign of Im{z} changes z
into z̄!)
Notice that:
Re{z} = (z + z̄)/2 = a,  Im{z} = (z − z̄)/(2i) = b.
3.3.2 Basic complex computations
Example: Let z1 = 3 + 4i and z2 = 1 − 2i. Compute z1 − z2, z1/2, |z1|, and z2/z1.
Answer:
1. z1 − z2 = (3 − 1) + (4 − (−2))i = 2 + 6i;
2. z1/2 = 3/2 + (4/2)i = 1.5 + 2i;
3. |z1| = √(z1z̄1) = √(3² + 4²) = √25 = 5;
4. z2/z1 = z2z̄1/(z1z̄1) = (1 − 2i)(3 − 4i)/5² = (−5 − 10i)/25 = −1/5 − (2/5)i.
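Python's built-in complex type reproduces these computations directly (including the value of z2/z1):

```python
# The computations of the example above, using Python's complex literals
# (j plays the role of i).
z1, z2 = 3 + 4j, 1 - 2j

diff  = z1 - z2      # 2 + 6i
half  = z1 / 2       # 1.5 + 2i
mod   = abs(z1)      # 5
ratio = z2 / z1      # (1 - 2i)(3 - 4i)/25 = -1/5 - (2/5) i
```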
3.3.3 Back to Example 3.2.5
y'' + 2y' + 10y = 0.
Since r1 ≠ r2 (actually r1 = r̄2 = −1 + 3i, i.e. they are a pair of complex conjugates), the general solution is
y(t) = C1e^{(−1+3i)t} + C2e^{(−1−3i)t}.
Remarks:
• The solution should also be real-valued and with clear physical meaning.
3.3.4 Complex-valued exponential and Euler’s formula
Euler’s formula:
e^{it} = cos t + i sin t.   (47)
Based on this formula, e^{−it} = cos(−t) + i sin(−t) = cos t − i sin t. The formula follows from the Taylor series of the exponential, splitting even and odd powers:
e^{it} = Σ_{n=0}^{∞} (it)ⁿ/n! = Σ_{n even} (it)ⁿ/n! + Σ_{n odd} (it)ⁿ/n! = Σ_{m=0}^{∞} (it)^{2m}/(2m)! + Σ_{m=0}^{∞} (it)^{2m+1}/(2m+1)!
= Σ_{m=0}^{∞} (i²)ᵐ t^{2m}/(2m)! + i Σ_{m=0}^{∞} (i²)ᵐ t^{2m+1}/(2m+1)!
= Σ_{m=0}^{∞} (−1)ᵐ t^{2m}/(2m)! + i Σ_{m=0}^{∞} (−1)ᵐ t^{2m+1}/(2m+1)! = cos t + i sin t.
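The series derivation can be checked numerically: partial sums of Σ(it)ⁿ/n! converge to cos t + i sin t. A minimal sketch:

```python
import cmath
import math

# Partial sums of sum_n (it)^n / n! should converge to e^{it} = cos t + i sin t.
t = 1.3
s = 0 + 0j
term = 1 + 0j            # the n = 0 term
for n in range(1, 30):
    s += term
    term *= (1j * t) / n # next term of the series

err = abs(s - cmath.exp(1j * t))
err_trig = abs(s - complex(math.cos(t), math.sin(t)))
```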
3.3.5 Back again to Example 3.2.5
y'' + 2y' + 10y = 0.
The complex-valued general solution is y(t) = C1e^{(−1+3i)t} + C2e^{(−1−3i)t}. Using Euler's formula, consider the real part
y1(t) = Re{e^{(−1+3i)t}} = e^{−t}cos 3t.
Its derivatives are
y1'(t) = −e^{−t}[cos 3t + 3 sin 3t],
y1''(t) = e^{−t}[cos 3t + 3 sin 3t] − e^{−t}[−3 sin 3t + 9 cos 3t] = e^{−t}[−8 cos 3t + 6 sin 3t].
Substituting into the ODE, we obtain
e^{−t}[−8 cos 3t + 6 sin 3t] − 2e^{−t}[cos 3t + 3 sin 3t] + 10e^{−t}cos 3t = 0,
which shows that y1(t) is indeed a real-valued solution. Similarly, y2(t) = Im{e^{(−1+3i)t}} = e^{−t}sin 3t is also a solution. We shall demonstrate later that they are LI. Thus, the real-valued general solution is
y(t) = e^{−t}[c1 cos 3t + c2 sin 3t].
Theorem 3.3.1 If the characteristic equation of the ODE ay'' + by' + cy = 0 (a, b, c are constants) yields a pair of complex conjugates r, r̄ = α ± βi (complex-valued roots always occur in conjugate pairs), then
yc (t) = ert = e(α+βi)t and ȳc (t) = er̄t = e(α−βi)t
form a fundamental set of complex-valued solutions (complex-valued solutions are also conjugates
of each other).
y1(t) = Re{yc} = (yc + ȳc)/2 = e^{αt}cos(βt)  and  y2(t) = Im{yc} = (yc − ȳc)/(2i) = e^{αt}sin(βt)
form a fundamental set of real-valued solutions.
Proof: yc (t) and ȳc (t) are obviously solutions to the ODE since r and r̄ are roots of the character-
istic equation. Later we shall demonstrate using Wronskian that they are indeed LI.
y1(t) and y2(t) are both linear combinations of yc(t) and ȳc(t). Based on the Principle of Superposition, they must also be solutions of the ODE. We shall demonstrate later that they are LI.
3.3.6 One more example
Example: Find the general solution of y'' + 4y' + 13y = 0. Answer: The characteristic equation r² + 4r + 13 = 0 yields r, r̄ = −2 ± 3i. Therefore,
yc (t) = e(−2+3i)t = e−2t e3it = e−2t [cos 3t + i sin 3t].
y1 (t) = Re{yc (t)} = e−2t cos 3t, y2 (t) = Im{yc (t)} = e−2t sin 3t
y(t) = c1 e−2t cos 3t + c2 e−2t sin 3t = e−2t [c1 cos 3t + c2 sin 3t].
y 0 (t) = −2e−2t [c1 cos 3t + c2 sin 3t] + e−2t [−3c1 sin 3t + 3c2 cos 3t].
3.4 Applications: mechanical and electrical vibrations
Consider the spring-mass system in a frictionless medium shown in Fig. 8. u = 0 represents the equilibrium position, where the spring length equals its unstressed natural length. Let the spring constant be k (> 0), which measures the stiffness of the spring. Assume that the spring obeys Hooke's law: F = −ku, where u is the displacement of the mass w.r.t. the equilibrium position.
1. Find the general solution.
2. Find the solution for each of the following initial conditions:
(a) u(0) = u0, u'(0) = 0 (initial displacement but no initial speed);
(b) u(0) = 0, u'(0) = v0 (initial speed but no initial displacement);
(c) u(0) = u0, u'(0) = v0 (both initial displacement and speed).
Figure 8: Spring-mass system in frictionless medium giving rise to sustained harmonic oscillations.
Answer: Newton's second law gives mu'' = −ku, i.e. u'' + ω0²u = 0 with ω0 = √(k/m). The characteristic roots are r = ±iω0, so yc(t) = e^{iω0t} = cos(ω0t) + i sin(ω0t). Therefore,
y1(t) = Re{yc(t)} = cos(ω0t), y2(t) = Im{yc(t)} = sin(ω0t)
form a fundamental set, and the general solution is y(t) = c1 cos(ω0t) + c2 sin(ω0t). Notice that
y'(t) = −c1ω0 sin(ω0t) + c2ω0 cos(ω0t).
(a) y(0) = u0 ⇒ c1 = u0; y'(0) = 0 ⇒ c2 = 0. Thus,
y(t) = u0 cos(ω0t).
(b) y(0) = 0 ⇒ c1 = 0; y'(0) = v0 ⇒ c2 = v0/ω0. Thus,
y(t) = (v0/ω0) sin(ω0t).
(c) Combining both, c1 = u0 and c2 = v0/ω0. Thus,
y(t) = u0 cos(ω0t) + (v0/ω0) sin(ω0t).
Question: How does this solution relate to a single sin or cos function?
Answer: Write
y(t) = u0 cos(ω0t) + (v0/ω0) sin(ω0t) = A cos(ω0t − φ),
where (think of a right triangle with legs u0 and v0/ω0, hypotenuse A, and angle φ)
A = √(u0² + (v0/ω0)²),  φ = tan⁻¹[(v0/ω0)/u0] = tan⁻¹[v0/(u0ω0)].
Here are a few important concepts related to harmonic oscillations:
3.4.2 Spring-mass-damper system: a simple model of suspension systems
See Fig. 10. A mass m is supported by a Hookian spring with a spring constant k. The damper
provides a damping force that is proportional to the speed with a damping constant γ. All constants
are positive valued. At equilibrium, the length of the spring is shrunk by s, i.e. ks = mg. This
equilibrium position is chosen to be y = 0.
1. Write down the ODE that describes the motion of the mass.
Figure 10: Spring-mass-damper system; at equilibrium the spring is compressed by s = mg/k.
my 00 + γy 0 + ky = 0, (49)
Answer: The characteristic equation mr² + γr + k = 0 yields
r_{1,2} = [−γ ± √(γ² − 4mk)]/(2m) = [γ/(2m)][−1 ± √δ],   (50)
where
δ = 1 − 4mk/γ².
(1) Over damped condition: When damping is large, γ² > 4mk, so that
0 < δ = 1 − 4mk/γ² < 1 ⇒ −1 ± √δ < 0.
Both roots are real and negative, which means the displacement is damped to zero exponentially irrespective of initial conditions.
Figure 11: Over damped solutions for three different initial conditions.
(2) Critically damped condition: This happens when γ² = 4mk, so that δ = 0 and r1 = r2 = −γ/(2m). Now the decay to zero is often slower than a simple exponential function because of the term c2te^{−γt/(2m)} in the general solution.
Figure 12: Critically damped solutions for three different initial conditions.
(3) Under damped condition: This occurs when γ² < 4mk, so that δ < 0. Now the characteristic equation gives a pair of complex roots,
r, r̄ = [γ/(2m)][−1 ± √δ] = [γ/(2m)][−1 ± i√(−δ)] = −γ/(2m) ± ωi,
where
ω = [γ/(2m)]√(−δ) = [γ/(2m)]√(4mk/γ² − 1) = √(k/m − γ²/(4m²)) = ω0√(1 − γ²/(4mk)) ≈ ω0[1 − γ²/(8mk) + ···],
and
ω0 ≡ √(k/m)
is the intrinsic frequency of the harmonic oscillator for γ = 0 (i.e., in a frictionless medium). Note that ω < ω0. Now, the general solution is
y(t) = e^{−γt/(2m)}[c1 cos(ωt) + c2 sin(ωt)] = Ae^{−γt/(2m)} cos(ωt − φ),
where
A = √(c1² + c2²), φ = tan⁻¹(c2/c1).
Here ω is referred to as the quasi-frequency, since the vibration is not exactly periodic.
Figure 13: Under damped solution for one set of initial conditions.
Conclusion: In the presence of friction the motion of the mass will eventually stop under all con-
ditions, but the approach to equilibrium is different. In under damped condition, the approach is
oscillatory with exponentially decreasing amplitude.
Example 3.4.1: In a spring-mass-damper system, m = 2 kg. The spring is known to give a force
of Fs = 3 N when stretched/compressed by l = 10 cm. The damper is known to provide a drag
force of Fd = 3 N at a speed of vd = 5 m/s. Let y(t) be the displacement of the mass w.r.t. the
equilibrium position at time t. Initial conditions are: y(0) = 5 cm, y 0 (0) = 10 cm/s.
1. Find y(t).
2. If the motion is under damped, find the ratio between the quasi-frequency of the motion and its intrinsic frequency, ω/ω0 = ?
Answer: k = Fs/l = 3/0.1 = 30 N/m and γ = Fd/vd = 3/5 = 0.6 N·s/m, so the ODE is 2y'' + 0.6y' + 30y = 0. Since γ² = 0.36 < 4mk = 240, the motion is under damped, with γ/(2m) = 0.15 and quasi-frequency ω = √(k/m − γ²/(4m²)) = √(15 − 0.0225) ≈ 3.87008. The initial conditions give c1 = 0.05 and c2 = (0.1 + 0.15 × 0.05)/3.87008 ≈ 0.0278. Therefore,
y(t) = e^{−0.15t}[0.05 cos(3.87008t) + 0.0278 sin(3.87008t)].
2. ω/ω0 = √(1 − γ²/(4mk)) = √(1 − 0.36/240) ≈ 0.99925.
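The constants of this example are quick to recompute from the given data (with k = Fs/l and γ = Fd/vd):

```python
import math

# Constants of the spring-mass-damper example: m = 2 kg,
# spring force 3 N per 10 cm, drag force 3 N at 5 m/s.
m = 2.0
k = 3.0 / 0.10        # Fs / l  = 30 N/m
gamma = 3.0 / 5.0     # Fd / vd = 0.6 N*s/m

decay = gamma / (2 * m)                          # exponential decay rate 0.15
w0 = math.sqrt(k / m)                            # intrinsic frequency
w  = math.sqrt(k / m - gamma**2 / (4 * m**2))    # quasi-frequency
ratio = w / w0                                   # slightly below 1
```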
3.4.3 Oscillations in a RLC circuit
A typical RLC circuit is composed of a resistor (R), an inductor (L), a capacitor (C) combined with
a power source E(t) and a switch (see Fig. 14).
Figure 14: A typical RLC circuit.
Let Q(t) = the amount of electric charge (measured in units of Coulomb) on each plate of the
capacitor at time t. Then,
dQ
I=
dt
(measured in units of Amperes) is the current that flows through the circuit.
Kirchhoff's 2nd law: In a closed RLC circuit, the impressed voltage is equal to the sum of voltage
drops across all elements in the circuit.
Thus,
VL + VR + VC = E(t) ⇒ LQ'' + RQ' + Q/C = E(t).   (53)
When the impressed voltage E(t) is turned to zero, the equation becomes homogeneous
LQ'' + RQ' + Q/C = 0.   (54)
Interestingly, if we differentiate both sides w.r.t. t and remember that dQ/dt = Q' = I, we obtain
LI'' + RI' + I/C = 0.
Therefore, the current I(t) obeys the same ODE as the charge Q(t).
In the absence of resistor: R = 0, the circuit is reduced to an LC circuit which gives rise to
harmonic oscillations
LQ'' + Q/C = 0 ⇒ Q'' + Q/(LC) = 0 ⇒ Q'' + ω0²Q = 0,
where the intrinsic frequency is
ω0 = 1/√(LC).   (55)
In the presence of resistor: The ch. eq. yields the following roots
r1, r2 = [−R ± √(R² − 4L/C)]/(2L) = −[R/(2L)][1 ∓ √(1 − 4L/(CR²))].   (56)
Over, critical, and under damped conditions can occur depending on the magnitude of the term 4L/(CR²).
Example 3.4.2: Consider an RLC circuit with C = 10⁻⁵ F, R = 200 Ω, L = 0.5 H and initial conditions Q(0) = Q0 = 10⁻⁶ C, Q'(0) = I0 = −2 × 10⁻⁴ A.
Answer:
1.
ω0 = 1/√(LC) = 1/√(0.5 × 10⁻⁵) ≈ 447.2 s⁻¹.
2. The roots of the characteristic equation Lr² + Rr + 1/C = 0 are (note 2L = 1)
r1, r2 = [−R ± √(R² − 4L/C)]/(2L) = −200 ± √(40000 − 200000) = −200 ± √(−160000) = −200 ± 400i,
where the quasi-frequency ω = 400 < ω0. So, the general solution is
Q(t) = e^{−200t}[c1 cos(400t) + c2 sin(400t)].
The initial conditions give c1 = Q0 = 10⁻⁶ and −200c1 + 400c2 = I0 = −2 × 10⁻⁴ ⇒ c2 = 0. Therefore,
Q(t) = 10⁻⁶e^{−200t} cos(400t) → 0 as t → ∞.
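A quick check of the roots and the intrinsic frequency for these circuit values:

```python
import cmath
import math

# Roots of L r^2 + R r + 1/C = 0 for C = 1e-5 F, R = 200 Ohm, L = 0.5 H.
L, R, C = 0.5, 200.0, 1e-5

disc = R**2 - 4 * L / C                 # 40000 - 200000 = -160000
r1 = (-R + cmath.sqrt(disc)) / (2 * L)  # -200 + 400i
w0 = 1 / math.sqrt(L * C)               # intrinsic frequency ~ 447.2 s^-1
```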
3.5 Linear independence, Wronskian, and fundamental solutions
Theorem 3.5.1: If r1 6= r2 , then y1 (t) = er1 t , y2 (t) = er2 t are two linearly independent functions.
Proof: Suppose k1e^{r1t} + k2e^{r2t} = 0 for all t. Evaluating at two distinct times t0 ≠ t1 gives a linear system for (k1, k2) whose coefficient determinant is
e^{r1t0+r2t1} − e^{r1t1+r2t0} = e^{r1t0+r2t1}[1 − e^{(r2−r1)(t0−t1)}] ≠ 0,
because r1 ≠ r2 and t0 ≠ t1. Therefore,
(k1, k2) = (0, 0)
is the only possible solution for arbitrary choices of t0 and t1 (t0 ≠ t1). Thus, e^{r1t} and e^{r2t} are LI provided r1 ≠ r2.
Theorem 3.5.2 Wronskian and LI: If f (t), g(t) are differentiable functions in an open interval
I, the Wronskian of the two is defined by
W[f, g](t) = det( f  g ; f'  g' ) = fg' − f'g.
Often, if W[f, g](t) = 0 for every t ∈ I, then f and g are LD. This result should be used with caution: although it applies to almost all cases that we encounter in this course, there are counterexamples. Peano pointed out (1889) that f(t) = t² and g(t) = |t|t are both differentiable, yet W[f, g](t) = 0 for all t. It is true that they are LD on (−∞, 0) and on (0, ∞), because there they are constant multiples of each other. However, in any open interval containing t = 0, e.g. (−a, a) (a > 0), they are LI. A number of other conditions must be satisfied to ensure that a vanishing Wronskian implies linear dependence.
Thus, if W[f, g](t0) ≠ 0 for even one t0 ∈ I, then k1 = k2 = 0 is the only solution, and f and g are LI in I.
Example 3.5.1: When repeated roots occur, the ODE ay 00 +by 0 +cy = 0 has two solutions y1 = ert ,
y2 = tert . Use Wronskian to show that they are LI.
Answer:
W[y1, y2](t) = y1y2' − y1'y2 = e^{rt}(e^{rt} + rte^{rt}) − re^{rt}·te^{rt} = e^{2rt} ≠ 0.
Thus, y1 and y2 are LI.
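The identity W[e^{rt}, te^{rt}] = e^{2rt} can also be spot-checked numerically (r = −3 is chosen to match the example above):

```python
import math

# W[e^{rt}, t e^{rt}](t) = e^{2rt}, checked at a few sample points.
r = -3.0

def W(t):
    y1, y2 = math.exp(r * t), t * math.exp(r * t)
    y1p = r * math.exp(r * t)
    y2p = (1 + r * t) * math.exp(r * t)
    return y1 * y2p - y1p * y2

checks = [abs(W(t) - math.exp(2 * r * t)) for t in (0.0, 0.5, 1.0)]
```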
Example 3.5.2: When complex-valued roots occur, the ODE ay'' + by' + cy = 0 has two real-valued solutions y1 = e^{rt}cos ωt, y2 = e^{rt}sin ωt (r the real part of the roots). Use the Wronskian to show that they are LI.
Answer:
W[y1, y2](t) = y1y2' − y1'y2 = e^{rt}cos ωt[re^{rt}sin ωt + ωe^{rt}cos ωt] − e^{rt}sin ωt[re^{rt}cos ωt − ωe^{rt}sin ωt] = ωe^{2rt}[cos²ωt + sin²ωt] = ωe^{2rt} ≠ 0 (for ω ≠ 0).
Thus, y1 and y2 are LI.
Wronskian of two solutions of L[y] = 0 can be calculated BEFORE they are solved!
Theorem 3.5.3 (Abel's theorem): If y1(t), y2(t) are two solutions of the ODE
y'' + p(t)y' + q(t)y = 0,
where p(t), q(t) are continuous in an open interval I, then the Wronskian of the two is totally determined by p(t) (even before the solutions are found!):
W[y1, y2](t) = Ce^{−∫p(t)dt},
where C is a constant that depends on y1, y2 but not on t. If p(t) = p0 = const., then W[y1, y2](t) = Ce^{−p0t}. In this case, only two possibilities can occur for two solutions y1, y2 of L[y] = 0 (where L = d²/dt² + p0(d/dt) + q0, with p0, q0 constants): (i) W[y1, y2](t) ≠ 0 for all t (when they are LI); (ii) W[y1, y2](t) = 0 for all t (when they are LD). It is impossible for W[y1, y2](t) to be zero for some values of t but not for others. W[y1, y2](t) ≠ 0 when y1(t), y2(t) are LI and form a fundamental set of solutions.
Proof:
y1 is a solution =⇒ y100 + p(t)y10 + q(t)y1 = 0. (a)
y2 is a solution =⇒ y200 + p(t)y20 + q(t)y2 = 0. (b)
y1 × (b) =⇒ y1 y200 + p(t)y1 y20 + q(t)y1 y2 = 0, (I)
(a) × y2 =⇒ y100 y2 + p(t)y10 y2 + q(t)y1 y2 = 0. (II)
Note that:
W ≡ W[y1, y2](t) = y1y2' − y1'y2,
W' = [y1y2' − y1'y2]' = y1'y2' + y1y2'' − y1''y2 − y1'y2' = y1y2'' − y1''y2.
Subtracting (II) from (I) gives
W' + p(t)W = 0 ⇒ W' = −p(t)W ⇒ dW/W = −p(t)dt
⇒ ∫dW/W = −∫p(t)dt ⇒ ln W = −∫p(t)dt + C1 ⇒ W = Ce^{−∫p(t)dt}.
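Abel's theorem can be illustrated on a concrete constant-coefficient case (y'' + 3y' + 2y = 0 is an assumed example, not from the notes): the combination W(t)·e^{∫p dt} should be constant in t.

```python
import math

# For y'' + 3y' + 2y = 0 (p(t) = 3): y1 = e^{-t}, y2 = e^{-2t},
# so W = y1 y2' - y1' y2 = -e^{-3t}, i.e. C e^{-3t} with C = -1.
def W(t):
    y1, y2 = math.exp(-t), math.exp(-2 * t)
    return y1 * (-2 * y2) - (-y1) * y2      # y1 y2' - y1' y2

# W(t) * e^{3t} should be the same constant C = -1 at every t.
vals = [W(t) * math.exp(3 * t) for t in (0.0, 0.7, 2.0)]
```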
Example 3.5.3: If p(t) is differentiable and p(t) > 0, show that for two solutions y1(t), y2(t) of the ODE
[p(t)y']' + q(t)y = 0,
the Wronskian is
W[y1, y2](t) = C/p(t).
Answer: Rewrite the ODE in the 'normal' form:
[p(t)y']' + q(t)y = p(t)y'' + p'(t)y' + q(t)y = 0 ⇒ y'' + [p'(t)/p(t)]y' + [q(t)/p(t)]y = 0.
By Abel's theorem,
W[y1, y2](t) = Ce^{−∫[p'(t)/p(t)]dt} = Ce^{−∫dp/p} = Ce^{−ln p} = C/e^{ln p} = C/p.
Theorem 3.5.4 (Uniqueness of solution): Suppose that y1(t), y2(t) are two LI solutions of L[y] = 0 with W[y1, y2](t0) ≠ 0. Then for any initial conditions y(t0) = y0, y'(t0) = y0', there is a unique pair of constants c1, c2 such that y(t) = c1y1(t) + c2y2(t) satisfies them.
Proof: The initial conditions require
c1y1(t0) + c2y2(t0) = y0,  c1y1'(t0) + c2y2'(t0) = y0'.
In matrix form this system reads A~x = ~b, with A = (y1(t0) y2(t0); y1'(t0) y2'(t0)), ~x = (c1, c2)ᵀ, ~b = (y0, y0')ᵀ. Since
det A = W[y1, y2](t0) ≠ 0,
the system has a unique solution ~x = A⁻¹~b.
Example 3.5.4: For each of the following ODEs, find a fundamental set y1(t), y2(t) satisfying y1(0) = 1, y1'(0) = 0 and y2(0) = 0, y2'(0) = 1:
(a) y'' + ω²y = 0,
(b) y'' − ω²y = 0.
Answer:
(a) y'' + ω²y = 0 ⇒ ch. eq. r² + ω² = 0 ⇒ r_{1,2} = ±ωi ⇒ yc(t) = e^{iωt} = cos ωt + i sin ωt.
Therefore,
y1(t) = Re{yc(t)} = cos ωt, y2(t) = Im{yc(t)} = sin ωt
form a fundamental set. It is easy to verify that y1(0) = 1, y2(0) = 0, and y1'(0) = 0, y2'(0) = ω. To make y2'(0) = 1, we choose
y2(t) = ω⁻¹ sin ωt,
while leaving y1(t) as expressed above.
(b) y'' − ω²y = 0 ⇒ ch. eq. r² − ω² = 0 ⇒ r_{1,2} = ±ω, so y_i(t) = e^{ωt} and y_ii(t) = e^{−ωt} form a fundamental set. But y_i(0) = y_ii(0) = 1, y_i'(0) = −y_ii'(0) = ω; they do not satisfy the initial conditions listed above. Instead, consider the hyperbolic trig functions
cosh ωt = (e^{ωt} + e^{−ωt})/2, sinh ωt = (e^{ωt} − e^{−ωt})/2.
Notice that both are linear combinations of y_i(t) and y_ii(t). We notice the striking similarity between these definitions and those of sine and cosine:
sin ωt = (e^{iωt} − e^{−iωt})/(2i), cos ωt = (e^{iωt} + e^{−iωt})/2.
Figure 15: Graphs of sinh x = (eˣ − e⁻ˣ)/2 (purple) and cosh x = (eˣ + e⁻ˣ)/2 (green).
Now,
y1 (t) = cosh ωt, y2 (t) = sinh ωt
also form a fundamental set since both are also solutions of the ODE based on the Principle
of Superposition.
Catenary: The graph of cosh(ax) is often referred to as a catenary: the curve that an idealized hanging chain or cable assumes under its own weight when supported only at its ends. To learn more about the catenary, see http://en.wikipedia.org/wiki/Catenary.
More similarities exist between hyperbolic trig and trig functions (not a complete list). For a fundamental set of y'' − ω²y = 0 that satisfies the above listed conditions, we choose
y1(t) = cosh ωt, y2(t) = ω⁻¹ sinh ωt.
It is easy to check that y1(0) = 1, y2(0) = 0 and y1'(0) = 0, y2'(0) = 1.
Example 3.5.5: Some advantages of using the hyperbolic trig functions as a fundamental set. For the ODE
4y'' − y = 0:
(a) Find the general solution expressed in terms of exponential functions.
(b) Find the general solution expressed in terms of hyperbolic trig functions obeying the conditions in Example 3.5.4.
(c) In each case, determine the constants by using the ICs: y(0) = 1, y'(0) = −1.
Answer: (a) The characteristic equation 4r² − 1 = 0 gives r = ±1/2, so y(t) = c1e^{t/2} + c2e^{−t/2}. The ICs give c1 + c2 = 1 and (c1 − c2)/2 = −1, so c1 = −1/2, c2 = 3/2:
y(t) = −(1/2)e^{t/2} + (3/2)e^{−t/2}.   (A)
(b) With ω = 1/2, take y1(t) = cosh(t/2), y2(t) = 2 sinh(t/2), and write y(t) = c1y1(t) + c2y2(t). Then
y(0) = 1 = c1, y'(0) = −1 = c2,
so
y(t) = cosh(t/2) − 2 sinh(t/2).   (B)
It is easy to verify with a little bit of algebra that (A) and (B) are identical but expressed in different forms.
Example 3.5.6: Solve the following ODEs with initial conditions y(t0) = A, y'(t0) = B:
(a) y'' + ω²y = 0,
(b) y'' − ω²y = 0.
Answer: (a) Using the fundamental set cos ω(t − t0), ω⁻¹ sin ω(t − t0),
y(t) = A cos ω(t − t0) + Bω⁻¹ sin ω(t − t0).
(b) Similarly,
y(t) = A cosh ω(t − t0) + Bω⁻¹ sinh ω(t − t0).
3.6 Nonhomogeneous, 2nd-order, linear ODEs
A model of suspension systems: When a car runs on bumpy roads, the suspension system
undergoes a sustained oscillatory force. To model this situation, we consider a modified version of
our quarter car model. See Fig. 17. For this system, the ODE becomes
my'' + γy' + ky = Ff cos(ωf t),   (61)
where Ff is the amplitude of the external forcing, ωf is the forcing frequency. Both Ff , ωf are
supposed to be known constants.
Figure 17: Periodically forced spring-mass-damper system: a model of forced suspension systems.
An RLC circuit powered by an alternating voltage source: When a closed RLC circuit is powered by an alternating voltage source (see Fig. 18), the ODE becomes
LQ'' + RQ' + Q/C = Ff cos(ωf t),   (62)
where Ff is the amplitude of the external forcing and ωf is the forcing frequency. Both Ff, ωf are supposed to be known constants.
Figure 18: RLC circuit driven by an alternating voltage E(t) = Ff cos(ωf t).
Question: How do we solve these nonhomogeneous ODEs?
Theorem 3.6.1: If Y1 and Y2 are solutions of the nonhomogeneous ODE L[y] = g(t), then their difference Y1 − Y2 is a solution of the homogeneous ODE L[y] = 0, i.e.
Y1 − Y2 = c1y1 + c2y2
for some constants c1, c2, where y1, y2 form a fundamental set of L[y] = 0.
Proof: L[Y1 − Y2] = L[Y1] − L[Y2] = g(t) − g(t) = 0. This means that Y1 − Y2 is a solution of L[y] = 0, and any solution of L[y] = 0 can be expressed as c1y1 + c2y2.
Theorem 3.6.2 Solution structure of L[y] = g(t): The general solution of L[y] = g(t) can be expressed in the form
y(t) = yh(t) + yp(t) = c1y1(t) + c2y2(t) + yp(t).
Remark: The general solution of L[y] = g(t) is the sum of the general solution yh(t) of the homogeneous ODE L[y] = 0 and one particular solution yp(t) of L[y] = g(t). We already know how to solve for yh(t).
Question: How do we find one yp(t)? (It usually takes more time and effort to solve than yh(t)!)
3.6.1 Method 1: Undetermined coefficients
Basic idea: Based on the nature of g(t), we generate an educated guess of yp(t), leaving some constants to be determined by plugging it into the ODE.
Example 3.6.1: Find a particular solution of
2y'' + 3y' + y = t².
Answer: For this example, g(t) = t2 is a 2nd degree polynomial. It is natural to assume that yp (t)
is also a polynomial of degree 2:
yp (t) = At2 + Bt + C,
where A, B, C are coefficients to be determined.
Notice that
yp'(t) = 2At + B, yp''(t) = 2A.
Substituting yp'', yp', yp into the ODE, we obtain
2(2A) + 3(2At + B) + (At² + Bt + C) = t² ⇒ At² + (6A + B)t + (4A + 3B + C) = t².
Matching coefficients:
A = 1;
6A + B = 0 ⇒ B = −6A = −6;
4A + 3B + C = 0 ⇒ C = −4A − 3B = −4 + 18 = 14.
Therefore,
yp (t) = t2 − 6t + 14,
is one particular solution.
Example 3.6.2: Find a particular solution of
2y'' + 3y' + y = 3 sin t.
Answer: An educated guess is yp(t) = A cos t + B sin t. Note that
lhs = 2(−A cos t − B sin t) + 3(−A sin t + B cos t) + A cos t + B sin t = (3B − A) cos t − (B + 3A) sin t,
rhs = 3 sin t.
Matching coefficients: 3B − A = 0 and −(B + 3A) = 3, which give A = −9/10, B = −3/10. Thus,
yp(t) = −(3/10)[3 cos t + sin t].
Example 3.6.3: Find a particular solution of
2y'' + 3y' + y = t² + 3 sin t.
Answer: Observation:
g(t) = g1 (t) + g2 (t),
with g1 (t) = t2 and g2 (t) = 3 sin t, each was solved separately in the previous two examples.
Theorem 3.6.3 Principle of Superposition: For the nonhomogeneous ODE L[y] = g(t): if
g(t) = g1 (t) + g2 (t),
and that
L[yp1 ] = g1 (t), L[yp2 ] = g2 (t),
then
yp (t) = yp1 (t) + yp2 (t)
is a particular solution of L[y] = g(t).
Back to Example 3.6.3: Based on the Principle of Superposition and results from the two previous
examples, a particular solution is:
yp(t) = yp1(t) + yp2(t) = t² − 6t + 14 − (3/10)[3 cos t + sin t].
Notice that
yh(t) = c1e^{−t/2} + c2e^{−t} → 0 as t → ∞.
Therefore,
y(t) = yh(t) + yp(t) → yp(t) = t² − 6t + 14 − (3/10)[3 cos t + sin t] as t → ∞.
yp cannot be identical to either solution in the fundamental set. Example: find a particular solution of
2y'' + 3y' + y = e^{−t}.
A natural first guess would be
yp(t) = Ae^{−t}.
It is not OK here, because Ae^{−t} is already contained in the second term of yh(t). Instead, try
yp(t) = Ate^{−t}.
Notice that
yp'(t) = Ae^{−t} − Ate^{−t} = A(1 − t)e^{−t},
yp''(t) = −Ae^{−t} − A(1 − t)e^{−t} = A(−2 + t)e^{−t}.
Then
lhs = 2A(−2 + t)e^{−t} + 3A(1 − t)e^{−t} + Ate^{−t} = −Ae^{−t};
setting this equal to rhs = e^{−t} gives A = −1, so yp(t) = −te^{−t}.
A list of frequently encountered g(t) and the educated guess of yp(t):
Table 1: Educated guess of yp(t) for L[y] = g(t). Here, s = 0, 1, or 2 is the smallest integer that ensures no term in yp(t) is identical to a term of yh(t).
g(t): Pn(t)e^{αt}cos βt or Pn(t)e^{αt}sin βt
yp(t): t^s[(A0tⁿ + A1t^{n−1} + ··· + A_{n−1}t + A_n) cos βt + (B0tⁿ + B1t^{n−1} + ··· + B_{n−1}t + B_n) sin βt]e^{αt}
Example 3.6.4: Use the table above to determine the appropriate guess of yp(t) for the ODE
y'' + 2y' + y = t²e^{−t}.
Answer: Here g(t) = P2(t)e^{−t} (α = −1, β = 0). Since r = −1 is a double root of the characteristic equation, s = 2, and the guess is yp(t) = t²(A0t² + A1t + A2)e^{−t}.
Pros and cons of the method of undetermined coefficients:
• Does not apply to linear ODEs with varying coefficients y 00 + p(t)y 0 + q(t)y = g(t).
3.6.2 Method 2: Variation of parameters (Lagrange)
Basic idea: If y1 (t), y2 (t) form a fundamental set of the homogeneous ODE L[y] = 0, then
Lagrange suggested that a particular solution for the nonhomogeneous ODE L[y] = g(t) can always
be expressed as
yp (t) = u1 (t)y1 (t) + u2 (t)y2 (t)
which is obtained by allowing the parameters c1 , c2 in yh (t) to vary as functions of t.
Theorem 3.6.4: Consider the nonhomogeneous ODE
y'' + p(t)y' + q(t)y = g(t),
where p(t), q(t), g(t) are continuous in an open interval I. If y1(t), y2(t) form a fundamental set of the homogeneous ODE L[y] = 0, then one particular solution of L[y] = g(t) is
yp(t) = u1(t)y1(t) + u2(t)y2(t),
where
u1(t) = −∫ [y2(t)g(t)/W[y1, y2](t)] dt,  u2(t) = ∫ [y1(t)g(t)/W[y1, y2](t)] dt
are calculated by excluding the integration constant. The general solution of L[y] = g(t) is
y(t) = yh(t) + yp(t) = c1y1(t) + c2y2(t) + u1(t)y1(t) + u2(t)y2(t) = [c1 + u1(t)]y1(t) + [c2 + u2(t)]y2(t).
Proof: (Not required!) The goal is to solve for u1(t) and u2(t).
yp = u1y1 + u2y2 ⇒
yp' = u1y1' + u2y2' + u1'y1 + u2'y2 ⇒
yp' = u1y1' + u2y2',
if we force the following constraint on u1(t) and u2(t):
u1'y1 + u2'y2 = 0.
Now,
yp'' = u1y1'' + u2y2'' + u1'y1' + u2'y2'.
Substituting yp'', yp', yp into L[y] = g(t), reorganizing and simplifying the terms, and noticing that L[y1] = 0 and L[y2] = 0, we obtain
u1'y1' + u2'y2' = g(t).
Together with the constraint u1'y1 + u2'y2 = 0, this is a linear system A~x = ~b for the unknowns u1', u2', where
A = (y1 y2; y1' y2'), ~x = (u1', u2')ᵀ, ~b = (0, g(t))ᵀ.
Important note: Since the formula given in Theorem 3.6.4 was derived using the ODE of the
“standard form”
y 00 + p(t)y 0 + q(t)y = g(t),
it is important to change your equation into this standard form before using this formula. Actually, it is always a good habit to change your ODE into this form before you try to solve it.
Since
det A = W[y1, y2] ≠ 0,
the system can be solved uniquely: u1' = −y2g/W[y1, y2] and u2' = y1g/W[y1, y2]. Integrating gives the formulas above. Since we only need one particular solution, we do not need to include the integration constant in these integrals.
Back to Example 3.6.4: Find the general solution of
y'' + 2y' + y = t²e^{−t}.
Answer: The fundamental set is y1(t) = e^{−t}, y2(t) = te^{−t} (repeated root r = −1), with Wronskian W[y1, y2] = e^{−2t}. Then
u1(t) = −∫ (y2g/W[y1, y2]) dt = −∫ [te^{−t}·t²e^{−t}/e^{−2t}] dt = −∫ t³ dt = −t⁴/4,
u2(t) = ∫ (y1g/W[y1, y2]) dt = ∫ [e^{−t}·t²e^{−t}/e^{−2t}] dt = ∫ t² dt = t³/3.
Thus,
yp(t) = u1y1 + u2y2 = −(t⁴/4)e^{−t} + (t³/3)(te^{−t}) = (1/3 − 1/4)t⁴e^{−t} = (1/12)t⁴e^{−t}.
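The particular solution can be verified by substituting it back into the ODE (derivatives of yp computed by hand):

```python
import math

# Check that yp(t) = t^4 e^{-t} / 12 satisfies y'' + 2y' + y = t^2 e^{-t}.
def yp(t):  return t**4 * math.exp(-t) / 12
def yp1(t): return (4 * t**3 - t**4) * math.exp(-t) / 12
def yp2(t): return (12 * t**2 - 8 * t**3 + t**4) * math.exp(-t) / 12

t0 = 1.7
residual = yp2(t0) + 2 * yp1(t0) + yp(t0) - t0**2 * math.exp(-t0)
```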
Example 3.6.5: Find the general solution of the following ODE (taken from a 2000 final exam):
y'' + 2y' + y = √t e^{−t}.
Answer: Because g(t) = √t e^{−t} is not included in Table 1, we do not have an educated guess for yp(t); we use the variation of parameters method instead. The fundamental set is y1(t) = e^{−t}, y2(t) = te^{−t}, and the Wronskian is W[y1, y2] = e^{−2t}. Thus,
u1(t) = −∫ (y2g/W[y1, y2]) dt = −∫ [te^{−t}·√t e^{−t}/e^{−2t}] dt = −∫ t^{3/2} dt = −(2/5)t^{5/2},
u2(t) = ∫ (y1g/W[y1, y2]) dt = ∫ [e^{−t}·√t e^{−t}/e^{−2t}] dt = ∫ t^{1/2} dt = (2/3)t^{3/2}.
Thus,
yp(t) = u1y1 + u2y2 = −(2/5)t^{5/2}e^{−t} + (2/3)t^{3/2}(te^{−t}) = (2/3 − 2/5)t^{5/2}e^{−t} = (4/15)t^{5/2}e^{−t}.
Example 3.6.6: Given that y1(t) = t, y2(t) = te^t form a fundamental set of the homogeneous ODE corresponding to a nonhomogeneous ODE whose standard form has g(t) = 2t, find the general solution of the nonhomogeneous ODE. (Note that for ODEs with non-constant coefficients, we do not yet know how to solve for y1(t), y2(t).)
Answer: The Wronskian is
W[y1, y2] = t(1 + t)e^t − te^t = t²e^t.
Now,
u1 = −∫ (y2g/W[y1, y2]) dt = −∫ [te^t(2t)/(t²e^t)] dt = −∫ 2 dt = −2t,
u2 = ∫ (y1g/W[y1, y2]) dt = ∫ [t(2t)/(t²e^t)] dt = ∫ 2e^{−t} dt = −2e^{−t}.
Thus,
yp(t) = u1y1 + u2y2 = (−2t)t + (−2e^{−t})(te^t) = −2t² − 2t.
Therefore,
y(t) = yh(t) + yp(t) = c1t + c2te^t − 2t² − 2t = (c1 − 2)t + c2te^t − 2t² = c1t + c2te^t − 2t²
(renaming c1 − 2 as c1).
Example 3.6.7: For the nonhomogeneous ODE
t²y'' − 2ty' + 2y = 4t² (t > 0),
find its general solution, given that y1(t) = t is one solution of the corresponding homogeneous ODE
t²y'' − 2ty' + 2y = 0.
Answer: To find both yh(t) and yp(t), we need a second LI solution y2(t) of the homogeneous equation L[y] = 0. In standard form the ODE reads y'' − (2/t)y' + (2/t²)y = 4, so p(t) = −2/t and g(t) = 4.
Assuming that y2(t) is LI of y1(t), we can use Abel's theorem to calculate the Wronskian (picking the constant C = 1):
W[y1, y2] = y1y2' − y1'y2 = e^{−∫p(t)dt} = e^{∫(2/t)dt} = e^{2 ln t} = t².
With y1 = t this reads ty2' − y2 = t², i.e. y2' − (1/t)y2 = t, a first-order linear ODE with p(t) = −1/t and q(t) = t. The integrating factor is e^{∫p(t)dt} = e^{−ln t} = 1/t, and R(t) = ∫ t·(1/t) dt = ∫ dt = t. Thus,
y2(t) = e^{−∫p(t)dt}[R(t) + C] = t[t + C] = t² (choosing C = 0).
Therefore,
yh(t) = c1y1(t) + c2y2(t) = c1t + c2t².
The Wronskian obtained by using Abel's theorem is indeed the one we are looking for:
W[y1, y2] = y1y2' − y1'y2 = t(t²)' − (t)'(t²) = 2t² − t² = t² (> 0, since t > 0).
Then yp(t) = u1(t)y1(t) + u2(t)y2(t), where
u1(t) = −∫ (gy2/W[y1, y2]) dt = −∫ (4t²/t²) dt = −∫ 4 dt = −4t,
u2(t) = ∫ (gy1/W[y1, y2]) dt = ∫ (4t/t²) dt = ∫ (4/t) dt = 4 ln t = ln t⁴.
Thus,
yp(t) = u1(t)y1(t) + u2(t)y2(t) = −4t² + t² ln t⁴.
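Substituting yp back into the reconstructed ODE t²y'' − 2ty' + 2y = 4t² confirms it (derivatives computed by hand):

```python
import math

# Check that yp(t) = -4t^2 + t^2 ln(t^4) satisfies t^2 y'' - 2t y' + 2y = 4t^2.
def yp(t):  return -4 * t**2 + 4 * t**2 * math.log(t)   # ln t^4 = 4 ln t
def yp1(t): return -4 * t + 8 * t * math.log(t)         # d/dt of the above
def yp2(t): return 4 + 8 * math.log(t)                  # second derivative

t0 = 2.3
residual = t0**2 * yp2(t0) - 2 * t0 * yp1(t0) + 2 * yp(t0) - 4 * t0**2
```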
The general solution is therefore
y(t) = yh(t) + yp(t) = c1t + c2t² − 4t² + t² ln t⁴ = c1t + c2t² + t² ln t⁴ (absorbing −4t² into c2t²).
Alternative for finding y2(t): reduction of order. Write y2 = u(t)y1(t); substituting into the homogeneous ODE and separating variables gives an equation for u.
Substituting p(t) = −2/t and y1(t) = t into the formula,
ln u' = −∫ (p + 2y1'/y1) dt = −∫ (−2/t + 2/t) dt = −∫ 0 dt = C1 ⇒ u' = e^{C1} = C2,
so
u(t) = ∫ C2 dt = C2t + C3 = t (choosing C2 = 1, C3 = 0).
Therefore,
y2(t) = u(t)y1(t) = t².
3.7 Applications to forced vibrations: beats and resonance
3.7.1 Forced vibration in the absence of damping: γ = 0
The model reduces to y'' + ω0²y = F cos(ωf t), with y(0) = y'(0) = 0.
Case I: ωf ≠ ω0. We know
yh(t) = c1 cos(ω0t) + c2 sin(ω0t).
Try yp(t) = B cos(ωf t). Substituting,
−Bωf² cos(ωf t) + Bω0² cos(ωf t) = F cos(ωf t) ⇒ B = F/(ω0² − ωf²).
y(0) = 0 = c1 + B ⇒ c1 = −B; y'(0) = 0 = ω0c2 ⇒ c2 = 0.
Thus,
y(t) = [F/(ω0² − ωf²)][cos(ωf t) − cos(ω0t)].
Question: How to combine the sum of two trig functions with different frequencies into a product
between two trig functions?
Let ω± = (ω0 ± ωf)/2; then ω0 = ω+ + ω−, ωf = ω+ − ω−.
Using the trig identities
cos(α ∓ β) = cos α cos β ± sin α sin β,
we obtain
cos(ωf t) = cos((ω+ − ω−)t) = cos ω+t cos ω−t + sin ω+t sin ω−t,
cos(ω0t) = cos((ω+ + ω−)t) = cos ω+t cos ω−t − sin ω+t sin ω−t.
Therefore,
y(t) = [F/(ω0² − ωf²)][cos(ωf t) − cos(ω0t)] = [2F/(ω0² − ωf²)] sin ω+t sin ω−t = [2F/(ω0² − ωf²)] sin((ω0 + ωf)t/2) sin((ω0 − ωf)t/2).
Letting ω̄ = (ω0 + ωf)/2 denote the average of ω0 and ωf, we can write the solution in the form
y(t) = A(t) sin(ω̄t), where A(t) = [2F/(ω0² − ωf²)] sin((ω0 − ωf)t/2).
A(t) is the periodically modulated amplitude; thus y(t) shows the phenomenon of 'beats' with a beat frequency
ωbeat = (ω0 − ωf)/2.
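The product identity behind the beats can be spot-checked numerically (illustrative frequencies):

```python
import math

# cos(wf t) - cos(w0 t) = 2 sin(((w0+wf)/2) t) sin(((w0-wf)/2) t)
w0, wf = 2.0, 1.8    # close frequencies -> slow beat envelope

def lhs(t): return math.cos(wf * t) - math.cos(w0 * t)
def rhs(t): return 2 * math.sin((w0 + wf) / 2 * t) * math.sin((w0 - wf) / 2 * t)

max_err = max(abs(lhs(t) - rhs(t)) for t in [0.1 * n for n in range(200)])
```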
Case II: ωf = ω0 = ω. Again,
yh(t) = c1 cos(ωt) + c2 sin(ωt),
but now the guess yp = B cos(ωt) fails because it already solves the homogeneous equation, so we take
yp(t) = Bt sin(ωt).
Notice that
yp''(t) = B(2ω cos ωt − ω²t sin ωt),
so substituting into y'' + ω²y = F cos(ωt) we obtain
B = F/(2ω), ⇒ yp(t) = [F/(2ω)]t sin(ωt).
Therefore, with the zero initial conditions,
y(t) = [F/(2ω)]t sin(ωt).
Resonance: The amplitude of this vibration increases to infinity as time approaches infinity.
3.7.2 Forced vibration in the presence of damping: γ ≠ 0
Dividing my'' + γy' + ky = Ff cos(ωf t) by m gives
y'' + 2Γy' + ω0²y = F cos(ωf t),
where Γ = γ/(2m), ω0 = √(k/m), and F = Ff/m.
Characteristic equation:
r² + 2Γr + ω0² = 0.
In the under damped case (Γ < ω0) the roots are r, r̄ = −Γ ± ωi, where
ω = √(ω0² − Γ²) = ω0√(1 − Γ²/ω0²).
Thus,
yh(t) = Ae^{−Γt}cos(ωt − φ) → 0 as t → ∞!
Therefore,
y(t) = yh(t) + yp(t) → yp(t) as t → ∞.
Since the nonhomogeneous term g(t) = F cos(ωf t) contains no exponential factor e^{−Γt}, we can safely assume
yp(t) = C cos(ωf t) + D sin(ωf t) = A cos(ωf t − φ),
where A = √(C² + D²), φ = tan⁻¹(D/C).
Plugging yp(t) into the ODE, after lengthy calculations we obtain the amplitude
A(ωf²) = Ff / √(γ²ωf² + m²(ω0² − ωf²)²).
Figure 21: Resonant amplitude A(ωf²) (expressed as multiples of Ff) as a function of the forcing frequency ωf², for γ = 0 and γ1 > γ2 > γ3. The smaller the value of γ, the steeper the curve. For a given γ, the maximum amplitude is achieved when ωf² = ωmax² = ω0² − γ²/(2m²).
In Fig. 21, the amplitude of the vibration A(ωf²) is plotted as a function of ωf² for four different values of γ. The smaller the value of γ, the steeper the curve. The location ωmax² where the maximum amplitude is achieved also shifts to the right as the value of γ decreases.
By requiring
A' = dA/dωf² = −(Ff/2) · [γ² − 2m²(ω0² − ωf²)] / [γ²ωf² + m²(ω0² − ωf²)²]^{3/2} = 0,
we obtain
ωf² = ωmax² = ω0² − γ²/(2m²) ≈ ω0² (if γ ≪ 1!),
and
Amax = A(ωmax²) = Ff / (γ √(ω0² − γ²/(4m²))).
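The location of the peak can be checked numerically against the formula for ωmax². A sketch with hypothetical parameter values (m, γ, ω0, Ff below are illustrative choices, not values from the notes):

```python
import math

# Illustrative parameters (assumptions for this check)
m, gamma, w0, Ff = 1.0, 0.4, 2.0, 1.0

def amplitude(wf2):
    # A(wf^2) = Ff / sqrt(gamma^2*wf^2 + m^2*(w0^2 - wf^2)^2)
    return Ff / math.sqrt(gamma**2 * wf2 + m**2 * (w0**2 - wf2)**2)

# Scan wf^2 on a fine grid and locate the maximum of A
grid = [i * 1e-4 for i in range(1, 80000)]
best = max(grid, key=amplitude)
predicted = w0**2 - gamma**2 / (2 * m**2)   # = omega_max^2
assert abs(best - predicted) < 1e-3

# The peak value should match Amax = Ff / (gamma*sqrt(w0^2 - gamma^2/(4 m^2)))
amax = Ff / (gamma * math.sqrt(w0**2 - gamma**2 / (4 * m**2)))
assert abs(amplitude(predicted) - amax) < 1e-12
```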
4 Systems of linear first-order ODEs
In this lecture, we study systems of linear, first-order ODEs with constant coefficients of the form
y1' = a11 y1 + a12 y2 + b1(t),
y2' = a21 y1 + a22 y2 + b2(t),     (65)
where y1 (t), y2 (t) are the two unknown functions, b1 (t), b2 (t) are known functions, aij (i, j =
1, 2) are constants. As long as a12 and a21 are not both zero, the two unknown functions are
interdependent and cannot be solved independently.
Remarks:
(1) Setting y1 = y, y2 = y', the 2nd-order ODE ay'' + by' + cy = g(t) becomes y1' = y2, y2' = −(c/a)y1 − (b/a)y2 + g(t)/a, which is a special case of eq.(65) where a11 = 0 and b1(t) = 0. Solving eq.(65) automatically solves ay'' + by' + cy = g(t) as a special case.
(2) An n-th order linear ODE can be reduced to a system of n first-order linear ODEs.
(3) Although many results obtained here can apply to systems of n (n > 2) first order ODEs, we
shall focus mostly on a system of two ODEs as given in eq.(65).
Definition of dy/dt and ∫y dt: To express eq.(65) in matrix form, we introduce the derivative and integral of a vector/matrix of functions. Let
y(t) = [y1(t); y2(t)].
Then
y' = dy/dt = [y1'(t); y2'(t)],
and
∫ y dt = [∫ y1(t) dt; ∫ y2(t) dt].
In these notes, we use the bold lower case letters for vectors and capital letters for matrices.
With this notation, eq.(65) simplifies to
y' = Ay + b,     (66)
which is a nonhomogeneous system of linear, 1st-order ODEs with constant coefficients, where
A = [[a11, a12], [a21, a22]], b = [b1(t); b2(t)].
When b(t) = 0, it reduces to the homogeneous system
y' = Ay.
4.1 Solving the homogeneous system y' = Ay
Similar to the fact that the solution to the scalar ODE y' = ay is y(t) = e^{at} y(0), the solution to y' = Ay can be expressed with the matrix exponential e^{At} (introduced in Section 4.5).
Theorem 5.1.1 (Principle of Superposition): If y1(t), y2(t) are two solutions of y' = Ay, where A is n × n (n ≥ 2), then
y(t) = c1 y1(t) + c2 y2(t)
is also a solution for any constants c1, c2. If y1(t), y2(t) are linearly independent, then it is the general solution for the case when n = 2.
An educated guess of the solution: Based on a very similar argument as was given in Section
3.2, we look for solutions of the form
y = [y1(t); y2(t)] = [v1 e^{λt}; v2 e^{λt}] = [v1; v2] e^{λt} = v e^{λt},     (67)
where v1 , v2 are t-independent constants (i.e., v is a constant vector). Notice that both λ and v are
to be determined.
Substituting (67) into y' = Ay gives λv e^{λt} = Av e^{λt}; dividing by e^{λt} ≠ 0,
Av = λv,     (68)
which defines the eigenvalue λ and eigenvector v of the matrix A.
Theorem 5.1.2: For a given system y0 = Ay, where A is 2 × 2. If λ1 , λ2 are the two eigenvalues
of A with corresponding eigenvectors v1 , v2 , then
y1 = v1 eλ1 t , y2 = v2 eλ2 t ,
are both solutions of the system. If λ1 6= λ2 , then y1 , y2 are linearly independent and form a
fundamental set. The general solution is
y = c1 y1 + c2 y2 = c1 v1 eλ1 t + c2 v2 eλ2 t .
Proof: It follows from the argument outlined above and the fact about eigenvalues and eigenvectors
in linear algebra. The linear independence shall be verified with the introduction of the Wronskian.
Remark: Solving the system y0 = Ay is reduced to solving Av = λv for the eigenvalues and
eigenvectors of A. This result holds for a system of n (n > 2) equations where the matrix A is
n × n.
Example 5.1.1: Find the general solution of the ODE y'' + 3y' + 2y = 0 by solving the corresponding linear system.
Answer: Let z = y' and turn the ODE into the following system:
y' = z,
z' = −2y − 3z,
or in matrix form
[y'; z'] = [[0, 1], [−2, −3]] [y; z], ⟹ y' = Ay, where A = [[0, 1], [−2, −3]].
Once the eigenvalues λ1, λ2 and eigenvectors v1, v2 of A are found, the general solution of the system is
y = c1 v1 e^{λ1 t} + c2 v2 e^{λ2 t}.
Review of matrix algebra: finding the eigenvalues and eigenvectors of a matrix A = [[a11, a12], [a21, a22]].
(1) Starting with the equation that defines eigenvectors and eigenvalues
Av = λv =⇒ Av − λv = 0 =⇒ (A − λI)v = 0.
Remarks:
(0) Basically both the eigenvalues and eigenvectors are solved by solving this algebraic equa-
tion for non-zero v.
(i) An eigenvector v is a non-zero vector whose direction is invariant under the action (multiplication) of the matrix A: Av ∈ span{v}.
(ii) An eigenvector v represents a whole line in the direction specified by v . Therefore, cv
for any constant c 6= 0 represents the same eigenvector as v.
(iii) The eigenvalue λ corresponding to the eigenvector v is a measure of the factor by which
v is changed by the multiplication of A.
(2) Eigenvectors must be non-zero (while eigenvalues can be zero). For (A − λI)v = 0 to have
non-zero solutions,
det(A − λI) = 0 ⟹
|a11 − λ, a12; a21, a22 − λ| = (a11 − λ)(a22 − λ) − a12 a21 = λ² − (a11 + a22)λ + a11 a22 − a12 a21 = 0 ⟹
λ² − Tr λ + Det = 0 (characteristic equation!),
where Tr = tr A = a11 + a22 is the trace (defined as the sum of the diagonal entries of A), and Det = det A = a11 a22 − a12 a21.
(3) After solving λ² − Tr λ + Det = 0 for the two eigenvalues λ1, λ2, the corresponding eigenspaces (nullspaces) give the corresponding eigenvectors.
This is because solving (A−λI)v = 0 is finding any non-zero vector v such that (A−λI)v = 0.
This is exactly the same problem as finding the kernel of (A − λI). We shall demonstrate
below how ker(A − λI) is defined and calculated!
(4) An n × n square matrix can be expressed as a row of n column vectors as well as a column of
n row vectors. Rules of matrix multiplication hold for such expressions.
(i) Expressed as a row of column vectors:
A = [[1, 1], [0, 1]] = [c1 c2], where c1 = [1; 0], c2 = [1; 1].
Thus,
A[2; 3] = [c1 c2][2; 3] = 2c1 + 3c2 = 2[1; 0] + 3[1; 1] = [5; 3].
Let us verify the result using ordinary matrix multiplication:
A[2; 3] = [[1, 1], [0, 1]][2; 3] = [(1)(2) + (1)(3); (0)(2) + (1)(3)] = [5; 3].
(5) The kernel of a matrix B is defined as the collection of all vectors k such that Bk = 0. It is also called the nullspace of B, denoted sometimes by N(B) and sometimes by ker(B).
(6) Theorem Rev1: The kernel of a matrix B is spanned by the linear relations between its column vectors.
Example Rev1:
ker [[1, 2], [3, 4]] = {[0; 0]} = {0} (only the trivial relation 0c1 + 0c2 = 0: no linear relation, columns are LI, matrix is invertible!).
Remark: This situation never happens in calculating eigenvectors because for the eigenvec-
tor problem (A − λI)v = 0, det(A − λI) = 0 which implies that the matrix A − λI can’t
be invertible! Whenever you see this in your eigenvector calculation, something must be wrong!
Example Rev2:
ker [[1, 2], [2, 4]] = span{[2; −1]} (from the relation 2c1 − c2 = 0; any nonzero multiple of [2; −1] works. 1D!).
Example Rev3:
ker [[1, 0, 0], [0, 0, 0], [1, 0, 0]] = span{[0; 1; 0], [0; 0; 1]} (from the relations 0c1 + 1c2 + 0c3 = 0 and 0c1 + 0c2 + 1c3 = 0. 2D! Infinitely many choices of basis exist!).
Back to Example 5.1.1: Since A = [[0, 1], [−2, −3]], we find Tr = −3, Det = 2. The ch. eq. is
λ² + 3λ + 2 = (λ + 1)(λ + 2) = 0 ⟹ λ1 = −1, λ2 = −2,
with eigenvectors v1 = ker(A + I) = [1; −1] and v2 = ker(A + 2I) = [1; −2]. Thus
y(t) = c1 [1; −1] e^{−t} + c2 [1; −2] e^{−2t}.
The first row gives the solution to the original second-order ODE, while the second row gives
z = y 0 (t). Thus,
y(t) = c1 e−t + c2 e−2t .
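The recipe just used (characteristic equation from Tr and Det, then a kernel vector) can be sketched in code. This is an illustrative helper, assuming real eigenvalues and that the matrix is not diagonal (so one of the off-diagonal entries is nonzero):

```python
import math

def eig2(a11, a12, a21, a22):
    """Eigenvalues/eigenvectors of a 2x2 matrix from lambda^2 - Tr*lambda + Det = 0.
    Sketch only: assumes Tr^2 >= 4*Det (real eigenvalues) and a12 != 0 or a21 != 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = math.sqrt(tr * tr - 4 * det)
    lams = [(tr + disc) / 2, (tr - disc) / 2]
    out = []
    for lam in lams:
        # Any nonzero vector in ker(A - lam*I); (a12, lam - a11) works when a12 != 0
        v = (a12, lam - a11) if a12 != 0 else (lam - a22, a21)
        out.append((lam, v))
    return out

# Example 5.1.1: A = [[0, 1], [-2, -3]] has eigenvalues -1 and -2
(l1, v1), (l2, v2) = eig2(0, 1, -2, -3)
assert {round(l1), round(l2)} == {-1, -2}
# Check A v1 = l1 v1 componentwise
assert abs(0 * v1[0] + 1 * v1[1] - l1 * v1[0]) < 1e-12
assert abs(-2 * v1[0] - 3 * v1[1] - l1 * v1[1]) < 1e-12
```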
Example 5.1.3: Find the general solution of the linear system y' = Ay where A = [[1, 1], [4, 1]].
4.2 Phase space, vector field, solution trajectories
y' = Ay, or
[y1'(t); y2'(t)] = [[a11, a12], [a21, a22]] [y1(t); y2(t)].
If the coefficients aij , (i, j = 1, 2) are all constants, then the system is autonomous. Therefore, the
vector field is uniquely defined in the state space y1 − y2 plane, also referred to as the phase space.
Example 5.2.1: Turn the 2nd-order ODE for harmonic oscillations y 00 + y = 0 into a system of 1st-
order ODEs. Then, sketch its slope field and trace out one solution starting from y(0) = 2, y 0 (0) = 0.
Answer: Let v(t) = y 0 (t). Then, the ODE is turned into the following system
[y'; v'] = [[0, 1], [−1, 0]] [y; v].
Figure 22: Vector field (blue) of the system y 0 = v, v 0 = −y in the y −v phase space. One trajectory
for y(0) = 2, y 0 (0) = 0 (red) is traced out. It is tangent to the vector field at every point of the
phase space. Variable t is indirectly reflected in the (red) arrow direction traced out by the solution
trajectory. The red dot represents the trivial solution y = v = 0.
Example 5.2.2: Sketch the vector field of the system y' = Ay studied in Example 5.1.3, where A = [[1, 1], [4, 1]]. In this phase space, clearly indicate the invariant sets (i.e. the eigenvector directions) and draw the trajectory flows.
Remember that A has the two eigenvalues λ1 = −1 and λ2 = 3 with corresponding eigenvectors
v1 = [1; −2], v2 = [1; 2].
Figure 23: Vector field of the system given in Example 5.2.2. The two invariant sets v1 = [1 −
2], v2 = [1 2] are clearly boundaries where the trajectories can never cross. They divide the phase
space into 4 distinct areas.
Definition of invariant set: A set of points S in the phase space of the system y0 = Ay is
invariant, if for any initial condition y(0) ∈ S, y(t) stays in S for all t ∈ (−∞, ∞).
Remark: Solution trajectories in phase space can only approach an invariant set but can never cross it, making the invariant sets stand out as important features of the phase diagram of a linear system.
Substituting an IC of the form y(0) = cv1 (a point on the invariant set defined by v1) into the general solution, we obtain (notice that v1, v2 are linearly independent)
c1 = c, c2 = 0, so y(t) = cv1 e^{λ1 t} stays on the invariant set for all t.
Furthermore, this result shows that the flow direction on the invariant set depends on the sign of λ1. If λ1 < 0, y(t) = cv1 e^{λ1 t} → 0, and the time arrow points toward the origin; if λ1 > 0, however, y(t) = cv1 e^{λ1 t} grows without bound along v1, and the time arrow points away from the origin.
Example 5.3.1: Solve the IVP
y' = Ay, y(0) = y0, where A = [[1, 1], [4, 1]],
for the ICs (a) y0 = [−1; 2]; (b) y0 = [2; 4].
Answer: This system has already been solved in Example 5.1.3, where we found that λ1 = −1, λ2 = 3 and that
v1 = [1; −2], v2 = [1; 2].
Now, we need to calculate the inverse of a 2 × 2 matrix.
Remark: Had we paid attention to IC(a), this result would have been expected, since y0 = [−1; 2] = (−1)v1, which means the system started from a point in the invariant set defined by v1. It surely is expected to remain in it for all t. As a matter of fact, c1 = −1 is exactly the factor in the expression of y0 in terms of v1.
Now, before we use IC(b) to solve for c1 and c2, let's find out the relationship between y0 and the eigenvectors. We realize that
y0 = [2; 4] = 2v2.
Thus, we expect that c1 = 0, c2 = 2 for this IC based on what we learned above. Let's verify it using IC(b):
y(0) = c1 [1; −2] + c2 [1; 2] = [[1, 1], [−2, 2]] [c1; c2] = [2; 4] ⟹
[c1; c2] = [[1, 1], [−2, 2]]⁻¹ [2; 4] = (1/4)[[2, −1], [2, 1]] [2; 4] = (1/4)[0; 8] = [0; 2] ⟹ c1 = 0, c2 = 2.
Theorem 5.3.1: If v1 , v2 are eigenvectors of A corresponding to two distinct eigenvalues λ1 6= λ2 ,
then the solution to the IVP
y' = Ay, y(0) = y0,
is
y(t) = m1 v1 eλ1 t + m2 v2 eλ2 t ,
where the constants m1 , m2 are obtained by expressing the IC as a linear combination of v1 , v2 :
y0 = m1 v1 + m2 v2 .
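Theorem 5.3.1 reduces the IVP to a 2×2 linear solve for m1, m2. A sketch using the data of Example 5.3.1 with IC(b) (Cramer's rule is an arbitrary choice of solver here):

```python
import math

def solve_2x2(a, b, c, d, p, q):
    """Solve [[a, b], [c, d]] [m1, m2]^T = [p, q]^T by Cramer's rule."""
    det = a * d - b * c
    return ((p * d - b * q) / det, (a * q - p * c) / det)

# Data from Example 5.3.1
l1, v1 = -1.0, (1.0, -2.0)
l2, v2 = 3.0, (1.0, 2.0)
y0 = (2.0, 4.0)                # = 2*v2, so m1 = 0, m2 = 2 expected

# Express y0 = m1*v1 + m2*v2
m1, m2 = solve_2x2(v1[0], v2[0], v1[1], v2[1], y0[0], y0[1])
assert abs(m1 - 0.0) < 1e-12 and abs(m2 - 2.0) < 1e-12

def y(t):
    """y(t) = m1 v1 e^{l1 t} + m2 v2 e^{l2 t}."""
    e1, e2 = math.exp(l1 * t), math.exp(l2 * t)
    return (m1 * v1[0] * e1 + m2 * v2[0] * e2,
            m1 * v1[1] * e1 + m2 * v2[1] * e2)

# The IC is reproduced at t = 0
assert abs(y(0.0)[0] - y0[0]) < 1e-12 and abs(y(0.0)[1] - y0[1]) < 1e-12
```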
4.4 Complex eigenvalues and repeated eigenvalues
Example 5.4.1: Find a fundamental set of the linear system y' = Ay where A = [[5, −6], [3, −1]].
Answer: Tr = 4, Det = 13, so the ch. eq. is
λ2 − 4λ + 13 = (λ − 2)2 + 9 = 0 =⇒ λ, λ̄ = 2 ± 3i.
For λ = 2 + 3i,
v = ker(A − (2 + 3i)I) = ker [[3 − 3i, −6], [3, −3 − 3i]] = ker [[3 − 3i, −6], [0, 0]] = [1 + i; 1],
where the second matrix follows from the row operation r2 → (−1 + i)r2 + r1, and the kernel vector from the column relation (1 + i)c1 + c2 = 0.
For λ̄ = 2 − 3i, the corresponding eigenvector is the complex conjugate of v:
v̄ = conj([1 + i; 1]) = [1 − i; 1].
Notice that
v = [1 + i; 1] = [1; 1] + i[1; 0] = vr + i vi,
where its real and imaginary parts are
vr = [1; 1], vi = [1; 0].
To find a real-valued fundamental set, we express the complex-valued solution yc in terms of its real and imaginary parts:
yc(t) = [1 + i; 1] e^{(2+3i)t} = (vr + i vi) e^{2t} (cos 3t + i sin 3t)
= e^{2t} [vr cos 3t − vi sin 3t] + ie^{2t} [vr sin 3t + vi cos 3t].
Therefore,
y1(t) = Re{yc(t)} = e^{2t} [vr cos 3t − vi sin 3t] = e^{2t} [cos 3t − sin 3t; cos 3t],
y2(t) = Im{yc(t)} = e^{2t} [vr sin 3t + vi cos 3t] = e^{2t} [sin 3t + cos 3t; sin 3t]
form a real-valued fundamental set.
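One can check numerically that y1(t) found above really solves y' = Ay, by comparing a centred finite-difference derivative with Ay1; a quick sketch:

```python
import math

A = [[5, -6], [3, -1]]

def y1(t):
    """Real part of the complex solution: e^{2t}[cos 3t - sin 3t, cos 3t]."""
    e = math.exp(2 * t)
    return (e * (math.cos(3 * t) - math.sin(3 * t)), e * math.cos(3 * t))

h = 1e-6
for t in [0.0, 0.5, 1.3]:
    a, b = y1(t - h), y1(t + h)
    dy = ((b[0] - a[0]) / (2 * h), (b[1] - a[1]) / (2 * h))   # ~ y1'(t)
    Ay = (A[0][0] * y1(t)[0] + A[0][1] * y1(t)[1],
          A[1][0] * y1(t)[0] + A[1][1] * y1(t)[1])
    assert abs(dy[0] - Ay[0]) < 1e-4 and abs(dy[1] - Ay[1]) < 1e-4
```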
Theorem 5.4.1: For the linear system y' = Ay, if the ch. eq. λ² − Tr λ + Det = 0 yields a pair of complex roots λ, λ̄ = α ± iβ with the corresponding eigenvectors v, v̄, then
y1(t) = e^{αt} [vr cos βt − vi sin βt], y2(t) = e^{αt} [vr sin βt + vi cos βt]
form a real-valued fundamental set, where
vr = Re{v}, vi = Im{v}
are, respectively, the real and imaginary parts of the complex-valued eigenvector v.
Proof: Since
y1(t) = Re{yc(t)} = (1/2)[yc(t) + ȳc(t)], y2(t) = Im{yc(t)} = (1/2i)[yc(t) − ȳc(t)]
are both linear combinations of yc(t) and ȳc(t), which are known to be solutions of the linear system y' = Ay, they must also be solutions.
It is straightforward to show that they are indeed linearly independent of each other by calculating
the Wronskian of the two (will be introduced later!)
Example 5.4.2: Find both complex- and real-valued fundamental sets of the linear system y' = Ay where A = [[1, 2], [−4, −3]]. Then, sketch the phase diagram of the system.
Answer: T r = −2, Det = 5. Thus, the ch. eq. is
λ2 + 2λ + 5 = (λ + 1)2 + 4 = 0 =⇒ λ, λ̄ = −1 ± 2i.
For λ = −1 + 2i,
v = ker(A − (−1 + 2i)I) = ker [[2 − 2i, 2], [−4, −2 − 2i]] = ker [[2 − 2i, 2], [0, 0]] = [1; −1 + i],
where the second matrix follows from the row operation r2 → (1 + i)r1 + r2, and the kernel vector from the column relation c1 + (−1 + i)c2 = 0.
Thus,
v = vr + i vi = [1; −1] + i[0; 1].
Figure 24: Phase space diagram for the system in Example 5.4.2. In this case, the fundamental
set does not distinguish themselves from other solution trajectories. All are spiralling into the origin
as time evolves.
Example 5.4.3: Find a fundamental set of the linear system y' = Ay where A = [[0, −1], [1, −2]]. Then, sketch the phase diagram of the system.
Answer: Tr = −2, Det = 1, so the ch. eq. is
λ2 + 2λ + 1 = (λ + 1)2 = 0 =⇒ λ1 = λ2 = −1 = λ.
For λ = −1,
v = ker(A − (−1)I) = ker [[1, −1], [1, −1]] = [1; 1] (from the column relation c1 + c2 = 0).
This yields only one solution y1(t) = ve^{λt}. A natural first guess for a second solution is
y2(t) = vte^{λt}.
Notice that
y2' = ve^{λt} + λvte^{λt}, while Ay2 = λvte^{λt}.
It is obvious that lhs ≠ rhs; the two differ by the term y1(t) = ve^{λt}. So we modify the guess to y2(t) = vte^{λt} + ue^{λt}, where u is a constant vector to be determined. Now,
lhs = y2' = ve^{λt} + λvte^{λt} + λue^{λt}, while rhs = Ay2 = Avte^{λt} + Aue^{λt} = λvte^{λt} + Aue^{λt}.
By requiring lhs = rhs, we obtain
(A − λI)u = v.
Since det(A − λI) = 0, there exist infinitely many solutions u 6= 0. Any one would make the
solution work.
Therefore, with u = [1; 0] (one solution of (A − λI)u = v),
y2(t) = vte^{λt} + ue^{λt} = [1; 1] te^{−t} + [1; 0] e^{−t} = [t + 1; t] e^{−t}.
Figure 25: Phase space diagram for the system in Example 5.4.3. The invariant set v = [1 1]
separates the phase space into two regions.
Theorem 5.4.2: For the linear system y' = Ay, if the ch. eq. λ² − Tr λ + Det = 0 yields a repeated real root λ = λ1 = λ2 with the corresponding eigenvector v, then
y1(t) = ve^{λt}, y2(t) = vte^{λt} + ue^{λt}
form a fundamental set, where the constant vector u is any one of the infinitely many solutions of the algebraic equation
(A − λI)u = v.
Proof: See arguments in Example 5.4.3. The linear independence of the two shall be discussed
when the Wronskian is introduced.
4.5 Fundamental matrix and evaluation of eAt
Definition of fundamental matrix: If y1 (t), y2 (t) form a fundamental set for the linear system
y0 = Ay, then its fundamental matrix Y (t) is constructed by putting y1 (t) in its first column and
y2 (t) in its second column.
Y (t) = [y1 (t) y2 (t)] .
(In other words, the fundamental matrix is a matrix whose columns are formed by the fundamental
set!)
Theorem 5.5.1: Let y1 (t), y2 (t) be a fundamental set for the system y0 = Ay and Y (t) =
[y1(t) y2(t)] be the fundamental matrix. Then the solution of the IVP
y' = Ay, y(0) = y0,
is
y(t) = e^{At} y0 = Y(t)Y(0)⁻¹ y0.
This expression implies that
e^{At} = Y(t)Y(0)⁻¹.
Remark: Once the fundamental matrix is found, we can use this expression to evaluate eAt .
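The identity e^{At} = Y(t)Y(0)⁻¹ can be cross-checked against the defining power series Σₖ (At)ᵏ/k!. A sketch for the matrix A = [[1, 1], [4, 1]] of the examples (the time value 0.3 is an arbitrary choice):

```python
import math

def matmul(P, Q):
    """2x2 matrix product."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=60):
    """Truncated power series for e^{At}."""
    At = [[A[i][j] * t for j in range(2)] for i in range(2)]
    S = [[1.0, 0.0], [0.0, 1.0]]           # running sum, starts at I
    T = [[1.0, 0.0], [0.0, 1.0]]           # current term (At)^k / k!
    for k in range(1, terms):
        M = matmul(T, At)
        T = [[M[i][j] / k for j in range(2)] for i in range(2)]
        S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return S

A = [[1.0, 1.0], [4.0, 1.0]]
t = 0.3
# Fundamental matrix with columns v1 e^{-t}, v2 e^{3t}
Y = [[math.exp(-t), math.exp(3 * t)], [-2 * math.exp(-t), 2 * math.exp(3 * t)]]
Y0inv = [[0.5, -0.25], [0.5, 0.25]]        # inverse of Y(0) = [[1,1],[-2,2]]
E1 = matmul(Y, Y0inv)
E2 = expm_series(A, t)
assert all(abs(E1[i][j] - E2[i][j]) < 1e-9 for i in range(2) for j in range(2))
```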
Example: Solve the IVP
y' = Ay, y(0) = y0,
where
A = [[1, 1], [4, 1]], y0 = [1; −1].
Answer: Note that matrix A is identical to that in Example 5.3.1, where we have already solved
the fundamental set
y1(t) = [1; −2] e^{−t} = [e^{−t}; −2e^{−t}], y2(t) = [1; 2] e^{3t} = [e^{3t}; 2e^{3t}].
Thus,
Y(t) = [y1(t) y2(t)] = [[e^{−t}, e^{3t}], [−2e^{−t}, 2e^{3t}]].
Therefore,
y(t) = Y(t)Y(0)⁻¹ y0 = [[e^{−t}, e^{3t}], [−2e^{−t}, 2e^{3t}]] [[1, 1], [−2, 2]]⁻¹ [1; −1]
= [[e^{−t}, e^{3t}], [−2e^{−t}, 2e^{3t}]] (1/4)[[2, −1], [2, 1]] [1; −1] = [[e^{−t}, e^{3t}], [−2e^{−t}, 2e^{3t}]] (1/4)[3; 1]
= (3/4)[1; −2] e^{−t} + (1/4)[1; 2] e^{3t} = [(3/4)e^{−t} + (1/4)e^{3t}; −(3/2)e^{−t} + (1/2)e^{3t}].
Example: Solve the IVP
y' = Ay, y(0) = y0,
where
A = [[1, 2], [−4, −3]], y0 = [1; −1].
Answer: Note that matrix A is identical to that in Example 5.4.2, where we have already solved
the fundamental set
y1(t) = e^{−t} [cos 2t; −cos 2t − sin 2t], y2(t) = e^{−t} [sin 2t; −sin 2t + cos 2t].
Thus,
Y(t) = [y1(t) y2(t)] = e^{−t} [[cos 2t, sin 2t], [−cos 2t − sin 2t, −sin 2t + cos 2t]],
and
Y(0) = [[1, 0], [−1, 1]] ⟹ Y(0)⁻¹ = (1/1)[[1, 0], [1, 1]] = [[1, 0], [1, 1]].
Therefore,
y(t) = Y(t)Y(0)⁻¹ y0 = [y1(t) y2(t)] [[1, 0], [1, 1]] [1; −1]
= [y1(t) y2(t)] [1; 0] = y1(t) = e^{−t} [cos 2t; −cos 2t − sin 2t].
Example: Solve the IVP
y' = Ay, y(0) = y0,
where
A = [[0, −1], [1, −2]], y0 = [1; −1].
Answer: Note that matrix A is identical to that in Example 5.4.3, where we have already solved
the fundamental set
y1(t) = [1; 1] e^{−t}, y2(t) = [t + 1; t] e^{−t}.
Thus,
Y(t) = [y1(t) y2(t)] = e^{−t} [[1, t + 1], [1, t]],
and
Y(0) = [[1, 1], [1, 0]] ⟹ Y(0)⁻¹ = (1/(−1))[[0, −1], [−1, 1]] = [[0, 1], [1, −1]].
Therefore,
y(t) = Y(t)Y(0)⁻¹ y0 = [y1(t) y2(t)] [[0, 1], [1, −1]] [1; −1] = [y1(t) y2(t)] [−1; 2]
= 2y2(t) − y1(t) = (2[t + 1; t] − [1; 1]) e^{−t} = [2t + 1; 2t − 1] e^{−t}.
4.6 Wronskian and linear independence
Definition of Wronskian: For any two vector functions y1 (t), y2 (t) in 2D space, the Wronskian
is defined as
W [y1 (t), y2 (t)] = det [y1 (t) y2 (t)] .
If W[y1(t), y2(t)] ≠ 0 somewhere in an open interval of t (it is enough that W ≠ 0 for some, not necessarily all, values of t in the interval), then y1(t), y2(t) are linearly independent.
It is straightforward to verify (an exercise for you) that for each fundamental matrix we obtained
in previous examples
det Y (t) 6= 0.
By Abel's Theorem, det Y(t) ≠ 0 for all values of t for linear systems y' = Ay in which A is a constant matrix.
4.7 Nonhomogeneous linear systems
The general solution of the nonhomogeneous system
y' = Ay + b
is y(t) = Y(t)c + Y(t)u(t) = yh(t) + yp(t), where Y(t) is a fundamental matrix of the homogeneous system,
c = [c1; c2] is a vector of arbitrary constants, and
u(t) = ∫ Y(t)⁻¹ b(t) dt.
Proof: We only need to show that yp(t) = Y(t)u(t) is a particular solution for the vector of functions u(t) defined above. Notice that
yp' = Y'(t)u(t) + Y(t)u'(t).
Substituting into yp' = Ayp + b gives
(Y'(t) − AY(t))u(t) + Y(t)u'(t) = b ⟹ Y(t)u'(t) = b,
since Y'(t) − AY(t) = 0, i.e. Y'(t) = AY(t). Therefore,
u'(t) = Y(t)⁻¹ b ⟹ u(t) = ∫ Y(t)⁻¹ b(t) dt.
When deriving the previous result, we took advantage of the fact that
Y'(t) = AY(t),
which says that the fundamental matrix itself satisfies the linear system y' = Ay. This is because Y(t) = [y1(t) y2(t)], where both y1(t), y2(t) are solutions of the system, i.e., y1' = Ay1, y2' = Ay2. Now,
Y'(t) = [y1(t) y2(t)]' = [y1'(t) y2'(t)] = [Ay1(t) Ay2(t)] = A[y1(t) y2(t)] = AY(t).
Example 5.7.1: Find the general solution for the nonhomogeneous linear system
y0 = Ay + b,
where
A = [[1, 1], [4, 1]], b(t) = [e^t; 0].
Answer: Note that matrix A is identical to that in Example 5.3.1, where we have already solved
the fundamental set
y1(t) = [1; −2] e^{−t} = [e^{−t}; −2e^{−t}], y2(t) = [1; 2] e^{3t} = [e^{3t}; 2e^{3t}].
Thus,
Y(t) = [y1(t) y2(t)] = [[e^{−t}, e^{3t}], [−2e^{−t}, 2e^{3t}]], ⟹ det Y(t) = 4e^{2t}.
And,
Y(t)⁻¹ = (1/det Y(t)) [[2e^{3t}, −e^{3t}], [2e^{−t}, e^{−t}]] = (1/4)[[2e^{t}, −e^{t}], [2e^{−3t}, e^{−3t}]], ⟹
Y(t)⁻¹ b(t) = (1/4)[[2e^{t}, −e^{t}], [2e^{−3t}, e^{−3t}]] [e^t; 0] = (1/2)[e^{2t}; e^{−2t}].
Therefore,
u(t) = ∫ Y(t)⁻¹ b(t) dt = (1/2) ∫ [e^{2t}; e^{−2t}] dt = (1/4)[e^{2t}; −e^{−2t}].
Now,
yp(t) = Y(t)u(t) = [[e^{−t}, e^{3t}], [−2e^{−t}, 2e^{3t}]] (1/4)[e^{2t}; −e^{−2t}] = (1/4)[0; −4e^{t}] = [0; −1] e^t.
Example 5.7.2: Turn the nonhomogeneous second-order linear ODE y 00 + y = cos t into a system
of first-order ODEs. Then solve the system for y(t).
Answer: Introducing a new variable v(t) = y'(t) and substituting into the equation, we obtain
y'(t) = v,
v'(t) = −y + cos t,
⟹ [y'; v'] = [[0, 1], [−1, 0]] [y; v] + [0; cos t].
Therefore, for our linear system,
A = [[0, 1], [−1, 0]], b(t) = [0; cos t].
For this system, T r = 0, Det = 1 resulting in the ch. eq. λ2 + 1 = 0 ⇒ λ1, 2 = ±i.
v = ker(A − iI) = ker [[−i, 1], [−1, −i]] = [1; i] (from the column relation c1 + ic2 = 0).
The corresponding real-valued fundamental matrix (with vr = [1; 0], vi = [0; 1], α = 0, β = 1) is
Y(t) = [[cos t, sin t], [−sin t, cos t]], det Y(t) = 1,
so the inverse is
Y(t)⁻¹ = (1/det Y(t)) [[cos t, −sin t], [sin t, cos t]] = [[cos t, −sin t], [sin t, cos t]].
Now,
Y(t)⁻¹ b(t) = [[cos t, −sin t], [sin t, cos t]] [0; cos t] = [−sin t cos t; cos² t] = (1/2)[−sin 2t; 1 + cos 2t].
Integrating,
u(t) = ∫ Y(t)⁻¹ b(t) dt = (1/2) ∫ [−sin 2t; 1 + cos 2t] dt = (1/4)[cos 2t; 2t + sin 2t] = (1/4)[cos² t − sin² t; 2t + 2 sin t cos t].
5 Nonlinear Systems
E.g.1 The model of a pendulum. Consider a pendulum: a mass m attached to a rigid rod of length L, with θ(t) the angle from the vertical. Assume that the mass of the rod is negligible. Based on Newton's law,
θ'' + γθ' + ω² sin θ = 0,
where ω² = g/L and γ = c/(mL). Introducing a new variable φ(t) = θ'(t), we obtain the following system of two 1st-order ODEs:
θ' = φ,
φ' = −ω² sin θ − γφ.     (70)
This system, however, is not linear, because of the occurrence of the nonlinear term sin θ.
E.g.2 A predator-prey model. Let
x(t) = the size (i.e. the total number) of the prey population at time t;
y(t) = the size (i.e. the total number) of the predator population at time t.
Now, the time evolution of the two populations can be modelled based on the following statements:
x'(t) = net natural birth rate (birth rate − death rate) − rate of deadly encounters with the predator;
y'(t) = net natural birth rate (birth rate − death rate) + rate of beneficial encounters with the prey.
It is assumed that, in the absence of the predator, the net natural birth rate of the prey is positive, while the net natural birth rate of the predator is negative. A simple model of predator-prey interaction, often called the Lotka-Volterra model, was proposed to describe this kind of interaction between the two:
x' = ax − bxy,
y' = cxy − dy,     (71)
where a, b, c, d are all positive parameters. This is again a system of two 1st-order ODEs.
However, this system is again nonlinear because of the appearance of the term xy which describes
the probability of an encounter between a predator and a prey.
In this lecture, we will learn some skills often used in the study of nonlinear systems of 1st-order
ODEs of the general form
x' = f(x, y),
y' = g(x, y),     (72)
where f (x, y), g(x, y) are differentiable functions of x and y that are nonlinear. We shall only
study the case when f and g do not have explicit dependence on time t. Thus, we shall only study
autonomous nonlinear systems of two nonlinear ODEs.
Remarks: Almost all systems of nonlinear ODEs cannot be solved in closed forms. Only in a few
very special cases, one can solve them analytically in closed forms. The study of nonlinear ODEs is
still the frontier in the study of differential equations and dynamical systems. The research remains
hot and active even today and will likely remain active in the foreseeable future.
(1) Qualitative methods such as phase-space analysis (partially covered here in this course).
(3) Numerical and/or computational methods (widely used today in all areas).
5.2 Phase-plane analysis of a nonlinear system of two nonlinear ODEs
Let us introduce the basics of phase-plane analysis by using the following Lotka-Volterra model of
predator-prey inteactions.
x' = x(a − y) = f(x, y),
y' = sy(x − d) = g(x, y),     (73)
where a, d, s > 0 are parameters.
Here are the goals that we try to achieve with the phase-plane analysis.
(1) Understanding the behaviour of the system near some special points of interest, typically the
steady states.
(2) Using linearized approximation to sketch the phase portrait(s) of the system near the steady
states (local phase portraits).
(3) Combining the phase portraits near different steady states to generate a global phase portrait, and using it to predict the long-term behaviour of the system, i.e. the state or states the system approaches as t → ∞.
1. Phase-space, nullclines, steady states, and vector field.
Def: Phase space. The space spanned by the unknown functions of the system. It is also referred
to as the state space.
In the pendulum system, θ − φ is the phase space; in the predator-prey system x − y is the phase
space.
Def: x-nullcline is the curve in phase space defined by x0 = f (x, y) = 0 on which the rate of change
in x (i.e. x0 ) vanishes.
Similarly, y-nullcline is the curve in phase space defined by y 0 = g(x, y) = 0 on which the rate of
change in y (i.e. y 0 ) vanishes.
[Sketch: in the x-y phase plane, the x-nullclines x = 0 and y = a, and the y-nullclines y = 0 and x = d.]
Notice that the direction vectors on the x-nullclines are all vertical (because x' = 0), and those on the y-nullclines are all horizontal (because y' = 0).
Def: Steady state (Equilibrium, Critical point, Fixed point). A point in phase space where
x0 = 0 = y 0 , i.e. the rates of change in both variables are zero. That is all points in the phase space
(xs , ys ) where
x0 = f (x, y) = 0 and y 0 = g(x, y) = 0.
By definition, these are the points of intersection between an x-nullcline and a y-nullcline! For the
example above, (0, 0) and (d, a) are the two steady states.
Be careful: Point of intersection between two x-nullclines (or between two y-nullclines) is NOT a
critical point! (Check the figure above!)
Def: Direction field. For an autonomous system, x0 = f (x, y) and y 0 = g(x, y), the value of x0
and y 0 is fixed at each point in the phase space. A simple vector addition between x0 and y 0 at each
point determines the direction and magnitude of the vector field at each point in phase space.
Often, we ignore the magnitude of these vectors and pay attention only to the direction of these
vectors (i.e. the scaled direction field). Any solution trajectory in phase space must be tangent to
the vector field at each point in the phase space!
Asymptotic stability: A steady state is asymptotically stable if all nearby phase trajectories will
move closer to and eventually approach the state as t → ∞. A steady state is unstable if nearby
trajectories move away from it in at least one direction.
In cases when the trajectories neither diverge nor approach a steady state (e.g. in the case of a
centre), some use the word neutral others use the concept of Lyapunov stability (not asymptotic
stability). But in our notes here, we often use the word stable to mean asymptotically stable. When
we actually mean Lyapunov stable we would say that it is Lyapunov stable.
Linearization of nonlinear systems. To determine the stability of a steady state, sketching the
vector field often is not enough. Also, the local behaviour (i.e. the behaviour in close neighbourhood)
near a steady state of a nonlinear system is often qualitatively identical to that of a system that is a
linear approximation of a nonlinear system. Therefore, a local phase-portrait of the linearized system
often gives the correct portrait of the nonlinear system. Linearization helps both in determining
the stability of a steady state and obtaining a local phase portrait of the nonlinear system.
Three goals of linearization and stability analysis: locate the steady states, determine their stability, and sketch the local phase portraits.
First, find the steady states (xs, ys) by solving
f(xs, ys) = 0,
g(xs, ys) = 0.
Then introduce a small perturbation
x(t) = xs + α(t),
y(t) = ys + β(t),
where (α(t), β(t)) (|α(t)|, |β(t)| ≪ 1) measures the distance between (x(t), y(t)) and (xs, ys). Substitute x(t) = xs + α(t), y(t) = ys + β(t) into the system and expand f and g around (xs, ys). Ignoring the higher-order terms in (α(t), β(t)), we obtain the following system of linearized equations. In matrix form,
[α̇; β̇] = [[fx, fy], [gx, gy]]|_(xs,ys) [α; β], ⇒ v̇ = J(xs, ys)v,
where v = [α; β] and J(xs, ys) is the Jacobian matrix evaluated at (xs, ys); fx = ∂f/∂x, fy = ∂f/∂y, gx = ∂g/∂x, gy = ∂g/∂y are the partial derivatives of the functions f(x, y) and g(x, y).
Now, let us carry out the linearization of the predator-prey example studied above, where f(x, y) = x(a − y), g(x, y) = sy(x − d). The Jacobian matrix is
J(x, y) = [[fx, fy], [gx, gy]] = [[a − y, −x], [sy, s(x − d)]].
At (0, 0):
J(0, 0) = [[a, 0], [0, −sd]].
Thus,
λ1 = a, v1 = [1; 0]; λ2 = −sd, v2 = [0; 1] ⇒ saddle.
At (d, a):
J(d, a) = [[0, −d], [as, 0]] ⟹ λ1 = i√(ads), λ2 = −i√(ads) ⇒ centre.
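The two Jacobians can be fed to a small eigenvalue routine to confirm the saddle/centre classification. A sketch with illustrative parameter values (a = d = 1, s = 0.5 are assumptions for this check, not values from the notes):

```python
import cmath

# Illustrative parameters
a, d, s = 1.0, 1.0, 0.5

def eig2(m):
    """Both eigenvalues of a 2x2 matrix, possibly complex."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# J(0,0) = [[a, 0], [0, -s*d]]: real eigenvalues of opposite signs -> saddle
l1, l2 = eig2([[a, 0.0], [0.0, -s * d]])
assert l1.imag == 0 and l2.imag == 0 and l1.real * l2.real < 0

# J(d,a) = [[0, -d], [a*s, 0]]: purely imaginary pair +- i*sqrt(a*d*s) -> centre
m1, m2 = eig2([[0.0, -d], [a * s, 0.0]])
assert abs(m1.real) < 1e-12 and abs(m2.real) < 1e-12
assert abs(abs(m1.imag) - (a * d * s) ** 0.5) < 1e-12
```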
Local and Global Phase Portraits
[Sketches: local portraits at (0, 0) (saddle) and at (d, a) (centre) in the x-y plane, assembled into the global portrait on the nullcline diagram.]
Results obtained using computer software
[Computed phase portraits of system (73) for a = d = 1 and s = 0.5, 1, 2: closed orbits around the centre (d, a) = (1, 1).]
E.g.3 For the nonlinear system,
x0 = y 2 − x,
y 0 = x2 − y.
(a) Sketch the nullclines. Draw one arrow of the vector field in each distinct region of the phase
space and on each distinct segment of the nullclines. Then, find all critical points and mark
each by an open circle in the phase space.
(b) Calculate the Jacobian matrix of the system.
(c) Classify each critical point found in (a) and sketch a local phase portrait near each one.
(d) Based on results above, sketch the global phase portrait. Predict the long term behaviour for
two ICs: (i) (x(0), y(0)) = (0.5, 0.5); (ii) (x(0), y(0)) = (1.5, 1.5).
Answer:
(a) The critical points are (0, 0) and (1, 1). Nullclines and direction arrows are all sketched
below.
[Sketch: the nullclines x = y² (x' = 0) and y = x² (y' = 0), intersecting at the critical points (0, 0) and (1, 1).]
(b)
J(x, y) = [[fx, fy], [gx, gy]] = [[−1, 2y], [2x, −1]].
(c) For (0, 0):
J(0, 0) = [[−1, 0], [0, −1]].
Thus, λ = λ1 = λ2 = −1 is a repeated eigenvalue. Based on what we learned previously, (0, 0) is a stable node. However, this is a special case of a stable node because of the repeated eigenvalue.
We know that the corresponding eigenvectors are
v1 = [1; 0], v2 = [0; 1].
Therefore, this is a case in which the algebraic multiplicity and geometric multiplicity are both equal to 2: we have a repeated eigenvalue but can find two linearly independent eigenvectors for it. Actually, any nonzero vector is an eigenvector! This is because
E(λ) = ker(J(0, 0) − λI) = ker [[0, 0], [0, 0]] = {[a; b] for any a, b not both zero} ∪ {0}.
Because every direction near (0, 0) is an eigenvector direction with λ = −1, no direction is faster or slower than the others. Thus, the trajectories are not curved but straight lines. This special case of a stable node is also called a stable star.
For (1, 1):
J(1, 1) = [[−1, 2], [2, −1]] ⟹ λ1 = 1, λ2 = −3, with
v1 = ker(J − λ1 I) = ker [[−2, 2], [2, −2]] = [1; 1], v2 = ker(J − λ2 I) = ker [[2, 2], [2, 2]] = [1; −1].
Therefore, (1, 1) is a saddle point, with v1 = [1; 1] the unstable direction (λ1 = 1 > 0) and v2 = [1; −1] the stable one. The local phase portraits are sketched below.
[Sketches: local portraits at (0, 0) (stable star) and at (1, 1) (saddle).]
(d) Global phase portrait is sketched below.
The stable invariant set of the saddle point (green) serves as a separatrix that separates the
phase space into two regions.
Below and to the left of it, all trajectories eventually approach the stable node (star) at (0, 0). Thus, for IC (x(0), y(0)) = (0.5, 0.5), (x(t), y(t)) → (0, 0).
Above and to the right of it, all trajectories eventually approach infinity. Thus, for (x(0), y(0)) = (1.5, 1.5), (x(t), y(t)) → (∞, ∞).
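The prediction for the first IC can be spot-checked with a crude forward-Euler integration (the step size and step count below are ad hoc choices):

```python
# Forward-Euler sketch: the IC (0.5, 0.5) of E.g.3 should flow
# to the stable star at (0, 0).
x, y = 0.5, 0.5
dt = 0.01
for _ in range(5000):
    dx, dy = y * y - x, x * x - y     # x' = y^2 - x, y' = x^2 - y
    x, y = x + dt * dx, y + dt * dy
assert abs(x) < 1e-3 and abs(y) < 1e-3
```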
Below is the global phase portrait computed by computer software. The sketch obtained above was done before we had a chance to see the computer-generated portrait, yet the similarity is striking!
E.g.4 A modified predator-prey interaction model in which the prey population is governed by a logistic growth model:
x' = ax(1 − x) − xy,
y' = sy(x − d), (a, s > 0, 0 < d < 1 are parameters).
x-nullclines: x = 0 and y = a(1 − x). y-nullclines: y = 0 and x = d.
There are 3 steady states: (0, 0), (1, 0), and (d, a(1 − d)).
[Sketch: nullclines and direction arrows of the vector field in the x-y plane for 0 < d < 1.]
The Jacobian is
J(x, y) = [[a(1 − 2x) − y, −x], [sy, s(x − d)]].
At (0, 0): J(0, 0) = [[a, 0], [0, −sd]]. Thus,
λ1 = a, v1 = [1; 0]; λ2 = −sd, v2 = [0; 1] ⇒ saddle.
At (1, 0): J(1, 0) = [[−a, −1], [0, s(1 − d)]]. Thus,
λ1 = −a, v1 = [1; 0]; λ2 = s(1 − d) > 0 ⇒ saddle.
At (d, a(1 − d)): Tr = −ad, Det = ads(1 − d) > 0, and
∆ = (Tr/2)² − Det = a²d²/4 − ads(1 − d) = (ad/4)[ad − 4s(1 − d)].
Therefore, if ∆ > 0, it is a stable node; if ∆ < 0, it is a stable spiral. When a ≪ s, ∆ < 0 and (d, a(1 − d)) is a spiral.
Local and Global Phase Portraits in case ∆ < 0
[Sketches: local portraits at the two saddles (0, 0) and (1, 0) and at the stable spiral (d, a(1 − d)), assembled into the global portrait on the nullcline diagram for 0 < d < 1.]
Results obtained using computer software
[Computed phase portrait for s = 25, a = 2, d = 0.5, and the time courses x(t), y(t) spiralling into the stable steady state.]
5.3 Classification and stability of isolated critical points
(1) Saddle: real λ1, λ2 ≠ 0 and λ1λ2 < 0 (real and of opposite signs) (i.e. Det < 0). Unstable!
(2) Node: real λ1, λ2 ≠ 0 and λ1λ2 > 0 (real and of identical signs). Stable if λ1, λ2 < 0, unstable if λ1, λ2 > 0!
(3) Spiral/focus: when λ1, λ2 = Tr/2 ± ωi are complex (i.e. Det > (Tr/2)² > 0) and Tr ≠ 0. Stable if Tr < 0, unstable if Tr > 0.
(4) Center: when λ1, λ2 = ±i√Det, i.e. Tr = 0 and Det > 0. Neutral.
(5) Stars, degenerate nodes: when λ1 = λ2 = Tr/2 ≠ 0, i.e. Det = (Tr/2)² ≠ 0. Stable if Tr < 0, unstable if Tr > 0.
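The rules above translate directly into a classifier on (Tr, Det); a sketch (boundary cases with Det = 0 are deliberately left out):

```python
def classify(tr, det):
    """Classify an isolated critical point of y' = Ay from trace and determinant.
    Sketch following Section 5.3; assumes det != 0."""
    if det < 0:
        return "saddle (unstable)"
    disc = (tr / 2) ** 2 - det
    if tr == 0:
        return "centre (neutral)"
    side = "stable" if tr < 0 else "unstable"
    if disc < 0:
        return f"spiral ({side})"
    if disc == 0:
        return f"star/degenerate node ({side})"
    return f"node ({side})"

assert classify(-3, 2) == "node (stable)"        # Example 5.1.1
assert classify(4, 13) == "spiral (unstable)"    # Example 5.4.1
assert classify(0, 1) == "centre (neutral)"      # harmonic oscillator
assert classify(1, -2) == "saddle (unstable)"
```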
5.4 Conservative systems that can be solved analytically in closed forms
Consider the second-order equation
x'' + f(x) = 0,     (74)
where x = x(t) and f(x) is usually a nonlinear function of x. Such an equation typically arises when describing the Newtonian motion of a mass in the absence of friction and other dissipative forces. In this case, x(t) describes the location of the mass and x''(t) is the acceleration. Examples of such motions include the movement of a planet under the influence of the gravitational force of the sun, or an electrically charged particle moving in a frictionless medium under the influence of an electric potential. For such Newtonian motion, Newton's law yields
ma = F ⇒ mx'' = −dV(x)/dx.
Assuming that m = 1 and dV(x)/dx = f(x), one obtains equation (74). Here, V(x) = ∫f(x)dx is often referred to as the potential function.
Multiplying both sides of (74) by x′ and expressing f(x) as dV(x)/dx, we obtain

x′x″ + x′ dV(x)/dx = 0  ⇒  [½(x′)² + V(x)]′ = 0  ⇒  ½(x′)² + V(x) = C,

where E(x, x′) = ½(x′)² + V(x) and C is an arbitrary constant. E is referred to as the energy in physics. So, the energy is conserved; this is why such a system is called a conservative system. In this system,

E = ½(x′)² + V(x) = C,    (V(x) = ∫f(x) dx)

gives all possible solutions of the system in the x vs x′ plane, which is actually the phase space of the corresponding system of 1st-order ODEs given below:

x′ = y,
y′ = −f(x).    (75)
E.g.5 Pendulum in a frictionless medium. In the absence of friction, γ = 0 in (70), which we previously developed for the motion of a pendulum. In this case, the equation becomes

θ″ + ω² sin θ = 0,    (ω² = g/L).

Or, equivalently, the following system of 1st-order ODEs:

θ′ = φ,
φ′ = −ω² sin θ.
Based on the definition above, this is a conservative system of the form θ″ + f(θ) = 0. Therefore, the total energy

E = ½(θ′)² + V(θ)

must be conserved, where

V(θ) = ∫ω² sin θ dθ = −ω² cos θ.

For a solution released from rest at the initial angle θ₀, the conserved value is C = −ω² cos θ₀.
Question: What do these solutions look like in the phase space (i.e. the θ–φ plane)?
Answer: Phase-space analysis. In the absence of graph-plotting software, we show how to sketch the phase portrait of these solutions using phase-space analysis.
Consider the system

θ′ = φ,
φ′ = −ω² sin θ,    (ω² = g/L).

The θ-nullcline is φ = 0, the horizontal axis. The φ-nullclines are given by sin θ = 0, thus θ = nπ (n any integer). Thus, there are infinitely many φ-nullclines, evenly spaced along the horizontal axis. If we focus on the interval −π ≤ θ ≤ π, then θ = 0, ±π. Thus, there are 3 critical points in this interval: (0, 0) and (±π, 0).
[Figure: the θ-nullcline θ′ = 0 (the horizontal axis) and the φ-nullclines φ′ = 0 at θ = 0, ±π, ±2π, with the vector field between them.]
The Jacobian is

J(θ, φ) = ( 0            1
            −ω² cos θ    0 ).

Thus,

J(0, 0) = ( 0      1
            −ω²    0 )  ⇒  λ² + ω² = 0  ⇒  λ1,2 = ±ωi, centre!

J(±π, 0) = ( 0     1
             ω²    0 )  ⇒  λ² − ω² = 0  ⇒  λ1,2 = ±ω, saddle!
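These eigenvalues can be double-checked numerically from the 2×2 characteristic polynomial. The sketch below uses the arbitrary illustrative value ω² = 4 (g/L is not specified in the notes, and the helper name `eig2x2` is ours):

```python
import cmath

def eig2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via lambda^2 - Tr*lambda + Det = 0."""
    tr, det = a + d, a * d - b * c
    root = cmath.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 + root, tr / 2.0 - root

w2 = 4.0                          # assume omega^2 = g/L = 4 for illustration
print(eig2x2(0, 1, -w2, 0))       # J(0, 0): eigenvalues +/- 2i  -> centre
print(eig2x2(0, 1, w2, 0))        # J(+/-pi, 0): eigenvalues +/- 2  -> saddle
```

The first Jacobian gives a purely imaginary pair (a centre), the second a real pair of opposite signs (a saddle), in agreement with the analysis above.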
[Figure: global phase portrait of the pendulum in the (θ, φ) plane, with centres at θ = 0, ±2π and saddles at θ = ±π on the θ-axis.]
E.g.6 Consider the conservative Newtonian motion in a double-well potential described by x″ = x − x³, in which dV(x)/dx = −(x − x³); thus V(x) = x⁴/4 − x²/2, whose graph has the double-well shape shown in the figure below.
The ODE can be transformed into the following system of nonlinear ODEs:

x′ = y,
y′ = x − x³.

Sketch the phase portrait of representative solution curves in the phase space.
The x-nullcline is y = 0, the horizontal axis; the y-nullclines are x = −1, 0, 1. Thus, there are 3 critical points: (0, 0) and (±1, 0). Nullclines, critical points, and vector field are sketched in the figure below.
[Figure: nullclines and vector field in the (x, y) plane: the x-nullcline y = 0 and the y-nullclines x = −1, 0, 1.]
For (0, 0):

J(0, 0) = ( 0    1
            1    0 )  ⇒  Tr = 0, Det = −1  ⇒  λ² − 1 = 0,

which yields two eigenvalues λ1 = −1, λ2 = 1. This is a saddle point. The corresponding eigenvectors are

v1 = ker(A − λ1 I) = ker ( 1   1
                           1   1 ) = (1, −1)ᵀ,    v2 = ker(A − λ2 I) = ker ( −1   1
                                                                             1   −1 ) = (1, 1)ᵀ.
For (±1, 0): J(±1, 0) = ( 0  1 ; −2  0 ), so Tr = 0 and Det = 2, giving λ1,2 = ±√2 i: both are centres.
[Figure: local phase portraits at (0, 0) (saddle) and at (±1, 0) (centres).]
The solution curve through the saddle (0, 0) lies on the energy level E = 0,

½y² + x⁴/4 − x²/2 = 0,

which crosses the x-axis at 3 points: x = 0 and x = ±√2. Combining all the results above, one can sketch the following global phase portrait.
[Figure: global phase portrait: a saddle at (0, 0), centres at (±1, 0), and the separatrix ½y² + x⁴/4 − x²/2 = 0 crossing the x-axis at x = 0, ±√2.]
6 Laplace Transform
• Laplace transform (LT) is an alternative method for solving linear ODEs with constant coef-
ficients.
• After solving the transformed unknown function, an inverse transform is required to obtain
the desired solution.
• Pros:
(i) ODE solved together with ICs, no extra step required for finding the integration constants.
(ii) For a nonhomogeneous ODE, no need to solve yh (t) and yp (t) separately, both are solved
in one step.
(iii) Particularly good when the nonhomogeneous term g(t) is piecewise continuous; useful in engineering.
(iv) Allows the introduction of more advanced techniques like “transfer function”, ...
• Cons:
For a function f(t) defined on [0, ∞),

F(s) ≡ ∫₀^∞ e^{−st} f(t) dt

is defined as the Laplace transform (LT) of f(t), for all values of s such that the improper integral converges. The two functions f(t) ↔ F(s) form a Laplace transform pair, i.e. F(s) is the LT of f(t) and f(t) is the inverse LT of F(s).
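The defining integral can be explored numerically by truncating it at a large T. A minimal pure-Python sketch (the helper name `laplace_numeric`, the cutoff T, and the step count are our own choices, not from the notes):

```python
import math

def laplace_numeric(f, s, T=40.0, n=200000):
    """Trapezoid-rule approximation of the truncated integral
    of e^{-s t} f(t) over [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s = 2.0
print(laplace_numeric(lambda t: 1.0, s))   # ~ 1/s = 0.5
print(laplace_numeric(lambda t: t, s))     # ~ 1/s^2 = 0.25
```

The approximations match the closed-form pairs L[1] = 1/s and L[t] = 1/s² derived in the next pages.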
Remarks:
2. The LT is defined by an improper integral, a definite integral with one or both limits unbounded. By definition,

∫₀^∞ e^{−st} f(t) dt ≡ lim_{T→∞} ∫₀^T e^{−st} f(t) dt.

3. The fact that the limits of this integral are 0 and ∞ indicates that the LT emphasizes the behaviour of f(t) in the future but ignores its behaviour in the past: the values of f(t) for t < 0 play no role in its LT.
Therefore,

L[1] = 1/s,    (s > 0).

For f(t) = t, integration by parts gives

F(s) = L[t] = ∫₀^∞ e^{−st} t dt = { 1/s², if s > 0;  diverges, if s ≤ 0. }

Therefore,

L[t] = 1/s²,    (s > 0).
Example 4.2.3: For f (t) = eat , find F (s).
F(s) = L[e^{at}] = ∫₀^∞ e^{at} e^{−st} dt = ∫₀^∞ e^{−(s−a)t} dt = −(1/(s − a)) e^{−(s−a)t} |_{t=0}^{∞} = { 1/(s − a), if s > a;  diverges, if s ≤ a. }

Therefore,

L[e^{at}] = 1/(s − a),    (s > a).
f(t)        F(s) = L[f(t)]
1           1/s,  (s > 0)
t           1/s²,  (s > 0)
e^{at}      1/(s − a),  (s > a)
sin(ωt)     ω/(s² + ω²),  (s > 0)
cos(ωt)     s/(s² + ω²),  (s > 0)
Similarly, for f(t) = sin ωt,

L[sin ωt] = ω/(s² + ω²),    (s > 0).
Laplace transforms and their inverses have been calculated for all frequently encountered functions.
Very often, we simply need to use a table to find the transforms that we need.
The table below contains most LTs that we need for the purpose of this course. More detailed forms
can be found if needed. For example, based on this table we find that
L[t³] = 3!/s⁴ = 6/s⁴,    (s > 0),

and, using the derivative-of-a-transform rule L[t f(t)] = −F′(s),

L^{−1}[2s/(s² + 1)²] = L^{−1}[−(1/(s² + 1))′] = −(−t) sin t = t sin t.
Table 3: Elementary Laplace Transforms.

f(t)                    F(s) = L[f(t)]
1                       1/s,  (s > 0)
e^{at}                  1/(s − a),  (s > a)
t^n  (n ≥ 0)            n!/s^{n+1},  (s > 0)
sin(ωt)                 ω/(s² + ω²),  (s > 0)
cos(ωt)                 s/(s² + ω²),  (s > 0)
sinh(ωt)                ω/(s² − ω²),  (s > |ω|)
cosh(ωt)                s/(s² − ω²),  (s > |ω|)
e^{at} sin(ωt)          ω/((s − a)² + ω²),  (s > a)
e^{at} cos(ωt)          (s − a)/((s − a)² + ω²),  (s > a)
t^n e^{at}  (n ≥ 0)     n!/(s − a)^{n+1},  (s > a)
e^{at} f(t)             F(s − a)
u(t − τ)                e^{−τs}/s,  (τ > 0, s > 0)
f(at)                   (1/a) F(s/a),  (a > 0, s > a s₀)
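Entries of Table 3 can be spot-checked by numerical quadrature of the defining integral. A small sketch checking two of them; the helper `L`, the cutoff, and the sample values s = 2, ω = 3, a = 1 are our own choices:

```python
import math

def L(f, s, T=60.0, n=300000):
    """Trapezoid-rule approximation of the truncated Laplace integral."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s, w, a = 2.0, 3.0, 1.0
# L[sin(wt)] should equal w/(s^2 + w^2)
print(L(lambda t: math.sin(w * t), s), w / (s**2 + w**2))
# L[e^{at} cos(wt)] should equal (s - a)/((s - a)^2 + w^2)
print(L(lambda t: math.exp(a * t) * math.cos(w * t), s),
      (s - a) / ((s - a)**2 + w**2))
```

Each print statement shows the numerical value next to the table entry evaluated at the same s; they agree to several decimal places.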
6.4 More techniques involved in calculating Laplace transforms
Theorem 4.4.1 Laplace transform is linear: Suppose L[f1 (t)] and L[f2 (t)] are defined for
s > αj , (j = 1, 2). Let s0 = max{α1 , α2 } and let c1 , c2 be constants. Then,
L[c1 f1 + c2 f2 ] = c1 L[f1 ] + c2 L[f2 ], for s > s0 .
Proof:

L[c₁f₁ + c₂f₂] = ∫₀^∞ e^{−st}[c₁f₁ + c₂f₂] dt = ∫₀^∞ [c₁e^{−st}f₁ + c₂e^{−st}f₂] dt
= c₁∫₀^∞ e^{−st}f₁ dt + c₂∫₀^∞ e^{−st}f₂ dt = c₁L[f₁] + c₂L[f₂].
Example 4.4.1: Use Theorem 4.4.1 and the fact L[e^{at}] = 1/(s − a), (s > a), to calculate L[cosh ωt] and L[sinh ωt].
Answer:

L[cosh ωt] = L[½(e^{ωt} + e^{−ωt})] = ½L[e^{ωt}] + ½L[e^{−ωt}] = ½(1/(s − ω)) + ½(1/(s + ω)) = s/(s² − ω²),    (s > |ω|).

Similarly,

L[sinh ωt] = L[½(e^{ωt} − e^{−ωt})] = ½L[e^{ωt}] − ½L[e^{−ωt}] = ½(1/(s − ω)) − ½(1/(s + ω)) = ω/(s² − ω²),    (s > |ω|).
Theorem 4.4.2 First shifting theorem: If F (s) = L[f ] for s > s0 , then F (s − a) = L[eat f ] for
s > s0 + a.
Proof: By definition,

F(s − a) = ∫₀^∞ e^{−(s−a)t} f(t) dt = ∫₀^∞ e^{−st} e^{at} f(t) dt = L[e^{at} f].
Example 4.4.2: Use the first shifting theorem and the fact L[t^n] = n!/s^{n+1} = F(s) to calculate L[e^{at} t^n].

L[e^{at} t^n] = F(s − a) = n!/(s − a)^{n+1},    (s > a).
Example 4.4.3: Use the first shifting theorem and the fact F(s) = L[cos ωt] = s/(s² + ω²) to calculate L[e^{at} cos ωt].

L[e^{at} cos ωt] = F(s − a) = (s − a)/((s − a)² + ω²),    (s > a).
6.5 Existence of Laplace transform
[Figure: a function f(t) with a jump discontinuity at t = t₀, where the one-sided limits f(t₀−) and f(t₀+) both exist and are finite.]

Definition of piecewise continuity: f(t) is piecewise continuous
(i) in [0, T], if f(0), f(T) are both finite, and f(t) has a finite number of jump discontinuities in this interval;
(ii) in [0, ∞), if it is piecewise continuous in [0, T] for every T > 0.
Figure 27: In [0, T], f(t) is piecewise continuous although it has several jump discontinuities. g(t) is not, even though it has no jump discontinuities at all: its value at t = 0 is not finite.
Definition of exponential order: f(t) is of exponential order s₀ as t → ∞ if there exist constants M > 0 and t₀ ≥ 0 such that |f(t)| ≤ Me^{s₀t} for all t ≥ t₀.
Remark: f(t) being of exponential order s₀ simply means that f(t) cannot approach infinity faster than the exponential function e^{s₀t} as t → ∞.
Theorem 4.4.3 Existence of LT: If f(t) is piecewise continuous in [0, ∞) and of exponential order s₀, then F(s) = L[f(t)] exists for s > s₀.
Example 4.5.1: Find the LT of the piecewise continuous function

f(t) = { 1, if 0 ≤ t < 4;  t, if 4 ≤ t. }

Figure 28: graph of f(t), equal to 1 on [0, 4) and to t for t ≥ 4.
Answer:

L[f(t)] = ∫₀^∞ e^{−st} f(t) dt = ∫₀^4 e^{−st}(1) dt + ∫₄^∞ e^{−st} t dt = I₁ + I₂.

I₁ = ∫₀^4 e^{−st} dt = −(1/s)e^{−st} |₀^4 = (1 − e^{−4s})/s,    (s ≠ 0).

Substituting t = u + 4,

I₂ = ∫₄^∞ e^{−st} t dt = ∫₀^∞ e^{−s(u+4)}(u + 4) d(u + 4) = e^{−4s} ∫₀^∞ e^{−su}(u + 4) du
   = e^{−4s} [∫₀^∞ e^{−su} u du + 4∫₀^∞ e^{−su} du] = e^{−4s}(L[t] + 4L[1]) = e^{−4s}(1/s² + 4/s),    (s > 0).
Therefore,

L[f(t)] = I₁ + I₂ = (1 − e^{−4s})/s + e^{−4s}(1/s² + 4/s) = 1/s + e^{−4s}/s² + 3e^{−4s}/s,    (s > 0).
Remark: We shall demonstrate later that, using unit step functions, we can make calculating the
LT of piecewise continuous functions much easier.
Theorem 4.5.1: Suppose f(t) and f′(t) are continuous in [0, ∞) and are of exponential order s₀, and that f″(t) is piecewise continuous in [0, ∞). Then f, f′, f″ have Laplace transforms for s > s₀, with

L[f′] = sL[f] − f(0);
L[f″] = s²L[f] − sf(0) − f′(0).
Notice that the initial conditions f (0), f 0 (0) naturally show up in the LTs of derivatives.
Proof: Using f′ dt = df and integration by parts (∫u dv = uv − ∫v du),

L[f′] = ∫₀^∞ e^{−st} f′ dt = ∫₀^∞ e^{−st} df = e^{−st}f(t) |₀^∞ − ∫₀^∞ f(t) d(e^{−st})
= −f(0) − ∫₀^∞ f(t)(−s)e^{−st} dt = s∫₀^∞ f(t)e^{−st} dt − f(0) = sL[f] − f(0).
Example: Use the Laplace transform to solve the IVP y′ = ay, y(0) = y₀.
Answer: Apply the LT to both sides of the ODE:

L[y′] − aL[y] = 0  ⇒  sL[y] − y(0) − aL[y] = 0  ⇒  sY(s) − y₀ − aY(s) = 0  ⇒  Y(s) = y₀/(s − a).

Now, using the pair e^{at} ↔ 1/(s − a),

y(t) = L^{−1}[Y(s)] = y₀L^{−1}[1/(s − a)] = y₀e^{at}.
Example 4.6.2: Use the Laplace transform to solve the following nonhomogeneous IVP. (Note this is identical to forced vibration in the absence of damping!)

y″ + ω₀²y = F cos(ωt),    (ω ≠ ω₀),
y(0) = 0, y′(0) = 0.

Applying the LT to both sides and using the ICs,

(s² + ω₀²)Y(s) = Fs/(s² + ω²)  ⇒  Y(s) = Fs/[(s² + ω²)(s² + ω₀²)] = [F/(ω₀² − ω²)] [s/(s² + ω²) − s/(s² + ω₀²)].

Now,

y(t) = L^{−1}[Y(s)] = [F/(ω₀² − ω²)] (L^{−1}[s/(s² + ω²)] − L^{−1}[s/(s² + ω₀²)]) = [F/(ω₀² − ω²)][cos ωt − cos ω₀t],

where the fact that L^{−1} is a linear operator was used. Note that this IVP, consisting of a nonhomogeneous ODE and two ICs, is solved directly using the Laplace transform method: no need to find y_h(t) and y_p(t) separately, no need to determine the integration constants.
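The closed-form solution can be checked against the ODE using a finite-difference second derivative. The values F = 2, ω₀ = 3, ω = 1 below are arbitrary illustrations, not from the notes:

```python
import math

F, w0, w = 2.0, 3.0, 1.0   # assumed illustrative parameters

def y(t):
    """y(t) = F/(w0^2 - w^2) (cos wt - cos w0 t) from the example above."""
    return F / (w0**2 - w**2) * (math.cos(w * t) - math.cos(w0 * t))

def residual(t, h=1e-4):
    """y'' + w0^2 y - F cos(wt), with y'' by central differences.
    Should be ~0 if y solves the ODE."""
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return ypp + w0**2 * y(t) - F * math.cos(w * t)

print(y(0.0))                                            # IC: y(0) = 0
print(max(abs(residual(t)) for t in (0.5, 1.7, 4.0)))    # small residual
```

Both the initial condition y(0) = 0 and the ODE itself are satisfied to within the discretization error of the finite difference.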
The inverse transform can be defined by the complex inversion (Bromwich) integral

f(t) = L^{−1}[F(s)] = (1/2πi) ∫_{γ−i∞}^{γ+i∞} e^{st} F(s) ds,

where the integration is done along the vertical line Re(s) = γ in the complex plane such that γ is greater than the real part of all singularities of F(s). Since L^{−1} is also an integral operator, it is linear as well, as can be proved in a similar way as in Theorem 4.4.1. Therefore,

L^{−1}[c₁F₁ + c₂F₂] = c₁L^{−1}[F₁] + c₂L^{−1}[F₂].
Remark: We almost never use this definition to calculate L^{−1}[F(s)]; such a path integral in the complex plane is beyond most students in second year. Instead, each time we calculate one L[f(t)] = F(s), it establishes a pairwise relationship between f(t) and F(s). We can then use the result in both directions: to find f(t) given F(s), as well as to find F(s) given f(t).
Example: Use the Laplace transform to solve the IVP y″ − 6y′ + 5y = 3e^{2t}, y(0) = 2, y′(0) = 3.
Answer: Laplace transform both sides of the ODE and let Y(s) = L[y]:

L[y″] − 6L[y′] + 5L[y] = L[3e^{2t}]  ⇒  (s²Y − sy(0) − y′(0)) − 6(sY − y(0)) + 5Y = 3/(s − 2)

⇒  s²Y − 2s − 3 − 6sY + 12 + 5Y = 3/(s − 2)  ⇒  (s² − 6s + 5)Y = 2s − 9 + 3/(s − 2).

A little bit of algebra allows us to combine the two terms on the right into one rational function:

Y(s) = (s − 3)(2s − 7)/[(s − 1)(s − 2)(s − 5)].

It was relatively easy for us to solve for Y(s), but the solution we are looking for is y(t) = L^{−1}[Y(s)]. Unless we can find a way to break Y(s) into partial fractions, we do not know L^{−1}[Y(s)].
Write Y(s) = A/(s − 1) + B/(s − 2) + C/(s − 5), where the constants A, B, C can be determined by Heaviside's method:

A = (s − 1)Y(s)|_{s=1} = (s − 3)(2s − 7)/[(s − 2)(s − 5)] |_{s=1} = (−2)(−5)/[(−1)(−4)] = 10/4 = 2.5.

B = (s − 2)Y(s)|_{s=2} = (s − 3)(2s − 7)/[(s − 1)(s − 5)] |_{s=2} = (−1)(−3)/[(1)(−3)] = 3/(−3) = −1.

C = (s − 5)Y(s)|_{s=5} = (s − 3)(2s − 7)/[(s − 1)(s − 2)] |_{s=5} = (2)(3)/[(4)(3)] = 6/12 = 0.5.

Thus,

Y(s) = 2.5/(s − 1) + (−1)/(s − 2) + 0.5/(s − 5).

Therefore,

y(t) = L^{−1}[Y(s)] = 2.5L^{−1}[1/(s − 1)] − L^{−1}[1/(s − 2)] + 0.5L^{−1}[1/(s − 5)] = 2.5e^{t} − e^{2t} + 0.5e^{5t}.
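The cover-up computation above is mechanical enough to script. A small pure-Python sketch (the helper name `coverup` is ours) reproduces A, B, C for this example:

```python
def coverup(P, poles):
    """For Y(s) = P(s)/prod(s - s_j) with distinct poles, return the
    partial-fraction coefficients A_j = P(s_j) / prod_{k != j}(s_j - s_k)."""
    out = []
    for j, sj in enumerate(poles):
        denom = 1.0
        for k, sk in enumerate(poles):
            if k != j:
                denom *= sj - sk
        out.append(P(sj) / denom)
    return out

# Y(s) = (s - 3)(2s - 7) / [(s - 1)(s - 2)(s - 5)] from the example above
P = lambda s: (s - 3) * (2 * s - 7)
print(coverup(P, [1.0, 2.0, 5.0]))   # [2.5, -1.0, 0.5]
```

Each coefficient is exactly the "cover up the pole, evaluate the rest" calculation done by hand above.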
In general, if

Y(s) = P(s)/[(s − s₁)(s − s₂) ··· (s − sₙ)],

where the degree of P(s) is smaller than n and s₁, s₂, …, sₙ are distinct numbers, then

Y(s) = A₁/(s − s₁) + A₂/(s − s₂) + ··· + Aₙ/(s − sₙ),

where

Aⱼ = (s − sⱼ)Y(s)|_{s=sⱼ},    (for all j = 1, 2, …, n).

Proof:

(s − sⱼ)Y(s) = (s − sⱼ)A₁/(s − s₁) + ··· + (s − sⱼ)Aⱼ₋₁/(s − sⱼ₋₁) + Aⱼ + (s − sⱼ)Aⱼ₊₁/(s − sⱼ₊₁) + ··· + (s − sⱼ)Aₙ/(s − sₙ).

Since all sⱼ are distinct, only in the j-th term does the multiplicative factor cancel the denominator, leaving Aⱼ by itself. In sigma notation,

(s − sⱼ)Y(s) = Aⱼ + (s − sⱼ) Σ_{k=1, k≠j}^{n} Aₖ/(s − sₖ).

Setting s = sⱼ makes the sum vanish, so Aⱼ = (s − sⱼ)Y(s)|_{s=sⱼ}.
6.7.2 Solving general IVPs involving a linear ODE with constant coefficients
Consider ay″ + by′ + cy = g(t) with y(0) = y₀, y′(0) = y₀′. Laplace transform both sides of the ODE and let Y(s) = L[y], G(s) = L[g]:

a(s²Y − sy₀ − y₀′) + b(sY − y₀) + cY = G  ⇒  as²Y − ay₀s − ay₀′ + bsY − by₀ + cY = G
⇒  (as² + bs + c)Y = (as + b)y₀ + ay₀′ + G.

Therefore,

Y(s) = [(as + b)y₀ + ay₀′ + G(s)]/(as² + bs + c) = [(as + b)y₀ + ay₀′]/(as² + bs + c) + G(s)/(as² + bs + c),

where the first term yields y_h with the ICs built in and the second yields y_p.
Using Heaviside's method,

Y(s) = 1/(s + 1) − 1/(s + 2) + ½ · 1/(s + 1) − 1/(s + 2) + ½ · 1/(s + 3) = (3/2) · 1/(s + 1) − 2 · 1/(s + 2) + ½ · 1/(s + 3).

Therefore,

y(t) = L^{−1}[Y(s)] = (3/2)L^{−1}[1/(s + 1)] − 2L^{−1}[1/(s + 2)] + ½L^{−1}[1/(s + 3)] = (3/2)e^{−t} − 2e^{−2t} + ½e^{−3t}.
Remark: Heaviside's method applies because in this example the ch. eq. as² + bs + c = 0 gives two distinct real roots, and g(t) is not identical to either of the homogeneous solutions.
Now, the inverse of the first term can be readily found, but Heaviside's method does not apply to the second term. Notice that

1/[s²(s + 1)²] = 1/s² − 1/(s + 1)² − 2s/[s²(s + 1)²] = 1/s² − 1/(s + 1)² − 2[1/s − 1/(s + 1) − 1/(s + 1)²].

Thus,

Y(s) = 1/s² − 2 · 1/s + 2 · 1/(s + 1) + 2 · 1/(s + 1)².

Therefore,

y(t) = L^{−1}[Y(s)] = t − 2 + 2e^{−t} + 2te^{−t}.
Remarks:
• Heaviside's method does not apply because in this example the ch. eq. as² + bs + c = 0 gives a repeated real root.
• We shall demonstrate later that using the convolution theorem, we can calculate the inverse
LT of this kind without having to solve the partial fractions.
Now, the inverse of the first term can be readily found, but Heaviside's method does not apply to the second term. Notice that

2/[s((s + 1)² + 1)] = A/s − [B(s + 1) + C]/[(s + 1)² + 1],

where A = B = C = 1. Thus,

Y(s) = 1/s − (s + 1)/[(s + 1)² + 1].

Therefore,

y(t) = L^{−1}[Y(s)] = 1 − e^{−t} cos t.
Remarks:
• Heaviside’s method does not apply because in this example the ch. eq. as2 + bs + c = 0 gives
two complex roots, −α ± iβ. In this case, as2 + bs + c = (s + α)2 + β 2 .
• We shall demonstrate later that using the convolution theorem, we can calculate the inverse
LT of this kind without having to solve the partial fractions.
6.7.3 Some inverse LTs where Heaviside’s method fails to apply
Y(s) = [8 − (s + 2)(4s + 10)]/[(s + 1)(s + 2)²] = 2/(s + 1) − 6/(s + 2) − 8/(s + 2)².

Therefore,

y(t) = L^{−1}[Y(s)] = L^{−1}[2/(s + 1)] − L^{−1}[6/(s + 2)] − L^{−1}[8/(s + 2)²]
     = 2e^{−t} − 6e^{−2t} − 8te^{−2t}.

Next, consider

Y(s) = (s² − 5s + 7)/(s + 2)³.
An alternative way is to express the numerator as a polynomial in (s + 2):

Y(s) = (s² − 5s + 7)/(s + 2)³ = [(s + 2)² − 9(s + 2) + 21]/(s + 2)³ = 1/(s + 2) − 9/(s + 2)² + 21/(s + 2)³.

Therefore,

y(t) = L^{−1}[Y(s)] = L^{−1}[1/(s + 2)] − 9L^{−1}[1/(s + 2)²] + 21L^{−1}[1/(s + 2)³]
     = e^{−2t} − 9te^{−2t} + (21/2)t²e^{−2t}.
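A quick way to validate such an inverse is to transform y(t) forward numerically and compare with Y(s) at a sample point; the helper `L`, the cutoff, and the sample s = 1 below are our own choices:

```python
import math

def L(f, s, T=40.0, n=200000):
    """Trapezoid-rule approximation of the truncated Laplace integral."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

# y(t) = e^{-2t} - 9 t e^{-2t} + (21/2) t^2 e^{-2t} from above
y = lambda t: math.exp(-2 * t) * (1 - 9 * t + 10.5 * t * t)
s = 1.0
print(L(y, s), (s * s - 5 * s + 7) / (s + 2) ** 3)   # both ~ 1/9
```

At s = 1 the rational function equals (1 − 5 + 7)/27 = 1/9, and the numerical transform of y(t) agrees.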
Consider

Y(s) = [1 − s(5 + 3s)]/[s((s + 1)² + 1)].

Write Y(s) = A/s + [B(s + 1) + C]/[(s + 1)² + 1]. Matching at sample values of s:

s = 0  ⇒  2A = 1  ⇒  A = 1/2.
s = −1  ⇒  A − C = 3  ⇒  C = −5/2.
s = 1  ⇒  5A + 2B + C = −7  ⇒  B = −7/2.

Thus,

Y(s) = (1/2)/s + [−(7/2)(s + 1) − 5/2]/[(s + 1)² + 1] = (1/2) · 1/s − (7/2) · (s + 1)/[(s + 1)² + 1] − (5/2) · 1/[(s + 1)² + 1].

Therefore,

y(t) = L^{−1}[Y(s)] = (1/2)L^{−1}[1/s] − (7/2)L^{−1}[(s + 1)/((s + 1)² + 1)] − (5/2)L^{−1}[1/((s + 1)² + 1)]
     = 1/2 − (7/2)e^{−t} cos t − (5/2)e^{−t} sin t.
Therefore,

y(t) = L^{−1}[Y(s)] = L^{−1}[2/(s² + 1) − 2/(s² + 4)] + L^{−1}[s/(s² + 1) − s/(s² + 4)]
     = 2 sin t − sin 2t + cos t − cos 2t.
6.8 Unit step/Heaviside function
The unit step (Heaviside) function is a function whose value steps up by one unit, from 0 to 1, at t = 0. It is defined as

u(t) = { 0, t < 0;  1, t ≥ 0. }    (77)

Figure 29: Heaviside function u(t) and shifted Heaviside function u(t − τ).

A shifted unit step function is a function whose value steps up by one unit, from 0 to 1, at t = τ (τ > 0):

u(t − τ) = { 0, t < τ;  1, t ≥ τ. }    (78)
Based on the definition,

L[u(t)] = ∫₀^∞ e^{−st} u(t) dt = ∫₀^∞ e^{−st} dt = L[1] = 1/s,    (s > 0).

L[u(t − τ)] = ∫₀^∞ e^{−st} u(t − τ) dt = ∫_τ^∞ e^{−st} dt = ∫₀^∞ e^{−s(t+τ)} d(t + τ) = e^{−sτ} ∫₀^∞ e^{−st} dt = e^{−sτ}/s,    (s > 0).
Second Shifting Theorem: Suppose that τ > 0 and F (s) = L[f (t)] exists for s > s0 . Then,
L[u(t − τ )f (t − τ )] = e−sτ F (s), (s > s0 ).
6.8.2 Expressing piecewise continuous function in terms of u(t − τ )
Figure 30: A piecewise continuous function defined differently on the two intervals: f₀(t) on [0, τ) and f₁(t) on [τ, ∞).
Using the unit step function u(t − τ), we can express this function in the following form:

f(t) = f₀(t) + u(t − τ)(f₁(t) − f₀(t)) = { f₀(t), if 0 ≤ t < τ;  f₀(t) + (f₁(t) − f₀(t)) = f₁(t), if τ ≤ t. }
Example 4.8.1: Express the following function in terms of unit step function, then calculate its
LT.
f(t) = { 2t + 1, if 0 ≤ t < 2;  3t, if 2 ≤ t. }
Answer:

f(t) = f₀(t) + u(t − 2)(f₁(t) − f₀(t)) = 2t + 1 + u(t − 2)(3t − (2t + 1))
     = 2t + 1 + u(t − 2)((t − 2) + 1) = 2t + 1 + u(t − 2)(t − 2) + u(t − 2).
The LT is

L[f(t)] = L[2t + 1] + L[u(t − 2)(t − 2)] + L[u(t − 2)] = 2/s² + 1/s + e^{−2s}/s² + e^{−2s}/s.
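Before transforming, it is worth confirming that the unit-step form really reproduces the piecewise definition. A small sketch (all function names are ours):

```python
def u(t):
    """Heaviside unit step."""
    return 1.0 if t >= 0 else 0.0

def f_piecewise(t):
    """f as defined piecewise in Example 4.8.1."""
    return 2 * t + 1 if t < 2 else 3 * t

def f_step(t):
    """f rewritten as f0 + u(t-2)(f1 - f0) = 2t + 1 + u(t-2)((t-2) + 1)."""
    return 2 * t + 1 + u(t - 2) * ((t - 2) + 1)

for t in (0.0, 1.5, 2.0, 3.7):
    assert abs(f_piecewise(t) - f_step(t)) < 1e-12
print("unit-step form matches the piecewise definition")
```

The agreement at t = 2 in particular shows the switch happens exactly at the breakpoint, matching the convention u(0) = 1 in (77).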
Back to Example 4.5.1: Find the LT of the following piecewise continuous function
f(t) = { 1, if 0 ≤ t < 4;  t, if 4 ≤ t. }

(See Figure 28.)
Answer:

f(t) = f₀(t) + u(t − 4)(f₁(t) − f₀(t)) = 1 + u(t − 4)(t − 1) = 1 + u(t − 4)(t − 4) + 3u(t − 4).

The LT is

L[f(t)] = L[1] + L[u(t − 4)(t − 4)] + 3L[u(t − 4)] = 1/s + e^{−4s}/s² + 3e^{−4s}/s.
Example 4.8.2: Express the following function in terms of unit step function, then calculate its
LT.
f(t) = { sin t, if 0 ≤ t < π;  0, if π ≤ t. }
Figure 32: A sine function with only half a cycle: f₀(t) = sin t on [0, π), f₁(t) = 0 thereafter.
Answer:
f (t) = f0 (t) + u(t − π)(f1 (t) − f0 (t)) = sin t + u(t − π)(0 − sin t) = sin t − u(t − π) sin t.
Notice that
sin t = sin(t − π + π) = sin(t − π) cos π + cos(t − π) sin π = − sin(t − π).
We now have
f (t) = sin t − u(t − π) sin t = sin t + u(t − π) sin(t − π).
The LT is

L[f(t)] = L[sin t] + L[u(t − π) sin(t − π)] = 1/(s² + 1) + e^{−πs} · 1/(s² + 1) = (1 + e^{−πs})/(s² + 1).
Substituting the ICs and the expression for G(s) into this equation, we obtain

Y(s) = 1/(s² + 4) + G(s)/(s² + 4) = 1/(s² + 4) + (1 + e^{−πs})/[(s² + 1)(s² + 4)].
Notice that

1/[(s² + 1)(s² + 4)] = (1/3)[1/(s² + 1) − 1/(s² + 4)].
Plugging this back in,

Y(s) = 1/(s² + 4) + (1/3)[1/(s² + 1) − 1/(s² + 4)] + (e^{−πs}/3)[1/(s² + 1) − 1/(s² + 4)]
     = (1/3) · 2/(s² + 4) + (1/3) · 1/(s² + 1) + (e^{−πs}/6)[2/(s² + 1) − 2/(s² + 4)].
6.9 Laplace transform of integrals and inverse transform of derivatives
If F(s) = L[f(t)], then

L[∫₀^t f(τ) dτ] = F(s)/s.
Proof. Let A(t) be the antiderivative of f(t) such that A′(t) = f(t) and A(0) = 0. Thus,

∫₀^t f(τ) dτ = A(τ)|₀^t = A(t) − A(0) = A(t)  ⇒

L[∫₀^t f(τ) dτ] = L[A(t)] = ∫₀^∞ e^{−st} A(t) dt = (−1/s) ∫₀^∞ A(t) d(e^{−st})
= (−1/s)[A(t)e^{−st}|₀^∞ − ∫₀^∞ e^{−st} dA(t)] = (1/s) ∫₀^∞ e^{−st} A′(t) dt = (1/s) ∫₀^∞ e^{−st} f(t) dt = F(s)/s.
E.g. Find L^{−1}[1/(s(s² + 4))]. We can use the Laplace transform pair in the opposite direction.
Ans:

L^{−1}[1/(s(s² + 4))] = L^{−1}[(1/2) · (1/s) · 2/(s² + 4)] = (1/2) ∫₀^t sin(2τ) dτ = −(1/4) cos(2τ)|₀^t = (1/4)[1 − cos(2t)].
E.g. Calculate L^{−1}[2s/(s² + 1)²].
Ans:

L^{−1}[2s/(s² + 1)²] = L^{−1}[−(1/(s² + 1))′] = −[(−t) sin t] = t sin t.
6.10 Impulse/Dirac delta function
Let's define a square-wave function centred at t = 0 with unit area under it:

I_a(t) = { 1/(2a), if |t| < a;  0, if |t| ≥ a, }

where a > 0.

Figure 33: An impulse of width 2a and height 1/(2a), for 0 < a ≪ 1.
Notice that

the area under the curve = ∫_{−∞}^{∞} I_a(t) dt = ∫_{−a}^{a} 1/(2a) dt = 1.

A Dirac delta/impulse function is defined by making such a function infinitely high and infinitely narrow such that the area under the curve remains one:

δ(t) ≡ lim_{a→0} I_a(t) = { ∞, if t = 0;  0, elsewhere. }    (79)
(3) ∫_{−∞}^{∞} δ(t − τ)f(t) dt = f(τ),    (f(t) is continuous).

Proof: Let F(t) = ∫f(t) dt, so that F′(t) = f(t). Then

∫_{−∞}^{∞} δ(t − τ)f(t) dt = lim_{a→0} (1/(2a)) ∫_{τ−a}^{τ+a} f(t) dt = lim_{a→0} (1/(2a))[F(τ + a) − F(τ − a)]
= lim_{a→0} (1/(2a))[F(τ + a) − F(τ) + F(τ) − F(τ − a)]
= (1/2) lim_{a→0} [(F(τ + a) − F(τ))/a + (F(τ) − F(τ − a))/a] = (1/2)[F′(τ) + F′(τ)] = f(τ).
(4) u′(t − τ) = δ(t − τ).
Proof: It is easy to show that u′(t − τ) satisfies all the properties of δ(t − τ) listed above.
(5) Any unit-area function approaches a Dirac delta function when its height approaches infinity and its width approaches zero. For example, when a Gaussian function centred at t = τ becomes infinitely high and infinitely narrow, it becomes a Dirac delta function:

δ(t − τ) = lim_{σ→0} (1/√(2πσ²)) e^{−(t−τ)²/(2σ²)}.
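Property (3) — that δ(t − τ) "samples" a continuous f at τ — can be watched numerically by applying the finite pulse I_a and shrinking a; the helper `smear` below is our own illustration:

```python
import math

def smear(f, tau, a, n=100001):
    """(1/2a) * integral of f over [tau - a, tau + a]: the action of the
    unit-area pulse I_a(t - tau) on f, approximated by the trapezoid rule."""
    h = 2 * a / (n - 1)
    total = 0.5 * (f(tau - a) + f(tau + a))
    for k in range(1, n - 1):
        total += f(tau - a + k * h)
    return total * h / (2 * a)

tau = 1.0
for a in (0.5, 0.05, 0.005):
    print(a, smear(math.sin, tau, a))   # -> sin(1) ~ 0.8415 as a -> 0
```

As a shrinks, the pulse average converges to f(τ), which is exactly the sampling property proved above.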
Applying the LT to the IVP ay″ + by′ + cy = g(t), y(0) = y₀, y′(0) = y₀′ leads to

Y(s) = [(as + b)y₀ + ay₀′ + G(s)]/(as² + bs + c) = H(s)[(as + b)y₀ + ay₀′ + G(s)],

where H(s) = 1/(as² + bs + c) is referred to as the transfer function. h(t) = L^{−1}[H(s)] is the solution to the IVP above with zero ICs and a unit impulse forcing g(t) = δ(t).
Definition of convolution: Suppose that f(t), g(t), h(t) are continuous functions and c is a constant. The convolution product is defined as

f(t) ∗ g(t) ≡ ∫₀^t f(τ)g(t − τ) dτ = ∫₀^t f(t − τ)g(τ) dτ.

It satisfies
(i) f ∗ g = g ∗ f (commutative);
(ii) f ∗ (g + h) = f ∗ g + f ∗ h (distributive);
(iii) f ∗ (g ∗ h) = (f ∗ g) ∗ h (associative);
(iv) (cf) ∗ g = f ∗ (cg) = c(f ∗ g);
(v) f ∗ 0 = 0 ∗ f = 0.
Theorem 4.10.1 Laplace transform of a convolution product: If F(s) = L[f(t)] and G(s) = L[g(t)] both exist for s > a ≥ 0, then for h(t) = f(t) ∗ g(t),

H(s) = L[h(t)] = L[f(t) ∗ g(t)] = L[f(t)]L[g(t)] = F(s)G(s),    (s > a).
Remark: The LT of the convolution product f ∗ g is the product of the respective transforms, F(s)G(s); conversely, the inverse transform of the product F(s)G(s) is the convolution product of the respective inverses, f ∗ g. Besides many other uses of this theorem, it definitely makes many tough inverse LTs easier to calculate.
Example 4.10.1: Given H(s) = 1/(s² + 1)² (whose partial-fraction decomposition is not easy to figure out), find L^{−1}[H(s)] using the convolution product.
Answer:

L^{−1}[H(s)] = L^{−1}[1/(s² + 1) · 1/(s² + 1)] = L^{−1}[1/(s² + 1)] ∗ L^{−1}[1/(s² + 1)] = sin t ∗ sin t = ∫₀^t sin τ sin(t − τ) dτ.

Notice that, expanding sin(t − τ) = sin t cos τ − cos t sin τ,

sin t ∗ sin t = sin t ∫₀^t sin τ cos τ dτ − cos t ∫₀^t sin²τ dτ = (1/2) sin³t − (1/2)t cos t + (1/4) cos t sin 2t.

Further simplification might be possible using trig identities (exercise for you). This is the solution of the following IVP:

y″ + y = sin t,
y(0) = 0, y′(0) = 0,

where the forcing term resonates with the intrinsic harmonic oscillations.
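Carrying out the suggested trig simplification collapses the result to sin t ∗ sin t = ½(sin t − t cos t), which a direct numerical convolution confirms; the helper `conv` below is our own sketch:

```python
import math

def conv(f, g, t, n=20000):
    """Trapezoid-rule approximation of (f*g)(t) = integral over [0, t]
    of f(tau) g(t - tau)."""
    h = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for k in range(1, n):
        tau = k * h
        total += f(tau) * g(t - tau)
    return total * h

for t in (1.0, 2.5, 6.0):
    lhs = conv(math.sin, math.sin, t)
    rhs = 0.5 * (math.sin(t) - t * math.cos(t))   # simplified closed form
    print(t, lhs, rhs)                            # the two columns agree
```

The growing t cos t term is the resonance: the amplitude of the response increases linearly in time.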
Remark: Many inverse Laplace transforms that are difficult to find using partial fractions can be found with the convolution product. The trade-off is having to calculate the convolution integral, which can be challenging for some problems.
Example 4.10.2: Use the convolution product to solve for L^{−1}[1/((s + 1)²(s − 1))].
The partial-fraction route requires 1/((s + 1)²(s − 1)) = (1/4)/(s − 1) − (1/4)/(s + 1) − (1/2)/(s + 1)², which is not really easy for some students. Now, using convolution,

y(t) = L^{−1}[Y(s)] = L^{−1}[1/(s + 1)²] ∗ L^{−1}[1/(s − 1)] = te^{−t} ∗ e^{t} = ∫₀^t τe^{−τ}e^{t−τ} dτ = e^{t} ∫₀^t τe^{−2τ} dτ.

Integration by parts gives

y(t) = e^{t}[−(1/2)te^{−2t} − (1/4)e^{−2t} + 1/4] = (1/4)[e^{t} − e^{−t} − 2te^{−t}].

Remark: In this question, the convolution integral is not difficult at all. Whether you choose partial fractions or convolution is your own choice, based on your preferences.
Example 4.10.3: Solve the following IVP for any function g(t) whose LT G(s) = L[g(t)] exists:

y″ + y = g(t),
y(0) = 3, y′(0) = −1.

Applying the LT, (s² + 1)Y = 3s − 1 + G(s), so

Y(s) = (3s − 1)/(s² + 1) + G(s)/(s² + 1)  ⇒  y(t) = 3 cos t − sin t + (sin t) ∗ g(t) = 3 cos t − sin t + ∫₀^t sin(t − τ)g(τ) dτ.

Remark: Now, changing the forcing term does not change the parts of the solution related to the ICs. For any given g(t), we just need to calculate the corresponding convolution integral to get the answer.