Convolution
Convolution is an operation involving two functions that turns out to be rather useful in many
applications. We have two reasons for introducing it here. First of all, convolution will give
us a way to deal with inverse transforms of fairly arbitrary products of functions. Secondly, it
will be a major element in some relatively simple formulas for solving a number of differential
equations.
Let us start with just seeing what convolution is. After that, we'll discuss using it with the
Laplace transform and in solving differential equations.
26.1
If $f$ and $g$ are two functions, then the convolution of $f$ and $g$, denoted $f*g$, is the function given by
\[
f*g(t) \,=\, \int_{x=0}^{t} f(x)\,g(t-x)\,dx .
\]

!Example 26.1: Let
\[
f(t) = e^{3t} \qquad\text{and}\qquad g(t) = e^{7t} .
\]
Since we will use $f(x)$ and $g(t-x)$ in computing the convolution, let us note that
\[
f(x) = e^{3x} \qquad\text{and}\qquad g(t-x) = e^{7(t-x)} .
\]
So,
\begin{align*}
f*g(t) &= \int_{x=0}^{t} f(x)\,g(t-x)\,dx \\
&= \int_{x=0}^{t} e^{3x} e^{7(t-x)}\,dx \\
&= e^{7t}\int_{x=0}^{t} e^{-4x}\,dx \\
&= e^{7t}\left[-\frac{1}{4}e^{-4x}\right]_{x=0}^{t} \\
&= e^{7t}\left[-\frac{1}{4}e^{-4t} + \frac{1}{4}e^{0}\right]
\;=\; \frac{1}{4}e^{7t} - \frac{1}{4}e^{3t} .
\end{align*}
Thus,
\[
f*g(t) \,=\, \frac{1}{4}e^{7t} - \frac{1}{4}e^{3t}
\qquad\text{when}\qquad
f(t) = e^{3t} \quad\text{and}\quad g(t) = e^{7t} .
\]
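The closed-form result $\frac{1}{4}\left(e^{7t}-e^{3t}\right)$ can be sanity-checked numerically. Below is a minimal sketch in plain Python; the helper name `convolve` is ours (not from the text) and approximates the defining integral with composite Simpson's rule:

```python
import math

def convolve(f, g, t, n=2000):
    """Approximate f*g(t) = integral from 0 to t of f(x) g(t-x) dx
    by composite Simpson's rule (n must be even)."""
    if t == 0:
        return 0.0
    h = t / n
    total = f(0.0) * g(t) + f(t) * g(0.0)
    for k in range(1, n):
        x = k * h
        total += (4 if k % 2 else 2) * f(x) * g(t - x)
    return total * h / 3

t = 1.3
numeric = convolve(lambda x: math.exp(3 * x), lambda x: math.exp(7 * x), t)
closed = 0.25 * (math.exp(7 * t) - math.exp(3 * t))
assert abs(numeric - closed) < 1e-6 * closed
```

The agreement to several digits for an arbitrary $t$ is good evidence the antiderivative above was taken correctly.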
It is common practice to also denote the convolution $f*g(t)$ by $f(t)*g(t)$ where, here,
$f(t)$ and $g(t)$ denote the formulas for $f$ and $g$. Thus, instead of writing
\[
f*g(t) \,=\, \frac{1}{4}\left(e^{7t} - e^{3t}\right)
\qquad\text{when } f(t) = e^{3t} \text{ and } g(t) = e^{7t} ,
\]
we may just write
\[
e^{3t} * e^{7t} \,=\, \frac{1}{4}\left(e^{7t} - e^{3t}\right) .
\]
This simplifies notation a little, but be careful: $t$ is being used for two different things in this
equation. On the left side, $t$ is used to describe $f$ and $g$; on the right side, $t$ is the variable in
the formula for the convolution. By convention, if we assign $t$ a value, say, $t = 2$, then we are
setting $t = 2$ in the final formula for the convolution. That is,
\[
e^{3t} * e^{7t} \qquad\text{with}\qquad t = 2
\]
means compute the convolution and replace the $t$ in the resulting formula with $2$, which, by the
above computations, is
\[
\frac{1}{4}\left(e^{7\cdot 2} - e^{3\cdot 2}\right) \,=\, \frac{1}{4}\left(e^{14} - e^{6}\right) ,
\]
not $e^{3\cdot 2} * e^{7\cdot 2}$.
!Example 26.2: Let us find
\[
\frac{1}{\sqrt{t}} * t^2 \qquad\text{when}\qquad t = 4 .
\]
Here,
\[
f(t) = \frac{1}{\sqrt{t}} \qquad\text{and}\qquad g(t) = t^2 .
\]
So
\[
f(x) = \frac{1}{\sqrt{x}} \qquad\text{and}\qquad g(t-x) = (t-x)^2 ,
\]
and
\begin{align*}
\frac{1}{\sqrt{t}} * t^2 \,=\, f*g(t)
&= \int_{x=0}^{t} \frac{1}{\sqrt{x}}\,(t-x)^2\,dx \\
&= \int_{x=0}^{t} x^{-1/2}\left(t^2 - 2tx + x^2\right)dx \\
&= \int_{x=0}^{t} \left(t^2 x^{-1/2} - 2t\,x^{1/2} + x^{3/2}\right)dx \\
&= \left[2t^2 x^{1/2} - \frac{4}{3}\,t\,x^{3/2} + \frac{2}{5}\,x^{5/2}\right]_{x=0}^{t} \\
&= 2t^{5/2} - \frac{4}{3}\,t^{5/2} + \frac{2}{5}\,t^{5/2}
\;=\; \frac{16}{15}\,t^{5/2} .
\tag{26.1}
\end{align*}
Thus, to compute
\[
\frac{1}{\sqrt{t}} * t^2 \qquad\text{when } t = 4 ,
\]
we actually compute
\[
\frac{16}{15}\,t^{5/2} \qquad\text{with } t = 4 ,
\]
obtaining
\[
\frac{16}{15}\cdot 4^{5/2} \,=\, \frac{16}{15}\cdot 2^5 \,=\, \frac{512}{15} .
\]
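Despite the integrable singularity of $1/\sqrt{x}$ at $x = 0$, this convolution can also be checked numerically if we first substitute $x = u^2$ to remove the singularity. A minimal sketch (the helper name `conv_sqrt_t2` is ours, not the book's):

```python
import math

def conv_sqrt_t2(t, n=2000):
    """Approximate (1/sqrt(t)) * t^2, i.e. the integral of x^(-1/2) (t-x)^2
    from 0 to t.  Substituting x = u^2 turns it into
    2 * integral of (t - u^2)^2 from u = 0 to sqrt(t),
    which composite Simpson's rule handles without trouble."""
    a = math.sqrt(t)
    h = a / n
    f = lambda u: (t - u * u) ** 2
    total = f(0.0) + f(a)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return 2 * total * h / 3

t = 4.0
assert abs(conv_sqrt_t2(t) - (16 / 15) * t ** 2.5) < 1e-9   # matches (16/15) t^(5/2)
assert abs((16 / 15) * 4 ** 2.5 - 512 / 15) < 1e-12         # and 512/15 at t = 4
```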
Basic Identities
Let us quickly note a few easily verified identities that can simplify the computation of some
convolutions.
The first identity is trivial to derive. Let $\alpha$ be a constant, and let $f$ and $g$ be two functions.
Then, of course,
\[
\int_{x=0}^{t} [\alpha f(x)]\,g(t-x)\,dx
\,=\, \int_{x=0}^{t} f(x)\,[\alpha g(t-x)]\,dx
\,=\, \alpha \int_{x=0}^{t} f(x)\,g(t-x)\,dx ;
\]
that is,
\[
[\alpha f]*g(t) \,=\, f*[\alpha g](t) \,=\, \alpha\,[f*g](t) .
\]
Slightly less trivial is the fact that convolution is commutative. To see this, start with the definition
\[
f*g(t) \,=\, \int_{x=0}^{t} f(x)\,g(t-x)\,dx
\]
and use the substitution $y = t - x$ (so $x = t - y$ and $dx = -dy$):
\begin{align*}
f*g(t) \,=\, \int_{x=0}^{t} f(x)\,g(t-x)\,dx
&= \int_{y=t}^{0} f(t-y)\,g(y)\,(-1)\,dy \\
&= \int_{y=0}^{t} g(y)\,f(t-y)\,dy .
\end{align*}
The last integral is exactly the same as the integral for computing $g*f(t)$, except for the cosmetic
change of denoting the variable of integration by $y$ instead of $x$. So that integral is the formula
for $g*f(t)$, and our computations just above reduce to
\[
f*g(t) \,=\, g*f(t) .
\tag{26.2}
\]
For example, consider computing $t^2 * t^{-1/2}$. By commutativity,
\[
t^2 * \frac{1}{\sqrt{t}} \,=\, \frac{1}{\sqrt{t}} * t^2 .
\]
What an incredible stroke of luck! We've already computed the convolution on the right in
example 26.2. Checking back to equation (26.1), we find
\[
\frac{1}{\sqrt{t}} * t^2 \,=\, \frac{16}{15}\,t^{5/2} .
\]
Hence,
\[
t^2 * \frac{1}{\sqrt{t}} \,=\, \frac{1}{\sqrt{t}} * t^2 \,=\, \frac{16}{15}\,t^{5/2} .
\]
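Identity (26.2) is easy to observe numerically as well. This sketch (our own hypothetical `conv` helper, again using Simpson's rule) evaluates both orderings for a sample pair of functions:

```python
import math

def conv(a, b, t, n=2000):
    """Approximate a*b(t) = integral of a(x) b(t-x) dx from 0 to t
    by composite Simpson's rule."""
    s = t / n
    total = a(0.0) * b(t) + a(t) * b(0.0)
    for k in range(1, n):
        x = k * s
        total += (4 if k % 2 else 2) * a(x) * b(t - x)
    return total * s / 3

f = lambda x: math.exp(3 * x)
g = lambda x: x * x
for t in (0.7, 1.0, 2.5):
    # f*g and g*f agree, as equation (26.2) asserts
    assert abs(conv(f, g, t) - conv(g, f, t)) < 1e-6
```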
Convolution also satisfies distributive and associative laws:
\[
[f+g]*h \,=\, [f*h] + [g*h] ,
\tag{26.3}
\]
\[
f*[g+h] \,=\, [f*g] + [f*h]
\tag{26.4}
\]
and
\[
f*[g*h] \,=\, [f*g]*h .
\tag{26.5}
\]
The first and second equations state that convolution distributes over addition. They are easily
confirmed using the basic definition of convolution. For the first:
\begin{align*}
[f+g]*h(t)
&= \int_{x=0}^{t} [f(x)+g(x)]\,h(t-x)\,dx \\
&= \int_{x=0}^{t} [f(x)h(t-x) + g(x)h(t-x)]\,dx \\
&= \int_{x=0}^{t} f(x)h(t-x)\,dx \,+\, \int_{x=0}^{t} g(x)h(t-x)\,dx \\
&= [f*h](t) + [g*h](t) .
\end{align*}
The second, equation (26.4), follows in a similar manner, or by combining (26.3) with the
commutativity of convolution. The last equation in the list, equation (26.5), states that convolution
is associative; that is, when convolving three functions together, it does not matter which two
you convolve first. Its verification requires showing that the two double integrals defining
\[
f*[g*h] \qquad\text{and}\qquad [f*g]*h
\]
are equivalent. This is a relatively straightforward exercise in substitution, and will be left as a
challenge for the interested student (exercise 26.3 on page 530).
Finally, just for fun, let's make a few more simple observations:
\[
0*g(t) \,=\, g*0(t) \,=\, \int_{x=0}^{t} 0\cdot g(t-x)\,dx \,=\, 0 ,
\]
\[
f*1(t) \,=\, 1*f(t) \,=\, \int_{x=0}^{t} f(x)\cdot 1\,dx \,=\, \int_{x=0}^{t} f(x)\,dx
\]
and
\[
f*g(0) \,=\, \int_{x=0}^{0} f(x)\,g(0-x)\,dx \,=\, 0 .
\]
It is also worth noting that the integral defining $f*g(t)$
is well defined and finite for every positive value of $t$. In other words, $f*g$ is a well-defined
function on $(0,\infty)$, at least whenever $f$ and $g$ are both piecewise continuous on $(0,\infty)$. (In
fact, it can then even be shown that $f*g(t)$ is a continuous function on $[0,\infty)$.)

But now observe that one of the functions in example 26.2, namely $t^{-1/2}$, blows up at
$t = 0$ and, thus, is not piecewise continuous on $(0,\infty)$. So that example also demonstrates that,
sometimes, $f*g$ is well defined on $(0,\infty)$ even though $f$ or $g$ is not piecewise continuous.
26.2
To see one reason convolution is important in the study of Laplace transforms, let us examine the
Laplace transform of the convolution of two functions $f(t)$ and $g(t)$. Our goal is a surprisingly
simple formula involving the corresponding transforms,
\[
F(s) \,=\, \mathcal{L}[f(t)]|_s \,=\, \int_0^\infty f(t)\,e^{-st}\,dt
\qquad\text{and}\qquad
G(s) \,=\, \mathcal{L}[g(t)]|_s \,=\, \int_0^\infty g(t)\,e^{-st}\,dt .
\]
(The impatient can turn to theorem 26.1 on page 523 for that formula.)
Keep in mind that we can rename the variable of integration in each of the above integrals.
In particular, note (for future reference) that
\[
F(s) \,=\, \int_0^\infty e^{-sx} f(x)\,dx
\qquad\text{and}\qquad
G(s) \,=\, \int_0^\infty e^{-sy} g(y)\,dy .
\]
Now, simply writing out the integral formulas for the Laplace transform and for the convolution yields
\begin{align*}
\mathcal{L}[f*g(t)]|_s
&= \int_{t=0}^{\infty} e^{-st}\, f*g(t)\,dt \\
&= \int_{t=0}^{\infty} e^{-st} \int_{x=0}^{t} f(x)\,g(t-x)\,dx\,dt \\
&= \int_{t=0}^{\infty} \int_{x=0}^{t} e^{-st} f(x)\,g(t-x)\,dx\,dt \\
&= \int_{t=0}^{\infty} \int_{x=0}^{t} K(x,t)\,dx\,dt
\tag{26.6}
\end{align*}
where, simply to simplify expressions in the next few lines, we've set
\[
K(x,t) \,=\, e^{-sx} f(x)\, e^{-s(t-x)} g(t-x) .
\]
It is now convenient to switch the order of integration in the last double integral. According
to the limits in that double integral, we are integrating over the region $R$ in the $XT$-plane
consisting of all $(x,t)$ for which
\[
0 < t < \infty
\quad\text{and, for each of these values of } t ,\quad
0 < x < t .
\]
As illustrated in figure 26.1, region $R$ is the portion of the first quadrant of the $XT$-plane to
the left of the line $t = x$. Equivalently, as can also be seen in this figure, $R$ is the portion of
the first quadrant above the line $t = x$. So $R$ can also be described as the set of all $(x,t)$ for
which
\[
0 < x < \infty
\quad\text{and, for each of these values of } x ,\quad
x < t < \infty .
\]
Thus,
\[
\int_{t=0}^{\infty}\int_{x=0}^{t} K(x,t)\,dx\,dt
\,=\, \iint_R K(x,t)\,dA
\,=\, \int_{x=0}^{\infty}\int_{t=x}^{\infty} K(x,t)\,dt\,dx .
\]
Figure 26.1: The region $R$ for the transform of the convolution. Note that the coordinates
of any point $(x_0, t_0)$ in $R$ must satisfy $0 < x_0 < t_0 < \infty$.
So
\[
\int_{t=0}^{\infty}\int_{x=0}^{t} K(x,t)\,dx\,dt
\,=\, \int_{x=0}^{\infty} e^{-sx} f(x) \int_{t=x}^{\infty} e^{-s(t-x)}\,g(t-x)\,dt\,dx .
\]
Let us simplify the inner integral with the substitution $y = t - x$ (remembering that $t$ is the
variable of integration in this integral):
\[
\int_{t=x}^{\infty} e^{-s(t-x)}\,g(t-x)\,dt
\,=\, \int_{y=x-x}^{\infty} e^{-sy}\,g(y)\,dy
\,=\, \int_{y=0}^{\infty} e^{-sy}\,g(y)\,dy
\,=\, G(s) \,!
\]
Thus,
\[
\mathcal{L}[f*g(t)]|_s
\,=\, \int_{x=0}^{\infty} e^{-sx} f(x)\,G(s)\,dx
\,=\, G(s) \int_{x=0}^{\infty} e^{-sx} f(x)\,dx
\,=\, G(s)\,F(s) ;
\]
that is,
\[
\mathcal{L}[f*g(t)]|_s \,=\, F(s)\,G(s) .
\]
Equivalently,
\[
f*g(t) \,=\, \mathcal{L}^{-1}[F(s)G(s)]|_t .
\]
If we had been a little more complete in our computations, we would have kept track of
the exponential order of all the functions involved (see exercise 26.9), and obtained all of the
following theorem.

Theorem 26.1 (convolution identity)
Let $f$ and $g$ be two piecewise continuous functions of exponential order $s_0$, with
\[
F(s) \,=\, \mathcal{L}[f(t)]|_s
\qquad\text{and}\qquad
G(s) \,=\, \mathcal{L}[g(t)]|_s .
\]
Then the convolution $f*g(t)$ is of exponential order $s_1$ for any $s_1 > s_0$. Moreover,
for $s > s_0$,
\[
\mathcal{L}[f*g(t)]|_s \,=\, F(s)\,G(s)
\tag{26.7}
\]
and
\[
\mathcal{L}^{-1}[F(s)G(s)]|_t \,=\, f*g(t) .
\tag{26.8}
\]
Do remember that identities (26.7) and (26.8) are equivalent. It is also worthwhile to rewrite
these identities as
\[
\mathcal{L}[f*g(t)]|_s \,=\, \mathcal{L}[f(t)]|_s \cdot \mathcal{L}[g(t)]|_s
\tag{26.7$'$}
\]
and
\[
\mathcal{L}^{-1}[F(s)G(s)]|_t \,=\, \mathcal{L}^{-1}[F(s)]|_t * \mathcal{L}^{-1}[G(s)]|_t ,
\tag{26.8$'$}
\]
respectively. These forms, especially the latter, are sometimes a little more convenient in practice.
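Identity (26.7) can be spot-checked numerically with the pair from example 26.1, where $F(s) = 1/(s-3)$ and $G(s) = 1/(s-7)$. The sketch below truncates the improper transform integral at $T = 30$ (the tail is negligible at $s = 10$); `simpson` is our own helper, and this is a sanity check, not a proof:

```python
import math

def simpson(fn, a, b, n=8000):
    """Composite Simpson approximation of the integral of fn over [a, b] (n even)."""
    h = (b - a) / n
    total = fn(a) + fn(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * fn(a + k * h)
    return total * h / 3

s = 10.0
# f(t) = e^{3t}, g(t) = e^{7t}, and f*g(t) = (1/4)(e^{7t} - e^{3t}) from example 26.1
conv = lambda t: 0.25 * (math.exp(7 * t) - math.exp(3 * t))
lhs = simpson(lambda t: math.exp(-s * t) * conv(t), 0.0, 30.0)   # L[f*g](s), truncated
rhs = 1.0 / ((s - 3) * (s - 7))                                  # F(s) G(s)
assert abs(lhs - rhs) < 1e-6
```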
!Example 26.4: Consider finding the inverse Laplace transform of
\[
\frac{1}{s^2 - 10s + 21} .
\]
Since $s^2 - 10s + 21 = (s-3)(s-7)$,
\begin{align*}
\mathcal{L}^{-1}\!\left[\frac{1}{s^2 - 10s + 21}\right]\!\bigg|_t
&= \mathcal{L}^{-1}\!\left[\frac{1}{(s-3)(s-7)}\right]\!\bigg|_t \\
&= \mathcal{L}^{-1}\!\left[\frac{1}{s-3}\cdot\frac{1}{s-7}\right]\!\bigg|_t \\
&= \mathcal{L}^{-1}\!\left[\frac{1}{s-3}\right]\!\bigg|_t * \,\mathcal{L}^{-1}\!\left[\frac{1}{s-7}\right]\!\bigg|_t
\,=\, e^{3t} * e^{7t} .
\end{align*}
As luck would have it, this convolution was computed in example 26.1 on page 517,
\[
e^{3t} * e^{7t} \,=\, \frac{1}{4}\left(e^{7t} - e^{3t}\right) .
\]
Thus,
\[
\mathcal{L}^{-1}\!\left[\frac{1}{s^2 - 10s + 21}\right]\!\bigg|_t
\,=\, e^{3t} * e^{7t}
\,=\, \frac{1}{4}\left(e^{7t} - e^{3t}\right) .
\]
The inverse transform in the last example could also have been computed using partial
fractions. Indeed, many of the inverse transforms we computed using partial fractions can also
be computed using convolution. Whether one approach or the other is preferred depends on the
opinion of the person doing the computing. However, as the next example shows, there are cases
where convolution can be applied, but not partial fractions. We will also use this example to
demonstrate how convolution naturally arises when solving differential equations.
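For the transform $1/((s-3)(s-7))$ from example 26.4, the two approaches can be compared directly. The residues below are the usual partial-fraction coefficients; the check confirms they reproduce the convolution answer:

```python
import math

# Partial fractions: 1/((s-3)(s-7)) = A/(s-3) + B/(s-7)
A = 1.0 / (3 - 7)   # residue at s = 3, i.e. -1/4
B = 1.0 / (7 - 3)   # residue at s = 7, i.e. +1/4
for t in (0.0, 0.5, 1.0, 2.0):
    via_pf = A * math.exp(3 * t) + B * math.exp(7 * t)
    via_conv = 0.25 * (math.exp(7 * t) - math.exp(3 * t))
    assert abs(via_pf - via_conv) < 1e-9 * (1 + abs(via_conv))
```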
!Example 26.5: Consider solving
\[
y'' + 9y \,=\, \frac{1}{\sqrt{t}}
\qquad\text{with}\qquad y(0) = 0 \quad\text{and}\quad y'(0) = 0 .
\]
Letting $Y(s) = \mathcal{L}[y(t)]|_s$ and taking the Laplace transform of both sides,
\[
\mathcal{L}[y'']|_s + 9\,\mathcal{L}[y]|_s \,=\, \mathcal{L}\!\left[\frac{1}{\sqrt{t}}\right]\bigg|_s ,
\]
which, because of our initial conditions, reduces to
\[
\left(s^2 + 9\right) Y(s) \,=\, \mathcal{L}\!\left[\frac{1}{\sqrt{t}}\right]\bigg|_s
\qquad\Longrightarrow\qquad
Y(s) \,=\, \frac{1}{s^2 + 9}\cdot \mathcal{L}\!\left[\frac{1}{\sqrt{t}}\right]\bigg|_s .
\]
Because the denominator here does not factor into two polynomials, we cannot use partial fractions;
we must use convolution,
\[
y(t) \,=\, \mathcal{L}^{-1}[Y(s)]|_t
\,=\, \mathcal{L}^{-1}\!\left[\frac{1}{s^2+9}\right]\!\bigg|_t * \,\mathcal{L}^{-1}\!\left[\mathcal{L}\!\left[\frac{1}{\sqrt{t}}\right]\bigg|_s\right]\!\Bigg|_t .
\]
Reversing the transform made on the right side of the above equations, we have
\[
\mathcal{L}^{-1}\!\left[\mathcal{L}\!\left[\frac{1}{\sqrt{t}}\right]\bigg|_s\right]\!\Bigg|_t \,=\, \frac{1}{\sqrt{t}} .
\]
Combining the above and recalling that constants factor out, we then obtain
\begin{align*}
y(t) \,=\, \mathcal{L}^{-1}\!\left[\frac{1}{s^2+9}\right]\!\bigg|_t * \frac{1}{\sqrt{t}}
&= \frac{1}{3}\sin(3t) * \frac{1}{\sqrt{t}} \\
&= \frac{1}{3}\left[\frac{1}{\sqrt{t}} * \sin(3t)\right] .
\end{align*}
That is,
\[
y(t) \,=\, \frac{1}{3}\int_{x=0}^{t} \frac{1}{\sqrt{x}}\,\sin(3[t-x])\,dx .
\]
Admittedly, this last integral is not easily evaluated by hand. But it is something that can
be accurately approximated for any specific (nonnegative) value of t using routines found in
many computer math packages. So it is still a usable formula.
26.3
As illustrated in our last example, convolution has a natural role in solving differential equations
when using the Laplace transform. However, if we look a little more carefully at the process
of solving differential equations using the Laplace transform, we will find that convolution can
play an even more significant role.
!Example 26.6: Let us solve the initial-value problem
\[
y'' - 10y' + 21y \,=\, f(t)
\qquad\text{with}\qquad y(0) = 0 \quad\text{and}\quad y'(0) = 0 ,
\]
where $f = f(t)$ is any Laplace transformable function. Naturally, we will use the Laplace
transform. So let
\[
Y(s) \,=\, \mathcal{L}[y(t)]|_s
\qquad\text{and}\qquad
F(s) \,=\, \mathcal{L}[f(t)]|_s .
\]
Because of our initial conditions, the identities for the transforms of the derivatives simplify considerably:
\[
\mathcal{L}[y'']|_s \,=\, s^2 Y(s) - s\,\underbrace{y(0)}_{0} - \underbrace{y'(0)}_{0} \,=\, s^2 Y(s)
\]
and
\[
\mathcal{L}[y']|_s \,=\, s\,Y(s) - \underbrace{y(0)}_{0} \,=\, s\,Y(s) .
\]
Consequently,
\begin{align*}
\mathcal{L}[y'' - 10y' + 21y]|_s &= \mathcal{L}[f(t)]|_s \\
\mathcal{L}[y'']|_s - 10\,\mathcal{L}[y']|_s + 21\,\mathcal{L}[y]|_s &= F(s) \\
s^2 Y(s) - 10s\,Y(s) + 21\,Y(s) &= F(s) \\
\left(s^2 - 10s + 21\right) Y(s) &= F(s) .
\end{align*}
Thus,
\[
Y(s) \,=\, H(s)\,F(s)
\qquad\text{where}\qquad
H(s) \,=\, \frac{1}{s^2 - 10s + 21} ,
\]
and
\[
y(t) \,=\, \mathcal{L}^{-1}[Y(s)]|_t \,=\, \mathcal{L}^{-1}[H(s)F(s)]|_t ,
\]
giving us
\[
y(t) \,=\, h*f(t)
\tag{26.9a}
\]
where
\[
h(x) \,=\, \mathcal{L}^{-1}[H(s)]|_x \,=\, \mathcal{L}^{-1}\!\left[\frac{1}{s^2-10s+21}\right]\!\bigg|_x .
\tag{26.9b}
\]
From example 26.4, we know
\[
h(x) \,=\, \frac{1}{4}\left(e^{7x} - e^{3x}\right) .
\]
With this and our chosen integral formula for $h*f$, formula (26.9a), the solution to our
initial-value problem becomes
\[
y(t) \,=\, \int_0^t \frac{1}{4}\left(e^{7x} - e^{3x}\right) f(t-x)\,dx .
\tag{26.10}
\]
Formula (26.10) is a convenient way to describe the solutions to our initial-value problem,
especially if we want to solve this problem for a number of different choices of f (t) . Using
it, we can quickly write out a relatively simple integral formula for the solution corresponding
to each f (t) . For example:
If $f(t) = e^{4t}$, then $f(t-x) = e^{4(t-x)}$ and formula (26.10) yields
\[
y(t) \,=\, \int_0^t \frac{1}{4}\left(e^{7x} - e^{3x}\right) e^{4(t-x)}\,dx ,
\]
which, after a bit of calculation, reduces to
\[
y(t) \,=\, \frac{1}{12}\,e^{7t} + \frac{1}{4}\,e^{3t} - \frac{1}{3}\,e^{4t} .
\]
If $f(t) = 1$, then formula (26.10) yields
\[
y(t) \,=\, \int_0^t \frac{1}{4}\left(e^{7x} - e^{3x}\right) dx ,
\]
and
\[
y(t) \,=\, \frac{1}{28}\,e^{7t} - \frac{1}{12}\,e^{3t} + \frac{1}{21} .
\]
If $f(t) = \sqrt[3]{t}$, then formula (26.10) yields
\[
y(t) \,=\, \frac{1}{4}\int_0^t \left(e^{7x} - e^{3x}\right)\sqrt[3]{t-x}\,dx ,
\]
which is not easily evaluated by hand, but can be accurately approximated for any value of $t$ using
routines found in our favorite computer math package.
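The $f(t) = 1$ case above gives a handy way to test formula (26.10) numerically: evaluate the convolution integral directly and compare with the closed form. A sketch (`duhamel` is our own helper name):

```python
import math

def h(x):
    # impulse response from (26.9b): h(x) = (1/4)(e^{7x} - e^{3x})
    return 0.25 * (math.exp(7 * x) - math.exp(3 * x))

def duhamel(f, t, n=2000):
    """Approximate y(t) = integral of h(x) f(t-x) dx from 0 to t
    by composite Simpson's rule."""
    if t == 0:
        return 0.0
    s = t / n
    total = h(0.0) * f(t) + h(t) * f(0.0)
    for k in range(1, n):
        x = k * s
        total += (4 if k % 2 else 2) * h(x) * f(t - x)
    return total * s / 3

t = 1.0
closed = math.exp(7 * t) / 28 - math.exp(3 * t) / 12 + 1 / 21   # the f(t) = 1 solution
assert abs(duhamel(lambda x: 1.0, t) - closed) < 1e-8
```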
?Exercise 26.1:
with
y(0) = 0
and
y (0) = 0
Generalizing what we just derived in the last example is easy. Suppose we have any second-order
initial-value problem of the form
\[
ay'' + by' + cy \,=\, f(t)
\qquad\text{with}\qquad y(0) = 0 \quad\text{and}\quad y'(0) = 0 ,
\]
where $a$, $b$ and $c$ are constants, and $f$ is any Laplace transformable function. Then, taking
the Laplace transform of both sides of the differential equation, letting
\[
Y(s) \,=\, \mathcal{L}[y(t)]|_s
\qquad\text{and}\qquad
F(s) \,=\, \mathcal{L}[f(t)]|_s ,
\]
and noting that, because of our initial conditions, the identities for the transforms of the derivatives
simplify to
\[
\mathcal{L}[y'']|_s \,=\, s^2 Y(s) - s\,\underbrace{y(0)}_{0} - \underbrace{y'(0)}_{0} \,=\, s^2 Y(s)
\]
and
\[
\mathcal{L}[y']|_s \,=\, s\,Y(s) - \underbrace{y(0)}_{0} \,=\, s\,Y(s) ,
\]
we see that
\begin{align*}
\mathcal{L}[ay'' + by' + cy]|_s &= \mathcal{L}[f(t)]|_s \\
a\,\mathcal{L}[y'']|_s + b\,\mathcal{L}[y']|_s + c\,\mathcal{L}[y]|_s &= F(s) \\
as^2 Y(s) + bs\,Y(s) + c\,Y(s) &= F(s) \\
\left(as^2 + bs + c\right) Y(s) &= F(s) .
\end{align*}
Hence,
\[
Y(s) \,=\, H(s)\,F(s)
\qquad\text{where}\qquad
H(s) \,=\, \frac{1}{as^2 + bs + c} .
\]
So,
\[
y(t) \,=\, \mathcal{L}^{-1}[Y(s)]|_t \,=\, \mathcal{L}^{-1}[H(s)F(s)]|_t ,
\]
which gives us
\[
y(t) \,=\, h*f(t)
\tag{26.11a}
\]
where
\[
h(x) \,=\, \mathcal{L}^{-1}[H(s)]|_x \,=\, \mathcal{L}^{-1}\!\left[\frac{1}{as^2 + bs + c}\right]\!\bigg|_x .
\tag{26.11b}
\]
The fact that the formula for $y$ in equation set (26.11) is the solution to
\[
ay'' + by' + cy \,=\, f(t)
\qquad\text{with}\qquad y(0) = 0 \quad\text{and}\quad y'(0) = 0
\]
is often called Duhamel's principle. The function $H(s)$ is usually referred to as the transfer
function, and its inverse transform, $h(t)$, is usually called the impulse response function.$^1$ Keep
in mind that $a$, $b$ and $c$ are constants, and that we assumed $f$ is Laplace transformable.
1 The reason why h is called the impulse response function will be revealed in chapter 28. A few authors also
As illustrated in our example, Duhamel's principle makes it easy to write down solutions to
the given initial-value problem once we have found $h$. This is especially useful if we need to
find solutions to
\[
ay'' + by' + cy \,=\, f(t)
\qquad\text{with}\qquad y(0) = 0 \quad\text{and}\quad y'(0) = 0
\]
for many different choices of $f(t)$. Moreover, the derivation extends directly to higher-order
problems: the solution to
\[
a_0 y^{(N)} + a_1 y^{(N-1)} + \cdots + a_N y \,=\, f(t)
\]
with
\[
y(0) = 0 ,\quad y'(0) = 0 ,\quad y''(0) = 0 ,\quad \ldots\quad\text{and}\quad y^{(N-1)}(0) = 0
\]
is given by
\[
y(t) \,=\, h*f(t)
\qquad\text{where}\qquad
h(x) \,=\, \mathcal{L}^{-1}[H(s)]|_x
\quad\text{and}\quad
H(s) \,=\, \frac{1}{a_0 s^N + a_1 s^{N-1} + \cdots + a_N} .
\]
1. As noted a few pages ago, the convolution $h*f$ can be computed using either
\[
\int_0^t h(x)\,f(t-x)\,dx
\qquad\text{or}\qquad
\int_0^t h(t-x)\,f(x)\,dx .
\]
In practice, use whichever appears easier to compute given the $h$ and $f$ involved. In the
examples here, we used the first. Later, when we re-examine resonance in mass/spring
systems (section 27.7), we will use the other integral formula.
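The interchangeability of the two integral formulas is easy to confirm numerically, say with the $h$ from example 26.6 and $f(t) = \sin t$. A sketch (`conv2` is a hypothetical helper, not from the text):

```python
import math

def conv2(a, b, t, n=2000):
    """Approximate the integral of a(x) b(t-x) dx from 0 to t
    by composite Simpson's rule."""
    s = t / n
    total = a(0.0) * b(t) + a(t) * b(0.0)
    for k in range(1, n):
        x = k * s
        total += (4 if k % 2 else 2) * a(x) * b(t - x)
    return total * s / 3

h = lambda x: 0.25 * (math.exp(7 * x) - math.exp(3 * x))
f = lambda x: math.sin(x)
t = 1.5
# integral of h(x) f(t-x) dx equals integral of h(t-x) f(x) dx
assert abs(conv2(h, f, t) - conv2(f, h, t)) < 1e-6
```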
2. It turns out that the $f(t)$ in Duhamel's principle (as described above) does not have to
be Laplace transformable. By applying the above theorem with
\[
f_T(t) \,=\, \begin{cases} f(t) & \text{if } t < T \\ 0 & \text{if } t \geq T \end{cases} ,
\]
and then letting $T \to \infty$, you can show that $y = h*f$ is well defined and satisfies the
initial-value problem even when $f$ is merely piecewise continuous on $(0,\infty)$.
3. If you need to solve the same sort of nonhomogeneous differential equation
but satisfying nonzero initial conditions, then you simply need to add the solution obtained
by Duhamel's principle to the solution to the corresponding homogeneous differential
equation
\[
a_0 y^{(N)} + a_1 y^{(N-1)} + \cdots + a_{N-2}\,y'' + a_{N-1}\,y' + a_N\,y \,=\, 0
\]
that satisfies the desired initial conditions.
Additional Exercises
26.2. Compute the convolution f g(t) of each of the following pairs of functions:
1
t
a. f (t) = e3t
and
g(t) = e5t
b. f (t) =
and
g(t) = 6
d. f (t) = t
c. f (t) =
e. f (t) = t 2
and
g(t) = t 2
g(t) = t 2
and
and
g(t) = e3t
g(t) = t
g(t) = sin(t)
g(t) = e2t
g(t) = t 2
26.3. Verify the associative property of convolution. That is, verify equation (26.5) on page
520.
26.4. Using convolution, compute the inverse Laplace transform of each of the following:
a. $\dfrac{1}{(s-4)(s-3)}$ \qquad b. $\dfrac{1}{s(s-3)}$ \qquad c. $\dfrac{1}{s(s^2+4)}$ \qquad d. $\dfrac{1}{(s-3)(s^2+1)}$

e. $\dfrac{1}{(s^2+9)^2}$ \qquad f. $\dfrac{s^2}{(s^2+4)^2}$ \qquad g. $\dfrac{e^{-4s}}{s(s^2+1)}$ \qquad h. $\dfrac{1}{s(s-3)}$
26.5. For each of the following initial-value problems, find the corresponding transfer function
$H$ and the impulse response function $h$, and write down the corresponding convolution
integral formula for the solution:

a. $y'' + 4y = f(t)$ with $y(0) = 0$ and $y'(0) = 0$

b. $y'' - 4y = f(t)$ with $y(0) = 0$ and $y'(0) = 0$

c. $y'' - 6y' + 9y = f(t)$ with $y(0) = 0$ and $y'(0) = 0$

d. $y'' - 6y' + 18y = f(t)$ with $y(0) = 0$ and $y'(0) = 0$

e. $y'' + 16y = f(t)$ with $y(0) = 0$ and $y'(0) = 0$
26.6. Using the results from exercise 26.5 a, find the solution to
\[
y'' + 4y \,=\, f(t)
\qquad\text{with}\qquad y(0) = 0 \quad\text{and}\quad y'(0) = 0
\]
when:

a. $f(t) = 1$ \qquad b. $f(t) = t$ \qquad d. $f(t) = \sin(2t)$
26.7. Using the results from exercise 26.5 c, find the solution to
\[
y'' - 6y' + 9y \,=\, f(t)
\qquad\text{with}\qquad y(0) = 0 \quad\text{and}\quad y'(0) = 0
\]
when:

b. $f(t) = t$ \qquad c. $f(t) = e^{3t}$ \qquad d. $f(t) = e^{3t}$ \qquad e. $f(t) = e^{\alpha t}$ where $\alpha \neq 3$
26.8. Using the results from exercise 26.5 e, find the solution to
\[
y'' + 16y \,=\, f(t)
\qquad\text{with}\qquad y(0) = 0 \quad\text{and}\quad y'(0) = 0
\]
when:

a. $f(t) = 1$ \qquad b. $f(t) = t$ \qquad d. $f(t) = \sin(4t)$
26.9. Let $f$ and $g$ be two piecewise continuous functions on the positive real line satisfying,
for all $t > 0$,
\[
|f(t)| < M_f\, e^{s_0 t}
\qquad\text{and}\qquad
|g(t)| < M_g\, e^{s_0 t}
\]
for some constants $M_f$, $M_g$ and $s_0$.

a. Show that $|f*g(t)| < M_f M_g\, t\, e^{s_0 t}$ whenever $t > 0$.

b. Why does this tell us that $f*g$ is of exponential order $s_1$ for any $s_1 > s_0$?