
Chapter 4

Integral transforms
In mathematics, an integral transform is any transform $T$ of a given function $f$ of the following form:
\[ Tf(s) = \int_{x_1}^{x_2} K(x, s)\, f(x)\, dx. \tag{4.1} \]
The input is a function $f(x)$ and the output is another function $Tf(s)$. There are different integral transforms, depending on the kernel function $K(x, s)$. The transforms we consider in this chapter are the Laplace transform and the Fourier transform.
4.1 Laplace transform
4.1.1 Basic definition and properties
To obtain the Laplace transform of a given function $f(x)$ we use the kernel $K(x, s) = e^{-sx}$, namely:
\[ \mathcal{L}\{f\} = F(s) = \int_0^{\infty} f(x)\, e^{-sx}\, dx. \tag{4.2} \]
Here $s$ can also be a complex variable, namely the Laplace transform maps a real function to a complex one. For our purposes it is enough to consider for the moment $s$ real. We can easily verify that $\mathcal{L}$ is a linear operator. In fact:
\[ \mathcal{L}\{af + bg\} = \int_0^{\infty} [a f(x) + b g(x)]\, e^{-sx}\, dx = a \int_0^{\infty} f(x)\, e^{-sx}\, dx + b \int_0^{\infty} g(x)\, e^{-sx}\, dx, \]
\[ \mathcal{L}\{af + bg\} = a\,\mathcal{L}\{f\} + b\,\mathcal{L}\{g\}. \tag{4.3} \]
Example 4.1.1 Find the Laplace transform of the function $f(x) = 1$.
It is
\[ \mathcal{L}\{1\} = \int_0^{\infty} e^{-sx}\, dx = -\frac{1}{s} \left[ e^{-sx} \right]_0^{\infty} = \frac{1}{s}, \]
provided that $s > 0$ (this ensures us that $\lim_{x \to \infty} e^{-sx} = 0$, namely that the integral $\int_0^{\infty} e^{-sx}\, dx$ converges).
Example 4.1.2 Find the Laplace transform of $f(x) = x^n$, with $n$ a positive integer.
We integrate by parts and obtain:
\[ \mathcal{L}\{x^n\} = \int_0^{\infty} x^n e^{-sx}\, dx = -\frac{1}{s} \left[ x^n e^{-sx} \right]_0^{\infty} + \frac{n}{s} \int_0^{\infty} x^{n-1} e^{-sx}\, dx = \frac{n}{s} \mathcal{L}\{x^{n-1}\}. \]
We have assumed also in this case that $s > 0$ (otherwise the integral $\int_0^{\infty} x^n e^{-sx}\, dx$ does not converge). To obtain $\mathcal{L}\{x^{n-1}\}$ we proceed the same way and obtain $\mathcal{L}\{x^{n-1}\} = \frac{n-1}{s} \mathcal{L}\{x^{n-2}\}$. We iterate $n$ times and obtain:
\[ \mathcal{L}\{x^n\} = \frac{n(n-1)(n-2)\cdots}{s^n}\, \mathcal{L}\{1\} = \frac{n!}{s^{n+1}}. \]
Example 4.1.3 Find the Laplace transform of $f(x) = \sin(mx)$.
It is $\mathcal{L}\{f(x)\} = \int_0^{\infty} e^{-sx} \sin(mx)\, dx$. By using the relation $\sin(mx) = \frac{e^{imx} - e^{-imx}}{2i}$ we obtain:
\[ \mathcal{L}\{f(x)\} = \frac{1}{2i} \left[ \int_0^{\infty} e^{(im-s)x}\, dx - \int_0^{\infty} e^{-(im+s)x}\, dx \right] = \frac{1}{2i} \left[ \left( \frac{e^{(im-s)x}}{im-s} \right)_0^{\infty} + \left( \frac{e^{-(im+s)x}}{im+s} \right)_0^{\infty} \right] = \frac{1}{2i} \left( \frac{1}{s-im} - \frac{1}{s+im} \right) = \frac{m}{s^2 + m^2}, \]
for $s > 0$. In fact, the terms $e^{(im-s)x}$ and $e^{-(im+s)x}$ can be written as $e^{-sx}[\cos(mx) \pm i\sin(mx)]$. In the limit for $x \to \infty$, only the term $e^{-sx}$ matters (the term $[\cos(mx) \pm i\sin(mx)]$ oscillates) and it tends to zero for any $s > 0$.
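Results like these are easy to spot-check with a computer algebra system. The short script below is not part of the original notes; it is a minimal sketch, assuming SymPy is available, that recomputes the three transforms obtained so far.

```python
# Sketch (my addition, assuming SymPy): recomputing the transforms of
# Examples 4.1.1-4.1.3.
import sympy as sp

x, s = sp.symbols('x s', positive=True)
m = sp.Symbol('m', positive=True)

print(sp.laplace_transform(sp.S(1), x, s, noconds=True))      # 1/s
print(sp.laplace_transform(x**3, x, s, noconds=True))         # 6/s**4, i.e. 3!/s**(3+1)
print(sp.laplace_transform(sp.sin(m*x), x, s, noconds=True))  # m/(m**2 + s**2)
```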
In these three simple cases we have seen that the integral 4.2 was convergent
for any possible value of s > 0. This is not always the case, as the two following
examples show.
Example 4.1.4 Find the Laplace transform of $f(x) = e^{ax}$.
\[ \mathcal{L}\{e^{ax}\} = \int_0^{\infty} e^{ax} e^{-sx}\, dx = \lim_{A \to \infty} \int_0^A e^{(a-s)x}\, dx = \lim_{A \to \infty} \left[ \frac{e^{(a-s)x}}{a-s} \right]_0^A. \]
It is clear that this limit exists and is finite only if $a < s$ ($a < \mathrm{Re}(s)$ if $s \in \mathbb{C}$), namely we can define the Laplace transform of the function $f(x) = e^{ax}$ only if $\mathrm{Re}(s) > a$. In this case it is:
\[ \mathcal{L}\{e^{ax}\} = \frac{1}{s - a}. \]
Example 4.1.5 Find the Laplace transform of the function $f(x) = \cosh(mx)$.
It is $\mathcal{L}\{f(x)\} = \int_0^{\infty} e^{-sx} \cosh(mx)\, dx$. By using the relation $\cosh(mx) = \frac{e^{mx} + e^{-mx}}{2}$ we obtain:
\[ \mathcal{L}\{f(x)\} = \frac{1}{2} \left[ \int_0^{\infty} e^{(m-s)x}\, dx + \int_0^{\infty} e^{-(m+s)x}\, dx \right] = \frac{1}{2} \left[ \left( \frac{e^{(m-s)x}}{m-s} \right)_0^{\infty} - \left( \frac{e^{-(m+s)x}}{m+s} \right)_0^{\infty} \right] = \frac{1}{2} \left( \frac{1}{s-m} + \frac{1}{s+m} \right) = \frac{s}{s^2 - m^2}. \]
This result holds as long as $e^{(m-s)x}$ and $e^{-(m+s)x}$ tend to zero for $x \to \infty$, namely it must be $s > |m|$.
There are a few properties of the Laplace transform that help us find the transform of more complex functions. If we know that $F(s)$ is the Laplace transform of $f(x)$, namely that $\mathcal{L}\{f(x)\} = F(s)$, then:
\[ \mathcal{L}\{e^{cx} f(x)\} = F(s - c) \tag{4.4} \]
This property comes directly from the definition of the Laplace transform, in fact:
\[ \mathcal{L}\{e^{cx} f(x)\} = \int_0^{\infty} f(x)\, e^{cx} e^{-sx}\, dx = \int_0^{\infty} f(x)\, e^{-(s-c)x}\, dx = F(s - c). \]
\[ \mathcal{L}\{f(cx)\} = \frac{1}{c}\, F\!\left(\frac{s}{c}\right), \quad (c > 0) \tag{4.5} \]
To show this it is enough to substitute $cx$ with $t$. In this way $x = \frac{t}{c}$, $dx = \frac{dt}{c}$ and therefore:
\[ \mathcal{L}\{f(cx)\} = \int_0^{\infty} e^{-sx} f(cx)\, dx = \frac{1}{c} \int_0^{\infty} e^{-\frac{s}{c} t} f(t)\, dt = \frac{1}{c}\, F\!\left(\frac{s}{c}\right). \]
\[ \mathcal{L}\{u_c(x) f(x - c)\} = e^{-sc} F(s) \tag{4.6} \]
Here $u_c(x)$ is the Heaviside or step function, namely:
\[ u_c(x) = \begin{cases} 0 & x < c \\ 1 & x \geq c \end{cases} \tag{4.7} \]
The function $u_c(x) f(x - c)$ is thus given by:
\[ u_c(x) f(x - c) = \begin{cases} 0 & x < c \\ f(x - c) & x \geq c \end{cases} \]
We have thus:
\[ \mathcal{L}\{u_c(x) f(x - c)\} = \int_c^{\infty} e^{-sx} f(x - c)\, dx. \]
With the substitution $t = x - c$ we obtain:
\[ \mathcal{L}\{u_c(x) f(x - c)\} = \int_0^{\infty} e^{-s(c + t)} f(t)\, dt = e^{-sc} F(s). \]
\[ \mathcal{L}\{x^n f(x)\} = (-1)^n F^{(n)}(s) \tag{4.8} \]
It is enough to differentiate $F(s)$ with respect to $s$ to obtain:
\[ F'(s) = \frac{d}{ds} \int_0^{\infty} e^{-sx} f(x)\, dx = -\int_0^{\infty} x\, e^{-sx} f(x)\, dx = -\mathcal{L}\{x f(x)\}. \]
If we now differentiate $F(s)$ $n$ times with respect to $s$ we obtain:
\[ F^{(n)}(s) = (-1)^n\, \mathcal{L}\{x^n f(x)\}. \]
From this, Eq. 4.8 is readily obtained.
\[ \mathcal{L}\{f'(x)\} = -f(0) + s F(s) \tag{4.9} \]
This property can be obtained by integrating $e^{-sx} f'(x)$ by parts, namely:
\[ \mathcal{L}\{f'(x)\} = \int_0^{\infty} e^{-sx} f'(x)\, dx = \left[ f(x)\, e^{-sx} \right]_0^{\infty} + s \int_0^{\infty} e^{-sx} f(x)\, dx = -f(0) + s F(s), \]
provided that $\lim_{x \to \infty} f(x)\, e^{-sx} = 0$.
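The properties above can also be spot-checked on a concrete function. The sketch below is my addition (assuming SymPy is available) and verifies Eqs. 4.4, 4.8 and 4.9 for the test function $f(x) = \sin(2x)$, whose transform is $2/(s^2 + 4)$.

```python
# Sketch (my addition, assuming SymPy): checking properties (4.4), (4.8), (4.9)
# on f(x) = sin(2x), with F(s) = 2/(s**2 + 4).
import sympy as sp

x, s = sp.symbols('x s', positive=True)
c = sp.Symbol('c', positive=True)
f = sp.sin(2*x)
F = sp.laplace_transform(f, x, s, noconds=True)

# Eq. 4.4: L{exp(c*x) f(x)} = F(s - c)
shift = sp.laplace_transform(sp.exp(c*x)*f, x, s, noconds=True)
print(sp.simplify(shift - F.subs(s, s - c)))       # 0

# Eq. 4.8 with n = 1: L{x f(x)} = -F'(s)
mult = sp.laplace_transform(x*f, x, s, noconds=True)
print(sp.simplify(mult + sp.diff(F, s)))           # 0

# Eq. 4.9: L{f'(x)} = -f(0) + s F(s)
deriv = sp.laplace_transform(sp.diff(f, x), x, s, noconds=True)
print(sp.simplify(deriv - (s*F - f.subs(x, 0))))   # 0
```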
Example 4.1.6 Find the Laplace transform of $\cos(mx)$.
We could calculate this transform directly but it is easier to use the Laplace transform of $\sin(mx)$ that we have calculated in Example 4.1.3 ($\mathcal{L}\{\sin(mx)\} = \frac{m}{s^2 + m^2}$). From Eq. 4.9 (and recalling that $\mathcal{L}$ is a linear operator) we have:
\[ \mathcal{L}\left\{ \frac{d}{dx} \sin(mx) \right\} = m\, \mathcal{L}\{\cos(mx)\} = -\sin(0) + s\, \frac{m}{s^2 + m^2}, \]
\[ \mathcal{L}\{\cos(mx)\} = \frac{s}{s^2 + m^2}. \]
Example 4.1.7 Find the Laplace transform of $x\cosh(mx)$.
We recall from Example 4.1.5 that $\mathcal{L}\{\cosh(mx)\} = F(s) = \frac{s}{s^2 - m^2}$ ($s > |m|$). Eq. 4.8 tells us that $-F'(s)$ is the Laplace transform of $x\cosh(mx)$. We have therefore:
\[ \mathcal{L}\{x\cosh(mx)\} = -F'(s) = -\frac{s^2 - m^2 - 2s^2}{(s^2 - m^2)^2} = \frac{s^2 + m^2}{(s^2 - m^2)^2}. \]
Example 4.1.8 Find the Laplace transform of the function $f(x)$ defined in this way:
\[ f(x) = \begin{cases} x & x < \pi \\ x - \cos(x - \pi) & x \geq \pi \end{cases} \]
By means of the step function (Eq. 4.7) we can rewrite $f(x)$ as $f(x) = x - u_{\pi}(x)\cos(x - \pi)$. The Laplace transform of this function can be found by means of Eq. 4.6 and of the known results $\mathcal{L}\{x^n\} = \frac{n!}{s^{n+1}}$ (Example 4.1.2) and $\mathcal{L}\{\cos(mx)\} = \frac{s}{s^2 + m^2}$ (Example 4.1.6):
\[ \mathcal{L}\{f(x)\} = \mathcal{L}\{x\} - \mathcal{L}\{u_{\pi}(x)\cos(x - \pi)\} = \frac{1}{s^2} - e^{-\pi s}\, \mathcal{L}\{\cos x\} = \frac{1}{s^2} - \frac{s\, e^{-\pi s}}{s^2 + 1}. \]
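As a hedged check of this example (my addition; it assumes SymPy and that its Laplace-transform routine handles the shifted Heaviside factor), one can transform the Heaviside form of $f(x)$ directly:

```python
# Sketch (my addition, assuming SymPy): transforming the Heaviside form of
# Example 4.1.8 and comparing with the result above.
import sympy as sp

x, s = sp.symbols('x s', positive=True)
f = x - sp.Heaviside(x - sp.pi) * sp.cos(x - sp.pi)
F = sp.laplace_transform(f, x, s, noconds=True)
print(sp.simplify(F))   # expected: 1/s**2 - s*exp(-pi*s)/(s**2 + 1)
```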
4.1.2 Solution of initial value problems by means of Laplace transforms
We have seen (Eq. 4.9) that the Laplace transform of the derivative of a function is given by $\mathcal{L}\{f'(x)\} = -f(0) + s F(s)$, where $F(s) = \mathcal{L}\{f(x)\}$. If we consider the Laplace transform of higher order derivatives we obtain (always integrating by parts):
\[ \mathcal{L}\{f''(x)\} = \int_0^{\infty} e^{-sx} f''(x)\, dx = \left[ e^{-sx} f'(x) \right]_0^{\infty} + s \int_0^{\infty} e^{-sx} f'(x)\, dx = -f'(0) - s f(0) + s^2 F(s) \]
\[ \mathcal{L}\{f'''(x)\} = \int_0^{\infty} e^{-sx} f'''(x)\, dx = \left[ e^{-sx} f''(x) \right]_0^{\infty} + s \int_0^{\infty} e^{-sx} f''(x)\, dx = -f''(0) - s f'(0) - s^2 f(0) + s^3 F(s) \]
\[ \vdots \]
\[ \mathcal{L}\{f^{(n)}(x)\} = s^n F(s) - s^{n-1} f(0) - s^{n-2} f'(0) - \cdots - s f^{(n-2)}(0) - f^{(n-1)}(0), \tag{4.10} \]
provided that $\lim_{x \to \infty} f^{(m)}(x)\, e^{-sx} = 0$, with $m = 0 \ldots n-1$. This result allows us to simplify considerably linear ODEs. Let us take for instance an initial value problem consisting of a second-order inhomogeneous ODE with constant coefficients (but the method can be applied also to more complex ODEs):
\[ \begin{cases} a_2 y''(x) + a_1 y'(x) + a_0 y(x) = f(x) \\ y(0) = y_0 \\ y'(0) = y_0' \end{cases} \]
If we now make the Laplace transform of both members of this equation (calling $Y(s)$ the Laplace transform of $y(x)$ and $F(s)$ the Laplace transform of $f(x)$), we obtain:
\[ a_2 \left[ s^2 Y(s) - s y_0 - y_0' \right] + a_1 \left[ s Y(s) - y_0 \right] + a_0 Y(s) = F(s) \]
\[ Y(s)\,(a_2 s^2 + a_1 s + a_0) = F(s) + a_1 y_0 + a_2 (s y_0 + y_0') \]
\[ Y(s) = \frac{F(s) + a_1 y_0 + a_2 (s y_0 + y_0')}{a_2 s^2 + a_1 s + a_0}. \tag{4.11} \]
Namely, we have transformed an ODE into an algebraic equation, which is of course easier to solve. Moreover, the particular solution (satisfying the given initial conditions) is automatically found, without the need to search first for the general solution and then look for the coefficients that satisfy the initial conditions. Further, homogeneous and inhomogeneous ODEs are handled in exactly the same way; it is not necessary to solve the corresponding homogeneous ODE first. The price to pay for these advantages is that Eq. 4.11 is not yet the solution of the given ODE; we should invert this relation and find the function $y(x)$ whose Laplace transform is given by $Y(s)$. This function is called the inverse Laplace transform of $Y(s)$ and it is indicated with $\mathcal{L}^{-1}\{Y(s)\}$.
Since the operator $\mathcal{L}$ is linear, it is easy to show that the inverse operator $\mathcal{L}^{-1}$ is also linear. In fact, given two functions $f_1(x)$ and $f_2(x)$ whose Laplace transforms are $F_1(s)$ and $F_2(s)$, respectively, the linearity of the operator $\mathcal{L}$ ensures us that:
\[ \mathcal{L}\{c_1 f_1(x) + c_2 f_2(x)\} = c_1 F_1(s) + c_2 F_2(s). \]
If we now apply the operator $\mathcal{L}^{-1}$ to both members of this equation we obtain:
\[ \mathcal{L}^{-1}\mathcal{L}\{c_1 f_1(x) + c_2 f_2(x)\} = \mathcal{L}^{-1}\{c_1 F_1(s) + c_2 F_2(s)\} = c_1\, \mathcal{L}^{-1}\{F_1(s)\} + c_2\, \mathcal{L}^{-1}\{F_2(s)\}. \]
To invert the function $F(s)$ it is therefore enough to split it into many (possibly simple) addends and find for each of them the inverse Laplace transform. Based on the examples in Sect. 4.1.1 (and others that we do not have time to calculate, but that can be found in the mathematical literature) it is possible to construct a "dictionary" of basic functions/expressions and corresponding Laplace transforms, as in Table 4.1. Any time we face a particular $F(s)$, we can look at the dictionary and check whether it is possible to recover the function $f(x)$ whose Laplace transform is $F(s)$.
Since the Laplace transform of the solution $y(x)$ is always in the form of a fraction (see Eq. 4.11), the method we will always use to split a function $F(s)$ into simple factors is the method of partial fractions. It is worth recalling it briefly. We assume that $F(s) = \frac{P_n(s)}{Q_m(s)}$ is the quotient between two polynomials $P_n(s)$ and $Q_m(s)$, with degrees $n$ and $m$ respectively. We will also assume $m > n$. It is always possible to factorize the polynomial $Q_m(s)$ at the denominator into factors of the type $as + b$ or factors of the type $cs^2 + ds + e$. Sometimes, when we factorize $Q_m(s)$, we obtain factors of the type $(as + b)^k$ (that means that $s = -\frac{b}{a}$ is a root with multiplicity $k$ of the polynomial $Q_m(s)$) or of the type $(cs^2 + ds + e)^k$. The method of partial fractions consists in writing the fraction $P_n(s)/Q_m(s)$ as a sum of simpler fractions of the type $\frac{A}{(as+b)^k}$ or $\frac{As + B}{(cs^2+ds+e)^k}$. The partial fractions we seek depend on the factor at the denominator, namely:
If a factor $(as + b)$ is present at the denominator, then we seek a partial fraction of the type:
\[ \frac{A}{as + b} \]
Table 4.1: Summary of elementary Laplace transforms

| f(x) = L^{-1}{F(s)}           | F(s) = L{f(x)}                                  | Convergence |
|-------------------------------|-------------------------------------------------|-------------|
| 1                             | 1/s                                             | s > 0       |
| e^{mx}                        | 1/(s - m)                                       | s > m       |
| x^n                           | n!/s^{n+1}                                      | s > 0       |
| sin(mx)                       | m/(s^2 + m^2)                                   | s > 0       |
| cos(mx)                       | s/(s^2 + m^2)                                   | s > 0       |
| sinh(mx)                      | m/(s^2 - m^2)                                   | s > m       |
| cosh(mx)                      | s/(s^2 - m^2)                                   | s > m       |
| e^{mx} sin(px)                | p/((s - m)^2 + p^2)                             | s > m       |
| e^{mx} cos(px)                | (s - m)/((s - m)^2 + p^2)                       | s > m       |
| x^n e^{mx}                    | n!/(s - m)^{n+1}                                | s > m       |
| x^{-1/2}                      | sqrt(pi/s)                                      | s > 0       |
| x^{1/2}                       | (1/2) sqrt(pi/s^3)                              | s > 0       |
| delta(x - c)                  | e^{-cs}                                         | c > 0       |
| u_c(x)                        | e^{-cs}/s                                       | s > 0       |
| u_c(x) f(x - c)               | e^{-cs} F(s)                                    |             |
| e^{cx} f(x)                   | F(s - c)                                        |             |
| f(cx)                         | (1/c) F(s/c)                                    | c > 0       |
| int_0^x f(u) du               | F(s)/s                                          |             |
| int_0^x f(x - u) g(u) du      | F(s) G(s)                                       |             |
| (-1)^n x^n f(x)               | F^{(n)}(s)                                      |             |
| f^{(n)}(x)                    | s^n F(s) - s^{n-1} f(0) - ... - f^{(n-1)}(0)    |             |
If a factor $(as + b)^k$ is present at the denominator, then we seek $k$ partial fractions of the type:
\[ \frac{A_i}{(as + b)^i}, \quad i = 1 \ldots k \]
If a factor $(cs^2 + ds + e)$ is present at the denominator, then we seek a partial fraction of the type:
\[ \frac{As + B}{cs^2 + ds + e} \]
If a factor $(cs^2 + ds + e)^k$ is present at the denominator, then we seek $k$ partial fractions of the type:
\[ \frac{A_i s + B_i}{(cs^2 + ds + e)^i}, \quad i = 1 \ldots k \]
Example 4.1.9 Use the method of partial fractions to split the function
\[ F(s) = \frac{s^3 + s^2 + 1}{s^2 (s^2 + s + 1)}. \]
We must determine the coefficients $A$, $B$, $C$, $D$ so that:
\[ \frac{A}{s} + \frac{B}{s^2} + \frac{Cs + D}{s^2 + s + 1} = \frac{s^3 + s^2 + 1}{s^2 (s^2 + s + 1)}. \]
If we multiply this equation by $s^2 (s^2 + s + 1)$, we obtain:
\[ As(s^2 + s + 1) + B(s^2 + s + 1) + Cs^3 + Ds^2 = s^3 + s^2 + 1. \]
We must now compare the coefficients with like powers of $s$, obtaining the system of equations:
\[ \begin{cases} A + C = 1 \\ A + B + D = 1 \\ A + B = 0 \\ B = 1 \end{cases} \qquad \Rightarrow \qquad \begin{cases} A = -1 \\ B = 1 \\ C = 2 \\ D = 1 \end{cases} \]
The given fraction can thus be decomposed in this way:
\[ \frac{s^3 + s^2 + 1}{s^2 (s^2 + s + 1)} = -\frac{1}{s} + \frac{1}{s^2} + \frac{2s + 1}{s^2 + s + 1}. \]
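The same decomposition can be obtained mechanically; the snippet below is my addition (assuming SymPy is available) and uses its built-in partial-fraction routine.

```python
# Sketch (my addition, assuming SymPy): the decomposition of Example 4.1.9
# via sympy.apart.
import sympy as sp

s = sp.Symbol('s')
F = (s**3 + s**2 + 1) / (s**2 * (s**2 + s + 1))
print(sp.apart(F, s))   # expected: -1/s + 1/s**2 + (2*s + 1)/(s**2 + s + 1)
```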
Example 4.1.10 Find the inverse Laplace transform of the function
\[ F(s) = \frac{s^2 + 5}{s^3 - 9s}. \]
We can write the given function as:
\[ F(s) = \frac{s^2 + 5}{s(s^2 - 9)} = \frac{s^2 + 5}{s(s - 3)(s + 3)}. \]
To invert this function we have to apply the method of partial fractions, namely:
\[ \frac{s^2 + 5}{s(s - 3)(s + 3)} = \frac{A}{s} + \frac{B}{s - 3} + \frac{C}{s + 3} = \frac{As^2 - 9A + Bs^2 + 3Bs + Cs^2 - 3Cs}{s(s - 3)(s + 3)}. \]
Now we can compare terms with like powers of $s$, obtaining the following system of equations:
\[ \begin{cases} A + B + C = 1 \\ 3B - 3C = 0 \\ -9A = 5 \end{cases} \]
From the second equation we obtain $B = C$, from the last $A = -\frac{5}{9}$. From the first equation: $2B = \frac{14}{9}$, i.e. $B = C = \frac{7}{9}$.
Now we can invert all the terms of the given function and obtain:
\[ f(x) = \mathcal{L}^{-1}\left\{ \frac{s^2 + 5}{s^3 - 9s} \right\} = \mathcal{L}^{-1}\left\{ -\frac{5}{9}\, \frac{1}{s} + \frac{7}{9} \left( \frac{1}{s - 3} + \frac{1}{s + 3} \right) \right\} = -\frac{5}{9} + \frac{14}{9}\, \mathcal{L}^{-1}\left\{ \frac{s}{s^2 - 9} \right\} = -\frac{5}{9} + \frac{14}{9}\cosh(3x). \]
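For comparison, the sketch below (my addition, assuming SymPy) inverts $F(s)$ directly; SymPy performs the partial-fraction step internally.

```python
# Sketch (my addition, assuming SymPy): inverting F(s) of Example 4.1.10.
import sympy as sp

s, x = sp.symbols('s x', positive=True)
F = (s**2 + 5) / (s**3 - 9*s)
f = sp.inverse_laplace_transform(F, s, x)
print(sp.simplify(f))   # expected: -5/9 + 14*cosh(3*x)/9 (SymPy may print the exponential form)
```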
Example 4.1.11 Solve the initial value problem
\[ \begin{cases} y''(x) + 4y(x) = e^x \\ y(0) = 0 \\ y'(0) = -1 \end{cases} \]
We have to apply the operator $\mathcal{L}$ to both members of the given ODE. Since this is a second-order ODE with constant coefficients, we can apply directly Eq. 4.11 to obtain:
\[ Y(s) = \frac{F(s) - 1}{s^2 + 4} = \frac{\frac{1}{s-1} - 1}{s^2 + 4} = \frac{2 - s}{(s - 1)(s^2 + 4)}. \]
We apply now the method of partial fractions to decompose this function:
\[ \frac{A}{s - 1} + \frac{Bs + C}{s^2 + 4} = \frac{2 - s}{(s - 1)(s^2 + 4)} \]
\[ As^2 + 4A + Bs^2 + Cs - Bs - C = 2 - s. \]
By equating the terms with like powers of $s$ we obtain the system of equations:
\[ \begin{cases} A + B = 0 \\ C - B = -1 \\ 4A - C = 2 \end{cases} \quad \Rightarrow \quad \begin{cases} A = -B \\ C = B - 1 \\ -4B - B + 1 = 2 \end{cases} \quad \Rightarrow \quad \begin{cases} B = -\frac{1}{5} \\ A = \frac{1}{5} \\ C = -\frac{6}{5} \end{cases} \]
The decomposed $Y(s)$ is thus given by:
\[ Y(s) = \frac{1}{5}\, \frac{1}{s - 1} - \frac{1}{5}\, \frac{s}{s^2 + 4} - \frac{6}{5}\, \frac{1}{s^2 + 4}. \]
With the help of Table 4.1 we can easily identify the inverse Laplace transforms of these addends, obtaining therefore:
\[ y(x) = \frac{e^x}{5} - \frac{\cos(2x)}{5} - \frac{3\sin(2x)}{5}. \]
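The result can be cross-checked against a standard ODE solver; the sketch below is my addition (assuming SymPy) and uses dsolve instead of the Laplace-transform route.

```python
# Sketch (my addition, assuming SymPy): solving the IVP of Example 4.1.11
# with dsolve and comparing with the Laplace-transform result.
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) + 4*y(x), sp.exp(x))
sol = sp.dsolve(ode, y(x), ics={y(0): 0, y(x).diff(x).subs(x, 0): -1})
expected = sp.exp(x)/5 - sp.cos(2*x)/5 - 3*sp.sin(2*x)/5
print(sp.simplify(sol.rhs - expected))   # 0
```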
The method of the Laplace transform is sometimes more convenient, sometimes less
convenient compared to traditional methods of ODE resolution. It proves however
to be always more convenient in the case in which the inhomogeneous function is
a step function. In fact, in this case the only available traditional method is the
laborious variation of constants, whereas the Laplace transform of the step function
can be readily found.
Example 4.1.12 Find the solution of the initial value problem:
\[ \begin{cases} y''(x) + y(x) = g(x) \\ y(0) = 0 \\ y'(0) = 0 \end{cases} \]
where $g(x)$ is given by:
\[ g(x) = \begin{cases} 0 & 0 \leq x < 1 \\ x - 1 & 1 \leq x < 2 \\ 1 & x \geq 2 \end{cases} \]
(also known as "ramp loading").
The function $g(x)$ can be written as:
\[ g(x) = u_1(x)(x - 1) - u_2(x)(x - 2), \]
where $u_c(x)$ is the Heaviside function (Eq. 4.7). In fact, for $x < 1$ both $u_1$ and $u_2$ are zero. For $x$ between 1 and 2, $u_1 = 1$ but $u_2$ is still zero. For $x \geq 2$ both functions are 1 and therefore $u_1(x)(x - 1) - u_2(x)(x - 2) = x - 1 - x + 2 = 1$. If we make the Laplace transform of both members of the given ODE we obtain:
\[ s^2 Y(s) + Y(s) = \frac{e^{-s}}{s^2} - \frac{e^{-2s}}{s^2} \]
\[ Y(s) = \left( e^{-s} - e^{-2s} \right) \frac{1}{s^2 (s^2 + 1)} = \left( e^{-s} - e^{-2s} \right) \frac{1 + s^2 - s^2}{s^2 (s^2 + 1)} = \frac{e^{-s} - e^{-2s}}{s^2} - \frac{e^{-s} - e^{-2s}}{s^2 + 1}. \]
To invert this function $Y(s)$ we use again the relation $\mathcal{L}\{u_c(x) f(x - c)\} = e^{-cs} F(s)$ (and therefore $\mathcal{L}^{-1}\{e^{-cs} F(s)\} = u_c(x) f(x - c)$) to obtain:
\[ y(x) = u_1(x)(x - 1) - u_2(x)(x - 2) - u_1(x)\sin(x - 1) + u_2(x)\sin(x - 2). \]
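A hedged check of this inversion (my addition; it assumes SymPy and that its inverse-transform routine applies the shift theorem to the $e^{-cs}$ factors):

```python
# Sketch (my addition, assuming SymPy): inverting Y(s) for the ramp-loading
# problem and comparing with the closed form above.
import sympy as sp

s, x = sp.symbols('s x', positive=True)
Y = (sp.exp(-s) - sp.exp(-2*s)) / (s**2 * (s**2 + 1))
y = sp.inverse_laplace_transform(Y, s, x)
u = sp.Heaviside
expected = u(x - 1)*((x - 1) - sp.sin(x - 1)) - u(x - 2)*((x - 2) - sp.sin(x - 2))
print(sp.simplify(y - expected))   # expected: 0 (up to how SymPy groups the Heaviside terms)
```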
Among the results presented in Table 4.1, a very significant one concerns the Dirac delta function $\delta(x - c)$. We recall here briefly what the Dirac delta function is and what its properties are. Given a function $g(x)$ defined in the following way:
\[ g(x) = d_{\epsilon}(x) = \begin{cases} \frac{1}{2\epsilon} & -\epsilon < x < \epsilon \\ 0 & x \leq -\epsilon \ \text{or} \ x \geq \epsilon \end{cases} \tag{4.12} \]
it is clear that the integral of this function is 1 for any possible choice of $\epsilon$, in fact:
\[ \int_{-\infty}^{\infty} g(x)\, dx = \int_{-\epsilon}^{\epsilon} \frac{1}{2\epsilon}\, dx = 1. \]
It is also clear that if $\epsilon$ tends to zero, the interval of values of $x$ in which $g(x)$ is different from zero becomes narrower and narrower until it disappears. Analogously, the function $g(x - c) = d_{\epsilon}(x - c)$ is non-null only in a narrow interval of $x$ centered on $c$ that disappears for $\epsilon$ tending to zero. The limit of the function $g(x) = d_{\epsilon}(x)$ for $\epsilon \to 0$ is called the Dirac delta function and is indicated with $\delta(x)$. It is therefore characterized by the properties:
\[ \delta(x - c) = 0 \quad \forall x \neq c \tag{4.13} \]
\[ \int_{-\infty}^{\infty} \delta(x)\, dx = 1. \tag{4.14} \]
Given a generic function $f(x)$, if we integrate $f(x)\delta(x - c)$ between $-\infty$ and $\infty$ we obtain:
\[ \int_{-\infty}^{\infty} f(x)\delta(x - c)\, dx = \lim_{\epsilon \to 0} \frac{1}{2\epsilon} \int_{c-\epsilon}^{c+\epsilon} f(x)\, dx = \lim_{\epsilon \to 0} \frac{1}{2\epsilon} \left[ 2\epsilon\, f(\bar{x}) \right], \quad \bar{x} \in [c - \epsilon, c + \epsilon]. \]
The last step is justified by the mean value theorem for integrals. But the interval of values in which $\bar{x}$ must be taken collapses to the point $c$ for $\epsilon \to 0$, therefore we obtain the important property of the Dirac delta function:
\[ \int_{-\infty}^{\infty} f(x)\delta(x - c)\, dx = f(c). \tag{4.15} \]
To calculate the Laplace transform of $\delta(x - c)$ (with $c \geq 0$) it is convenient to calculate first the Laplace transform of the function $d_{\epsilon}(x - c)$ and then take the limit $\epsilon \to 0$, namely:
\[ \mathcal{L}\{\delta(x - c)\} = \lim_{\epsilon \to 0} \int_0^{\infty} e^{-sx} d_{\epsilon}(x - c)\, dx = \lim_{\epsilon \to 0} \int_{c-\epsilon}^{c+\epsilon} \frac{e^{-sx}}{2\epsilon}\, dx = \lim_{\epsilon \to 0} \frac{e^{-s(c-\epsilon)} - e^{-s(c+\epsilon)}}{2\epsilon s} = e^{-sc} \lim_{\epsilon \to 0} \frac{e^{s\epsilon} - e^{-s\epsilon}}{2\epsilon s} = e^{-sc} \lim_{\epsilon \to 0} \frac{e^{s\epsilon} + e^{-s\epsilon}}{2} = e^{-sc}. \]
The last step is justified by l'Hopital's rule for limits. In this way we have found the result reported in Table 4.1 about the Laplace transform of $\delta(x - c)$. In the case that $c = 0$ we have $\mathcal{L}\{\delta(x)\} = 1$.
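This limiting construction is also easy to see numerically. The sketch below is my addition (plain NumPy assumed): it approximates $\delta(x - c)$ by the pulse $d_{\epsilon}$ of Eq. 4.12 and checks that its Laplace transform approaches $e^{-sc}$ as $\epsilon \to 0$.

```python
# Numerical sketch (my addition): Laplace transform of the pulse d_eps(x - c)
# approaches exp(-s*c) as eps -> 0 (midpoint-rule quadrature).
import numpy as np

def laplace_of_pulse(s, c, eps, n=100000):
    h = 2 * eps / n                           # width of each subinterval
    x = c - eps + h * (np.arange(n) + 0.5)    # midpoints of [c - eps, c + eps]
    return np.sum(np.exp(-s * x)) * h / (2 * eps)

s, c = 1.5, 2.0
for eps in (0.1, 0.01, 0.001):
    print(eps, laplace_of_pulse(s, c, eps), np.exp(-s * c))
```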
4.1.3 The Bromwich integral
Although for most practical purposes the inverse Laplace transform of a given function $F(s)$ can be found by means of the dictionary provided by Tab. 4.1 (or of more extended tables that can be found in the literature), a general formula for the inversion of $F(s)$ can be found treating $F(s)$ as a complex function and is given by the so-called Bromwich integral:
\[ f(x) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} e^{sx} F(s)\, ds, \tag{4.16} \]
where $\gamma$ is a real positive number and is larger than the real parts of all the singularities of $e^{sx} F(s)$. Since $F(s)$ has been defined as the integral of $e^{-sx} f(x)$ between $x = 0$ and $x = \infty$, we will consider in this formula only positive values of $x$, as well. In practice, the integral must be performed along the infinite line $L$, parallel to the imaginary axis, indicated in Fig. 4.1.

[Figure 4.1: The infinite line L along which the Bromwich integral must be performed.]

At this point, a curve must be chosen in order to close the contour $C$. Possible completion paths are for instance the curves $\Gamma_1$ or $\Gamma_2$ indicated in Fig. 4.2, namely the half-circles with radius $R$ on the left and on the right of $L$, respectively. For $R \to \infty$ these curves make with $L$ a closed contour. The Bromwich integral can be evaluated by means of the residue theorem provided that the integral of the function $e^{sx} F(s)$ tends to zero for $R$ (radius of the chosen half-circle) tending to infinity. If we choose the completion path $\Gamma_1$, then the residue theorem ensures us that:
\[ f(x) = \frac{1}{2\pi i}\, 2\pi i \sum_{C} R_j = \sum_{C} R_j, \tag{4.17} \]
where the sum is extended to all the residues $R_j$ of the function $e^{sx} F(s)$ in the complex plane. In fact, by construction $L$ lies on the right of each singularity of $e^{sx} F(s)$ and in the limit $R \to \infty$ the closed curve $C = L + \Gamma_1$ will enclose them all (including for instance the singularity $z_1$ that in Fig. 4.2 is not yet enclosed in $C$). If we instead choose the completion path $\Gamma_2$, then the closed curve $L + \Gamma_2$ will enclose no singularities and therefore $f(x)$ will be zero.

[Figure 4.2: Possible contour completions for the integration path L to use in the Bromwich integral.]
Example 4.1.13 Find the inverse Laplace transform of the function
\[ F(s) = \frac{2e^{-2s}}{s^2 + 4}. \]
From the relation $\mathcal{L}\{u_c(x) f(x - c)\} = F(s)\, e^{-cs}$ we can already derive the inverse Laplace transform of the given function, namely $u_2(x)\sin[2(x - 2)]$. We check if we can obtain the same result by means of the Bromwich integral. We have to evaluate the integral
\[ \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} \frac{2e^{s(x-2)}}{s^2 + 4}\, ds. \]
We notice first that the given function has two simple poles at $s = 2i$ and $s = -2i$ (in fact $s^2 + 4 = (s + 2i)(s - 2i)$), both of which have $\mathrm{Re}(z) = 0$. We can therefore take an arbitrarily small (but positive) value of $\gamma$. We can distinguish two cases: i) $x < 2$ and ii) $x > 2$. For $x < 2$ the exponent $s(x - 2)$ has negative real part if $\mathrm{Re}(s) > 0$. We notice here that $e^{s(x-2)} = e^{(x-2)\mathrm{Re}(s)}\, e^{i(x-2)\mathrm{Im}(s)}$, therefore what determines the behavior of this function at infinity is $e^{(x-2)\mathrm{Re}(s)}$ ($e^{i(x-2)\mathrm{Im}(s)}$ has modulus 1 and does not create problems). That means that, for $\mathrm{Re}(s) \to +\infty$, the function $e^{s(x-2)}$ tends to zero. At the same time the denominator $s^2 + 4$ diverges as $\mathrm{Re}(s) \to +\infty$ and this means that also the term $\frac{1}{s^2+4}$ that multiplies $e^{s(x-2)}$ tends to zero along the curve $\Gamma_2$ for $R \to \infty$. Therefore the integral of the function $F(s)e^{sx}$ tends to zero along the curve $\Gamma_2$ of Fig. 4.2 (for $R \to \infty$) and we can calculate the Bromwich integral by means of the contour $C = L + \Gamma_2$. From what we have learned, since the given closed contour does not enclose the poles, the function $f(x)$ is zero.
For $x > 2$, the function $e^{s(x-2)}$ tends to zero for $\mathrm{Re}(s) \to -\infty$. That means that the integral of the function $\frac{e^{s(x-2)}}{s^2+4}$ tends to zero (for $R \to \infty$) along the curve $\Gamma_1$ of Fig. 4.2 and we take therefore $\Gamma_1$ as a completion of $L$ to calculate the Bromwich integral. By the residue theorem, this integral is given by the sum of the residues of the function $e^{sx} F(s)$ at all the poles, namely:
\[ f(x) = \mathrm{Res}(2i) + \mathrm{Res}(-2i). \]
We have:
\[ \mathrm{Res}(2i) = \lim_{s \to 2i} (s - 2i)\, \frac{2e^{s(x-2)}}{s^2 + 4} = \lim_{s \to 2i} \frac{2e^{s(x-2)}}{s + 2i} = \frac{e^{2i(x-2)}}{2i} \]
\[ \mathrm{Res}(-2i) = \lim_{s \to -2i} (s + 2i)\, \frac{2e^{s(x-2)}}{s^2 + 4} = \lim_{s \to -2i} \frac{2e^{s(x-2)}}{s - 2i} = -\frac{e^{-2i(x-2)}}{2i} \]
By summing up these two residues we obtain:
\[ f(x) = \frac{1}{2i} \left[ e^{2i(x-2)} - e^{-2i(x-2)} \right] = \sin[2(x - 2)]. \]
This is what we obtain if $x > 2$ whereas, as we have seen, if $x$ is smaller than 2 the function is zero. Recalling the definition of the Heaviside function $u_c(x)$ we can conclude that the inverse Laplace transform of the given function is:
\[ f(x) = \mathcal{L}^{-1}\left\{ \frac{2e^{-2s}}{s^2 + 4} \right\} = u_2(x)\sin[2(x - 2)]. \]
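A simple numerical cross-check of this result (my addition, assuming NumPy and SciPy): rather than inverting numerically, transform the claimed answer $u_2(x)\sin[2(x-2)]$ forward and compare with $F(s)$ for a few real values of $s$.

```python
# Numerical sketch (my addition): forward transform of u_2(x) sin[2(x-2)]
# compared with F(s) = 2 exp(-2s)/(s**2 + 4).
import numpy as np
from scipy.integrate import quad

def forward(s):
    # integral from 2 to infinity of exp(-s*x) * sin(2*(x - 2)) dx
    val, _ = quad(lambda x: np.exp(-s * x) * np.sin(2 * (x - 2)), 2, np.inf)
    return val

for s in (0.5, 1.0, 2.0):
    print(s, forward(s), 2 * np.exp(-2 * s) / (s**2 + 4))
```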
Example 4.1.14 Find the inverse Laplace transform of the function:
\[ F(s) = \sqrt{s - a}, \]
with $a \in \mathbb{R}$.
The function $e^{sx}\sqrt{s - a}$ has no poles, but the function $\sqrt{z}$ is multiple-valued in the complex plane, therefore, as we have seen, a branch point is present at the point $z = 0$, namely at $s = a$. This is the only singularity of our $F(s)e^{sx}$ and therefore, in order to evaluate the Bromwich integral, we have to take $\gamma$ larger than $a$. The integral to calculate will be:
\[ \mathcal{L}^{-1}\{\sqrt{s - a}\} = \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} \sqrt{s - a}\; e^{sx}\, ds. \]
By means of the substitution $z = s - a$ we obtain:
\[ \mathcal{L}^{-1}\{\sqrt{s - a}\} = \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} \sqrt{z}\; e^{(z+a)x}\, dz = \frac{e^{ax}}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} \sqrt{z}\; e^{zx}\, dz. \]
In this case, the branch point is at zero, therefore the abscissa of the integration line can be taken arbitrarily small (but always larger than zero). Since $z = 0$ is a branch point of the function to integrate, we have to introduce a branch cut to evaluate the integral. Although we have taken so far the positive real axis as a branch cut, we have also said that this choice is arbitrary and to make the function $\sqrt{z}$ single-valued it is enough that closed curves are not allowed to enclose the origin. We can therefore take as branch cut the negative real axis. In Fig. 4.3 we indicate the contour we must use to integrate the given function. Since the closed contour $C = L + \Gamma_1 + r_1 + \gamma + r_2 + \Gamma_2$ does not enclose singularities, its integral is zero. To evaluate the Bromwich integral (namely the integral along $L$) we have to calculate the integral along the arcs $\Gamma_1$ and $\Gamma_2$, along the straight lines $r_1$ and $r_2$ and along the small circle $\gamma$ around the origin.
Since the function $\sqrt{z}\, e^{zx}$ tends to zero for $\mathrm{Re}(z) \to -\infty$ (the term $\sqrt{z}$ cannot counteract the exponential decay of $e^{zx}$; remember that $x$ must be positive), the integral along the arcs $\Gamma_1$ and $\Gamma_2$ disappears.
To evaluate the integral along $\gamma$ we take as usual $z = \epsilon e^{i\theta}$ and we take the limit for $\epsilon \to 0$. The interval of values of $\theta$ is $[\pi, -\pi]$; in fact, as we arrive at $\gamma$ the argument will first be $\pi$. Then, we rotate clockwise around the origin and after a whole circuit the argument will be $-\pi$. Since $dz = i\epsilon e^{i\theta} d\theta$ we have:
\[ \int_{\gamma} \sqrt{z}\, e^{zx}\, dz = \int_{\pi}^{-\pi} \sqrt{\epsilon}\, e^{i\theta/2}\, e^{x\epsilon e^{i\theta}}\, i\epsilon e^{i\theta}\, d\theta. \]
The integrand clearly tends to zero for $\epsilon \to 0$, therefore there is no contribution from the integral over $\gamma$.
Along the straight lines $r_1$ and $r_2$ we can assume that the arguments of the complex numbers lying on them are $\pi$ (along $r_1$) and $-\pi$ (along $r_2$) and that their imaginary parts tend to zero, therefore we have $z = re^{i\pi}$ ($r_1$) and $z = re^{-i\pi}$ ($r_2$). Consequently, $dz = e^{i\pi} dr$ ($r_1$) and $dz = e^{-i\pi} dr$ ($r_2$). Notice here that, although we are on the negative real axis, $r$ is positive. In fact, $e^{i\pi} = e^{-i\pi} = -1$.

[Figure 4.3: Contour to use in Example 4.1.14.]
The parameter $r$ runs between $+\infty$ and 0 (along $r_1$) and between 0 and $+\infty$ (along $r_2$). The integral of the given function along $r_1$ turns out to be:
\[ \int_{r_1} \sqrt{z}\, e^{zx}\, dz = \int_{\infty}^{0} \sqrt{r}\, e^{i\pi/2}\, e^{x r e^{i\pi}}\, e^{i\pi}\, dr = \int_{\infty}^{0} \sqrt{r}\; i\; e^{-xr}\, (-1)\, dr = i \int_0^{\infty} \sqrt{r}\, e^{-xr}\, dr. \]
Along $r_2$ we have:
\[ \int_{r_2} \sqrt{z}\, e^{zx}\, dz = \int_0^{\infty} \sqrt{r}\, e^{-i\pi/2}\, e^{x r e^{-i\pi}}\, e^{-i\pi}\, dr = \int_0^{\infty} \sqrt{r}\, (-i)\, e^{-xr}\, (-1)\, dr = i \int_0^{\infty} \sqrt{r}\, e^{-xr}\, dr. \]
In the end we have:
\[ f(x) = \mathcal{L}^{-1}\{\sqrt{s - a}\} = -\frac{e^{ax}}{2\pi i} \int_{r_1 + r_2} \sqrt{z}\, e^{zx}\, dz = -\frac{e^{ax}}{\pi} \int_0^{\infty} \sqrt{r}\, e^{-xr}\, dr. \]
The minus sign is due to the fact that, as we have said, the integral along the whole closed curve $C$ is zero, therefore $\int_L F(s)e^{sx}\, ds = -\int_{r_1+r_2} F(s)e^{sx}\, ds$. To evaluate the integral $\int \sqrt{r}\, e^{-xr}\, dr$ we make the substitution $xr = t^2$, therefore $r = \frac{t^2}{x}$ and $dr = \frac{2t\, dt}{x}$. We obtain:
\[ \int_0^{\infty} \sqrt{r}\, e^{-xr}\, dr = \frac{1}{x^{3/2}} \int_0^{\infty} t\, e^{-t^2}\, 2t\, dt. \]
Since $2t\, e^{-t^2}\, dt$ is the differential of $-e^{-t^2}$ we can integrate the given function by parts and obtain:
\[ \int_0^{\infty} \sqrt{r}\, e^{-xr}\, dr = \frac{1}{x^{3/2}} \left[ \left( -t\, e^{-t^2} \right)_0^{\infty} + \int_0^{\infty} e^{-t^2}\, dt \right]. \]
The term in square brackets is zero. By using the known result $\int_0^{\infty} e^{-t^2}\, dt = \frac{\sqrt{\pi}}{2}$ we obtain:
\[ \int_0^{\infty} \sqrt{r}\, e^{-xr}\, dr = \frac{\sqrt{\pi}}{2 x^{3/2}}. \]
This result completes our inversion of the function $F(s) = \sqrt{s - a}$, namely we have:
\[ f(x) = \mathcal{L}^{-1}\{\sqrt{s - a}\} = -\frac{e^{ax}}{2\sqrt{\pi x^3}}. \]
4.2 Fourier transforms
Fourier transforms are widely used in physics and astronomy because they allow us to express a function (not necessarily periodic) as a superposition of sinusoidal functions, therefore we devote this section to them. Since Fourier transforms are used mostly to represent time-varying functions, we shall use $t$ as independent variable instead of $x$. On the other hand, the transformed variable represents for most applications a frequency and will be indicated with $\omega$ instead of $s$.
4.2.1 Fourier series
For some physical applications, we might need to expand in series some functions that are not continuous or not differentiable and that therefore do not admit a Taylor series. Fourier series allow us to represent periodic functions, for which a Taylor expansion does not exist, as a superposition of sine and cosine functions. Given a periodic function $f(t)$ with period $T$ such that the integral of $|f(t)|$ over one period converges, $f(t)$ can be expressed in this way:
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\left( \frac{2\pi n t}{T} \right) + b_n \sin\left( \frac{2\pi n t}{T} \right) \right], \]
where the constant coefficients $a_n$, $b_n$ are called Fourier coefficients. Defining the angular frequency $\omega = \frac{2\pi}{T}$ we simplify this expression into:
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos(n\omega t) + b_n \sin(n\omega t) \right], \tag{4.18} \]
namely the function $f(t)$ can be expressed as a superposition of an infinite number of sinusoidal functions having periods $T_n = \frac{2\pi}{n\omega}$.
It can be shown that these coefficients are given by:
\[ a_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \cos(n\omega t)\, dt \tag{4.19} \]
\[ b_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \sin(n\omega t)\, dt \tag{4.20} \]
Example 4.2.1 Find the Fourier series expansion of the function
\[ f(t) = \begin{cases} -1 & -\frac{T}{2} + kT \leq t < kT \\ 1 & kT \leq t < \frac{T}{2} + kT \end{cases} \]
(for any integer $k$). This is a square wave: a series of positive impulses followed periodically by negative impulses of the same intensity. We can notice immediately that the function $f(t)$ is odd ($f(-t) = -f(t)$). Since the function $\cos(n\omega t)$ is even, the whole function $f(t)\cos(n\omega t)$ is odd and its integral between $-T/2$ and $T/2$ is zero. That means that the coefficients $a_n$ are zero.
To find the coefficients $b_n$ we apply Eq. 4.20, obtaining:
\[ b_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \sin(n\omega t)\, dt = \frac{2}{T} \left[ -\int_{-T/2}^{0} \sin(n\omega t)\, dt + \int_0^{T/2} \sin(n\omega t)\, dt \right] = \frac{4}{T} \int_0^{T/2} \sin(n\omega t)\, dt = -\frac{2}{n\pi} \left[ \cos(n\omega t) \right]_0^{T/2} = \frac{2}{n\pi} \left[ 1 - \cos(n\pi) \right]. \]
Here we have used the relation $\omega T = 2\pi$. We can notice here that $\cos(n\pi)$ is 1 if $n$ is even and $-1$ if $n$ is odd, namely $\cos(n\pi) = (-1)^n$. We could find the same result by means of de Moivre's theorem applied to the complex number $z = e^{i\pi}$. The coefficients $b_n$ are thus equal to zero if $n$ is even and to $\frac{4}{n\pi}$ if $n$ is odd. The Fourier expansion we looked for is therefore:
\[ f(t) = \frac{4}{\pi} \left[ \sin(\omega t) + \frac{\sin(3\omega t)}{3} + \frac{\sin(5\omega t)}{5} + \ldots \right]. \]
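The convergence of this series is easy to observe numerically. The sketch below is my addition (plain NumPy assumed) and evaluates partial sums for the square wave with $T = 2\pi$ (so $\omega = 1$) at a few interior points, where the exact value is $+1$.

```python
# Numerical sketch (my addition): partial sums of the square-wave Fourier series
# (T = 2*pi, omega = 1) at points inside (0, pi), where f(t) = +1.
import numpy as np

def partial_sum(t, n_terms):
    ns = np.arange(1, 2 * n_terms, 2)                      # odd harmonics only
    return (4 / np.pi) * np.sum(np.sin(np.outer(t, ns)) / ns, axis=1)

t = np.array([0.5, 1.0, 2.0])
for n_terms in (5, 50, 500):
    print(n_terms, partial_sum(t, n_terms))                # tends to [1, 1, 1]
```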
By using the identities $\cos z = (e^{iz} + e^{-iz})/2$ and $\sin z = (e^{iz} - e^{-iz})/2i$ the Fourier expansion of a function $f(t)$ can also be written as:
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n\, \frac{e^{in\omega t} + e^{-in\omega t}}{2} + b_n\, \frac{e^{in\omega t} - e^{-in\omega t}}{2i} \right] = \frac{a_0\, e^{i 0 \omega t}}{2} + \frac{1}{2} \sum_{n=1}^{\infty} \left[ (a_n - i b_n)\, e^{in\omega t} + (a_n + i b_n)\, e^{-in\omega t} \right]. \]
In this way we can see that the function $f(t)$ can be expressed as a sum, extending from $-\infty$ to $+\infty$, of terms of the form $e^{i\omega_n t}$, where $\omega_n = n\omega$, namely we have:
\[ f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{i\omega_n t}; \qquad c_n = \begin{cases} \frac{1}{2}(a_n - i b_n) & n \geq 0 \\ \frac{1}{2}(a_{-n} + i b_{-n}) & n < 0 \end{cases} \tag{4.21} \]
This compact representation of the periodic function $f(t)$ is called the complex Fourier series. If we combine the coefficients $a_n$ and $b_n$ as indicated in Eq. 4.21 we find that, irrespective of the sign of $n$, we have:
\[ c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(t)\, e^{-i\omega_n t}\, dt. \tag{4.22} \]
4.2.2 From Fourier series to Fourier transform
We have seen that the Fourier series allows us to describe periodic functions as superpositions of sinusoidal functions characterized by angular frequencies $\omega_n$. To represent non-periodic functions, what we can do is to extend the period $T$ to infinity (every function can be considered periodic if the period is large enough). That corresponds to considering a vanishingly small frequency quantum $\Delta\omega = \omega_{n+1} - \omega_n = \frac{2\pi}{T}$ and therefore a continuous spectrum of angular frequencies $\omega_n$. Given a function $f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{i\omega_n t}$, with $c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(u)\, e^{-i\omega_n u}\, du$, we want to see what happens in the limit $T \to \infty$ (or, analogously, $\Delta\omega = \frac{2\pi}{T} \to 0$). We have:
\[ f(t) = \sum_{n=-\infty}^{\infty} \frac{1}{T} \int_{-T/2}^{T/2} f(u)\, e^{-i\omega_n u}\, du\; e^{i\omega_n t} = \sum_{n=-\infty}^{\infty} \frac{\Delta\omega}{2\pi} \int_{-T/2}^{T/2} f(u)\, e^{-i\omega_n u}\, du\; e^{i\omega_n t}. \]
In the limit $T \to \infty$ and $\Delta\omega \to 0$ the limits of the integration extend to infinity, the sum becomes an integral and the discrete values $\omega_n$ become a continuous variable $\omega$ (with $\Delta\omega \to d\omega$). We have thus:
\[ f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega\, e^{i\omega t} \int_{-\infty}^{\infty} du\, f(u)\, e^{-i\omega u}. \tag{4.23} \]
From this relation we can define the Fourier transform of a function $f(t)$ as:
\[ \tilde{f}(\omega) = \mathcal{F}\{f(t)\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt. \tag{4.24} \]
Here we require, in order for this integration to be possible, that $\int_{-\infty}^{\infty} |f(t)|\, dt$ is finite. Unlike the Laplace transform, the Fourier transform is very easy to invert. In fact, we can directly see from Eq. 4.23 that:
\[ f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{f}(\omega)\, e^{i\omega t}\, d\omega. \tag{4.25} \]
Example 4.2.2 Find the Fourier transform of the normalized Gaussian distribution
\[ f(t) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{t^2}{2\sigma^2}}. \]
By definition of Fourier transform we have:
\[ \tilde{f}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt = \frac{1}{2\pi\sigma} \int_{-\infty}^{\infty} e^{-i\omega t - \frac{t^2}{2\sigma^2}}\, dt. \]
We can modify the exponent of $e$ in the integral as follows:
\[ -i\omega t - \frac{t^2}{2\sigma^2} = -\frac{1}{2\sigma^2} \left[ t^2 + 2i\omega t \sigma^2 + (i\omega\sigma^2)^2 - (i\omega\sigma^2)^2 \right]. \]
The first 3 addends inside the square brackets are the square of $t + i\omega\sigma^2$, namely we obtain:
\[ -i\omega t - \frac{t^2}{2\sigma^2} = -\frac{(t + i\omega\sigma^2)^2}{2\sigma^2} + \frac{(i\omega\sigma^2)^2}{2\sigma^2} = -\left( \frac{t + i\omega\sigma^2}{\sqrt{2}\,\sigma} \right)^2 - \frac{1}{2}\sigma^2\omega^2. \]
Since the term $e^{-\frac{1}{2}\sigma^2\omega^2}$ does not depend on $t$ we obtain:
\[ \tilde{f}(\omega) = \frac{1}{2\pi\sigma}\, e^{-\frac{1}{2}\sigma^2\omega^2} \int_{-\infty}^{\infty} e^{-\left( \frac{t + i\omega\sigma^2}{\sqrt{2}\,\sigma} \right)^2}\, dt. \]
This is the integral of a complex function, therefore we should use the methods of complex integration we have learned so far. However, we can see that the integration simplifies significantly by means of the substitution:
\[ \frac{t + i\omega\sigma^2}{\sqrt{2}\,\sigma} = s, \qquad dt = \sqrt{2}\,\sigma\, ds. \]
In this way we obtain:
\[ \tilde{f}(\omega) = \frac{\sqrt{2}\,\sigma}{2\pi\sigma}\, e^{-\frac{1}{2}\sigma^2\omega^2} \int_{-\infty}^{\infty} e^{-s^2}\, ds = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\sigma^2\omega^2}, \]
where we have made use of the known result $\int_{-\infty}^{\infty} e^{-s^2}\, ds = \sqrt{\pi}$. It is important to note that the Fourier transform of a Gaussian function is another Gaussian function.
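This result is straightforward to verify numerically. The sketch below is my addition (plain NumPy assumed): it evaluates Eq. 4.24 for the normalized Gaussian by a simple Riemann sum and compares with $e^{-\sigma^2\omega^2/2}/\sqrt{2\pi}$.

```python
# Numerical sketch (my addition): Riemann-sum evaluation of Eq. 4.24 for the
# normalized Gaussian, compared with the analytic transform.
import numpy as np

sigma = 0.7
t = np.linspace(-20, 20, 400001)
dt = t[1] - t[0]
f = np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

for omega in (0.0, 1.0, 2.5):
    ft = np.sum(f * np.exp(-1j * omega * t)) * dt / np.sqrt(2 * np.pi)
    exact = np.exp(-sigma**2 * omega**2 / 2) / np.sqrt(2 * np.pi)
    print(omega, ft.real, exact)
```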
The Fourier transform allows us to express the Dirac delta function in an elegant and useful way. We recall Eq. 4.23:
\[ f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega\, e^{i\omega t} \int_{-\infty}^{\infty} du\, f(u)\, e^{-i\omega u}. \]
By combining the two exponentials and exchanging the order of integration we obtain:
\[ f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega \int_{-\infty}^{\infty} du\, f(u)\, e^{i\omega(t-u)} = \frac{1}{2\pi} \int_{-\infty}^{\infty} du \int_{-\infty}^{\infty} d\omega\, f(u)\, e^{i\omega(t-u)} = \int_{-\infty}^{\infty} du\, f(u) \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega(t-u)}\, d\omega \right], \]
where the exchange of the order of integration has been made possible by Fubini's theorem. Recalling Eq. 4.15 we can immediately recognize that:
\[ \delta(t - u) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega(t-u)}\, d\omega. \tag{4.26} \]
Analogously to the Laplace transform, it is easy to calculate the Fourier transform of the derivative of a function. It is:
\[ \mathcal{F}\{f'(t)\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f'(t)\, e^{-i\omega t}\, dt = \frac{1}{\sqrt{2\pi}} \left[ f(t)\, e^{-i\omega t} \right]_{-\infty}^{\infty} + \frac{i\omega}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt = i\omega\, \mathcal{F}\{f(t)\}. \tag{4.27} \]
Here we have assumed that the function $f(t)$ tends to zero for $t \to \pm\infty$ (as it should, since $\int_{-\infty}^{\infty} |f(t)|\, dt$ is finite). It is easy to iterate this procedure and show that:
\[ \mathcal{F}\{f^{(n)}(t)\} = (i\omega)^n\, \mathcal{F}\{f(t)\}. \tag{4.28} \]
This relation can be used in some cases to solve ODEs analogously to what is done by means of Laplace transforms, namely we transform both members of an ODE, solve the obtained algebraic equation as a function of $\mathcal{F}\{y(x)\}$ (the Fourier transform of the solution $y(x)$ we seek) and then invert the function we have obtained. However, for most practical cases, it is more convenient to use Laplace transformation methods to solve ODEs. Fourier transformation methods can instead be extremely useful for solving partial differential equations (see Sect. 6.3.1).
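A quick numerical illustration of Eq. 4.28 (my addition, plain NumPy assumed), checking for $n = 1$ on a Gaussian that the transform of $f'(t)$ equals $i\omega$ times the transform of $f(t)$:

```python
# Numerical sketch (my addition): F{f'(t)} = (i*omega) F{f(t)} for f(t) = exp(-t**2/2),
# using a Riemann-sum version of Eq. 4.24.
import numpy as np

t = np.linspace(-20, 20, 400001)
dt = t[1] - t[0]
f = np.exp(-t**2 / 2)
fprime = -t * np.exp(-t**2 / 2)

def transform(g, omega):
    return np.sum(g * np.exp(-1j * omega * t)) * dt / np.sqrt(2 * np.pi)

for omega in (0.5, 1.0, 3.0):
    print(omega, transform(fprime, omega), 1j * omega * transform(f, omega))
```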
