
A View of the Transform

The operation shown above provides a transformation of variables from a function of time, x(t), to a function of frequency, X(f). In order for the integral to converge, x(t) must satisfy certain conditions [see any undergraduate text on the Fourier Transform-- this is one area I have decided not to include!]. If those conditions are satisfied, one can think of the operation as follows: the product of the two functions, averaged over all time by the integration operation, compares the two functions. The resulting function X(f) is "large" for values of f where x(t) is "like" the complex exponential, e^{-j2πft}. That is, periodic phenomena in x(t) are winnowed out by the multiplication and integration. Even if the function of time has no obvious periodic qualities (for example, the square pulse shown below), the transform may display characteristics that relate it to the time domain function.

If the reader isn't familiar with the concept and properties of the complex exponential, he/she should become so! The complex exponential might be called the most fundamental and general of the periodic functions, and so is ever-present in any description of the frequency domain.

The square pulse shown above has finite duration in time and is symmetric about the time
origin. The Fourier Transform of the square pulse appears below-- it has infinite duration
in frequency, maximum amplitude at the frequency origin, and is also symmetric about
the origin.
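As a numerical sketch of this transform pair (in Python; the pulse width, time step, and test frequencies below are assumptions chosen for illustration), the transform of a unit-height square pulse of width T should approach T·sinc(fT), where sinc(x) = sin(πx)/(πx):

```python
import numpy as np

# Riemann-sum approximation of the Fourier integral for a square pulse.
# Pulse width T and the grid spacing are illustrative assumptions.
T = 1.0
dt = 1e-4
t = np.arange(-5, 5, dt)                     # time axis, well beyond the pulse
x = np.where(np.abs(t) <= T / 2, 1.0, 0.0)   # unit-height square pulse

def transform(f):
    """Approximate X(f) = integral of x(t) exp(-j*2*pi*f*t) dt."""
    return np.sum(x * np.exp(-2j * np.pi * f * t)) * dt

# Compare with the analytic result T*sinc(f*T); np.sinc(x) = sin(pi x)/(pi x)
for f in (0.0, 0.5, 1.0, 2.5):
    print(f, abs(transform(f)), T * abs(np.sinc(f * T)))
```

Note the infinite duration in frequency (the sinc never terminates) and the maximum at the frequency origin, matching the figure.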
But it is in the uncovering of periodicity, indicated by peaks in the frequency domain, that the Transform plays its most important role: in Fourier's original work; in identifying weak radio or radar signals in noise; in interpreting varying radiation from distant galaxies as indicative of complexes of rotating stars; or in identifying the frequency location of power in a random biological signal, the electroencephalogram (EEG).

The Inverse Transform

One of the many useful properties is the ability to go back-- to perform the inverse operation. The Fourier Transform has an inverse:

x(t) = ∫_{-∞}^{∞} X(f) e^{+j2πft} df

which is a mirror of the forward transform: multiply the function of frequency by a complex exponential (note the plus sign) and integrate over all frequency.

The examples used later on don't address the utility of the inverse-- but it does have
powerful and useful functions! One example is the removal of "coherent" interference
from a sampled waveform: 60 Hz is everywhere, and sometimes it shows up where we
want it least-- added to a signal with sufficient amplitude so as to make analysis of the
signal impossible. Often, the 60 Hz appears as a nearly pure sinusoid, so, in the frequency
domain, it manifests its presence as a single sharp peak at (or near) 60 Hz. One can
transform the time signal into the frequency domain, remove the peak, and inverse
transform back to time, thus very effectively (in many instances) removing the
interference without much effect on the original desired signal. Used this way, the
transform and its inverse act as a very efficient notch filter, without the undesirable
effects that physical notch filters may impose.
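The notch-filter idea can be sketched in a few lines (the sampling rate, the signal content, and the bin-zeroing tolerance below are illustrative assumptions, not part of the original example):

```python
import numpy as np

# Synthesize a desired signal plus strong 60 Hz interference (assumed values).
fs = 1000                                   # samples/second
t = np.arange(0, 1, 1 / fs)                 # one-second record
desired = np.sin(2 * np.pi * 7 * t)
corrupted = desired + 5 * np.sin(2 * np.pi * 60 * t)

# Transform, zero the +/-60 Hz peaks, and inverse transform.
X = np.fft.fft(corrupted)
freqs = np.fft.fftfreq(len(X), 1 / fs)
X[np.abs(np.abs(freqs) - 60) < 1] = 0       # the "notch"
cleaned = np.real(np.fft.ifft(X))

print(np.max(np.abs(cleaned - desired)))    # residual error is tiny
```

Because the 60 Hz component falls exactly on a frequency bin in this sketch, removing it leaves the 7 Hz signal essentially untouched; real interference that falls between bins requires a slightly wider notch.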

The Impulse Function, δ(t)

The unit impulse (perhaps better described as a distribution rather than a function)
provides (among many other uses) a mathematical construct by which many of the
characteristics of sampled signals can be described. The rationale for the introduction of
this particular and peculiar function is to provide some background for the mathematical
description of the sampling process.

The impulse has the following features:

∫_{-∞}^{∞} δ(t - t0) dt = 1

∫_{-∞}^{∞} A δ(t - t0) dt = A

∫_{-∞}^{∞} x(t) δ(t - t0) dt = x(t0)

and

x(t) δ(t - t0) = x(t0) δ(t - t0)

The first operation identifies the impulse in terms of its unit area and location; the second merely indicates the ability to ``weight'' the impulse, with a resulting area other than unity; the third is called the sifting property (it also demonstrates weighting the impulse with the value of a well-behaved function at the impulse time); and the fourth, the sampling property.
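A discrete analog of the sifting property is easy to check numerically (the particular sequence and impulse location below are arbitrary choices for illustration):

```python
import numpy as np

# Discrete sifting: the sum of x[n]*delta[n - n0] picks out the single
# value x[n0], the analog of: integral of x(t)*delta(t - t0) dt = x(t0).
x = np.cos(np.linspace(0.0, 1.0, 200))   # any well-behaved sequence
delta = np.zeros(200)
delta[42] = 1.0                          # unit sample at n0 = 42

print(np.sum(x * delta) == x[42])        # prints True
```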

There are several transform pairs we'll use later on that utilize the sifting property:

δ(t - t0) ↔ e^{-j2πft0}    and    e^{j2πf0t} ↔ δ(f - f0)
Convolution

Along with the impulse, there is an operation called convolution that we need to use to examine the properties of sampled signals:

x(t) * h(t) = ∫_{-∞}^{∞} x(λ) h(t - λ) dλ

When convolving two functions, we multiply one by a ``flipped'' and shifted version of the other and integrate. The * indicates the convolution operation, not multiplication. The integral is done with respect to a ``dummy'' time variable, λ. The minus sign on λ in h(t - λ) indicates the flipping, and the time parameter t indicates a shift of the function with respect to its position when t = 0. Convolution can be done in time (as indicated above) or in frequency. The Fourier Transform of convolution can be shown to be:

x(t) * h(t) ↔ X(f) H(f)

That is, convolution in the time domain is the equivalent of multiplication in the
frequency domain. The same is true in the inverse:

x(t) h(t) ↔ X(f) * H(f)

convolution in frequency is the equivalent of multiplication in the time domain. Note that:

x(t) * δ(t - t0) = x(t - t0)

and

X(f) * δ(f - f0) = X(f - f0)
That is, convolving a function with an impulse generates a "copy" of the function at the
location of the impulse.
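This copying behavior is easy to verify with a discrete convolution (a Python sketch; the sequence shape and the impulse location at n = 500 are illustrative choices):

```python
import numpy as np

# Convolving with a unit sample at n = 500 copies the sequence to that
# location, the discrete analog of x(t) * delta(t - t0) = x(t - t0).
x = np.linspace(1.0, 0.0, 50)       # a short asymmetric "function"
delta = np.zeros(1000)
delta[500] = 1.0                    # unit impulse at n = 500

y = np.convolve(delta, x)
print(np.allclose(y[500:550], x))   # prints True: the copy starts at n = 500
```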

Some Examples of Convolution

A Graphical Approach
An often helpful way to look at convolution is to examine the concept graphically, using a pair of sample functions. In this example, we'll employ two (graphically) very simple functions: a unit impulse located at t = 500 units, denoted with the "vertical arrow" symbol used for the impulse, and an asymmetrical function of finite duration we'll call x(t). Finite duration isn't very general, but it makes graphing easier!
If, in the figure above, we were to multiply the functions at each point in time and
integrate, the result is zero everywhere-- the functions do not overlap as graphed.

Flipping Out!
In implementing the convolution operation, one function is "flipped" about the time
origin-- either one can be flipped-- we'll flip x(t), since it is asymmetrical, a property that
helps us to see the effects of time operations.

Note that the flipping operation is about the time origin. If the impulse were flipped, it
would appear at the t = -500 unit location.

A "Dummy" time variable and Sliding functions


The convolution integration is done with respect to a "dummy variable," λ. The original time variable, t, is now a parameter, not a variable under integration: t can be thought of as a particular value of offset, indicating how far the function has been shifted from its original flipped position.

If we now move the flipped function, it appears as follows:

In this snapshot, t = -550, the amount of offset from the original flipped position. In this figure, the time-axis variable is λ, not t. For the value of t indicated, the convolution integral is zero, since the functions do not yet overlap. As t increases from minus infinity, we slide the flipped function from left to right in the figure.

Overlap has occurred!


When the functions overlap, the integral is no longer always zero, and the result of the convolution operation becomes (I hope) more interesting:

In this figure, t = 550, the functions are partially overlapped, and the convolution is no
longer zero (see next figure).

When the functions overlap, the integration is no longer zero, and the result of
convolution begins to appear. Since one of the functions is an impulse, the form of the
integration result follows from the sifting property of the impulse function.

When t goes to infinity, the final result is as shown below:

After the whole process is done (for an admittedly simple example), we've managed to
show the steps, and, in this special case, we've demonstrated that convolving a function
with an impulse makes a copy (or sample) of the function at the location in time, t, where
the impulse was located.
A symbolic example
Another example of the convolution operation comes from the solution of differential equations:

dy/dt + a y(t) = x(t)

We multiply both sides by a well-behaved function, e^{at}:

e^{at} dy/dt + a e^{at} y(t) = e^{at} x(t)

in order to collapse the left hand side into a single derivative

d/dt [ e^{at} y(t) ] = e^{at} x(t)

so that both sides can be integrated to solve (eventually) for y(t):

y(t) = ∫_{0}^{t} e^{-a(t - λ)} x(λ) dλ    (taking y(0) = 0)

Except for the limits required by this particular case, the integration has the form I've
defined as convolution. Indeed, one of the important uses of convolution is in relating the
input and output of a linear, time-invariant system, such as might be modelled by this
differential equation.
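This input-output relationship can be checked numerically (the first-order system, the coefficient a, and the unit-step input below are assumptions chosen for illustration): convolving the input with the impulse response h(t) = e^{-at} reproduces the known step response.

```python
import numpy as np

# Convolution solution of dy/dt + a*y = x(t), y(0) = 0 (assumed example).
a, dt = 2.0, 1e-3
t = np.arange(0, 5, dt)
x = np.ones_like(t)                          # unit-step input
h = np.exp(-a * t)                           # impulse response of the system

y_conv = np.convolve(x, h)[: len(t)] * dt    # Riemann-sum convolution integral
y_exact = (1 - np.exp(-a * t)) / a           # analytic step response

print(np.max(np.abs(y_conv - y_exact)))      # small discretization error
```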
Fourier Series
One of Fourier's most important works was the identification of what is now called the
Fourier Series. The series can be computed in two forms.

The first form:

x(t) = a0/2 + Σ_{n=1}^{∞} [ an cos(2πnt/T) + bn sin(2πnt/T) ]    (1)

with

an = (2/T) ∫_{0}^{T} x(t) cos(2πnt/T) dt    (2)

and

bn = (2/T) ∫_{0}^{T} x(t) sin(2πnt/T) dt    (3)

The second form:

x(t) = Σ_{n=-∞}^{∞} Xn e^{j2πnt/T}    (4)

with

Xn = (1/T) ∫_{0}^{T} x(t) e^{-j2πnt/T} dt    (5)

The function, x(t), must meet a set of conditions (Dirichlet) in order for the Series to exist
and converge. In order to model the sampling process, we need to know the Fourier
Transform of an elementary sampling function, s(t):

s(t) = Σ_{n=-∞}^{∞} δ(t - nTs)

We find the transform via the Fourier Series! Treating s(t) as periodic with period Ts, the coefficients are

Xn = (1/Ts) ∫_{-Ts/2}^{Ts/2} δ(t) e^{-j2πnt/Ts} dt = 1/Ts

so

s(t) = (1/Ts) Σ_{n=-∞}^{∞} e^{j2πnt/Ts}

and

S(f) = (1/Ts) Σ_{n=-∞}^{∞} δ(f - n/Ts)

using the results for the transform pair involving impulses.

Continuous Time, Discrete Frequency and Windowing


Note that the functions transformed are (at this point) continuous-time functions. They may not be continuous themselves (the functions may contain a finite number of finite discontinuities). The Fourier Series is a discrete-frequency "function," and values are obtained only for frequencies equal to n/T. T may be referred to as the time window for computing the series, and 1/T is called the fundamental frequency.

If x(t) is periodic in the window, the Fourier Series expansion (equations (1) and (4) above) may be used to approximate x(t) for all values of t. If x(t) is not periodic, the expansion is still periodic in T. The upshot of this last characteristic is (for example): if you compute the Fourier Series for a sine function, using 3/4 of the function's true period as the time window, T, the expansion will approximate the sine function within the chosen window, but, if evaluated outside the window, will converge to a function that is periodic in the window. The expansion in this example will have rapid jumps every T time units.

An example!
Let's examine some Fourier Series operations using a simple function-- the sine wave.

Plot of the function 5*sin(2*pi*t). Four cycles are shown. This function has only one
frequency, 1 Hz.

If we compute the Fourier Series coefficients, Xn, using equation (5), only two terms are non-zero-- those at +/- 1 Hz.
A standard way of plotting a spectrum generated from Fourier Series-- the line spectrum.
This plot is derived from the magnitudes of the Xn's (the coefficients are generally
complex numbers). Values can be plotted only at frequencies that are multiples of 1/T. In
this case, T=1 second.
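The line-spectrum computation can be sketched numerically (the sampling rate and the use of the FFT as a stand-in for equation (5) are assumptions for illustration):

```python
import numpy as np

# Approximate the Fourier Series coefficients of 5*sin(2*pi*t) with the FFT,
# using a window that is an integral number (4) of periods.
fs, T = 1000, 4.0
t = np.arange(0, T, 1 / fs)
x = 5 * np.sin(2 * np.pi * t)

Xn = np.fft.fft(x) / len(x)            # ~ Fourier Series coefficients
freqs = np.fft.fftfreq(len(x), 1 / fs)
peak = np.abs(Xn[np.isclose(freqs, 1.0)][0])
print(peak)                            # magnitude at +1 Hz: 5/2 = 2.5
```

The magnitudes at +/- 1 Hz are each 2.5 (half the sine's amplitude, since the two complex-exponential terms share it), and every other line, including d.c., is essentially zero.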

If we choose a time window other than an integral number of periods, the Series and the
reconstruction may differ significantly from the example just shown. For example, let's
use a 0.75 second window:

A time window that is not an integral number of cycles of the original sine function.

Line spectrum for the truncated window. Note there are many components (only the
lower frequency ones are shown), and that the d.c. (n=0) term is no longer zero. The
Fourier Series computations see only the 0.75 second function, not an entire cycle of sine.

If we reconstruct the signal using equation (4), with a finite number of terms, and, we
allow time to go beyond the 0.75 second window, we get:
Reconstructed function from Fourier Series coefficients (light line) and original sine function (dark line). The reconstruction uses coefficients up to +/- 25, and shows a tendency to "ring" near discontinuities (the Gibbs phenomenon). Note the quality of the reconstruction within the original 0.75 second time window.
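These observations can be checked numerically (a sketch; the integration step and the direct Riemann-sum evaluation of the coefficient integral are assumptions for illustration):

```python
import numpy as np

# Fourier Series of 5*sin(2*pi*t) over a truncated window T = 0.75 s,
# with coefficients up to +/- 25, as in the figures.
T, dt = 0.75, 1e-4
t = np.arange(0, T, dt)
x = 5 * np.sin(2 * np.pi * t)

N = 25
n = np.arange(-N, N + 1)
# Xn = (1/T) * integral over the window of x(t) exp(-j*2*pi*n*t/T) dt
Xn = (np.exp(-2j * np.pi * np.outer(n, t) / T) @ x) * dt / T

print(Xn[N].real)                      # d.c. (n = 0) term is no longer zero

# Reconstruction (equation (4)) stays close to x(t) inside the window
xr = np.real(np.exp(2j * np.pi * np.outer(t, n) / T) @ Xn)
print(np.max(np.abs(xr[1000:6000] - x[1000:6000])))
```

The nonzero d.c. term and the good interior fit both follow directly: the series only ever "sees" the 0.75 second segment, whose average is not zero.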

The FFT

The Fast Fourier Transform (FFT) is a computationally efficient implementation of equation (5), using sampled data at discrete time intervals and summation replacing integration. Actually, there are many algorithms for improving the computational efficiency, but the original was identified by Cooley and Tukey in the 1960s. The frequency plots on this page were actually generated using the FFT and MATLAB, not the "true" Fourier Series. In the next section, we'll describe some of the properties and limitations of sampling.
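The equivalence involved-- the same discrete sums, computed faster-- can be sketched as follows (the direct O(N²) matrix DFT below is an illustrative construction):

```python
import numpy as np

# Direct DFT (O(N^2)) versus the FFT (O(N log N)): identical results.
N = 256
rng = np.random.default_rng(0)
x = rng.standard_normal(N)

k = np.arange(N)
dft = np.exp(-2j * np.pi * np.outer(k, k) / N) @ x   # direct summation
via_fft = np.fft.fft(x)                              # fast algorithm

print(np.allclose(dft, via_fft))                     # prints True
```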

Sampling
When we analyze time series data using a computer, we are no longer working with continuous-time functions, but rather with discrete-time functions. Additionally, we are usually utilizing a fixed time duration (which we call a time ``window''). The fixed time duration leads to results described by Fourier Series: of particular note, all our operations proceed as if the signal were periodic outside the window. Second, sampling the signal limits our knowledge of the signal in both time and frequency. In the time domain, we have some intuitive feeling about sampling ``too slowly''-- we see needed detail disappear. In the frequency domain, we observe changes in the spectrum-- a form of distortion called aliasing.

Let's describe the sampling process as follows, assuming that we sample at regular time intervals, Ts, for a time window of duration, T, with T >> Ts; we call fs = 1/Ts the sampling frequency. The sampled signal is

xs(t) = x(t) s(t) = x(t) Σ_{n=-∞}^{∞} δ(t - nTs)

so that

Xs(f) = X(f) * S(f) = (1/Ts) Σ_{n=-∞}^{∞} X(f - n fs)
The result of this rather abstract set of operations can be understood as follows: The
sampling operation in time generates an infinite number of copies of the Fourier
Transform of the sampled signal. These copies are spaced in frequency at multiples of the
sampling frequency.
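A time-domain consequence of these spectral copies can be demonstrated directly (a sketch; the specific frequencies are assumptions): a 700 Hz sine sampled at 1000 samples/second yields exactly the same sample values as a 300 Hz sine of opposite sign, since 700 = 1000 - 300.

```python
import numpy as np

# Two different continuous frequencies, identical (up to sign) sample values:
# the 700 Hz component "aliases" to 300 Hz when fs = 1000 samples/second.
fs = 1000
n = np.arange(100)
s700 = np.sin(2 * np.pi * 700 * n / fs)
s300 = np.sin(2 * np.pi * 300 * n / fs)

print(np.allclose(s700, -s300))   # prints True
```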

A Graphical view
The images below depict the effects of sampling a continuous function of time over all time (using an infinite train of impulses as the sampling function). The effects of discretizing the amplitude (as is done when a computer is used for sampling) and the effects of a finite time window are not included.

Consider the figure above to represent the magnitude spectrum, X(f), of a continuous
function of time [ x(t) in the equations above]. This is a lowpass function, since its
magnitude (and hence its energy) is concentrated at low frequencies. This is a more
"realistic" spectrum than those often used as lowpass examples, since the spectrum
decreases with increasing frequency, but isn't ever zero.
This figure represents the spectrum of the signal after sampling at 1000 samples/second
(except for an amplitude scale factor of 1000). This is the graphical view of the last
equation above, showing that the original spectrum is replaced by a sum of shifted copies.
Note that, for this sampling rate, the summed spectrum never decreases towards zero between copies, as the original spectrum does in the first graph.

Low frequency detail of the sampled spectrum. Note that the original low frequency
version could not be perfectly recovered by lowpass filtering that would attenuate the
shifted copies-- this distortion due to "low" sampling rate is referred to as aliasing. In this
example, aliasing would appear in the time domain version as increased rapid (high
frequency) changes in the signal.

The sampled spectrum with the original spectrum also displayed. Distortion, defined as changes from the original spectrum, is most apparent at the higher frequencies (around 500 Hz).
If this function were sampled at a higher rate (say, 2000 samples per second), the distortion would be reduced. Looking at the last figure, the higher sampling rate would shift the copies to higher frequencies, reducing their influence (amplitude) in the lower frequency range of interest of the original signal. A higher sampling rate has disadvantages-- more data must be stored for each second of sampled signal!

What is present in the sampled signal that wasn't there in the original continuous one?
Look at the last graph at 500 Hz. In the unsampled signal, the spectrum at 500 Hz is
X(500). In the sampled signal spectrum, the spectral value at 500 Hz is:

X(500) + X(500 - 1000) + X(500 + 1000) + X(500 - 2000) + ... = X(500) + X(-500) + X(1500) + X(-1500) + ...

At each frequency, the sampled signal has added to it values from each copy, as indicated in the last equation above! If those values are small, aliasing is still present but may not have a deleterious effect. If they are large, our signal has been corrupted and may not be usable post sampling.
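The sum can be made concrete with an assumed lowpass spectrum (the particular shape X(f) = 1/(1 + (f/100)²) is an illustrative choice, not the spectrum in the figures):

```python
import numpy as np

def X(f):
    """Assumed lowpass magnitude spectrum, for illustration only."""
    return 1.0 / (1.0 + (f / 100.0) ** 2)

fs = 1000
n = np.arange(-1000, 1001)                # enough copies for convergence
sampled_at_500 = np.sum(X(500 - n * fs))  # sum over all shifted copies

print(X(500.0), sampled_at_500)           # original vs aliased spectral value
```

The aliased value exceeds the original, since every shifted copy contributes something at 500 Hz.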

The CES signal described in earlier sections will be used to provide a more physical example of aliasing...
