Table of Contents

Part I: Signal processing
  Chapter 1: Spectral processing (1.1-1.5)
  Chapter 2: Structural dynamics testing
    2.1 Signal analysis
    2.2 System analysis
    2.3 Signature analysis
  Chapter 3: Functions (3.1-3.5)

Part II
  Chapter 4 (4.1-4.4)
  Chapter 5 (5.1-5.5)
  Chapter 6: Acoustic measurements (6.1-6.3): Acoustic quantities, Sound power (P), Sound pressure, Sound (Acoustic) intensity, Free field, Particle velocity, Acoustic impedance (Z), Reference conditions, dB scale, Sound power level Lw, Particle velocity level Lv, Sound (Acoustic) intensity level LI, Sound pressure level LP, Octave bands, Acoustic weighting
  Chapter 7: Sound quality (7.1-7.10)
  Chapter 8: Sound metrics (8.1, 8.2)
  Acoustic holography: Introduction, Acoustic holography concepts, Temporal and spatial frequency, Summation of plane waves, Propagating and evanescent waves, (Back) propagating to other planes, The Wiener filter and the AdHoc window, Derivation of other acoustic quantities

Part III
  Chapter 9: Statistical functions
  Chapter 10: Introduction, Linear time-frequency representations, The Short Time Fourier Transform (STFT), Wavelet analysis, Quadratic time-frequency representations, The Wigner-Ville distribution, Generalization, References
  Chapter 11: Resampling
    11.1 Fixed resampling (11.1.1 Integer downsampling, 11.1.2 Integer upsampling, 11.1.3 Fractional ratios, 11.1.4 Arbitrary ratios)
    11.2 Adaptive resampling
    11.3 Implementation example
    References
  Chapter 12
  Chapter 13: Introduction, Conditions for use, Theoretical background (13.2.1 Determination of the Rpm, 13.2.2 Waveform tracking), The Structural equation, The Data equation, Practical considerations
  Chapter 14: Introduction, One dimensional counting methods (14.2.1 Peak count methods, 14.2.2 Level cross counting methods, 14.2.3 Range counting methods, Counting of single ranges, Counting of range-pairs), Two-dimensional counting methods (14.3.1 From-to counting, 14.3.2 Range-mean counting, 14.3.3 "Range pair-range" or "Rainflow" method), References

Part IV
  Chapter 15
  Chapter 16
  Chapter 17: Running modes analysis (17.1-17.4)
  Chapter 18: Modal validation (18.1-18.11): Introduction, MSF and MAC, Mode participation, Reciprocity between inputs and outputs, Generalized modal parameters, Mode complexity, Modal phase collinearity, Comparison of models, Mode indicator functions, Summation of FRFs, Synthesis of FRFs
  Chapter 19
  Chapter 20: Design (20.1-20.4)
Part I: Signal processing
  Chapter 1: Spectral processing
  Chapter 2: Structural dynamics testing . . . 23
  Chapter 3: Functions . . . 31
Chapter 1
Spectral processing
1.1
Each sine wave in the time domain is represented by one spectral line in the
frequency domain. The series of lines describing a waveform is known as its
frequency spectrum.
Fourier transform
The conversion of a time signal to the frequency domain (and its inverse) is
achieved using the Fourier Transform as defined below.
Sx(f) = ∫ x(t) e^(-j2πft) dt     Eqn. 1-1

x(t) = ∫ Sx(f) e^(j2πft) df      Eqn. 1-2
This function is continuous, so in order to use the Fourier Transform digitally a numerical integration must be performed between fixed limits.
The Discrete Fourier Transform (DFT)
The digital computation of the Fourier Transform is called the Discrete Fourier Transform. It calculates the values at discrete points (mΔf) and performs a numerical integration between fixed limits (N samples), as illustrated below.
Sx(mΔf) = Σ(n = 0 ... N-1) x(nΔt) e^(-j2π·mΔf·nΔt) · Δt
Since the waveform is being sampled at discrete intervals and during a finite
observation time, we do not have an exact representation of it in either domain.
This gives rise to shortcomings which are discussed later.
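As a quick numerical check (a sketch of mine using NumPy, not part of the original text), a sine that is exactly periodic in the record transforms to a single Hermitian-symmetric pair of lines of height A/2:

```python
import numpy as np

# A 4 Hz sine sampled at fs = 64 Hz for exactly T = 1 s, so it is periodic
# in the record and its spectrum is a clean pair of lines (no leakage).
fs, N = 64, 64
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 4 * t)

S = np.fft.fft(x) / N          # scaled DFT: line spacing df = 1/T = 1 Hz

# All energy sits at +4 Hz and -4 Hz, each line of height A/2 = 0.5 ...
peak = abs(S[4])
# ... and the spectrum is Hermitian: S[N-k] is the conjugate of S[k].
hermitian = np.allclose(S[1:], np.conj(S[1:][::-1]))
print(peak, hermitian)
```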
Hermitian symmetry
The Fourier transform of a sinusoidal function results in a complex function made up of real and imaginary parts which exhibit symmetry: the real part is even about zero frequency and the imaginary part odd. This is illustrated below. In the majority of cases only one part is taken into account, and of this only the positive frequencies are shown. So the representation of the frequency spectrum of the sine wave shown below would become the area shaded in grey.
[Figure: real and imaginary parts S(f) of the spectrum of a sine wave x(t) of amplitude A; lines of height A/2 appear at -f and +f.]
The Fast Fourier Transform (FFT)
[Figure: a block of N time samples and its spectrum, linked by the forward and inverse transform.]
To achieve high calculation performance the FFT algorithm requires that the
number of time samples (N) be a power of 2 (such as 2, 4, 8, ...., 512, 1024, 2048).
Blocksize
Such a time record of N samples is referred to as a block of data with N being
the blocksize. N samples in the time domain converts to N/2 spectral (frequency)
lines. Each line contains information about both amplitude and phase.
Frequency range
The time taken to collect the sample block is T. The lowest frequency that can be detected is then the reciprocal of this time:

fmin = 1/T

The frequency spacing between the spectral lines is therefore 1/T, and the highest frequency that can be determined is (N/2)·(1/T).
[Figure: spectral lines at 1/T, 2/T, 3/T, ... up to (N/2)·(1/T).]

Δf = 1/T
fmax = (N/2)·(1/T) = (N/2)·Δf
The frequency range that can be covered is dependent on both the blocksize (N) and the sampling period (T). To cover high frequencies you need to sample at a fast rate, which implies a short sample period.
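The bookkeeping above can be sketched directly (the sampling rate here is an assumed example value, not one from the text):

```python
# Frequency-range relations for one FFT block: df = 1/T, fmax = (N/2)*df.
fs = 2048.0          # samples per second (assumed example rate)
N = 1024             # blocksize (a power of 2)

T = N / fs           # time to fill one block: 0.5 s
df = 1.0 / T         # spectral line spacing: 2 Hz
lines = N // 2       # number of spectral lines: 512
fmax = lines * df    # highest frequency: 1024 Hz = fs/2
print(T, df, lines, fmax)
```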
Real time operation
[Figure: successive time records 1-4, each transformed by its own FFT; in real-time operation every record is processed before the next one has been filled.]
In real-time operation no acquired data is missed, since each time record is processed before the next one is complete. This is not the case if the computation time is taking longer than the measurement time or if the acquisition requires a trigger condition.
Overlap
Overlap processing involves using time records that are not completely independent of each other, as illustrated below.
[Figure: overlapped time records - each new record reuses the last part of the previous one, so successive FFTs are not fully independent.]
If the time data is not being weighted at all by the application of a window, then overlap processing does not include any new data and therefore makes no statistical improvement to the estimation procedure. When windows are being applied, however, the overlap process can utilize data that would otherwise be ignored.
The figure below shows data that is weighted with a Hanning window. In this case the first and last 20% of each sample period is practically lost and contributes hardly anything towards the averaging process.
[Figure: sampled data processed with no overlap - each Hanning-weighted record tapers to zero at its ends.]
Applying an overlap of at least 30% means that this data is once again included, as shown below. This not only speeds up the acquisition (for the same number of averages) but also makes it statistically more reliable, since a much higher proportion of the acquired data is included in the averaging process.
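A sketch of overlap segmentation (mine, using NumPy; the block and overlap sizes are example values) shows how 50% overlap yields more averages from the same record:

```python
import numpy as np

def overlapped_blocks(x, N, overlap):
    """Split x into Hanning-weighted blocks of size N with the given overlap."""
    step = int(N * (1 - overlap))
    w = np.hanning(N)
    return [w * x[i:i + N] for i in range(0, len(x) - N + 1, step)]

x = np.random.default_rng(0).standard_normal(4096)
n_plain = len(overlapped_blocks(x, 1024, overlap=0.0))    # 4 independent blocks
n_lapped = len(overlapped_blocks(x, 1024, overlap=0.5))   # 7 blocks, same data
print(n_plain, n_lapped)
```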
[Figure: the same sampled data processed with overlap, so the tapered ends are re-included.]
1.2 Aliasing
Sampling at too low a frequency can give rise to the problem of aliasing which
can lead to erroneous results as illustrated below.
[Figure: a sine wave sampled with fs < 2·fm appears as a lower-frequency alias.]
The highest frequency that can be measured is fmax, which is half the sampling frequency (fs) and is also known as the Nyquist frequency (fn):

fmax = fs/2 = fn
[Figure: apparent frequency versus input frequency - the axis folds at each multiple of fn (fn, 2fn = fs, 3fn, 4fn), so inputs f2, f3 and f4 all map onto f1.]
All multiples of the Nyquist frequency (fn) act as `folding lines'. So f4 is folded back on f3 around the line 3fn, f3 is folded back on f2 around the line 2fn, and f2 is folded back on f1 around the line fn. Therefore signals at f2, f3 and f4 are all seen as signals at frequency f1.
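The folding rule can be sketched as a small helper (a hypothetical function of mine, not from the text):

```python
def alias_frequency(f_in, fs):
    """Apparent frequency of a tone at f_in when sampled at fs (spectrum folding)."""
    fn = fs / 2.0            # Nyquist frequency
    f = f_in % fs            # aliases repeat every fs
    return fs - f if f > fn else f

fs = 1000.0                  # fn = 500 Hz
folded = [alias_frequency(f, fs) for f in (300.0, 700.0, 1300.0, 1700.0)]
print(folded)                # every tone folds back to 300.0 Hz
```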
The only sure way to avoid such problems is to apply an analog or digital anti-aliasing filter to limit the high frequency content of the signal. Filters are less than ideal, however, so the cut-off frequency of the filter must be positioned with respect to fmax and the roll-off characteristics of the filter.
[Figure: an ideal filter cuts off sharply at fmax; the roll-off characteristics of a real filter leave energy between fmax and fs.]
1.3 Leakage
Because the signals are measured over a sample period T, the DFT assumes that
this is representative for all time. When the sine wave is not periodic in the
sample time window, the result is a consequent leakage of energy from the
original line spectrum due to the discontinuities at the edges.
[Figure: a discretely sampled waveform that is not periodic in the sample window, the waveform the DFT assumes (with discontinuities at the record edges), and the resulting smeared frequency spectrum.]
The user should be aware that leakage is one of the most serious problems associated with digital signal processing. Whilst aliasing errors can be reduced by various techniques, leakage errors can never be eliminated. Leakage can be reduced by using different excitation techniques and increasing the frequency resolution, or through the use of windows as described below.
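A small numerical illustration of leakage (my sketch using NumPy): a tone periodic in the record keeps its energy in one line, a half-bin-offset tone leaks, and a Hanning window concentrates the leaked energy near the true frequency:

```python
import numpy as np

fs, N = 1024, 1024
t = np.arange(N) / fs

def fraction_outside_peak(x):
    """Fraction of spectral energy outside the 3 lines around the largest one."""
    e = np.abs(np.fft.rfft(x))**2
    k = int(np.argmax(e))
    return 1.0 - e[k - 1:k + 2].sum() / e.sum()

periodic = np.sin(2 * np.pi * 100.0 * t)        # 100 cycles: periodic in T
shifted = np.sin(2 * np.pi * 100.5 * t)         # 100.5 cycles: not periodic
windowed = np.hanning(N) * shifted              # same tone, Hanning-weighted

spread = [fraction_outside_peak(s) for s in (periodic, shifted, windowed)]
print(spread[0] < spread[2] < spread[1])        # True: the window reduces leakage
```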
1.3.1 Windows
The problem of discontinuities at the edge can be alleviated either by ensuring that the signal and the sampling period are synchronous or by ensuring that the function is zero at the start and end of the sampling period. This latter situation can be achieved by applying what is called a `window function', which normally takes the form of an amplitude modulated sine wave.
[Figure: (a) frequency spectrum of a sine wave periodic in the sample period T; (b) frequency spectrum of a sine wave not periodic with the sample period, without a window; (c) frequency spectrum of a sine wave that is not periodic with the sample period, with a window.]
The use of windows itself gives rise to errors of which the user should be aware, and windows should therefore be avoided if possible. The various types of windowing functions distribute the energy in different ways. The choice of window depends on the input function and on your area of interest.
It should be noted that synchronizing the signal and the sampling time, or using a self-windowing function, is preferable to using a window.
Window characteristics
The time windows provided take a number of forms, many of which are amplitude modulated sine waves. They are all in effect filters, and the properties of the various windows can be compared by examining their filter characteristics in the frequency domain, where they can be characterized by the factors shown below.
[Figure: window filter characteristic on a log-frequency scale, showing the noise bandwidth of the 0 dB main lobe, the highest side lobe and the side lobe falloff.]
The windows vary in the amount of energy squeezed into the central lobe as compared to that in the side lobes. The choice of window depends on both the aim of the analysis and the type of signal you are using. In general, the broader the noise bandwidth, the worse the frequency resolution, since it becomes more difficult to pick out adjacent frequencies with similar amplitudes. On the other hand, selectivity (i.e. the ability to pick out a small component next to a large one) improves with side lobe falloff. Typically a window that scores well on bandwidth is weak on side lobe falloff, and the choice is therefore a trade-off between the two. A summary of these characteristics of the windows provided is given in Table 1.1.
Table 1.1

Window type     Highest side lobe (dB)   Sidelobe falloff (dB/decade)   Noise bandwidth (bins)   Max. amp. error (dB)
Uniform         -13                      -20                            1.00                     3.9
Hanning         -32                      -60                            1.5                      1.4
Hamming         -43                      -20                            1.36                     1.8
Kaiser-Bessel   -69                      -20                            1.8                      1.0
Blackman        -92                      -20                            2.0                      1.1
Flattop         -93                      -                              3.43                     <0.01
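The noise bandwidth column can be reproduced numerically (a sketch of mine using NumPy's window functions; the uniform, Hanning and Hamming values match Table 1.1):

```python
import numpy as np

def noise_bandwidth(w):
    """Equivalent noise bandwidth of a window, in bins: N*sum(w^2)/sum(w)^2."""
    return len(w) * np.sum(w**2) / np.sum(w)**2

N = 4096
enbw = {
    "Uniform": noise_bandwidth(np.ones(N)),
    "Hanning": noise_bandwidth(np.hanning(N)),
    "Hamming": noise_bandwidth(np.hamming(N)),
}
print({k: round(v, 2) for k, v in enbw.items()})
# {'Uniform': 1.0, 'Hanning': 1.5, 'Hamming': 1.36} - matching Table 1.1
```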
Window types
Uniform window
This window is used when leakage is not a problem, since it does not affect the energy distribution. It is applied in the case of periodic sine waves, impulses, transients... where the function is naturally zero at the start and end of the sampling period.
The following windows (Hanning, Hamming, Blackman, Kaiser-Bessel and Flattop) all take the form of an amplitude modulated sine wave in the time domain. For a comparison of their frequency domain filter characteristics, see Table 1.1.
Hanning
This window is most commonly applied for general purpose analysis of random signals with discrete frequency components. It has the effect of applying a round topped filter. The ability to distinguish between adjacent frequencies of similar amplitude is low, so it is not suitable for accurate measurements of small signals.
Hamming
This window has a lower highest side lobe than the Hanning but a slower side lobe falloff, and is best used when the dynamic range is about 50 dB.
Blackman
This window is useful for detecting a weak component in the presence of a
strong one.
Kaiser-Bessel
The filter characteristics of this window provide good selectivity, and thus make it suitable for distinguishing multiple tone signals with widely different levels. It can cause more leakage than a Hanning window when used with random excitation.
Flattop
This window's name derives from its low ripple characteristics in the filter pass band. It should be used for accurate amplitude measurements of single tone frequencies and is best suited for calibration purposes.
Force window
This type of window is used with a transient signal in the case of impact testing. It is designed to eliminate stray noise in the excitation channel, as illustrated here. It has a value of 1 during the impact period and 0 otherwise.
Exponential window
This window is also used with a transient signal. It is designed to ensure that the signal dies away sufficiently at the end of the sampling period, as shown below. The exponential window takes the form of a decaying exponential e^(-βt). The `Exponential decay' parameter determines the % level at the end of the time window.
[Table: recommended window per signal type, covering the Uniform, Hanning, Flattop, Blackman or Kaiser-Bessel, Force and Exponential windows.]
Two weighting correction modes are provided: Amplitude, where the correction restores the correct amplitude of a discrete frequency component, and Energy, where the correction factor gives the correct signal energy for a particular frequency band. The Energy mode is the only method that should be used for broad band analysis.
Amplitude correction
Consider the example of a sine wave signal and a Hanning window.
[Figure: a sine wave and its spectrum, unwindowed and windowed - windowing reduces the indicated spectral amplitude.]
Energy correction
Windowing also affects broadband signals.
[Figure: a broadband original signal, the window function, and the windowed signal.]
Spectral processing
In this case however it is the energy in the signal which it is usually important
to maintain, and an energy correction factor will be applied to restore the ener
gy level of the windowed signal to that of the original signal.
In the case of a Hanning window, the energy in the windowed signal is 61% of
that the original signal. The windowed data needs to be multiplied by 1.63
therefore to correct the energy level.
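The 61% and 1.63 figures can be checked numerically (a sketch of mine using NumPy's Hanning window):

```python
import numpy as np

# The Hanning window's RMS level is sqrt(mean(w^2)) ~ 0.61 of full scale,
# which is where the 61% figure and the 1.63 energy correction come from.
w = np.hanning(4096)
level_ratio = np.sqrt(np.mean(w**2))    # ~0.61
correction = 1.0 / level_ratio          # ~1.63
print(round(level_ratio, 2), round(correction, 2))
```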
Window          Amplitude mode   Energy mode
Uniform         1.00             1.00
Hanning x1      2.00             1.63
Hanning x2      2.67             1.91
Hanning x3      3.20             2.11
Blackman        2.80             1.97
Hamming         1.85             1.59
Kaiser-Bessel   2.49             1.86
Flattop         4.18             2.26
1.4 Averaging
Signals in the real world are contaminated by noise, both random and bias. This contamination can be reduced by averaging a number of measurements, in which case the random noise will average to zero. Bias errors however, such as nonlinearities, leakage and mass loading, are not reduced by the averaging process. A number of different techniques for averaging measurements are provided.
Linear
This produces a linearly weighted average in which all the individual measurements have the same influence on the final averaged value. If the average value of M consecutive measurement ensembles is x̄, then

x̄ = (1/M) · Σ x_m,   m = 0 ... M-1     Eqn 1-3

The intermediate result a_n is the running sum a_n = a_(n-1) + x_n; the final scaling by 1/M can be done at the end of the acquisition.
Stable
In the case of stable averaging, again all the individual measurements have the same influence on the final averaged value. In this case though, the intermediate averaging result is based on

x̄_n = ( x_n + (n-1) · x̄_(n-1) ) / n     Eqn 1-4

The advantage of stable averaging is that the intermediate averaging results are always properly scaled. This scaling however makes the procedure slightly more time consuming.
Exponential
Exponential averaging on the other hand yields an averaging result to which the newest measurement has the largest influence, while the effect of the older ones is gradually diminished. With a weighting constant W, in this case

x̄_n = (1/W) · x_n + (1 - 1/W) · x̄_(n-1)     Eqn 1-5
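The averaging recursions can be sketched as follows (my illustration, not from the text; the exponential weighting constant W is an assumed parameterization):

```python
def stable_average(samples):
    """Stable averaging: every intermediate value is a properly scaled mean."""
    xbar = 0.0
    for n, x in enumerate(samples, start=1):
        xbar = (x + (n - 1) * xbar) / n      # cf. Eqn 1-4
    return xbar

def exponential_average(samples, weight):
    """Exponential averaging: the newest samples have the largest influence."""
    xbar = samples[0]
    for x in samples[1:]:
        xbar = x / weight + (1.0 - 1.0 / weight) * xbar
    return xbar

data = [1.0, 2.0, 3.0, 4.0]
print(stable_average(data))               # 2.5, same as the linear average
print(exponential_average(data, 2.0))     # 3.125, biased toward newer samples
```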
Peak hold
With peak averaging, the intermediate result for spectral line k is

x̄_n(k) = x_n(k)         if |x_n(k)| > |x̄_(n-1)(k)|
x̄_n(k) = x̄_(n-1)(k)    otherwise                      Eqn. 1-6

In this way, the averaging result contains, for a specific k, the maximum value in an absolute sense of all the ensembles considered during the averaging process.
A tracked variant compares against a reference channel r: for each channel i,

x̄_i,n(k) = x_i,n(k)        if |x_r,n(k)| > |x̄_r,n-1(k)|
x̄_i,n(k) = x̄_i,n-1(k)     otherwise                      Eqn 1-7

This way, the averaging result contains all values that coincide with the maximum values for the reference channel.
1.5 Reading list
Signal and system theory
J. S. Bendat and A.G. Piersol.
Random Data : Analysis and Measurement Procedures
Wiley - Interscience, 1971.
J. S. Bendat and A.G. Piersol.
Engineering Applications of Correlation and Spectral Analysis
Wiley - Interscience, 1980.
R.K. Otnes and L. Enochson.
Applied Time Series Analysis
John Wiley & Sons, 1978.
J. Max
Méthodes et Techniques de Traitement du Signal (2 Tomes)
Masson, 1972, 1986.
General literature in digital signal processing
A.V. Oppenheim and R.W. Schafer
Digital Signal Processing
Prentice Hall, Englewood Cliffs N.J., 1975.
L.R. Rabiner and B. Gold
Theory and Application of Digital Signal Processing
Prentice Hall, Englewood Cliffs N.J., 1975.
K.G. Beauchamp and C.K. Yuen
Digital Methods for Signal Analysis
George Allen & Unwin, London 1979.
M. Bellanger
Traitement Numérique du Signal
Masson, Paris 1981.
A. Peled and B. Liu
Digital Signal Processing
Theory, Design And Implementation
John Wiley & Sons.
Discrete Fourier Transform
E.O. Brigham
The Fast Fourier Transform
Prentice Hall, Englewood Cliffs N.J., 1974.
R.W. Ramirez
The FFT : Fundamentals and Concepts
Prentice Hall, Englewood Cliffs N.J., 1985.
C.S. Burrus and T.W. Parks
DFT/FFT and Convolution Algorithms : Theory and Implementation
John Wiley & Sons, 1985.
H.J. Nussbaumer
Fast Fourier Transform and Convolution Algorithms
Springer Verlag, 1982.
R.E. Blahut
Fast Algorithms for Digital Signal Processing
Addison Wesley, 1985.
IEEE-ASSP Society
Programs for Digital Signal Processing
IEEE Press, New York, 1979.
Chapter 2
Structural dynamics testing
2.1 Signal analysis
The dynamic analysis of a linear physical system can be achieved by measuring the response of the system (output) to a form of excitation. This excitation can be operational forces which, while typical, are not necessarily known. Measuring the response to known excitation forces is discussed in section 2.2.
In examining the vibrational behavior of a structure, there is a range of functions that can be acquired which will provide information on the frequencies at which particular phenomena occur. These measurement functions are described in chapter 3.
Noise levels are a common problem, and specific information about acoustic measurement functions is given in a separate set of documentation on Acoustics and sound quality.
The examination of the behavior of a structure due to a changing environment, such as during an engine run up, is termed signature analysis; this subject is discussed in section 2.3.
2.2 System analysis
System analysis refers to a method of examining the properties of a system, i.e. how a structure responds to a specific input. In the case of a linear system, this relationship between the input and the output is a fundamental characteristic of the system and can be used to predict the behavior of the system due to different stimuli.
[Figure: several inputs passing through a system H(f) to produce outputs; the FRF H(f) relates the output at a response DOF Xi to the input at an excitation DOF Xj.]
For modal purposes the response signal is most commonly the acceleration at
the response DOF due to a force input at another. In this case peaks in the FRF
indicate that low input levels generate high response levels (resonances), while
minima indicate low response levels, even for high inputs (anti-resonances).
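As an illustrative sketch (the two-tap system below is invented for the example and is not taken from the text), the FRF of a known system can be recovered by dividing the output spectrum by the input spectrum:

```python
import numpy as np

# Simulate a known system and recover its FRF as H(f) = Y(f) / X(f).
rng = np.random.default_rng(1)
N = 4096
x = rng.standard_normal(N)                 # broadband input

h = np.zeros(N)
h[0], h[1] = 0.6, 0.4                      # example impulse response (invented)
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n=N)   # circular response

H = np.fft.rfft(y) / np.fft.rfft(x)        # FRF estimate
recovered = np.allclose(H, np.fft.rfft(h)) # True: the system is recovered
print(recovered, round(abs(H[0]), 2))      # DC gain 0.6 + 0.4 = 1.0
```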
[Figure: log amplitude of an FRF against frequency, showing a resonance peak and an anti-resonance minimum.]
Measurement points
The number of acquisition channels determines the number of response and excitation points that can be measured at any one time. Their position on the test system can be defined as part of the geometry of the structure. In order to visualize the response of each DOF, their geometrical positions must be defined.
2.3 Signature analysis
This involves analyzing a series of non-stationary signals that are varying over the analysis period. An example would be the vibrational/acoustical behavior of a structure as a function of rotational speed. Thus during `run-up' and/or `run-down' a series of signals are measured to determine the behavior of the structure, and a tacho signal is measured to determine the rotational speed.
Spectral data are analyzed and plotted against the external parameter as illustrated below. Such an arrangement is known as a waterfall or map of measured functions. The functions that can be acquired during a run and placed in a waterfall are listed in sections 3.1 and 3.2.
[Figure: a waterfall of basic functions stacked against the tracking parameter, from which composite functions are derived.]
As well as the waterfall of measured functions, signature analysis enables you to obtain so-called composite functions. These are two-dimensional functions that are directly related to the tracking parameter value. Such functions are overall levels and frequency sections, and they are described in section 3.3.
Measurements are taken during the acquisition, but further analyses of the measured functions in relation to the tracking parameters can be performed during post processing.
Tracking
The dominant parameter describing the change of a signal is termed the tracking parameter. This could be time, rpm, temperature or another quantity. The rotational speed is commonly used as a tracking parameter, and for this a tacho signal is used to determine the rpm.
A number of pulses per revolution are generated by the rotating shaft. The tacho channel uses a positive slope crossing of a trigger level to determine the times between pulses (t1, t2, t3, ...) and thus the rpm.
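The pulse-interval arithmetic can be sketched as a small helper (a hypothetical function, not from the text):

```python
def rpm_from_tacho(crossing_times, pulses_per_rev):
    """Rpm from successive positive-slope trigger-level crossing times (s)."""
    rpms = []
    for t_prev, t_next in zip(crossing_times, crossing_times[1:]):
        secs_per_rev = (t_next - t_prev) * pulses_per_rev
        rpms.append(round(60.0 / secs_per_rev, 6))
    return rpms

# One pulse per revolution, crossings 20 ms apart -> 50 rev/s = 3000 rpm.
print(rpm_from_tacho([0.00, 0.02, 0.04], 1))   # [3000.0, 3000.0]
```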
While a number of channels can be used to measure tracking values, one must
be used to control the acquisition, i.e. to determine when the measurements
will be made.
Parameters relating to signature analysis

Sampling frequency   fs = 1/Δt
Sampling period      T = N·Δt
Number of revs       P
Samples per rev      M
Blocksize            N = M·P samples

P = number of revs/block = (number of revs/sec) · (number of secs) = rpm(Hz) · T
M = number of samples/rev = (number of samples/sec) · (number of secs/rev) = fs / rpm(Hz)
N = number of samples (data acquisition size, blocksize) = (number of samples/rev) · (number of revs) = M·P
Orders
For rotating machinery most signal phenomena are related to the rotational speed and its harmonics. A rotational speed harmonic is called an order. It is the proportionality constant (O) between the rotational speed (rpm) and the frequency (f):

f = O · rpm (Hz)
For stationary signals the relevant analysis parameters are:

fmax = fs / 2
Omax = M / 2
order resolution  ΔO = 1/P
Δf = ΔO · rpm
Fixed sampling
This is another term for basic signature analysis, where signals are measured using the standard data acquisition techniques as described above, i.e. with a fixed sampling frequency and sampling period. The rpm is measured but is used only for control of the acquisition and annotation of the acquired blocks. In this case, the maximum order and the order resolution will vary with the rotational speed (rpm).
Order tracking
This involves measuring signals at different rotational speeds, but in this case the sampling frequency (fs) and observation time (T) are dependent on the rpm. The data is sampled synchronously with the rotational speed (rpm). In this way the number of samples per revolution is kept constant. The signals are in fact sampled at constant shaft angle increments rather than time increments. This implies that the maximum order measured remains constant (Omax = M/2).
When order tracking, the number of revolutions per measurement (P) is independent of the rotational speed. Thus with a constant P, the order resolution is a constant (ΔO = 1/P). The orders lie on spectral lines, and leakage problems are avoided when an integer number of revolutions are measured.
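The synchronous-sampling bookkeeping can be sketched with assumed example values (M and P below are invented for illustration):

```python
# With M samples per revolution and P revolutions per block:
M, P = 64, 8

N = M * P              # blocksize: 512 samples
O_max = M / 2          # maximum order: 32, constant whatever the rpm
dO = 1.0 / P           # order resolution: 0.125 orders
print(N, O_max, dO)
```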
Chapter 3
Functions
3.1
Autocorrelation
Correlation is a measure of the similarity between two quantities. The autocorrelation function is found by taking a signal and comparing it with a time shifted version of itself.
The time domain autocorrelation function Rxx(τ) is thus acquired by multiplying a signal by the same signal displaced by a time τ and integrating the product over all time:

Rxx(τ) = lim(T→∞) (1/T) · ∫ x(t) x(t+τ) dt     Eqn 3-2

For a sampled signal x(n) the discrete autocorrelation function is calculated as

Rxx(n) = F⁻¹[ Sxx(k) ],   k = 0 ... N-1,  n = 0 ... N-1     Eqn 3-3
where F -1 is the inverse Fourier Transform and Sxx (k) is the discrete autopower
spectrum.
It can be seen that the greatest correlation will occur when τ = 0, and the autocorrelation function will thus be a maximum at this point, equal to the mean square value of x(t). Purely random signals will therefore exhibit just one peak, at τ = 0. Periodic signals however will exhibit another peak whenever the time shift equals a multiple of the period.
The autocorrelation function of a periodic signal is also periodic and has the same period as the waveform itself. This property is useful in detecting signals hidden by noise. The advantage of using the autocorrelation function rather than linear averaging is that no synchronizing trigger is required. Certain impulse-type signals also show up better using the autocorrelation function than using a frequency domain function.
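Eqn 3-3 can be evaluated directly, as in this minimal numpy sketch (not tied to any particular analyzer): the circular autocorrelation is the inverse FFT of the autopower spectrum, with its maximum at n = 0 equal to the mean square value.

```python
import numpy as np

def autocorrelation(x):
    """Circular autocorrelation Rxx(n) = F^-1[Sxx(k)] (Eqn 3-3)."""
    X = np.fft.fft(x)
    Sxx = X * np.conj(X) / len(x)      # discrete autopower spectrum
    return np.real(np.fft.ifft(Sxx))   # Rxx(0) = mean square value

# For a sine over an integer number of periods, Rxx is periodic with
# the same period, so hidden periodicities show up without a trigger.
```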
Crosscorrelation
Cross correlation is a measure of the similarity between two different signals. It therefore requires multiple channels. In terms of the time domain it is defined as:

Rxy(τ) = lim(T→∞) (1/T) ∫ x(t) · y(t+τ) dt    Eqn 3-4

As in the case of the autocorrelation function, the discrete cross correlation function Rxy(n) between two sampled signals x(n) and y(n) is calculated as,

Rxy(n) = F⁻¹[Sxy(k)],  k = 0...N−1,  n = 0...N−1    Eqn 3-5
with Sxy (k) being the discrete crosspower spectrum between the two signals.
Cross correlation indicates the similarity between two signals as a function of
the time shift. It is therefore useful in determining the time difference between
such signals.
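The use of Eqn 3-5 for finding the time difference between two signals can be sketched as follows (numpy only, invented test signals): the lag of the peak of Rxy estimates the delay.

```python
import numpy as np

def crosscorrelation(x, y):
    """Circular cross correlation Rxy(n) = F^-1[Sxy(k)] (Eqn 3-5)."""
    Sxy = np.conj(np.fft.fft(x)) * np.fft.fft(y) / len(x)  # crosspower spectrum
    return np.real(np.fft.ifft(Sxy))

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = np.roll(x, 30)                        # y is x delayed by 30 samples
delay = int(np.argmax(crosscorrelation(x, y)))
```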
Histogram
The probability histogram q(j) describes the relative occurrence of specific signal levels. Let the signal input range of a sampled signal x(n) be divided into J classes. Each class j, j = 0...J−1, can be characterized by an average value xj and a class increment Δx.

[Figure: a sampled signal (vertical axis: signal range) and the corresponding histogram of the number of samples per class.]
The probability histogram of a sampled signal x(n) can then be defined as,

q(j) = (1/N) Σ(n=0...N−1) kj(x(n)),  j = 0...J−1    Eqn 3-6

where

kj(x(n)) = 1, if xj − Δx/2 ≤ x(n) < xj + Δx/2
kj(x(n)) = 0, otherwise
The maximum value of J is either the number of time samples (Time data) or
spectral lines in the block.
Probability Density
The probability density p(j) is a normalized representation of the probability histogram q(j),

p(j) = 100 · q(j) / Δx,  j = 0...J−1    Eqn 3-7
Probability Distribution
The probability distribution d(j) gives the probability (in percent) that the signal level is below a given value. This function is calculated from the probability histogram q(j) given in equation 3-6,

d(j) = Σ(i=0...j) q(i),  j = 0...J−1    Eqn 3-8
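The three functions can be sketched with numpy's histogram (a minimal sketch; the class edges are chosen automatically here, and the ×100 factors express the density and distribution in percent, following Eqns 3-7 and 3-8):

```python
import numpy as np

def probability_functions(x, J):
    """q(j), p(j) and d(j) for J equal classes over the signal range."""
    counts, edges = np.histogram(x, bins=J)
    q = counts / len(x)                # probability histogram, Eqn 3-6
    dx = edges[1] - edges[0]           # class increment
    p = 100.0 * q / dx                 # probability density, Eqn 3-7
    d = 100.0 * np.cumsum(q)           # probability distribution (percent)
    return q, p, d
```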
3.2

The discrete frequency spectrum Xm(k), k = 0...N−1, of a sampled time record xm(n), n = 0...N−1, is obtained with the discrete Fourier transform (Eqn 3-9). The averaged frequency spectrum X(k) is the ensemble average (A) of M such instantaneous spectra,

X(k) = A(m=0...M−1) [Xm(k)],  k = 0...N−1    Eqn 3-10

Since only real valued time records are considered, the frequency spectrum has a Hermitian symmetry,

X(k) = X*(−k) = X*(N−k),  k = 0...N/2    Eqn 3-11
Autopower Spectrum

The autopower spectrum is the squared magnitude of the frequency spectrum. The discrete autopower spectrum Sxx(k) of a sampled time signal is defined as the ensemble average of the squared magnitude of M instantaneous discrete frequency spectra Xm(k),

Sxx(k) = A(m=0...M−1) [Xm(k) · X*m(k)],  k = 0...N−1    Eqn 3-12
Figure 3-2  Autopower spectra. A sinusoidal signal of peak amplitude A has a double sided frequency spectrum with lines of amplitude A/2 at ±f, a double sided autopower spectrum Sxx with lines of (A/2)², and a single sided (rms power) autopower spectrum Gxx with a line of A²/2 (Eqn 3-13).
Of this double sided frequency spectrum, only the positive frequency values are considered. In order to obtain a time signal power estimate, a summation of the power spectra values at the positive and negative frequencies must be made, resulting in the so-called RMS Autopower spectrum Gxx(k),

Gxx(k) = Sxx(k),  when k = 0
Gxx(k) = 2 · Sxx(k),  when k = 1 ... N/2 − 1    Eqn 3-14
The autopower spectrum can be presented in the following Amplitude formats (values for a sine of peak amplitude A; Δf is the line spacing and T the record length):

Amplitude format    RMS               Peak
Power               A²/2              A²
Linear              A/√2              A
PSD                 A²/(2·Δf)         A²/Δf
ESD                 A²·T/(2·Δf)       A²·T/Δf
Crosspower spectrum

The cross power spectrum Sxy is a measure of the mutual power between two signals at each frequency in the analysis band. It is the dual of the cross correlation function.

It is defined as the following averaged product,

Sxy(k) = A(m=0...M−1) [X*m(k) · Ym(k)],  k = 0...N−1    Eqn 3-15
The crosspower spectrum contains information about both the magnitude and
phase of the signals. Its phase at any frequency is the relative phase between
the two signals and as such it is useful in analyzing phase relationships.
Since it is a product, it will have a high value when both signal levels are high, and a low value when both signal levels are low. It is therefore an indicator of major signal levels on both the input and output. Its use in this respect should be treated with caution however, since a high value can also arise from the output level alone, without indicating that the input is the cause. The interdependence of input and output is revealed in the coherence function, which is described in the following subsection.

The cross power spectrum is used in the calculation of frequency response functions.

The Amplitude mode in which the crosspower spectrum is presented is as described in the previous section on the Autopower spectrum. RMS and Peak values are considered.
Coherence
There are three types of coherence functions; the ordinary coherence, partial co
herence and virtual coherence.
Ordinary Coherence
The (squared) ordinary coherence between two signals Xi(n) and Xj(n) is defined by,

γ²ij(k) = |Sij(k)|² / (Sii(k) · Sjj(k))    Eqn 3-16

where Sij(k) is the averaged crosspower, and Sii(k) and Sjj(k) are the averaged autopowers.
It is the ratio of the maximum energy in a combined output signal due to its various components to the total amount of energy in the output signal. Coherence can be used as a measure of the power in one channel that is caused by the power in another channel. As such it is useful in assessing the accuracy of transfer function measurements. It does not however need to apply to input and output, and can for example also be measured between shakers.
The coherence function can take values that range between 0 and 1. A high value (near 1) indicates that the output is due almost entirely to the input, and you can feel confident in the frequency response function measurements. A low value (near 0) indicates problems such as extraneous input signals not being measured, noise, nonlinearities or time delays in the system.
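This behaviour can be illustrated with scipy (a small sketch with invented signals): the ordinary coherence between an input and a noisy but linearly related output stays near 1 where the output is dominated by the input, and is always bounded by [0, 1].

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
x = rng.standard_normal(16384)                   # input signal
y = signal.lfilter([1.0, 0.5], [1.0], x)         # linearly related output
y = y + 0.1 * rng.standard_normal(len(y))        # small extraneous noise

f, coh = signal.coherence(x, y, fs=1024.0, nperseg=256)
# coh near 1: the output is almost entirely due to the input (Eqn 3-16)
```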
Multiple coherence (used in the calculation of the measurement function FRF)

The multiple coherence function is the coefficient that describes, in the frequency domain, the causal relationship between a single signal (an output spectrum) and a set of other signals (the considered input spectra), as a function of frequency and all considered references. It is the ratio of the energy in an output signal caused by several input signals to the total amount of energy in the output signal. It is used to verify the amount of noise on the measurements, as all responses should be related to the applied references (inputs).

The multiple coherence function between a single response spectrum Y(k) and a set of reference spectra Xi(k) is calculated from

γ²y:x(k) = 1 − Syy·x!(k) / Syy(k)    Eqn 3-17

where Syy·x!(k) is the conditioned autopower of the response, with the contributions of all references removed.
Partial Coherence
The partial coherence is the ordinary coherence between conditioned signals. Conditioned signals are those where the causal effects of other signals are removed in a linear least squares sense.

To define the partial coherence, consider the signals X1, ..., Xi, Xj, .... The partial coherence between Xi and Xj, after eliminating the signals X1 ... Xg, is given by,

γ²ij·g(k) = |Sij·g(k)|² / (Sii·g(k) · Sjj·g(k))    Eqn 3-18

with Sii·g(k), Sjj·g(k) and Sij·g(k) the conditioned auto- and crosspowers, i.e. the auto- and crosspowers of Xi and Xj after the contributions of X1 ... Xg have been removed.

Virtual Coherence

The virtual coherence is the ordinary coherence between a signal and a principal component,

γ²(k) = |S′ij(k)|² / (S′ii(k) · S′jj(k))    Eqn 3-19

with S′ii(k) and S′ij(k) the auto- and crosspowers expressed in terms of the principal components.

The value of the virtual coherence is always between 0 and 1. The sum of the virtual coherences between any signal and all principal components is also in the range [0,1].
The principal components {X′(k)} are obtained from the measured spectra {X(k)} by a unitary transformation,

{X′(k)} = [U]ᴴ {X(k)}    Eqn 3-20
{X(k)} = [U] {X′(k)}
[U]ᴴ [U] = [I]
[S′xx] = [U]ᴴ [Sxx] [U]

where
[S′xx] = the diagonal matrix of the eigenvalues (the principal autopowers)
{X′(k)} = the vector of virtual (principal component) signals
[U] = the matrix whose columns are the eigenvectors of the crosspower matrix [Sxx]
Frequency response functions

[Figure: a system with input X(k) and output Y(k), related by the frequency response function matrix H(k).]

If Ni is the number of system inputs and No the number of system outputs, let {X(k)} be an Ni-vector with the system input signals and {Y(k)} an No-vector with the system output signals. A frequency response function matrix [H(k)] of size (No, Ni) can then be defined such that,

{Y(k)} = [H(k)] {X(k)}    Eqn 3-21
The system described above is an ideal one, where the output is related directly to the input and there is no contamination by noise. This is not the case in reality, and various estimators are used to estimate [H(k)] from the measured input and output signals.
The H1 Estimator
The most commonly used one is the H1 estimator, which assumes that there is no noise on the input and consequently that all the X measurements are accurate (model: Y = H·X + N, with the noise N on the output).

It minimizes the noise on the output in a least squares sense. In this case the transfer function is given by

H1(k) = Syx(k) / Sxx(k)    Eqn 3-22
This estimator tends to give an underestimate of the FRF if there is noise on the
input. H1 estimates the anti-resonances better than the resonances. Best results
are obtained with this estimator when the inputs are uncorrelated.
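A minimal H1 sketch (an invented single-input system; scipy's `welch` and `csd` supply the averaged auto- and crosspowers needed in Eqn 3-22):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
x = rng.standard_normal(65536)                  # measured input (noise-free)
y = signal.lfilter([0.8], [1.0, -0.6], x)       # true system response
y = y + 0.05 * rng.standard_normal(len(y))      # noise on the output only

f, Sxx = signal.welch(x, fs=1.0, nperseg=1024)  # averaged autopower of x
f, Sxy = signal.csd(x, y, fs=1.0, nperseg=1024) # averaged crosspower
H1 = Sxy / Sxx                                  # H1 estimate of the FRF
```

Here `csd(x, y)` averages conj(X)·Y over segments, which is the crosspower required in the numerator; the density scaling cancels in the ratio.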
The H2 Estimator
Alternatively, the H2 estimator can be used. This assumes that there is no noise on the output and consequently that all the Y measurements are accurate (model: Y = H·(X − M), with the noise M on the input).

It minimizes the noise on the input in a least squares sense, and in this case the transfer function is given by

H2(k) = Syy(k) / Syx(k)    Eqn 3-23
This estimator tends to give an overestimate of the FRF if there is noise on the output. It estimates the resonances better than the anti-resonances.
The Hv Estimator
Finally, with the Hv estimator, [H(k)] is calculated from the eigenvector corresponding to the smallest eigenvalue of the matrix [Sxxy]:

[Sxxy] = | Sxx  Sxy |
         | Syx  Syy |    Eqn 3-24

This estimator minimizes the global noise contribution in a total least squares sense. When using this estimator, the partitioning of the noise over the input and output signals can be scaled.
(Model: Y − N = H·(X − M), with noise M on the input and noise N on the output.)
This estimator provides the best overall estimate of the frequency response function. It approximates to the H2 estimator at the resonances and to the H1 estimator at the anti-resonances. It does however require more computational time than the other two.
Frequency response functions depend on there being at least one reference
channel and one response channel.
Impulse Response
The impulse response (IR) function matrix [h(t)] expresses the time domain
relationship between the inputs and outputs of a linear system. This relation
ship takes the form of a convolution integral.
y(t)
44
x()x(t )d
Eqn 3-25
Functions
Eqn 3-26
Impulse response functions depend on there being at least one reference chan
nel and one response channel.
The FRF estimators (H1, H2 and Hv) are as described above.
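As a worked sketch of Eqn 3-26 (an assumed one-pole example system, numpy only): sampling H(k) on the DFT frequency grid and taking the inverse FFT recovers the impulse response, up to a negligible circular wrap-around.

```python
import numpy as np

N = 256
k = np.arange(N)
z = np.exp(2j * np.pi * k / N)        # DFT frequencies on the unit circle
H = 1.0 / (1.0 - 0.5 * z ** -1)       # FRF of y(n) = 0.5*y(n-1) + x(n)
h = np.real(np.fft.ifft(H))           # impulse response h(n)  (Eqn 3-26)

# For this system h(n) = 0.5**n, which decays to numerical noise well
# within the N-sample block, so the circular aliasing is negligible.
```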
3.3
Composite functions
The functions described in this section represent functions that can be acquired or processed during a Signature analysis. Since this type of analysis is intended to examine the evolution of signals as a function of a changing environment (e.g. rpm, time, ...), functions are needed that express this evolution. These are called composite functions, as they are derived from the `basic' measurement functions described in the previous section, for different environmental conditions.
Frequency section
This function describes the evolution of the energy of the measured signal over
the rpm range in a specified frequency band. It is always expressed as an Rms
frequency spectrum and is available only when the basic measurement function
is a frequency domain function.
The frequency section is calculated by integrating over a Bandwidth around the
center frequency value.
[Figure: a frequency band of width Bandwidth around the Center frequency, between the Lower bandvalue and the Upper bandvalue.]

The center frequency is the frequency at which the section will be calculated and is specified by the Center parameter. The Lower bandvalue and the Upper bandvalue are given by

Center frequency ± Bandwidth/2

The Bandwidth is determined by the Band mode parameter. Possible ways in which to express the Bandwidth are:

Band mode = frequency:  Δf = constant
Band mode = lines:  Δf = constant (a fixed number of spectral lines)
Band mode = %:  Δf proportional to the center frequency fc
Order sections
This function describes the evolution of the energy of the measured signal in a specified `order' band. Orders are introduced in section 2.3, in the chapter on types of testing. An `order' band is a frequency band whose center frequency changes as a function of the measurement environment or tracking parameter. It is necessary therefore that the tracking parameter be a `frequency' type of parameter (e.g. rotation speed in rpm). An order is nothing other than a multiple of this basic tracking parameter. The evolution of the energy in a specified order band is expressed as a function of the measured rpm. Through post processing it is also possible to examine it in terms of measured time or frequency.

Possible means of defining the span for integration are:

Band mode = frequency or lines:  Δf = constant
Band mode = order:  ΔO = constant, so that Δf = constant · rpm
Band mode = %:  ΔO = constant, with ΔOorder i = Bandwidth (%) · i and ΔOorder i+1 = Bandwidth (%) · (i+1), so that Δf = constant · rpm
Octave sections
An octave section represents the summation of values over octave bands. The center frequencies of the bands are defined in the ISO 266 norm. Possible octave bands are 1/1, 1/2, 1/3, 1/12 and 1/24 octaves.
3.4

Units

To ensure consistency in the manipulation of data, LMS software always operates with an internal set of reference units. The physical quantities with a canonical dimension of length, angle, mass, time, temperature, current, and light each have a corresponding reference unit as listed below:
Canonical dimension    Abbreviation    Reference unit    Abbreviation
length                 le              meter             m
angle                  an              radian            rad
mass                   ma              kilogram          kg
time                   ti              second            s
current                cu              Ampere            A
temperature            te              deg Kelvin        °K
light                  li              candela           cd
This means that all data, in either the internal data structures of the LMS software or the database, is stored in these units. A physical quantity with a dimension that is a combination of the above canonical dimensions will be allocated a unit in the internal unit system that is a combination of the corresponding reference units.

For example, a quantity with the dimension of acceleration (length/time²) will have a unit that is the reference unit of length divided by the reference unit of time squared (m/s²).
3.5
Rms calculations

This section describes the ways in which rms calculations are performed for different measurement functions. RMS stands for Root Mean Square and is a measure of the energy in a signal.

If the data is amplitude corrected, then it is automatically converted to energy correction for the calculations.

Time data

For a block of time samples yi, i = 0...k, the rms value is

rms = √( (1/(k+1)) Σ(i=0...k) yi² )    Eqn. 3-27
Frequency spectra

The frequency spectrum is first converted to a double sided amplitude spectrum.

[Figure: a spectral line of amplitude 2A in the single sided spectrum becomes a pair of lines of amplitude A at ±f in the double sided spectrum.]
The frequency range over which you want the rms value computed is defined by the lower and upper values f1 and f2. All lines completely within the range will be included in the calculations (Ai, where i takes values of 1 to k−1). For the lines at the beginning and the end (A0 and Ak), half of each value is taken.
[Figure: the double sided amplitude spectrum between f1 (line i = 0) and f2 (line i = k), with the included line amplitudes Ai.]

For amplitude (linear) spectra the band rms is

rms = √( 2 · [ A0²/2 + Σ(i=1...k−1) Ai² + Ak²/2 ] )    Eqn. 3-28

For autopower spectra, whose line values are already squared amplitudes, the line values themselves are summed,

rms = √( 2 · [ A0/2 + Σ(i=1...k−1) Ai + Ak/2 ] )    Eqn. 3-29

and for PSD spectra the sum is additionally multiplied by the line spacing Δf (Eqn. 3-30).
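Under the half-weight edge-line convention above (with the factor 2 accounting for the negative frequencies of the double sided spectrum — a sketch of one interpretation, not a definitive formula), the band rms can be written as:

```python
import numpy as np

def band_rms(A, i1, i2):
    """RMS over lines i1..i2 of a double sided amplitude spectrum:
    full weight for interior lines, half weight for the two edge lines,
    doubled to include the negative-frequency half of the spectrum."""
    inner = np.sum(A[i1 + 1:i2] ** 2)
    edges = 0.5 * (A[i1] ** 2 + A[i2] ** 2)
    return np.sqrt(2.0 * (inner + edges))
```

A sine of peak amplitude Apk appears as a double sided line of Apk/2, so the band rms evaluates to Apk/√2, the familiar sine rms.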
Frequency response functions

For frequency response functions the rms value is the ratio of the rms values of the response and reference spectra,

Hrms = Xrms / Frms

with the band average computed from the spectral lines as

Hrms = √( (1/(k−1)) · [ A0²/2 + Σ(i=1...k−1) Ai² + Ak²/2 ] )    Eqn. 3-31
Sound power, sound intensity (active and reactive), SFTVI and SFUI
SFTVI (sound field temporal uniformity indicator) and SFUI (sound field uniformity indicator) are ISO defined functions for acoustic measurements and analysis. The rms computes the total energy in a band, so since these functions are already a measure of energy, the values of the spectral lines can simply be added,

Rms = A0/2 + Σ(i=1...k−1) Ai + Ak/2    Eqn. 3-32

or, for squared (power) line values,

Rms = √( A0²/2 + Σ(i=1...k−1) Ai² + Ak²/2 )    Eqn. 3-33
Part II
Acoustics and Sound Quality

Chapter 4  Terminology and definitions
Chapter 5  Acoustic measurements
Chapter 6  Sound quality
Chapter 7  Sound metrics
Chapter 8  Acoustic holography
Chapter 4
Terminology and definitions

4.1

Acoustic quantities

Sound power (P)

The total sound power of a number of partial sources is the sum of the individual powers,

P = Σ(i=1) Pi    Eqn 4-1
Sound pressure
The effect of the sound power emanating from a source is the level of sound pressure. Sound pressure is what the ear detects as noise, the level of which depends to a great extent on the acoustic environment and the distance from the source. The sound pressure is defined as the difference between the actual and the ambient pressure.

This is a scalar quantity that can be derived from measured sound pressure spectra or autopower spectra, either at one specific frequency (spectral line) or integrated over a certain frequency band.

Sound pressure measurements can be obtained at each measurement point, and are independent of the measurement direction (X, Y, or Z). The units are Pascal (Pa) or N/m².
Sound (Acoustic) intensity

The total power radiated through a surface S is

P = ∮S I · dS    Eqn 4-2

and the active intensity is the time average of the instantaneous intensity over the measurement time T,

I = (1/T) ∫ I(t) dt    Eqn 4-3
As such, if the energy is flowing back and forth resulting in zero net energy
flow then there will be zero intensity.
Normal sound intensity
This is the component of the sound intensity vector normal to the measurement
surface.
Free field
This term refers to an idealized situation
where the sound flows directly out from
the source and both pressure and intensi
ty levels drop with increasing distance
from the source according to the inverse
square law.
Diffuse field
Particle velocity

Pressure variations give rise to movements of the air particles. It is the product of pressure and particle velocity that results in the intensity. In a medium with no mean flow, therefore,

I = p · v    Eqn 4-4

For a plane progressive wave the particle velocity is related to the pressure by

v = p / (ρ · c)    Eqn 4-5

By combining equations 4-4 and 4-5, it can be seen that in a free field a relationship exists enabling the acoustic intensity to be determined from the effective pressure of a plane wave,

I = pe² / (ρ · c)    Eqn 4-6

where
ρ = mass density (kg/m³)
c = velocity of sound in the medium (m/s)
c = velocity of sound in the medium (m/s)
4.2

Reference conditions

It is common practice to define standards for acoustic intensity, pressure, etc. at an air temperature of 20 °C and a standard atmospheric pressure of 1013 hPa (≈ 1 bar). Under these conditions:

the density of air ρ0 = 1.21 kg/m³
the velocity of sound in air c = 343 m/s
the acoustic impedance ρ0 · c = 415 rayls (kg/m²s)
dB scale

Since the range of pressure levels that can be detected is large, and the ear responds logarithmically to a stimulus, it is practical to express acoustic parameters as a logarithmic ratio of a measured value to a reference value. Hence the use of the decibel scales, with the standard reference values p0 = 2·10⁻⁵ Pa (pressure), I0 = 10⁻¹² W/m² (intensity) and P0 = 10⁻¹² W (power).

Sound power level:
Lw = 10 log10 ( |P| / P0 ) dB    Eqn 4-8

Particle velocity level:
Lv = 20 log10 ( v / v0 ) dB    Eqn 4-9

Sound intensity level:
LI = 10 log10 ( |I| / I0 ) dB    Eqn 4-10

Normal sound intensity level:
LIn = 10 log10 ( |In| / I0 ) dB    Eqn 4-11

Sound pressure level:
Lp = 10 log10 ( p² / p0² ) = 20 log10 ( p / p0 ) dB    Eqn 4-12
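Numerically (a small sketch using the standard reference values):

```python
import numpy as np

P0 = 1e-12     # reference sound power, W
I0 = 1e-12     # reference intensity, W/m^2
p0 = 2e-5      # reference pressure, Pa

def Lw(P): return 10.0 * np.log10(abs(P) / P0)   # sound power level (Eqn 4-8)
def LI(I): return 10.0 * np.log10(abs(I) / I0)   # intensity level   (Eqn 4-10)
def Lp(p): return 20.0 * np.log10(abs(p) / p0)   # pressure level    (Eqn 4-12)

# Doubling a power-like quantity adds 10*log10(2) ~ 3 dB;
# doubling the pressure adds 20*log10(2) ~ 6 dB.
```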
4.3

Octave bands

Complete (1/1) octave bands represent frequency bands where the center frequency of one band is approximately twice (according to standardized values) that of the previous one:

fc,i+1 = 2 · fc,i

Partial octave bands (1/3, 1/12, 1/24, ...) represent frequency bands where

fc,i+1 = 2^(1/x) · fc,i

and where x = 3, 12, 24, ...

The Lower band limit of a 1/x octave band is fc · 2^(−1/(2x))
The Upper band limit of a 1/x octave band is fc · 2^(+1/(2x))

The bands defined by these formulas are termed the `natural' bands. The international ISO 266 norm defines normalized center frequencies for octave bands, and the values for 1/1, 1/2 and 1/3 octave bands are listed in table 4.1. Natural frequencies are used for calculations, but the normalized frequencies are used for annotation. Octave bands above or below the normalized values are annotated with the natural frequencies.
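The `natural' band formulas can be sketched directly (base-2 bands; `octave_band_limits` is a hypothetical helper name):

```python
import numpy as np

def octave_band_limits(fc, x):
    """Lower and upper limits of a 1/x octave band with center fc."""
    lower = fc * 2.0 ** (-1.0 / (2.0 * x))
    upper = fc * 2.0 ** (+1.0 / (2.0 * x))
    return lower, upper

# Natural 1/3-octave centers around 1 kHz: fc doubles every three bands.
thirds = 1000.0 * 2.0 ** (np.arange(-3, 4) / 3.0)
```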
62
Normalized
frequency
16
1/ 1/ 1/
1
2
3
oct oct oct
x
18
x
22.4
25
x
x
x
x
45
50
250
x
x
71
x
x
400
90
100
112
x
x
140
800
x
x
3150
4000
5000
x
x
6300
8000
9000
x
1600
10000
11200
1250
1400
160
7100
1120
x
2250
5600
630
1000
4500
900
x
2000
3550
x
710
80
2800
560
x
1600
2240
315
500
1/ 1/ 1/
1
2
3
oct oct oct
1800
450
56
Part II
200
355
40
Table 4.1
280
35.5
125
224
28
63
160
180
20
31.5
1/ 1/ 1/
1
2
3
oct oct oct
x
x
12500
14000
x
16000
63
4.4

Acoustic weighting

Frequency weighting

The human ear has nonlinear, frequency dependent characteristics, which means that the sensation of loudness cannot be perfectly described by the sound pressure level or its spectrum. To derive an experienced loudness level from the sound pressure signal, the frequency spectrum of the sound pressure signal is multiplied by a frequency weighting function. These weighting functions are based on experimentally determined equal loudness contours, which express the loudness sensation as a function of sound pressure level and frequency. A number of equal loudness contours are shown in Figure 4-1. The loudness level is expressed in `Phons'. 1 kHz tones are used as the reference, which means that for a 1000 Hz tone, the Phon value corresponds to the dB sound pressure level.
Figure 4-1  Equal loudness contours

Figure 4-2  Frequency weighting curves (relative response in dB over the frequency range 10 Hz to 10 kHz)
The A, B and C weighting values at the normalized 1/3-octave center frequencies are:

Frequency (Hz)   A weighting dB   B weighting dB   C weighting dB
16               -56.7            -28.5            -8.5
20               -50.5            -24.2            -6.2
25               -44.7            -20.4            -4.4
31.5             -39.4            -17.1            -3.0
40               -34.6            -14.2            -2.0
50               -30.2            -11.6            -1.3
63               -26.2            -9.3             -0.8
80               -22.5            -7.4             -0.5
100              -19.1            -5.6             -0.3
125              -16.1            -4.2             -0.2
160              -13.4            -3.0             -0.1
200              -10.9            -2.0             0
250              -8.6             -1.3             0
315              -6.6             -0.8             0
400              -4.8             -0.5             0
500              -3.2             -0.3             0
630              -1.9             -0.1             0
800              -0.8             0                0
1000             0                0                0
1250             +0.6             0                0
1600             +1.0             0                -0.1
2000             +1.2             -0.1             -0.2
2500             +1.3             -0.2             -0.3
3150             +1.2             -0.4             -0.5
4000             +1.0             -0.7             -0.8
5000             +0.5             -1.2             -1.3
6300             -0.1             -1.9             -2.0
8000             -1.1             -2.9             -3.0
10000            -2.5             -4.3             -4.4
12500            -4.3             -6.1             -6.2
16000            -6.6             -8.4             -8.5
20000            -9.3             -11.1            -11.2
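Applied to band levels (an invented three-band example), the weighting is added in dB per band and the weighted bands are then summed energetically:

```python
import numpy as np

# Unweighted 1/3-octave band levels (dB) at 125, 1000 and 4000 Hz,
# with the corresponding A-weighting corrections from the table above.
levels_db   = np.array([70.0, 70.0, 70.0])
a_weight_db = np.array([-16.1, 0.0, +1.0])

weighted = levels_db + a_weight_db
# Total A-weighted level: energetic (power) summation of the bands.
total_dba = 10.0 * np.log10(np.sum(10.0 ** (weighted / 10.0)))
```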
Chapter 5
Acoustic measurements

5.1
Sound Intensity
The sound intensity in a specified direction at a point is the average rate of
sound energy transmitted in the specified direction through a unit area normal
to this direction at the point considered.
In most situations it is the component of the sound intensity vector normal to
the measurement surface, I n , which is measured.
In order to determine sound intensity, both the instantaneous pressure and the corresponding particle velocity must be measured simultaneously. In practice, the sound pressure can be obtained directly using a microphone. The instantaneous particle velocity can be calculated from the pressure gradient between two closely spaced microphones. A sound intensity probe can therefore consist of two closely spaced pressure microphones, which measure both the sound pressure and the pressure gradient between the microphones.
For frequency domain calculations, it can be shown that the sound intensity can be calculated from the imaginary part of the crosspower between the two microphone signals. The following formula is used

I = Imag(S1,2) / (2π f d ρ)    Eqn 5-1

where S1,2 is the double sided crosspower between the two microphone signals, f is the signal frequency, d is the microphone distance and ρ is the air density.
For this function, all channels are processed as channel pairs, each pair consisting of two consecutive channels. It therefore requires that an even number of channels be defined.
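Eqn 5-1 can be sketched as follows (a hypothetical helper function; the default density is the reference value from section 4.2):

```python
import numpy as np

def sound_intensity(S12, f, d, rho=1.21):
    """Active intensity from the two-microphone crosspower (Eqn 5-1):
    I = Imag(S12) / (2*pi*f*d*rho)."""
    return np.imag(S12) / (2.0 * np.pi * f * d * rho)
```

The probe spacing d trades off phase accuracy at low frequencies against the finite-difference approximation error at high frequencies.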
The reactive sound intensity (non propagating energy) is calculated as

Ireactive = (S1,1 − S2,2) / (2π f d ρ)    Eqn. 5-2
For the idealized case of measurements in the free field (free space without reflections) and in the direction of propagation, the reactive intensity is zero.
Residual intensity

This is defined as

RI = Lp − δpI0    Eqn 5-3

where Lp is the measured sound pressure level and δpI0 is the pressure-residual intensity index. To calculate the residual intensity, therefore, it is necessary to have the pressure-residual intensity index available. This is described below.

Intensity measurements can be made in a sound field where the sound intensity level is in the range

Lp − δpI0 ≤ LI ≤ Lp    Eqn 5-4

Lp is defined in equation 4-12, and LI in equation 4-10. In a free field the pressure and intensity levels are the same, whereas in all other cases the measured intensity will be less than the pressure. The residual intensity (Lp − δpI0) represents the lowest intensity level which can be detected by the system for the given sound pressure level.
The pressure-residual intensity index is determined from the measured sound pressure and normal sound intensity levels,

δpI0 = (Lp − LIn) dB    Eqn 5-5

where Lp is the sound pressure level and LIn is the normal sound intensity level. Measurements are then valid in the range

Lp − δpI0 dB ≤ LI ≤ Lp    Eqn 5-6

[Figure 5-1: the usable measurement range on a dB scale, bounded above by Lp and below by Lp − δpI0, with the dynamic capability Ld.]
The dynamic capability Ld is derived from the pressure-residual intensity index by subtracting a safety factor K (Table 5.1),

Ld = (δpI0 − K) dB    Eqn 5-7

Table 5.1  Safety factor K (dB)

Precision (class 1): 10
Engineering (class 2): 10
Survey (class 3): —
5.2

Effective sound pressure

The effective (rms) sound pressure over a frequency band is obtained by integrating the squared pressure spectrum p(f), or the pressure autopower A²(f), over the band,

pe² = ∫(f1...f2) p²(f) df = ∫(f1...f2) A²(f) df    Eqn 5-8
Acoustic intensity

This is a vector quantity calculated directly from measured acoustic intensity functions,

I = ∫(f1...f2) I(f) df    Eqn 5-9
When intensity measurements are not available but sound pressure measurements are, then the magnitude of the acoustic intensity can be computed from the effective sound pressure pe and the acoustic impedance ρ0·c,

I = pe² / (ρ0 · c)    Eqn 5-10

but only under the assumption of plane progressive waves in a free field.
Sound power

This is calculated from the geometrical area S and the acoustic intensity component perpendicular to the surface,

P = In · S    Eqn 5-11

or, from pressure data under the free field assumption,

P = ( pe² / (ρ0 · c) ) · S    Eqn 5-12

Particle velocities

These can be calculated when both acoustic intensity and sound pressure data are available,

v = I / p    Eqn 5-13
All the possible analysis functions are summarized in Table 5.2. (These are
based on the assumption of plane progressive waves in a free field.)
Table 5.2  Analysis functions

Acoustic quantity               Symbol   Required data                       Formula               MKS units
Effective (RMS) sound pressure  pe       sound pressure spectrum p           see Eqn 5-8           Pa or N/m²
                                         pressure autopower A                see Eqn 5-8
Intensity                       I        intensity I                         I                     W/m²
Sound power                     P        sound pressure spectrum and area    pe²/(ρ0·c) · S (1)    W
                                         pressure autopower and area         pe²/(ρ0·c) · S (1)
                                         intensity and area                  I · S
Particle velocity               v        intensity and pressure              I / p                 m/s

(1) Only under the assumption of plane progressive waves in a free field.
5.3

[Figure: a source on a reflecting plane with an enveloping measurement surface, as used for sound power determination according to ISO 3744 and ISO 3745.]
5.4

Frequency bands

Whenever an acoustic quantity is integrated over a certain frequency band, the following formula applies

a = ∫(f1...f2) a(f) df    Eqn 5-14

The integration of a continuous function a(f) is replaced by a finite sum over the corresponding discrete samples:

a = (1/2) a1 + Σ(f1 < fi < f2) ai + (1/2) a2    Eqn 5-15

where
a1 = a(f1)
a2 = a(f2)
This integration takes into account the full value of all data samples between the two limits, and 50 % of the first and last sample. It can be applied between any two measured frequency limits.

It is good practice to maintain, for the calculation, the type of frequency band that was used in the acquisition of the data. In fact, data acquired in octave bands must remain in those bands for the analysis. The calculation of the field indicators also makes little sense unless the analysis bands correspond with the measurement bands.
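The finite sum of Eqn 5-15 over discrete samples can be sketched as:

```python
import numpy as np

def band_integral(a, i1, i2):
    """Finite-sum band integration (Eqn 5-15): full weight for samples
    strictly between the limits, half weight for the first and last."""
    a = np.asarray(a, dtype=float)
    return 0.5 * a[i1] + float(np.sum(a[i1 + 1:i2])) + 0.5 * a[i2]
```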
5.5

Field indicators

When attempting to analyze the sound power being radiated from a noise source in situ, the international standard ISO 9614-1 lays out a number of measurement conditions which must be adhered to if the results are to be considered acceptable for this purpose. A number of criteria must be satisfied, based on the values of particular indicator functions, to ensure the requisite adequacy of the measurements and meshes. This section describes both the field indicators themselves and the criteria used to assess the results.

F1 Temporal variability indicator

F1 = (1/I̅n) · √( (1/(M−1)) Σ(k=1...M) (Ink − I̅n)² )    Eqn 5-16

where I̅n is the mean value of M short time averages of Ink, defined in the following equation.

I̅n = (1/M) Σ(k=1...M) Ink    Eqn 5-17
F2 Surface pressure-intensity indicator

F2 = Lp − L|In|    Eqn 5-18

where Lp is the surface averaged sound pressure level (Eqn 5-19), i indicates the measurement surface and N is the total number of surfaces (of the local component), and

L|In| = 10 log10 ( (1/N) Σ(i=1...N) |Ini| / I0 )    Eqn 5-20

where |Ini| is the absolute (unsigned) value of the normal intensity vector.
Note!
A large difference between intensity and pressure suggests that the probe is not well aligned or that you are operating in a diffuse field.

In order to calculate F2 it is necessary to have both intensity and autopower (or pressure) measurements for all points on the mesh.
F3 Negative partial power indicator

F3 = Lp − LIn    Eqn 5-21

where

LIn = 10 log10 ( |(1/N) Σ(i=1...N) Ini| / I0 )    Eqn 5-22

Note!
If the quantity (F3 − F2) is too great, the set of measurements does not satisfy the ISO requirements.
In order to calculate F3 it is necessary to have both intensity and autopower (or
pressure) measurements for all points on the mesh.
F4 Nonuniformity indicator

This indicates the measure of spatial (or positional) variability that exists in the field. It can be compared with the statistical parameter standard deviation.

F4 = (1/I̅n) · √( (1/(N−1)) Σ(i=1...N) (Ini − I̅n)² )    Eqn 5-23

where i indicates the measurement surface and N is the total number of surfaces. I̅n is the mean of the normal acoustic intensity vectors taken over the N surfaces,

I̅n = (1/N) Σ(i=1...N) Ini    Eqn 5-24
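Eqns 5-23 and 5-24 can be sketched as (a hypothetical helper name):

```python
import numpy as np

def f4_nonuniformity(In):
    """F4 (Eqns 5-23/5-24): spatial nonuniformity of the normal intensity
    over the N measurement surfaces, relative to the mean intensity."""
    In = np.asarray(In, dtype=float)
    mean = In.mean()                               # Eqn 5-24
    dev = np.sqrt(np.sum((In - mean) ** 2) / (len(In) - 1))
    return dev / mean                              # Eqn 5-23
```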
5.5.1

The criteria

Three criteria can be evaluated in verifying the results of an acoustic intensity analysis.

Criterion 1:  F2 ≤ Ld
Criterion 2:  F3 − F2, which must remain within the limit given in the standard
The band-dependent values for the three accuracy classes are:

Octave band    1/3-octave band    Precision (class 1)    Engineering (class 2)    Survey (class 3)
63-125         50-160             19                     11
250-500        200-630            19
1000-4000      800-5000           57                     29
               6300               19                     14
Chapter 6
Sound quality

6.1
Figure 6-1  Anatomy of the ear: the outer ear (pinna, ear canal, ear drum), the middle ear, and the inner ear (cochlea with scala vestibuli and scala tympani, round window, Eustachian tube, nerve fibers).
Binaural hearing
Another essential characteristic of human hearing is that it is binaural in nature. The sound signals received by the left and right ear show a relative time delay as well as a spectral difference, dependent on the direction of the sound. Below about 1500 Hz, the phase difference between the two signals is the main contribution to localization, while above this frequency the interaural level difference and the difference in spectrum are the principal factors.

Processing in the human brain not only allows the sound to be spatially localized, but also allows unwanted sounds to be suppressed and attention to be concentrated on a sound coming from a specific direction {2-6}. This is the well known `cocktail party' effect, where it is possible to focus one's hearing on an individual a certain distance away in the presence of significant background noise.
Sound perception
The body, head and outer ear effects consist mainly of a spatial and spectral filtering that is applied to the acoustic stimulus. Consequently, just looking at the frequency spectrum of a freely positioned microphone does not necessarily lead to a correct assessment of the human response. In other words, there is no simple relationship between the measured physical sound pressure level and the human perception of the same sound.
The effects of the inner ear are many, but the most important are its nonlinear characteristics. This means that the auditory impression of sound strength, which is referred to by the term `loudness', is not linearly related to the sound pressure level. In addition, the perceived loudness of a pure tone of constant sound pressure level varies with its frequency. Also the auditory impression of frequency, which is referred to by the term `pitch', is not linearly related to the frequency itself. These and other effects are described below.
Loudness
The sound pressure level is not linearly related to the auditory impression of sound strength (or loudness). Together with the frequency dependencies discussed above, this means that the sensation of loudness cannot be correctly described by the acoustic pressure level or its spectrum. Figure 6-2 {5} shows a number of curves representing levels of perceived equal loudness (for sinusoidal tones) across a frequency range as a function of acoustic pressure level.
Figure 6-2
Pitch
The perceived `frequency sensation', referred to as `pitch', is not directly related
to the frequency itself {6}.
The pitch of a pure tone varies with both the frequency and the sound pressure level, and this relationship is itself dependent on the frequency of the tone. Pure tones can nevertheless be used to determine how pitch is perceived. One possibility is to measure the sensation of `half pitch'. In this case the subject is asked to listen to one pure tone, and then to adjust the frequency of a second one such that it produces half the pitch of the first. At low frequencies, the halving of the pitch sensation corresponds to a ratio of 2:1 in frequency. At high frequencies, however, this does not occur and the corresponding frequency ratio is larger than 2:1. For example, a pure tone of 8 kHz produces a `half pitch' of only 1300 Hz.
So although the ratio between pitches can be determined from experiments, to obtain absolute values it is necessary to define a reference for the sensation of `ratio pitch'. A reference frequency of 125 Hz was chosen so that at low frequencies the numerical value of the frequency is identical to the numerical value of the ratio pitch. Because ratio pitch determined in this way is related to our sensation of melodies, it was assigned the dimension `mel'. Therefore a pure tone of 125 Hz has a ratio pitch of 125 mel, and the tuning standard, 440 Hz, shows a ratio pitch with almost the same numerical value.
At high frequencies, the numerical values of frequency and ratio pitch deviate substantially from one another. The experimental finding that a pure tone of 8 kHz has a `half pitch' of 1300 Hz is reflected in the numerical values of the corresponding ratio pitches. The frequency of 8 kHz corresponds to a ratio pitch of 2100 mel and the frequency of 1300 Hz corresponds to a ratio pitch of 1050 mel, which is half of 2100 mel.
Critical bands
The inner ear can be considered to act as a set of overlapping constant percentage bandwidth filters. The noise bandwidths concerned are approximately constant, with a bandwidth of around 110 Hz, for frequencies below 500 Hz, evolving to a constant percentage value (about 23 %) at higher frequencies. This corresponds closely with the nonlinear frequency-distance characteristics of the cochlea. These bandwidths are often referred to as `critical bandwidths', and a `Bark' scale is associated with them as shown in Table 6.1.
Bark   Center frequency (Hz)   Bandwidth (Hz)
 1        50                    100
 2       150                    100
 3       250                    100
 4       350                    100
 5       450                    110
 6       570                    120
 7       700                    140
 8       840                    150
 9      1000                    160
10      1170                    190
11      1370                    210
12      1600                    240
13      1850                    280
14      2150                    320
15      2500                    380
16      2900                    450
17      3400                    550
18      4000                    700
19      4800                    900
20      5800                   1100
21      7000                   1300
22      8500                   1800
23     10500                   2500
24     13500                   3500
Table 6.1
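Using the values of Table 6.1, a frequency can be mapped to its critical band. The sketch below approximates each band as its center frequency plus or minus half its bandwidth (an approximation we introduce for illustration; the standard band edges differ slightly):

```python
# Center frequencies (Hz) and bandwidths (Hz) from Table 6.1.
CENTERS = [50, 150, 250, 350, 450, 570, 700, 840, 1000, 1170,
           1370, 1600, 1850, 2150, 2500, 2900, 3400, 4000, 4800,
           5800, 7000, 8500, 10500, 13500]
BANDWIDTHS = [100, 100, 100, 100, 110, 120, 140, 150, 160, 190,
              210, 240, 280, 320, 380, 450, 550, 700, 900, 1100,
              1300, 1800, 2500, 3500]

def critical_band(freq_hz):
    """Return the Bark number (1-24) of the first band containing
    freq_hz, taking each band as center +/- half its bandwidth."""
    for bark, (fc, bw) in enumerate(zip(CENTERS, BANDWIDTHS), start=1):
        if fc - bw / 2 <= freq_hz < fc + bw / 2:
            return bark
    return None
```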
Masking
The critical bands described above have important implications for sounds composed of multiple components. For example, narrow band random sounds falling within one such filter bandwidth will add up to the global sensation of loudness at the center frequency of the filter. On the other hand, a high level sound component may `mask' another lower level sound which is too close in frequency.
An example of masking is shown below {5}. A 50 dB, 4 kHz tone (marked +)
can be heard in the presence of narrow-band noise, centered around 1200 Hz,
up to a level of 90 dB. If the noise level rises to 100 dB, the tone is not heard.
Figure 6-3  Masking of a 4 kHz tone by narrow-band noise centered around 1200 Hz (threshold of hearing as a function of frequency)
The higher the level of the masking sound, the wider the frequency band over which masking occurs. Again, it turns out that multiple sound components falling within one of the ear filter bandwidths add up to the masking level, while when they are further apart each can be considered as a separate sound with its own masking properties.
Temporal effects
Finally, a number of temporal effects are associated with the hearing process. Sounds must `build up' before causing a neural reaction; the reaction time, however, is dependent on the sound level. This has an effect on the perceived loudness, since the loudness of a tone burst decreases for durations smaller than about 200 ms. For longer durations, the loudness is almost independent of duration.
This also has its consequences for masking:
- Short sounds preceding a second loud sound can be reduced in loudness or even masked. The time intervals for this temporal `pre-masking' phenomenon are in the order of tens of milliseconds.
- A similar effect may occur after switching off a loud sound. During a time interval of up to 200 ms (dependent on masking and tone level), short tone bursts may be masked (post-masking).
- In the presence of a given continuous sound, tone bursts with levels exceeding that of the first signal might be obscured, depending on their length. This is called `simultaneous masking'.
A detailed discussion of these temporal effects can be found in {6}.
6.2
Figure 6-4  The sound quality engineering process: measurement input, analysis (digital filtering, digital spectral processing, comfort analysis), replay output and reporting
Measurements
Sound quality measurements are acoustic measurements made with microphones. These can be digitally recorded and imported into the computer system, but in order to successfully evaluate a sound it is essential that it is both recorded and replayed in the most accurate and representative way possible. Binaural recording is a technique whereby microphones are mounted inside the ears of an artificial head to represent the sensation of human hearing as closely as possible.
Evaluation
The next step in dealing with a sound quality issue is to gain a proper understanding of the quality of the sound. In order to evaluate sound quality characteristics, different (non-exclusive) approaches may be followed.
(a) The acoustic signal can be evaluated subjectively by a specialist or a jury of listeners. This can be achieved by replaying the signal either digitally via a recorder or directly via an analog output to headphones or speakers. When using direct replay, cyclic repetition of a particular segment can be performed, and techniques are provided to suppress the `click' at the start and end of a segment as well as on-line notch filtering. This latter facility can give a very fast assessment of the critical spectral characteristics of a sound.
(b) The acoustic pressure signal is processed in such a way that perception-relevant quantitative values can be obtained through the use of adequate sound quality metrics. Such metrics form part of the comfort analysis.
Modification
Important information on the nature of a sound can be obtained by modifying
the sound signal and comparing its perceived quality with the original. This
modification can be imposed in the time, frequency or order domains.
Figure 6-5  The binaural measurement chain: artificial head and DAT recorder (recording equalization, calibration), transfer to the computer for sound quality analysis, and replay to the listener via equalization/de-equalization
Recording
The first stage in this process is to make an accurate recording of a sound. A single microphone situated in free space is insufficient for this, since at least four microphones would be necessary to correctly capture the 3D nature of the sound. It has been demonstrated in the previous section that the pressure experienced by the eardrum is greatly influenced by the presence of the head and torso of the listener, and is further affected by the non-linear operating characteristics of the ear itself. As a consequence, one of the most accurate ways to record a sound is to mimic the function of the ears themselves and place two microphones inside the ear canals. Such a technique is known as binaural recording, and involves two inputs representing what the left and the right ears would hear.
Although it is possible to place the microphones inside the ears of a real head, it is more common to use an artificial head which provides similar spatial filtering to that of an actual head, shoulders and torso.
Equalization
You may wish to reconstruct this recording as if it were the original sound and not as it is heard inside the head. In this case, you will need to `undo' the modifications that were caused by the presence of the head. The sound can be reconstructed as if it were in a free field or a diffuse field.
A free field refers to an idealized situation where the sound flows directly out from the source and the pressure levels drop with increasing distance from the source. A diffuse field occurs in an enclosed space where the sound is reflected many times.
Thus, when you are recording a sound you can determine the type of field in which you wish to reconstruct it, and the appropriate compensation or equalization will be applied. If you only wish to replay the sound through headphones, then you do not need equalization, so you can either select a non-equalized recording or you will have to de-equalize it before it is replayed through headphones.
Transfer to computer
The recording on the DAT recorder is held in a 16 bit audio format. When this is transferred to a computer system, it will then be converted to a 32 bit floating point format. To achieve this conversion a calibration factor is required.
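The conversion can be sketched as follows; the calibration value (full scale corresponding to 20 Pa) is a made-up example, not a value from the text:

```python
import numpy as np

def to_calibrated_pascals(samples_int16, calibration):
    """Convert 16 bit integer audio samples to 32 bit floating point
    sound pressure values (Pa), where the calibration factor gives the
    pressure corresponding to digital full scale (32768)."""
    return samples_int16.astype(np.float32) * np.float32(calibration / 32768.0)

# Example: suppose full scale corresponds to 20 Pa (a hypothetical value).
raw = np.array([0, 16384, -32768], dtype=np.int16)
pa = to_calibrated_pascals(raw, 20.0)
```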
Replay
When you need to replay the signal on headphones, de-equalization may be necessary if free-field or diffuse-field equalization has been applied to the original recording. In addition, compensation is required to take account of the transfer function associated with the particular set of headphones to be used.
6.3
Reading list
1   D. Lubman, Noise Quality, Toward a Larger Vision of Noise Control Engineering, Journal of Noise Control Engineering, ....
    W. Bray et al., Development and Use of Binaural Measurement Technique, Proc. Noise-Con '91, Tarrytown (NY), July 14-16, 1991, pp. 443-450.
    J. Hassal, K. Zaveri, Acoustic Noise Measurements, Bruel & Kjaer, DK-2850 Naerum, Denmark, 1988.
10
11
12
13
14  S.S. Stevens, Procedure for Calculating Loudness: Mark VI, J. Acoust. Soc. Am., Vol. 33, Nr. 11, pp. 1577-1585, 1961.
15  S.S. Stevens, Perceived Level of Noise by Mark VII and Decibels, J. Acoust. Soc. Am., Vol. 51, Nr. 2, pp. 575-601, 1972.
16
17  L.L. Beranek, Criteria for Noise and Vibration in Communities, Buildings and Vehicles, in Noise and Vibration Control, revised edition, McGraw-Hill Inc., 1988.
18
19
20
21  M.F. Russel, What Price Noise Quality Indices, Proc. Engineering Integrity Society Symposium on NVH Challenges - Problem Solutions, Oct. 21, 1992.
22
23  D.G. Fish, Vehicle Noise Quality - Towards Improving the Correlation of Objective Measurements with Subjective Rating, I. Mech. E. paper 925186, Ref. C389/468, FISATA conference, Engineering for the customer, pp. 29-36, 1992.
24
25
26
27
28
29
30
31
32  K. Fujita et al., Research on Sound Quality Evaluation Methods for Exhaust Noise, JSAE Review (E), Vol. 9, Nr. 2, April 1988, pp. 28-33.
33  American National Standard, S3.14-1977 (R-1986), Rating Noise with Respect to Speech Interference, Acoustical Society of America.
34
35
36
37
38
39
40
41  H. Hammershoi, H. Moller, Artificial Heads for Free Field Recording: How Well Do They Simulate Real Heads?, Proc. 14th ICA, Beijing, 1992, Paper H6-7 (2 pp).
42
43
44
45  S.M. Hutchins et al., Noise, Vibration and Harshness from the Customer's Point of View, IMechE paper 925181, Ref. C389/049, Proc. FISATA-92 Conf., Engineering for the Customer.
46  H. Aoki et al., Effects of Power Plant Vibration on Sound Quality in the Passenger Compartment During Acceleration, SAE paper 870955, Proc. SAE Noise and Vibration Conf., Traverse City (MI), Apr. 28-30, 1987, pp. 53-62.
47  K.C. Parsons, M.J. Griffin, Methods for Predicting Passenger Vibration Discomfort, Society of Automotive Engineers Technical Paper Series 831921.
48
49
50
51
52  British Standards Institution, Measurement and evaluation of human exposure to whole-body mechanical vibration and repeated shock, Ref. No. BS 6841, 1987.
53  American National Standard, S3.14-1977 (R-1986), Rating Noise with Respect to Speech Interference, order from the Acoustical Society of America.
54  ANSI S3.5, Calculation of the Articulation Index, American National Standards Institute, Inc., 1430 Broadway, New York, New York 10018 USA, 1969.
55  ISBN
Chapter 7
Sound metrics
It may be said that the best way to evaluate the quality of a sound is to listen to it and express an opinion about it, but in many cases there is also a strong interest in correlating the results of these subjective evaluations with measurable parameters. Therefore a number of sound quality metrics exist, whereby perception-relevant quantitative values are calculated from the acoustic pressure signal.
Sound pressure levels
Loudness metrics
Sharpness
Roughness
Fluctuation strength
Pitch
Articulation index
Speech interference levels
Impulsiveness
The references are listed in chapter 6
7.1
The averaging applied is exponential, with a weighting factor e^(-Δt/τ), where Δt is the sample period of the signal and τ is the time constant. The value of τ depends on the type of signal (mode), and three default (standardized) values are supplied:
τ = 35 ms for impulse (peaky) signals
τ = 125 ms for fast changing signals
τ = 1000 ms for slow changing signals
By selecting the type of signal (mode), the appropriate time constant is applied.
When the signal contains spikes and is therefore defined by the mode `impulse', an additional peak detector mechanism is implemented. In this case, when an increase in the averaged signal is detected, the signal is followed exactly. When the signal is decreasing, exponential averaging is used with a long time constant, set by default to 1500 ms. The time constant used in this situation is termed the decay time constant.
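The averaging scheme, including the impulse-mode peak detector, can be sketched as follows (the function and argument names are illustrative, not from the text):

```python
import numpy as np

def exp_average(x, dt, tau, decay_tau=None):
    """Running exponential average of a sampled (rectified) signal.
    If decay_tau is given (`impulse' mode), rising values are followed
    exactly and only the decay is smoothed, using decay_tau."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    alpha = np.exp(-dt / tau)
    for n in range(1, len(x)):
        if decay_tau is not None:
            if x[n] >= y[n - 1]:
                y[n] = x[n]                      # follow increases exactly
            else:
                beta = np.exp(-dt / decay_tau)   # slow exponential decay
                y[n] = beta * y[n - 1] + (1 - beta) * x[n]
        else:
            y[n] = alpha * y[n - 1] + (1 - alpha) * x[n]
    return y
```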
7.2
Eqn 7-1:  L_Aeq,T = 10 log10 [ (1 / (t2 - t1)) ∫ from t1 to t2 (p_A²(t) / p_0²) dt ]
where
L_Aeq,T is the equivalent continuous A-weighted sound pressure level, in decibels, determined over a time interval T starting at t1 and ending at t2
p_0 is the reference sound pressure (20 µPa)
p_A(t) is the instantaneous A-weighted sound pressure
In practice, with sampled data, the equivalent sound pressure level is computed by a summation of the sampled values of the pressure, in dB, over the number of samples required.
As a generalization, you can apply the same formula to a non-A-weighted
sound pressure signal p(t) to obtain Leq,T.
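In discrete form the computation reduces to a mean of squared pressure samples; a minimal sketch (assuming samples in pascals, already A-weighted if L_Aeq is wanted):

```python
import numpy as np

def leq(p, p0=20e-6):
    """Equivalent continuous level (dB) of sampled pressures p (Pa):
    the discrete form of Eqn 7-1."""
    return 10.0 * np.log10(np.mean(np.asarray(p) ** 2) / p0 ** 2)

# A sine with 1 Pa rms corresponds to about 94 dB.
fs = 48000
t = np.arange(fs) / fs
p = np.sqrt(2.0) * np.sin(2 * np.pi * 1000 * t)
```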
7.3
Loudness
The equal loudness contours shown in Figure 6-2 in the chapter `Sound quality' are the result of large numbers of psycho-acoustical experiments and are in principle only valid for the specific sound types involved in the test. These curves are valid for pure tones and depict the actual experienced loudness for a tone of given frequency and sound pressure level when compared to a reference tone. The resulting value is called the `loudness level'.
The loudness level itself is expressed in Phons. 1 kHz tones are used as the reference, which means that for a 1 kHz tone the Phon value corresponds to the dB sound pressure level. The equal loudness contours for free field pure tones and diffuse field narrow-band random noise are standardized as ISO 226-1987 (E).
A linear unit derived from the (logarithmic) Phon values is the Sone (S), which is related to the Phon (P) in the following way:

Eqn 7-2:  S = 2^((P - 40) / 10)
The Sone scale's linear relationship to the experienced loudness makes it easier to interpret. A loudness of 1 Sone corresponds to a loudness level of 40 Phons. A tone which is twice as loud will have double the loudness (Sone) value, and a loudness level which is 10 Phons higher.
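Eqn 7-2 and its inverse are straightforward to compute:

```python
import math

def phon_to_sone(phon):
    """Loudness in Sones from loudness level in Phons (Eqn 7-2)."""
    return 2.0 ** ((phon - 40.0) / 10.0)

def sone_to_phon(sone):
    """Inverse of Eqn 7-2: loudness level in Phons from Sones."""
    return 40.0 + 10.0 * math.log2(sone)
```

Doubling the Sone value (twice as loud) raises the loudness level by 10 Phons, as stated above.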
When broadband or multi-tone sounds are being considered, the frequency spectrum of the loudness is made in terms of critical bands instead of the total value. Critical bands and Barks are described in Table 6.1 in the chapter `Sound quality'. In this case the terminology `specific loudness' is used, expressed in Sones/Bark.
For steady state sounds, standardized calculation procedures have been defined by Zwicker and Stevens and are accepted as ISO standards {12, 13, 14}. A more recent procedure by Stevens {15} has not yet been accepted as an ISO standard.
They are both based on:
- a convention for the relation between octave band sound pressure levels and octave band partial (specific) loudness descriptions
For temporally varying sounds, Zwicker has also proposed an approach taking into account temporal effects {16}, which is not yet accepted as an ISO standard.
7.3.1
Stevens Mark VI
The Stevens (Mark VI) method, standardized as ISO 532-A-1975 and ANSI S3.4-1980, starts from octave band sound pressure levels. Their loudness is compared to that of a critical band noise at 1 kHz. It is only defined for diffuse sound fields with relatively smooth, broadband spectra. Through a set of standardized curves, each octave band level is converted into a partial loudness index (s), see Figure 7-1. The partial loudness values are then combined into a total loudness (in Sones), using equation 7-3.
Eqn 7-3:  S_t = S_m + F (ΣS - S_m)
where
S_t = the total loudness in Sones
S_m = the greatest of the partial loudness indices
ΣS = the sum of the partial loudness indices of all bands
F = 0.3 for octave band analysis
Figure 7-1
7.3.2
Figure 7-2
7.3.3
Loudness Zwicker
Loudness assessment using the Zwicker method (standardized as ISO 532-B) starts from 1/3 octave band sound pressure level data, which can originate from either a free or a diffuse sound field. It is capable of dealing with complex broadband noises, which may include pure tones.
The method takes masking effects into account. Masking effects are important for sounds composed of multiple components: a high level sound component may `mask' another lower level sound which is too close in frequency. An example of masking is shown below {5}. A 50 dB, 4 kHz tone (marked +) can be heard in the presence of narrow-band noise, centered around 1200 Hz, up to a noise level of 90 dB. If the noise level rises to 100 dB, the tone is not heard.
Figure 7-3  Masking of a 4 kHz tone by narrow-band noise centered around 1200 Hz (threshold of hearing as a function of frequency)
The method uses different sets of graphs for diffuse and free fields that relate loudness level to sound pressure level and that take the masking into account by a sloping-edge filter characteristic for each octave band. This way, dominant and hence masking frequency bands will show their influence over a large frequency range and prevent masked sounds from contributing to the total level.
Figure 7-4 shows an example of the Zwicker method. The 1/3 octave band data are transferred to the appropriate Zwicker diagram.
Figure 7-4  Example of the Zwicker loudness diagram (loudness as a function of frequency)
The partial loudness contours are computed for each defined segment (global evaluation) or frame (tracked evaluation) using a classical Zwicker loudness calculation. The frame or segment size should be selected to ensure that the spectral resolution needed for the FFT-based octave band analysis can be achieved. The frame size can be used to restrict the analysis to time periods over which time-varying signals can be regarded as stationary.
The Zwicker loudness analysis allows you to distinguish between unmasked and masked contours, thus allowing you to see that certain levels are either partially or completely masked by previous ones.
The total loudness is calculated as the area under the enveloping partial loudness contours and can be expressed in Sones, or as a loudness level in Phons, as a function of time. This is presented as a single value in the global evaluation and a trace of values for the tracked evaluation.
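The total loudness as the area under the specific loudness curve can be sketched as a simple sum over the Bark axis (a crude rectangle rule for illustration, not the standardized procedure):

```python
def total_loudness(specific_loudness, dz=1.0):
    """Total loudness (Sones) as the area under a specific loudness
    curve (Sones/Bark), sampled at intervals of dz Bark."""
    return sum(n * dz for n in specific_loudness)

# A flat specific loudness of 0.5 Sone/Bark over 24 Bark gives 12 Sones.
n_spec = [0.5] * 24
```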
7.4
Sharpness
A sensation which is relevant to the pleasantness of a sound is its `sharpness', allowing you to classify sounds as shrill (sharp) or `dull'. The sharpness sensation is strongly related to the spectral content and center frequency of narrow-band sounds, and is not dependent on the loudness level or the detailed spectral fine structure of the sound.
Roughly, sharpness corresponds to the first spectral moment of the specific loudness, with a pre-emphasis for higher frequencies. A quantitative procedure has been proposed, expressing the sharpness in the unit `acum'. The reference sound of 1 acum is a narrow-band noise, one critical band wide, at a center frequency of 1 kHz and with a level of 60 dB.
The dependency of sharpness on the center frequency and bandwidth of the noise is shown in Figure 7-5 {6}. The middle curve represents a noise of one critical bandwidth as a function of center frequency; the upper and lower curves represent the sharpness of noises with a fixed upper (10 kHz) or lower (0.2 kHz) cut-off frequency as a function of the other cut-off value. Higher frequency noises produce higher sharpness.
Figure 7-5
The specific sharpness is obtained by weighting the specific loudness:

Eqn 7-4:  S'(z) = 0.11 N'(z) g(z) z / ∫ from 0 to 24 Bark N'(z) dz

where N'(z) is the specific loudness, z is the critical band rate in Bark, and g(z) is a weighting function emphasizing the higher critical bands (Eqn 7-5, shown in Figure 7-6). The total sharpness is the integral of the specific sharpness:

Eqn 7-6:  S = ∫ from 0 to 24 Bark S'(z) dz
7.5
Roughness
The roughness or harshness of a sound is a quality associated with amplitude modulations of tones. When the modulation frequency is very low (below about 15 Hz), the actual time varying loudness fluctuations can be perceived. This fluctuation sensation is discussed in section 7.6.
At high modulation frequencies (above 150-300 Hz), three separate tones can be heard. In the intermediate frequency range (15-300 Hz), the sensation is of a stationary but rough tone, which renders it rather unpleasant. This sensation is often associated with engine noise, where fractional orders can cause the modulation effects.
Roughness increases with the degree of modulation and with the modulation frequency, and is less sensitive to the base frequency. The unit used to describe roughness is the `asper'; 1 asper being produced by a 100 %, 70 Hz modulated 1 kHz tone of 60 dB.
The dependency relationship between modulation depth and frequency is, however, not straightforward. An important element is that the temporal variations of the loudness can cause masking effects, and a temporal masking depth (ΔL) is introduced, representing the difference between maximum and minimum in the actually perceived time dependent loudness pattern. Due to post masking, this masking depth is smaller than the modulation depth, with the difference becoming greater at higher frequencies. The roughness (R) of an amplitude modulated sound can then be approximated as

Eqn 7-7:  R ~ f_mod · ΔL
7.6
Fluctuation strength
When sounds have modulation frequencies below 20 Hz, they are perceived as changes in the sound volume over time. Typically, fluctuating signals sound louder (and more annoying) than steady state signals of the same rms amplitude. In this case, the intensity of the sensation is referred to as `fluctuation strength', with the unit `vacil'. A reference sound of 1 vacil corresponds to a 1 kHz tone of 60 dB with a 100 % amplitude modulation at 4 Hz. The ear is most sensitive to fluctuations at 4 Hz. Quantitative models have been proposed for the fluctuation strength {6} which take into account the temporal masking effects due to the sound fluctuation.
The dependency of the fluctuation strength (F) on the modulation frequency (f_mod) and the masking depth (ΔL) is then the following:

Eqn 7-8:  F ~ ΔL / ( (f_mod / 4 Hz) + (4 Hz / f_mod) )
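Eqn 7-8 can be evaluated numerically (with the proportionality constant omitted); the sensation is largest at a 4 Hz modulation frequency:

```python
def fluctuation_strength(delta_l, f_mod):
    """Relative fluctuation strength per Eqn 7-8 (proportionality
    constant omitted): F ~ dL / (f_mod/4 + 4/f_mod), peaking at 4 Hz."""
    return delta_l / (f_mod / 4.0 + 4.0 / f_mod)
```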
7.7
Pitch
Pitch is a sound attribute that classifies sounds on a scale from low to high. For pure tones, pitch depends largely on the frequency of the tone, but it is also influenced by its level.
In a complex tone, consisting of many spectral components, one or more pitches can be perceived. These pitches also depend to a large extent on the frequencies of the constituent components, but masking effects can also occur, making some pitches more prominent than others.
Pitches, both for pure and complex tones, which can be derived from the spectral content of the signals are called spectral pitches.
It has been observed that in a complex tone, consisting of a fundamental frequency and a number of its harmonics, a pitch corresponding to the fundamental frequency is perceived, even when that fundamental frequency is filtered out of the signal. In this case, the perceived pitch does not relate any more to a component actually present in the signal, but to the difference between the higher harmonics. This type of pitch is called residue pitch or virtual pitch.
The pitch calculation is implemented according to the method developed by Terhardt (J. Acoust. Soc. Am., Vol. 71, pp. 679-688, 1982). Both spectral and virtual pitches can be derived, as well as the weight of each calculated pitch. These weights indicate how prominently the pitches are perceived.
If, in the calculation, the effect of the tone level on the pitch is taken into account, the calculated pitch is called true pitch. If the influence of level is neglected, it is called nominal pitch.
7.8
Standard
The calculation is based on the work of Beranek as set out in `The design of speech communication systems', Proceedings of the IRE, Vol. 45, pp. 880-884, 1947. The results of this method lie in the range 0-100 %.
Modified
These calculations are based upon the AIM method, which has been described in the work mentioned above, but which opens up the internal floating range of 30 dB to a fixed range of 80 dB between the limits of 20 and 100 dB. The results of this method lie in the range -107 % to almost 160 %.
7.9
Figure 7-9
7.10
Impulsiveness
This metric is used to quantify the impulsive nature of a signal. It is used, for instance, in the quantification of diesel engine noise.
The algorithm for calculating impulsiveness is based on the signal envelope, and results in a number of output values: the mean impulse peak level, mean impulse rise rate and mean impulse duration. Each of these parameters is described in the figure below. In addition, the mean impulse rate (occurrence) is determined.
(Figure: an impulse in the signal envelope, characterized by its peak level, rise rate, rise time, fall time and center position, with a detection threshold set at an offset above the rms level)
Chapter 8
Acoustic holography
8.1
Introduction
Acoustic holography allows you to accurately localize noise sources. It therefore helps in both the reduction of unwanted vibro-acoustic noise and the optimization of noise levels. It:
- estimates the acoustic power and the spectral content emitted by the object under examination.
Basic principles
In performing acoustic holography, you need to measure cross spectra between a set of reference transducers and the hologram microphones. From these measurements you can derive sound intensity, particle velocity and sound power values.
A basic assumption is that you are operating in free field conditions and that the energy flow is coming directly from the source. Measurements need to be taken close to the source.
The technique provides you with an accurate 3D characterization of the sound field and the source, with a higher spatial resolution than is possible with conventional intensity measurements.
8.2
The goal is to determine the whole acoustic wavefront from the known pressure on the measurement plane. Each microphone in the array measures the complex pressure (amplitude and phase).
(Figure: a periodic pressure signal in time, with period T, frequency f = 1/T and wavelength λ = c/f)
The transformation from the time to the frequency domain is achieved using
the Fourier Transform given below
Eqn 8-1:  F(ω) = ∫ f(t) e^(-jωt) dt
Spatial domain
If we now consider measurements where time is fixed and pressure varies as a function of distance, we can obtain a measure of the energy flow.

(Figure: pressure as a function of distance, with wavenumber k0 = 2πf/c = ω/c = 2π/λ)
If we fix the temporal frequency, this means that the acoustic wavelength is
fixed too.
The complex pressure as a function of space is called the pressure image at the specified frequency.
Conversion from the spatial domain is also done using a Fourier transform. In acoustic holography, pressure is measured in two dimensions (x and y for example), so a 2-dimensional transformation is performed.
Eqn 8-2:  S(k_x, k_y) = ∫∫ P_measured(x, y) e^(-j(k_x x + k_y y)) dx dy

where S(k_x, k_y) is the spatial transform of the measured pressure field to the wavenumber (k_x and k_y) domain, resulting in the 2-D hologram pressure field.
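The discrete form of Eqn 8-2 is a two-dimensional FFT; a minimal sketch (the grid spacings and the test field are illustrative assumptions):

```python
import numpy as np

def wavenumber_transform(p, dx, dy):
    """Discrete form of Eqn 8-2: 2-D spatial FFT of the pressure
    image p[y, x] to the (kx, ky) wavenumber domain."""
    S = np.fft.fft2(p)
    kx = 2 * np.pi * np.fft.fftfreq(p.shape[1], d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(p.shape[0], d=dy)
    return S, kx, ky

# A single plane wave along x shows up as a single wavenumber peak.
nx, ny, dx = 32, 8, 0.05
x = np.arange(nx) * dx
k0 = 2 * np.pi * 4 / (nx * dx)            # 4 cycles across the aperture
p = np.tile(np.exp(1j * k0 * x), (ny, 1))
```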
(Figure: the measured pressure field P_measured(x, y) and its transform in the wavenumber domain (k_x, k_y), a sum of spatial sinusoidal functions)
Each of these sinusoidal functions can be understood as the result of cutting the
wavefronts of a plane wave by the measurement plane.
(Figure: a plane wave of wavelength λ = c/f cut by the measurement plane; the resulting trace has a spatial periodicity at least equal to the wavelength)
There is a coincidence between the nodes of the sinusoidal function and the wavefronts. In effect, decomposing the pressure field into a sum of sinusoidal functions means decomposing the real acoustic wave into a sum of plane waves.
Whatever the angle of incidence, the spatial periodicity must be greater than the wavelength λ.
Two types of plane waves must be considered: propagating waves, and evanescent waves, whose level decreases as they propagate.
Propagating waves represent the sound field that is propagated away from the
near towards the far field. Evanescent waves describe the complex sound field
in the near field of the source.
To understand why we must take evanescent plane waves into account, let us consider our decomposition of the pressure field into sinusoidal functions. If the spatial periodicity of a sinusoidal function is shorter than the wavelength, it cannot be the result of cutting a propagating plane wave by the measurement plane:
(Figure: a sinusoidal trace on the measurement plane with a spatial periodicity shorter than the wavelength)
Whatever the direction of the propagating plane wave may be, there is no possible coincidence between the nodes of the sinusoidal function and the wavefronts. Therefore, this sinusoidal function must be understood as the intersection between an evanescent wave (which can have a smaller spatial periodicity than propagating waves) and the measurement plane.
A mathematical interpretation of the evanescent waves is based on the value of k_z, which is the component perpendicular to the measurement directions in the wavenumber domain.
(Figure: the wavenumber components k_x and k_y lie in the measurement plane; k_z is perpendicular to it)
kz can be determined from the wave number k0 and the known values of kx
and ky from the transformation.
k 0 k 2x k 2y k 2z
c
kz
c
(k k )
2
2
x
Eqn 8-3
2
y
2
2
2
kz is real when k x k y ( c ) (the spatial periodicity is greater than the wave
length). This means that the waves lie in the circle defined by the radius /c in
the wave number domain. kz is imaginary outside of this region.
(Figure: in the wavenumber domain, k_z is real inside the circle k_x² + k_y² = k_0², corresponding to propagating waves, and imaginary outside it, where k_x² + k_y² > k_0², corresponding to evanescent waves)
The pressure on a plane parallel to the measurement plane follows from a convolution with a Green function:

Eqn 8-4:  P(r) = ∫∫ P(r') G(r - r') dx dy

In the wavenumber domain this convolution becomes a multiplication by the propagator g_d(k_x, k_y, z - z') (Eqn 8-5), where z' is the measurement plane and z is the position of the required plane. The Green function is given by

Eqn 8-6:  g_d = e^(j k_z (z - z'))

and k_z can be found from equation 8-3.
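Equations 8-3 and 8-6 combine into a single propagation step in the wavenumber domain. A sketch (assuming the e^(jωt) time convention, under which evanescent components decay when moving away from the source):

```python
import numpy as np

def propagate(S, kx, ky, omega, c, dz):
    """Move a wavenumber-domain pressure field S(kx, ky) over a
    distance dz using kz from Eqn 8-3 and the propagator of Eqn 8-6.
    Components outside the radiation circle get an imaginary kz and
    decay with distance (evanescent waves)."""
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((omega / c) ** 2 - KX ** 2 - KY ** 2 + 0j)
    return S * np.exp(1j * kz * dz)
```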
(Figure: the circle of radius k_0 in the (k_x, k_y) wavenumber domain)
When propagating towards the source, a Wiener filter can be used to include a certain number of evanescent waves to improve the resolution. Taking a higher number of evanescent waves into account may cause the amplification to become unstable. This depends on a parameter of the Wiener filter known as the Signal to Noise Ratio (SNR). When the SNR value is greater than 15 dB, the amplification will become unstable as the number of evanescent waves included increases. Using a low SNR value (5 dB for example) means that the evanescent waves are taken into account, but they are so attenuated that the improvement in resolution is negligible. The default value of 15 dB provides the best compromise in terms of resolution and amplification.
When the Wiener filter is used, the pressure image needs to be multiplied by a two-dimensional window. As is the case with a single FFT, the observed pressure must be `periodic' within the observed hologram. If this is not the case, truncation errors occur, as with a single FFT. These truncation errors manifest themselves as ghost sources at the borders of the observed area.
Two windows are used.
W[I] = 1    when (N+1)(1−α)/2 ≤ I ≤ (N+1)(1+α)/2

W[I] = 0.5 + 0.5·cos(2π[I − (N+1)(1−α)/2] / (α·N))    when I < (N+1)(1−α)/2

W[I] = 0.5 + 0.5·cos(2π[I − (N+1)(1+α)/2] / (α·N))    when I > (N+1)(1+α)/2

i.e. a tapered cosine window: flat over the central portion and rolled off with cosine tapers towards the edges of the hologram.
Knowing the pressure field on the parallel plane, it is possible to calculate the particle velocity and eventually the intensity on this plane.
The particle velocity (V) will be known if the pressure gradient can be determined, which is the case with acoustic holography since the pressure can be measured at r and (r + Δr):

∇P(r) = f(P(r), P(r + Δr))

V = (j / ρck) · ∇P(r)    Eqn 8-7
Once the pressure and the velocity are known then the intensity is just the product of the two:

I = P·V    Eqn 8-8
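The plane-to-plane propagation of Eqns 8-4 to 8-6 can be sketched with a 2D FFT. This is a minimal illustration, not the full method (no Wiener filtering or windowing); the grid spacings and sound speed are assumed values:

```python
import numpy as np

def propagate_pressure(p, dx, dy, dz, omega, c=343.0):
    """Propagate a measured pressure hologram p(x, y) over a distance dz
    using the wave number domain Green's function g_d = exp(j*kz*dz)."""
    ny, nx = p.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((omega / c)**2 - KX**2 - KY**2 + 0j)  # Eqn 8-3
    P = np.fft.fft2(p)                  # to the wave number domain
    P = P * np.exp(1j * kz * dz)        # apply the Green's function (Eqn 8-6)
    return np.fft.ifft2(P)              # back to the spatial domain
```

Propagating forward and then backward over the same distance recovers the original hologram, since the forward and inverse Green's functions cancel exactly.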
Part III
Time data processing

Chapter 9    Statistical functions
Chapter 10   Time frequency analysis
Chapter 11   Resampling
Chapter 12   Digital filtering
Chapter 13   Harmonic tracking
Chapter 14   Counting and histogramming
Chapter 9
Statistical functions
Figure 9-1: a frame of N·Δt samples, showing the minimum, maximum, range and extremum (the largest absolute value) of the real values.
Minimum
This is defined as the lowest value contained within the specified range of values.
Maximum
This is defined as the highest value contained within the specified range of values.
Range
The range is the difference between the minimum and maximum values.
Extremum
The extremum is the highest absolute value contained within the specified range. It is equal to the maximum when the absolute value of the maximum is greater than the absolute value of the minimum, and is equal to the minimum value otherwise.
Sum
This is the summation of all the (N) values within the frame:

Sum = Σ_{j=0}^{N−1} x_j    Eqn 9-1
Integration
This is the area under the curve of values, found by the trapezoidal rule: half the sum of each pair of adjacent values is multiplied by the time increment.
area = Σ_{j=0}^{N−2} ((x_j + x_{j+1})/2)·Δt    Eqn 9-2
RMS
The root mean square value is

RMS = √((1/N) Σ_{j=0}^{N−1} x_j²)    Eqn 9-3
Crest factor
The crest factor is given by

crest factor = |max − min| / (2·RMS)    Eqn 9-4

The crest factor provides a measure of the 'spikiness' in the data. A sine signal has a crest factor of 1.4. A random signal has a crest factor of about 3 or 4. A short spike will yield a high crest factor.
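The frame measures above can be sketched directly with numpy. The function name and the test tone are illustrative, not from the text:

```python
import numpy as np

def frame_statistics(x, dt):
    """Statistical measures of one frame of samples, per Eqns 9-1 to 9-4."""
    mn, mx = x.min(), x.max()
    extremum = mx if abs(mx) > abs(mn) else mn
    rms = np.sqrt(np.mean(x**2))                       # Eqn 9-3
    return {
        "minimum": mn,
        "maximum": mx,
        "range": mx - mn,
        "extremum": extremum,
        "sum": x.sum(),                                # Eqn 9-1
        "area": np.sum((x[:-1] + x[1:]) / 2) * dt,     # Eqn 9-2 (trapezoidal)
        "rms": rms,
        "crest_factor": abs(mx - mn) / (2 * rms),      # Eqn 9-4
    }

t = np.arange(0, 1, 1 / 1000)
s = np.sin(2 * np.pi * 50 * t)      # a sine: crest factor should be ~1.4
stats = frame_statistics(s, 1 / 1000)
```

For the sine the RMS is 1/√2 of the amplitude, so the crest factor comes out as √2 ≈ 1.4, matching the value quoted above.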
Mean
The mean of a set of data values (x) estimates the central value contained within the set. It is defined as

x̄ = (1/N) Σ_{j=0}^{N−1} x_j    Eqn 9-5
Median
The median of a probability function p(x) is the value for which larger and smaller values of x are equally probable:

∫_{−∞}^{x_med} p(x) dx = ∫_{x_med}^{∞} p(x) dx = 1/2    Eqn 9-6

For discrete data, the median is defined as the middle value of the data samples when they are arranged in increasing (or decreasing) order.
When N is odd, the median is

x_med = x_{(N−1)/2}    Eqn 9-7

(indices counted from 0). Thus half the values are numerically greater than the median and half are smaller.
When N is even, the median is estimated as the mean of the two central values:

x_med = (x_{N/2−1} + x_{N/2}) / 2    Eqn 9-8
The mean and median both provide information on the average or central value of a set of data. Which is the most suitable one to use in a particular circumstance depends on the skewness of the data. Skewness is illustrated in Figure 9-2.
Figure 9-2: Skewness. (a) Symmetrical data, no skew: mean = median. (b) Positive skewness: mean > median. (c) Negative skewness: mean < median.
Skewness refers to the shape of the distribution about the central value. Perfectly symmetrical data has no skew. Data distributions where there is a small number of extremely high values are said to exhibit positive skew. Those with a few extremely low values show negative skew. The mean is more influenced by such extreme values than the median, but can be used with confidence if the skewness lies within the range −1 to 1. For the calculation of skewness see Equation 9-13 below.
Percentiles
The median can also be expressed as the 50th percentile since it represents the value where 50% of all the values in the data set are below it and 50% lie above it. It is also possible to compute the 10th, 25th, 75th and 90th percentiles.
The nth percentile of a probability function p(x) is the value at which n% of the values in the set are smaller than the percentile value. So 10% of the values are smaller than the 10th percentile and 90% are larger.
Variance
The variance measures the spread of the data about the mean value:

variance = (1/(N−1)) Σ_{j=0}^{N−1} (x_j − x̄)²    Eqn 9-9

and as such can also be regarded as the second order moment of a distribution. The standard deviation is defined as the square root of the variance:
σ = √variance    Eqn 9-10

σ = √((1/(N−1)) Σ_{j=0}^{N−1} (x_j − x̄)²)    Eqn 9-11
Extreme deviation
The extreme deviation is given by

extreme deviation = max(max − mean, mean − min)    Eqn 9-12

The extreme deviation is similar to the crest factor, except that it is referenced to the mean and will therefore follow data which drifts away from zero.
Skewness
Skewness was illustrated in Figure 9-2. It characterizes the degree of asymmetry of the distribution around its central value. It is defined as

skew(x_0, ..., x_{N−1}) = (1/N) Σ_{j=0}^{N−1} [(x_j − x̄)/σ]³    Eqn 9-13
Even if the estimated skewness is other than zero, it does not necessarily mean that the data is in fact skewed. You can have confidence in the skewness only when the estimated skewness is larger than the standard deviation of this estimated parameter (Eqn 9-13). For the idealized case of a normal (Gaussian) distribution, the standard deviation of the estimated skewness is approximately √(6/N). In real life it is good practice to place confidence in skewness only when the estimated value is several times as large as this.
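This confidence check can be sketched as follows; the function names and the significance factor of 3 ("several times") are illustrative assumptions:

```python
import numpy as np

def skewness(x):
    # Eqn 9-13: third moment of the standardized values
    x = np.asarray(x, float)
    return np.mean(((x - x.mean()) / x.std())**3)

def skewness_is_significant(x):
    # For a Gaussian, the std dev of the estimated skewness is ~ sqrt(6/N);
    # trust the estimate only when it is several times larger than that.
    return abs(skewness(x)) > 3 * np.sqrt(6 / len(x))

rng = np.random.default_rng(1)
gaussian = rng.standard_normal(10000)    # skewness close to zero
skewed = rng.exponential(size=10000)     # strongly positively skewed (~2)
```

The Gaussian sample gives a skewness estimate close to zero, while the exponential sample's estimate is far larger than √(6/N) and can be trusted.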
Kurtosis
One further characteristic of a distribution can be obtained from the kurtosis of a function. This is also a unitless parameter that measures the relative sharpness or flatness of a distribution relative to a normal or Gaussian one. This is illustrated in Figure 9-3.

Figure 9-3: probability densities with negative kurtosis (flatter than normal), a normal distribution, and positive kurtosis (sharper than normal).

kurt(x_0, ..., x_{N−1}) = (1/N) Σ_{j=0}^{N−1} [(x_j − x̄)/σ]⁴ − 3    Eqn 9-14
Note!
Higher order moments (skewness and kurtosis) are often less robust than lower order moments, which are based on linear sums. (It is possible that the calculation of the skewness or kurtosis generates an overflow.) They must be used with caution.
Markov regression
This function provides you with a measure of the likelihood of one data value within a set being similar to another.
It is based on the circular autocorrelation R(.) of a set of data. This calculates the correlation between one particular value and a value displaced by a certain lag. The circular correlation takes the last shifted value and wraps it to the start.
The circular correlation for a lag of 1 data sample is given by

R(1) = Σ_{j=0}^{N−2} x_j·x_{j+1} + x_0·x_{N−1}    Eqn 9-15

R(0) = Σ_{j=0}^{N−1} x_j²    Eqn 9-16
The Markov regression is then the ratio

R(1) / R(0)    Eqn 9-17
Statistical functions
This function can therefore take values between 0 (very low correlation) and 1
(high similarity). It approaches 1 for a narrow or filtered band and 0 for broad
band signals. It provides an indication therefor of how much a broadband sig
nal has been filtered.
137
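Eqns 9-15 to 9-17 translate directly into code. The test signals below (a slowly varying sinusoid and broadband noise) are illustrative:

```python
import numpy as np

def markov_regression(x):
    """Circular lag-1 autocorrelation R(1)/R(0), per Eqns 9-15 to 9-17."""
    x = np.asarray(x, float)
    r1 = np.sum(x[:-1] * x[1:]) + x[0] * x[-1]   # Eqn 9-15 (circular wrap)
    r0 = np.sum(x**2)                            # Eqn 9-16
    return r1 / r0                               # Eqn 9-17

t = np.arange(1024)
narrowband = np.sin(2 * np.pi * 3 * t / 1024)               # slowly varying
broadband = np.random.default_rng(2).standard_normal(1024)  # white noise
```

The narrowband signal gives a value close to 1, the broadband signal a value close to 0, as described above.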
Chapter 10
Time frequency analysis

10.1
Introduction
A great deal of physical signals are non-stationary. Fourier analysis establishes a one-to-one relationship between the time and the frequency domain, but provides no time localization of a signal's frequency components. Whilst an overall representation of all frequencies that appeared during the observation period is presented, there is no indication as to exactly at what time which frequencies were present.
Time-frequency analysis methods describe a signal jointly in terms of both time and frequency. The aim is to find a distribution that determines the portion of the signal's energy which lies in a particular time and/or frequency range. In addition these distributions might or might not satisfy some other interesting mathematical properties, such as the 'marginal equations'.
The instantaneous power of a signal at time t is given by

|s(t)|²

The intensity per unit frequency is given by the squared modulus of the Fourier transform S(ω)

|S(ω)|²

The joint function P(ω,t) should represent the energy per unit time and per unit frequency. Ideally, summing this energy distribution over all frequencies should give the instantaneous power

∫ P(ω,t) dω = |s(t)|²    Eqn 10-1

and summing over all time should give the energy density spectrum

∫ P(ω,t) dt = |S(ω)|²    Eqn 10-2
Equations 10-1 and 10-2 are known as the 'marginal' equations. In addition the total energy, E

E = ∫∫ P(ω,t) dt dω    Eqn 10-3

should be equal to the total energy in the signal while satisfying the marginals. There are a number of distributions which satisfy equations 10-1 and 10-2 but which demonstrate very dissimilar behavior.
In general there are two main classes of time-frequency analysis methods.
10.2
Wavelet analysis

Figure 10-1: a frame sliding along the time signal; the STFT describes the frequency content within each position of the frame.
For a time signal s(t) multiplied by a window function g(t), the Short Time Fourier Transform located at time τ is given by

STFT(ω, τ) = (1/√(2π)) ∫ s(t) g(t − τ) e^(−jωt) dt    Eqn 10-4

or equivalently, in terms of the spectra S(ω) and G(ω),

STFT(ω, τ) = (1/√(2π)) ∫ S(ω′) G(ω′ − ω) e^(jω′τ) dω′    Eqn 10-5

By analogy with the previous discussion this reflects the behavior around the frequency ω 'for all times', as illustrated by the horizontal bands in Figure 10-1. These bands can be regarded as a bank of bandpass filters which have impulse responses corresponding to the window function.
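A minimal STFT in the spirit of Eqn 10-4 can be sketched as a window sliding along the signal with an FFT at each position. Frame length, hop size and the test signal are illustrative choices:

```python
import numpy as np

def stft(s, frame_len, hop):
    """Short Time Fourier Transform sketch: a Hanning window slides along
    the signal and an FFT is taken at each position (cf. Eqn 10-4)."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(s) - frame_len + 1, hop):
        frames.append(np.fft.rfft(s[start:start + frame_len] * window))
    return np.array(frames)   # rows: time positions, columns: frequency bins

fs = 1000
t = np.arange(0, 2, 1 / fs)
# 100 Hz during the first second, 300 Hz during the second one
s = np.where(t < 1, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 300 * t))
S = stft(s, 128, 64)
```

Unlike a single FFT of the whole record, the first frames peak near 100 Hz and the last frames near 300 Hz, localizing each frequency in time.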
Wavelet analysis
A method that provides an alternative for the analysis of non-stationary signals, where it becomes difficult to find the right compromise between time and frequency resolution for the analysis window of the STFT, is Wavelet analysis.
In effect, the Fourier transform decomposes the signal using a set of basis functions, which in this case are sine waves. The Wavelet transform also decomposes the signal, but it uses another set of basis functions, called wavelets. These basis functions are concentrated in time, which results in a higher time localization of the signal's energy. One prototype basis function is defined, and a scaling factor is then used to dilate or contract this prototype function to arrive at the series of basis functions needed for the analysis.
The dilated basis functions are derived from the prototype h(t) as

h_a(t) = (1/√|a|) h(t/a)    Eqn 10-6

and the Continuous Wavelet Transform is

CWT(a, τ) = (1/√|a|) ∫ s(t) h((t − τ)/a) dt    Eqn 10-7
Figure 10-2: time-frequency tiling of (a) the STFT (uniform resolution everywhere) and (b) the wavelet transform (time resolution increasing with frequency).
This is in fact a very natural way to analyse a signal. Low frequencies are phenomena that change slowly with time, so requiring a low resolution in this domain. In this situation, a good time resolution can be sacrificed for a high frequency resolution. High frequency phenomena vary rapidly with time, which then becomes the important dimension, so under these conditions wavelet analysis increases the time resolution at the cost of frequency resolution. This type of analysis is also very closely related to the human hearing process, since the human ear seems to analyse sounds in terms of octave bands.
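Eqns 10-6 and 10-7 can be sketched directly. The real Morlet-style prototype, its centre frequency parameter and the test scales are illustrative assumptions, not choices made by the text:

```python
import numpy as np

def morlet(t, w0=5.0):
    # a simple real Morlet-style prototype wavelet (an illustrative choice)
    return np.cos(w0 * t) * np.exp(-t**2 / 2)

def cwt(s, scales, dt):
    """Continuous Wavelet Transform sketch of Eqns 10-6/10-7: the prototype
    is dilated by each scale a, normalized by 1/sqrt(|a|), and correlated
    with the signal."""
    out = np.empty((len(scales), len(s)))
    for i, a in enumerate(scales):
        n = int(5 * a / dt)                       # support of the dilated wavelet
        t = np.arange(-n, n + 1) * dt
        h_a = morlet(t / a) / np.sqrt(abs(a))     # Eqn 10-6
        out[i] = np.convolve(s, h_a[::-1], mode="same") * dt
    return out

dt = 0.001
t = np.arange(0, 2, dt)
s = np.sin(2 * np.pi * 10 * t)                    # 10 Hz tone
# the scale matched to 10 Hz is about w0 / (2*pi*10), i.e. ~0.08
rows = cwt(s, [0.02, 0.08, 0.15], dt)
```

The row whose dilated wavelet matches the tone's frequency carries nearly all of the energy, illustrating how each scale acts as a bandpass filter.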
10.3
The Wigner-Ville distribution

The Wigner-Ville distribution of a signal s(t) is given by

W(ω, t) = (1/2π) ∫ s*(t − τ/2) s(t + τ/2) e^(−jωτ) dτ    Eqn 10-9

or equivalently in terms of the spectrum S(ω)

W(ω, t) = (1/2π) ∫ S*(ω + θ/2) S(ω − θ/2) e^(−jθt) dθ    Eqn 10-10
Thus one characteristic of the Wigner-Ville distribution is that for a signal of finite duration the distribution is zero before the start and beyond the end. The same can be said when considering the frequency version, which means that for a band limited signal, the Wigner-Ville distribution will be zero outside of that range.
The same manoeuvre can be used to see why the reverse does not hold if at some point the signal level drops to zero. Consider the situation illustrated below.
[Figure: a signal which is zero at time t0 but non-zero on either side of it.]
At a point where the signal itself is zero (t0), multiplying the section to the left by the section to the right results in a non-zero value. In general it can be said that the Wigner distribution is not zero when the signal is. This unwelcome characteristic makes it difficult to interpret, especially when analyzing signals with many components.
The same mechanism accounts for noisiness that can be seen in the distribution in places where it is not present in the signal, as shown below.
[Figure: a signal containing a noise burst; the evaluation points t1 and t2 lie outside the burst.]
When evaluating the distribution at point t1 the overlapping sections will not include the noise, but even at point t2, where there is no noise in the signal, it will already influence the distribution. Noise will therefore be spread over a wider period than occurred in the actual signal.
The same reasoning can be used to explain the appearance of the interference terms along the frequency axis. This is especially so when a signal contains multiple frequency components at the same moment in time, which will result in interference terms at a frequency midway between the frequencies of the different components. As mentioned above, these terms can easily be recognized by their oscillatory nature and smoothing techniques can reduce their effect. Some possible smoothing techniques are discussed below.
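A discrete sketch of Eqn 10-9 shows the mechanism: at each instant the signal is multiplied by its own conjugate mirrored around that instant, then Fourier transformed over the lag. The implementation details (zero-padded lag kernel, frequency scaling) are one possible discretization, not the text's algorithm:

```python
import numpy as np

def wigner_ville(s):
    """Discrete Wigner-Ville sketch of Eqn 10-9: for every instant n the
    product s(n+tau)*conj(s(n-tau)) is formed over all admissible lags tau
    and Fourier transformed over tau."""
    s = np.asarray(s, complex)
    N = len(s)
    W = np.empty((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(N, complex)
        kernel[tau % N] = s[n + tau] * np.conj(s[n - tau])
        W[n] = np.fft.fft(kernel).real
    return W   # rows: time; column k corresponds to frequency k*fs/(2N)

N = 64
n = np.arange(N)
s = np.exp(1j * 2 * np.pi * n / 8)   # complex tone at fs/8
W = wigner_ville(s)
```

Because the lag product doubles the effective frequency, the tone at fs/8 appears in bin N/4 of the doubled frequency axis.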
Generalization
A generalization of the Wigner-Ville distribution leads to a whole class of time-frequency representations, whose main desirable mathematical property is their invariance against operations like time shift, frequency shift or time/frequency scaling. This means that a shift in time or frequency of the signal leads to an equivalent shift of the time-frequency representation of that signal, or that scaling the signal leads to a corresponding scaling of the time-frequency representation.
The generalized representation is obtained by smoothing the Wigner-Ville distribution with a kernel function T:

C(t, f) = ∫∫ T(t − t′, f − f′) W_x(t′, f′) dt′ df′    Eqn. 10-11

where W_x(t′,f′) is the Wigner-Ville distribution of the signal x(t), and where T is the 'kernel function'. It is the choice of this kernel function that determines the basic properties of each specific time-frequency representation derived from this general definition. The kernel function can also be seen as a smoothing function applied to the Wigner-Ville distribution.
Typical examples of techniques that can be defined in this framework are:
Spectrogram
where the kernel = Wigner distribution of the analysis window.
Smoothed Pseudo-Wigner Distribution (SPWD)
where the kernel = separable smoothing function with independent smoothing spread in the time and frequency domains.
Pseudo-Wigner Distribution (PWD)
the same as the SPWD, but with no smoothing along the frequency axis. This can also be considered as a 'short-time Wigner distribution'.
Choi-Williams Distribution (CWD)
where the kernel = exponential smoothing function.
The class of shift-invariant representations (time and frequency shifting) is also called Cohen's class; examples of representations belonging to that class are the spectrogram, the Wigner-Ville distribution, the PWD and the SPWD.
The class of time shift/time scale invariant representations is also known as the Affine class; examples of representations belonging to this class are the scalogram and the Wigner-Ville distribution.
10.4
References
Books
Time-frequency analysis: Leon Cohen, Prentice Hall, 1995, 299 pp., ISBN 0-13-594532-1
Papers
Linear and Quadratic Time-frequency Signal Representations: F. Hlawatsch, G. F. Boudreaux-Bartels (IEEE SP Magazine, April 1992)
Time-frequency distributions - A review: Leon Cohen (Proc. of the IEEE, July 1989)
Wavelets and signal processing: O. Rioul, M. Vetterli (IEEE SP Magazine, October 1991)
Time-frequency analysis applied to door slam sound quality problems: H. Van der Auweraer, K. Wyckaert, W. Hendrickx (Journal de Physique IV, May 1994)
Chapter 11
Resampling
11.1
Fixed resampling
The process of converting a signal that has been sampled at a particular rate to one that is sampled at a different rate is known as resampling.
Resampling may be necessary for a number of reasons. A DAT recorder, for example, samples a signal at a rate of 48000 samples per second. If the signal has a bandwidth of only 200 Hz then 500 samples a second would be adequate, and as a consequence far more data than is needed to describe the signal exists. In this situation the sample rate could be decreased, a process which is referred to as decimation or downsampling.
On the other hand, while a critically sampled signal may contain all the information to adequately describe the frequency contents of the signal, it may not look good, or be easy to interpret, in the time domain. Increasing the sampling rate will generate a signal which has identical spectral contents but a much better defined time waveform. When the resampling involves an increase in the sampling rate it is referred to as interpolation or upsampling.
11.1.1
Integer downsampling
Integer downsampling by a factor n effectively means retaining every nth point of the source data. However it is necessary to take measures to avoid aliasing problems when doing this. The example below shows the effects of downsampling by a factor of 13, when the original number of samples per period was 16. Sampling a signal at a rate lower than 2 points per period of the highest frequency in the signal will give rise to erroneous results.
[Figure: spectrum of a signal sampled at 1000 Hz (500 Hz bandwidth), containing components at 8 Hz and 325 Hz.]
Downsampling by a factor of 5 will reduce the sample rate to 200 Hz and the bandwidth to 100 Hz. It is first necessary to apply a low pass filter to limit the spectral content of the data to the 100 Hz bandwidth. This will remove the higher frequency component, leaving a time domain signal containing 25 points per period for the remaining 8 Hz component.
[Figure: the low pass filter limits the spectrum to the 100 Hz bandwidth, removing the 325 Hz component and leaving the 8 Hz component.]
Not applying the filter would result in the following. The 325 Hz component will fold to 75 Hz in the 100 Hz bandwidth and as a consequence the result is heavily distorted.

[Figure: without filtering, the 325 Hz component aliases to 75 Hz within the 100 Hz bandwidth.]

11.1.2
Integer upsampling
Integer upsampling by a factor n involves inserting (n−1) data points between the original measured ones. Normally the inserted points will have a value of zero, and it is then necessary to apply an appropriate filter to remove the harmonics introduced by the process.
It can be proven that the spectrum of the upsampled signal consists of the original one plus a mirrored version of it at all higher frequencies.
The resulting signal has identical spectral contents to the original. The increased number of points per cycle provides a much improved time domain description of the waveform.
11.1.3
Fractional ratios
Resampling by a non-integer ratio can be realized by a combination of upsampling and downsampling. So downsampling by a factor of 2.5 can be achieved by first upsampling by a factor 2 then downsampling by a factor 5. The order in which these two processes are done is very important if the original signal content of interest is to be preserved.
Consider a signal sampled at 2 kHz and which contains signals up to 300 Hz. A new sampling rate of 800 Hz is required, representing a downsampling by a factor of 2.5.
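The 2 kHz to 800 Hz example above can be sketched with scipy's polyphase resampler, which performs the upsample-by-2, downsample-by-5 combination together with the necessary anti-aliasing filter. The 50 Hz test tone is an illustrative choice:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 2000
t = np.arange(0, 1, 1 / fs)
s = np.sin(2 * np.pi * 50 * t)      # 50 Hz tone, well inside the 300 Hz content

# downsampling by 2.5 = upsampling by 2 followed by downsampling by 5
y = resample_poly(s, up=2, down=5)  # 2000 Hz -> 800 Hz
```

One second of signal yields 800 output samples, and away from the edges the resampled tone matches the 50 Hz sine evaluated on the new 800 Hz time axis.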
11.1.4
Arbitrary ratios
Some resampling requirements can not be easily realized by a simple combination of an upsampling and a downsampling. For some ratios, even though they can be expressed as a fraction, one needs an extremely high intermediate upsampling ratio. The process imposes a heavy computational load and the result is numerically not well conditioned.
Consider for instance a measurement at 8192 samples per second that is to be resampled to 8000 Hz for replay on digital audio hardware. This can theoretically be realized by upsampling by a factor 125 followed by a downsampling with a factor 128, but this is computationally extremely costly.
In this situation another strategy is used. Consider the signal shown below which was originally sampled at a rate indicated by the white circles. The required sample rate is indicated by the filled circles. The new sample rate is not an integer ratio to the original.
The first stage is to upsample by a relatively high factor (a). This factor is known as the 'Upsampling factor before interpolation' parameter and the default value used is 15. The resulting sample rate is indicated by the squares.
The second stage then involves performing a linear interpolation on the upsampled signal to arrive at a new sample rate that is an integer multiple (b) of the target frequency. This introduces an error which will be small as long as the source trace is upsampled at a high enough ratio. The maximum distortion that can occur with the upsampling factor is indicated by the software.
This error is indicated in the form of the 'SDR' (Signal to Distortion Ratio). It depends on the 'Upsampling factor before interpolation' parameter and the filter's cut-off frequency as shown below:

SDR = 10·log10(80·(100·R / <cut-off in percent>))

where
R = Upsampling factor before interpolation
cut-off in % = the cut-off frequency as a % of the Nyquist frequency.
The final stage in this process is to downsample by this integer factor (b) to the required rate. It is also possible that the downsampling is achieved directly by the interpolation process itself, as long as the downsampling rate being performed is lower than the preceding upsampling rate (a).
11.2
Adaptive resampling
Adaptive or synchronous resampling enables you to resample a signal such that its characteristics can be examined in a different domain. A well known mechanical application is the extraction of 'order-related' phenomena of engine vibrations based on the measurement of the rotation speed of one of its components. Phenomena which are very difficult to analyze or interpret in one domain become clear and obvious in another.
For synchronous averaging, for example, it is essential that repetitive phenomena occur at the very same instant for the different signal sections that are averaged. Using the synchronous resampling technique, the data can be transformed into that particular domain in which the phenomena are indeed repetitive.
In the same way as the Fourier transform presents the contents of time domain data in the frequency domain, it converts angle domain data to the order domain. Just as something that happens twice every second has a frequency of 2 Hz, something that occurs twice every cycle is related to order 2. Consider the example of measurements taken on an engine at a supposedly constant rpm. Even very slight variations in rpm will result in a frequency domain representation where the related spectral components are sharp for the low orders, but become smeared out for higher frequencies. The small rpm variations lead to leakage errors in the frequency domain.
For applications where there is a need to investigate higher order phenomena (such as gear box analysis for example), such smearing makes it very difficult to discriminate order from resonance components. Transforming such data to the order domain will result in all orders being clearly shown, but any resonance phenomenon present will be smeared out. The frequency and order domain representations are therefore complementary to one another and useful information can be obtained in the domain most suited for analysis. The adaptive resampling facility enables you to convert from one domain to another.
Implementation example
The example below illustrates the procedure involved in converting from the time to the angle domain. The principle can be used to convert between any two domains.
Your original time signal must be measured in conjunction with a tracking signal. This is most likely to be a tacho signal: a pulse train that can be converted to an rpm/time function and then integrated to obtain an angle/time function.
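The procedure can be sketched as follows. The function name, the one-pulse-per-revolution assumption and the linear interpolation are illustrative simplifications of the general method:

```python
import numpy as np

def time_to_angle_domain(signal, t, pulse_times, points_per_rev=64):
    """Resample a time signal onto a uniform angle axis. The tacho pulse
    times (one pulse per revolution assumed) give angle as a function of
    time; inverting that relationship gives the (non-uniform) time instants
    at which the uniformly spaced angle samples occur."""
    # angle(t): each tacho pulse adds one revolution (360 degrees)
    pulse_angles = 360.0 * np.arange(len(pulse_times))
    total_revs = len(pulse_times) - 1
    # uniform angle axis and the corresponding time instants
    angles = np.arange(0, 360.0 * total_revs, 360.0 / points_per_rev)
    sample_times = np.interp(angles, pulse_angles, pulse_times)
    return angles, np.interp(sample_times, t, signal)
```

For a shaft turning at a constant 10 rev/s, a 20 Hz vibration is an order-2 component: resampled to the angle axis it becomes exactly two cycles per revolution.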
[Figure: the angle/time function, with angle as ordinate and time as abscissa.]
In the case of a transformation from the time domain into the angle domain, the required (constant) resolution in the angle domain (Δθ) defines the time intervals at which data samples of the vibration measurement should be available.

[Figure: the measured points are equally spaced in time; the required points at t1, t2, t3, ... are equally spaced in angle.]
The most appropriate resolution (Δθ) is based on the minimum slew rate which must be coped with.
When sampling in the time domain the time increment is the reciprocal of the sampling frequency:

Δt = 1/Fs

So according to the Nyquist criterion, information is available up to Fs/2. Adaptive resampling conforms to the same rules: if you do not have enough samples then information is lost, while if you use too many samples then the processing effort is unnecessarily increased.
It is necessary to determine the angle which corresponds to the required Fs in the time/frequency domain. Adaptive resampling uses a varying time increment if the angle/time relationship is non-linear. Data loss will occur first at the lowest rpm values (slew rate) and the aim is to determine the threshold angle between over and under sampling:

Δθ = (dθ/dt)min / Fs

So for example if the minimum slew rate (dθ/dt) is 500 rpm and the sample frequency is 2000 Hz, then the threshold angle will be

Δθ = (500 × 360/60) / 2000 = 1.5°
[Figure: the value y at a required instant is obtained by linear interpolation between the neighbouring measured samples y(t); the interpolated sequence is then downsampled to the required rate.]
11.3
References
Chapter 12
Digital filtering
12.1
Any sequence {a(n)} can be expressed as a sum of weighted and delayed unit samples u0(n):

a(n) = Σ_m a(m)·u0(n − m)    Eqn 12-1

A discrete system maps an input sequence x(n) onto an output sequence

y(n) = T[x(n)]    Eqn 12-2

where T denotes the system operator.
A linear system implies that applying the input ax1 + bx2 will result in the output ay1 + by2, where a and b are arbitrary constants. A time-invariant system implies that the input sequence x(n − n0) will result in the output y(n − n0) for all n0.
From equation 12-1 the input x(n) to a system can be expressed as

x(n) = Σ_m x(m)·u0(n − m)    Eqn 12-3
If h(n) is defined as the impulse response of a system, which is the response to the sequence u0(n), then by time invariance h(n−m) is the response to u0(n−m). By linearity, the response to the sequence x(m)·u0(n−m) must be x(m)·h(n−m). Thus the response to x(n) is given by

y(n) = Σ_m x(m)·h(n − m) = Σ_m h(m)·x(n − m)    Eqn 12-4

Equation 12-4 is known as the convolution sum and y(n) is known as the convolution of x(n) and h(n), designated by x(n) * h(n). Thus for a linear time invariant (LTI) system a relation exists between the input and output that is completely characterized by the impulse response h(n) of the system.
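Eqn 12-4 translates into code term by term; the impulse response and input below are illustrative values:

```python
import numpy as np

def convolution_sum(x, h):
    """Direct evaluation of Eqn 12-4: y(n) = sum_m x(m) h(n-m)."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for m in range(len(x)):
            if 0 <= n - m < len(h):
                y[n] += x[m] * h[n - m]
    return y

h = np.array([0.5, 0.3, 0.2])        # impulse response of a hypothetical system
x = np.array([1.0, 0.0, 0.0, 2.0])   # input: two scaled, delayed unit samples
y = convolution_sum(x, h)
```

The output is the superposition of two scaled, delayed copies of h(n), and agrees with numpy's built-in `np.convolve`.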
[Diagram: an LTI system with impulse response h(n) maps the input x(n) to the output y(n).]

A system is stable if every bounded input produces a bounded output; for an LTI system this requires

Σ_n |h(n)| < ∞    Eqn 12-5
A causal system is one for which the output for any n = n0 depends only on the input for n ≤ n0. A linear time-invariant system is causal if and only if the unit sample response is zero for n < 0, in which case it may be referred to as a causal sequence.
Difference equations
Some linear time-invariant systems have input and output sequences that are related by a constant coefficient linear difference equation. Representing such systems in this way can provide a means of making them realizable, and the appropriate difference equation reveals useful information on the characteristics of the system under investigation, such as the natural frequencies, their multiplicity, the order of the system, and frequencies for which there is zero transmission.
The general form of an Mth order linear constant coefficient difference equation is given in equation 12-6:

y(n) = Σ_{i=0}^{M} a_i·x(n − i) − Σ_{i=1}^{N} b_i·y(n − i)    Eqn 12-6

A first order example is

y(n) = a_0·x(n) + a_1·x(n − 1) − b_1·y(n − 1)    Eqn 12-7

[Diagram: realization of the first order difference equation with delay elements and the coefficients a0, a1 and b1.]

The z transform of a sequence x(n) is defined as

X(z) = Σ_n x(n)·z^(−n)    Eqn 12-8
The transfer function is

H(z) = Y(z) / X(z)    Eqn 12-9

and H(z) can again be expressed in the general form of difference equations

H(z) = (a0 + a1·z^(−1) + a2·z^(−2) + ... + aM·z^(−M)) / (1 + b1·z^(−1) + b2·z^(−2) + ... + bN·z^(−N))    Eqn 12-10
For a complex exponential input x(n) = e^(jω0·n) the output is

y(n) = Σ_m h(m)·e^(jω0(n−m)) = e^(jω0·n) Σ_m h(m)·e^(−jω0·m) = x(n)·H(e^(jω0))    Eqns 12-11

The quantity H(e^(jω)) is the frequency response function of the filter, which gives the transmission of the system for every value of ω. This is in fact the z transform of the impulse response function with z = e^(jω):

H(z)|_(z=e^(jω)) = H(e^(jω)) = Σ_n h(n)·e^(−jωn)    Eqn 12-12
H(e^(jω)) = Σ_n h(n)·e^(−jωn)    Eqn 12-13

h(n) = (1/2π) ∫_(−π)^(π) H(e^(jω))·e^(jωn) dω

where the impulse response coefficients are also the Fourier series coefficients.
Since the above relationships are valid for any sequence that can be summed, the same can apply to x(n) and y(n), and it can be shown that

Y(e^(jω)) = X(e^(jω))·H(e^(jω))    Eqn 12-14

and so the convolution in the time domain has been converted to multiplication in the frequency domain.
Discrete Fourier Transform
For a periodic sequence of N samples, the Discrete Fourier Transform is given as

H_p(k) = Σ_{n=0}^{N−1} h_p(n)·e^(−j(2π/N)nk)    Eqn 12-15

and the DFT coefficients are identical to the z transform of that same sequence evaluated at N equally spaced points around the unit circle. The DFT coefficients are therefore a unique representation of a sequence of finite duration.
The continuous frequency response can be obtained from the DFT coefficients by artificially increasing the number of points equally spaced around the unit circle. So by augmenting a finite duration sequence with additional equally spaced zero valued samples, the Fourier transform can be calculated with arbitrary resolution.
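The zero-padding property can be demonstrated in one line with numpy's FFT (the 16-point Hanning sequence is an illustrative choice):

```python
import numpy as np

h = np.hanning(16)           # a finite duration sequence
H16 = np.fft.fft(h)          # DFT: 16 points around the unit circle
H256 = np.fft.fft(h, 256)    # zero padded to 256 points: 16x finer sampling
```

Every 16th sample of the padded transform coincides exactly with the original DFT: padding adds resolution between the original points without changing them.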
A filter (system) whose impulse response h(n) is non-zero only over a finite interval N1 ≤ n ≤ N2 is termed a finite impulse response (FIR) filter. Such filters are always stable and can be realized by delaying the impulse response by an appropriate amount. The design of FIR filters is described in section 12.2.2.
A filter (system) whose impulse response extends to either −∞ or +∞ (or both) is termed an infinite impulse response (IIR) filter or system. Design of these filters is discussed in sections 12.2.3 and 12.2.4.
12.2
A comparison of FIR and IIR filters:

                      FIR                               IIR
Stability             always stable                     stable if |poles| < 1
Phase                 can be exactly linear             non-linear
Efficiency            low: the length (nr of taps)      better: a lower order is
                      must be relatively large to       required
                      produce an adequately sharp
                      cut off
Impulse response      finite duration                   infinite duration
Adaptive filtering    easy                              difficult

Realization
There are nine basic designs of filters that are described in this chapter, as listed below:
FIR Window
FIR Remez
IIR Bessel
IIR Butterworth
IIR Chebyshev
IIR Cauer
Digital filtering
12.2.1
H
attenuation
Figure 12-1
stop band
transition band
Filter characteristics
The filter design functions operate with normalized frequencies with a unit fre
quency equal to the sampling frequency.
Normalized frequency =
and thus lies in the range 0 to 0.5
frequency (Hz)
sampling frequency
171
A linear phase FIR filter has a symmetric impulse response

h(n) = h(N − 1 − n)

and in this case the delay τ = (N − 1)/2. This means that for each value of N there is only one value of τ for which exactly linear phase will be obtained. Figure 12-2 shows the type of symmetry required when N is odd and even.

Figure 12-2: centers of symmetry for N = 11 (τ = 5) and N = 12 (τ = 5.5).
Filter types
Several types of filter are provided (some of which are illustrated below) as well as multipoint filters where the required response can be of an arbitrary shape.

Figure 12-3: filter types — the low pass, high pass, band pass and band stop magnitude responses.
Differentiator
The ideal differentiator has a desired frequency response of

H_d(ω) = jω    Eqn 12-16

with impulse response

h(n) = (1/2π) ∫_(−π)^(π) H_d(ω)·e^(jωn) dω = (1/2π) ∫_(−π)^(π) jω·e^(jωn) dω = cos(πn)/n    Eqn 12-17

which is an anti-symmetric unit sample response. In practice however the ideal case is not required and a pass band will be specified as shown here.

Figure 12-4: a practical differentiator specification, with pass band, transition band, stop band and stop band ripple.
Hilbert transformer
This filter imparts a 90° phase shift to the input. The ideal Hilbert transformer has a desired frequency response of

H_d(ω) = −j for 0 < ω < π
       = +j for −π < ω < 0    Eqn 12-18

with impulse response

h(n) = (1/2π) ∫_(−π)^(π) H_d(ω)·e^(jωn) dω = (2/π)·sin²(πn/2)/n    Eqn 12-19

In practice however the ideal case is not required, and the desired frequency response of a Hilbert transformer can be specified as H_d(ω) = 1 between the limits ω_l < ω < ω_u as shown below.
Figure 12-5: a practical Hilbert transformer specification, with H_d(ω) = 1 between ω_l and ω_u.
12.2.2
FIR filter design
The desired frequency response of a filter is periodic in ω and can be expanded as a Fourier series:

H(e^(jω)) = Σ_n h(n)·e^(−jωn)    Eqn 12-20

h(n) = (1/2π) ∫_(−π)^(π) H(e^(jω))·e^(jωn) dω

The coefficients of the Fourier series are identical to the impulse response of the filter. Such a filter is not realizable however, since it begins at −∞ and is infinitely long. It needs to be both truncated to make it finite and shifted to make it realizable. Direct truncation is possible but leads to the Gibbs phenomenon of overshoot and ripple illustrated below.

Figure 12-6: the Gibbs phenomenon — overshoot and ripple caused by direct truncation of the Fourier series.
A solution to this is to truncate the Fourier series with a window function. This
is a finite weighting sequence which will modify the Fourier coefficients to control the convergence of the series. Then

ĥ(n) = h(n) w(n)    Eqn 12-21

where w(n) is the window function sequence and ĥ(n) gives the required impulse response.
The desirable characteristics of a window function are
Rectangular

W(n) = 1 when −(N−1)/2 ≤ n ≤ (N−1)/2
     = 0 elsewhere
Hanning
This type of window trades off transition width for ripple cancellation. In this case

W(n) = α + (1 − α) cos(2πn/N) when −(N−1)/2 ≤ n ≤ (N−1)/2
     = 0 elsewhere

with α = 0.5.
Hamming
This has similar properties to the Hanning window described above. The formula is the same but in this case α = 0.54.
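The windowed design of Eqn 12-21 can be sketched end-to-end. The following example (an illustration, not the product's implementation; the filter length and cutoff are arbitrary) windows an ideal low pass response with the generalized Hamming window above and checks the resulting pass band and stop band behavior.

```python
import cmath
import math

def ideal_lowpass(n, wc):
    """Ideal low pass unit sample response sin(wc*n)/(pi*n); wc/pi at n = 0."""
    return wc / math.pi if n == 0 else math.sin(wc * n) / (math.pi * n)

def generalized_hamming(n, N, alpha):
    """W(n) = alpha + (1-alpha)*cos(2*pi*n/N) for |n| <= (N-1)/2, else 0."""
    if abs(n) > (N - 1) / 2:
        return 0.0
    return alpha + (1 - alpha) * math.cos(2 * math.pi * n / N)

def windowed_fir(N, wc, alpha=0.54):
    """Truncate and window the ideal response: h_hat(n) = h(n)*W(n)  (Eqn 12-21)."""
    M = (N - 1) // 2
    return [ideal_lowpass(n, wc) * generalized_hamming(n, N, alpha)
            for n in range(-M, M + 1)]

def amplitude(h, w):
    """|H(e^{jw})| for coefficients indexed n = -M..M."""
    M = (len(h) - 1) // 2
    return abs(sum(hn * cmath.exp(-1j * w * n)
                   for n, hn in zip(range(-M, M + 1), h)))

h = windowed_fir(N=31, wc=math.pi / 4)      # alpha = 0.54: Hamming window
print(f"pass band |H(0)|      = {amplitude(h, 0.0):.4f}")        # close to 1
print(f"stop band |H(0.75pi)| = {amplitude(h, 0.75 * math.pi):.6f}")  # close to 0
```

With the rectangular window (W(n) = 1) instead, the stop band ripple would be visibly larger: this is the Gibbs phenomenon the window is there to control.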
Kaiser
The Kaiser window function is a simplified approximation of a prolate spheroidal wave function which exhibits the desirable qualities of being a time-limited function whose Fourier transform approximates a band-limited function. It displays minimum energy outside a selected frequency band and is described by the following formula
W(n) = I0( β √(1 − [2n/(N−1)]²) ) / I0(β) when −(N−1)/2 ≤ n ≤ (N−1)/2
     = 0 elsewhere

where I0 is the zeroth order modified Bessel function of the first kind and β is a shape parameter.
Chebyshev

The window function W(n) is obtained from the inverse DFT of the Chebyshev polynomial evaluated at N equally spaced points around the unit circle.
H(k) = Σ n=0..N−1 h(n) e^{−j(2π/N)nk}

h(n) = (1/N) Σ k=0..N−1 H(k) e^{j(2π/N)nk}

H(z) = ((1 − z^{−N})/N) Σ k=0..N−1 H(k) / (1 − z⁻¹ e^{j(2π/N)k})    Eqn. 12-22
Figure 12-7 Approximation errors: pass band ripple δ1 and stop band ripple δ2
The filter coefficients are obtained after applying an inverse DFT on the opti
mum frequency response.
Weighting
For each frequency band the approximation errors can be weighted. This is done by specifying a weighting function W(ω). Applying a weighting function of 1 (unity) in all bands implies an even distribution of the errors over the whole frequency band. To reduce the ripple in one particular band it is necessary to change the relative weighting across the bands and in this case to ensure that the band of interest has a relatively high weighting. It is convenient to normalize W(ω) in the stopband to unity and to set it to the ratio of the approximation errors (δ2/δ1) in the passband.
12.2.3
Figure 12-8 Design specification: maximum ripple in the pass band (dB), attenuation (dB), lower cutoff ωl and upper cutoff ωu
s = (2/T) (1 − z⁻¹) / (1 + z⁻¹)    Eqn 12-23

z = (1 + sT/2) / (1 − sT/2)    Eqn 12-24

ωa = (2/T) tan(ωd T/2)    Eqn 12-25
The analog axis is mapped onto one revolution of the unit circle, but in a non-linear fashion. It is necessary to compensate for this nonlinearity (warping) as shown below.
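The pre-warping relation of Eqn 12-25 is easy to evaluate. A small sketch (the sampling frequency is an assumed example value, not from the text) shows that the warping is negligible well below the Nyquist frequency and severe near it:

```python
import math

def bilinear_prewarp(wd, T):
    """Analog frequency mapped to digital frequency wd by the bilinear transform:
    wa = (2/T) * tan(wd*T/2)  (Eqn 12-25)."""
    return (2.0 / T) * math.tan(wd * T / 2.0)

fs = 1000.0        # assumed sampling frequency (Hz)
T = 1.0 / fs
for f in (50.0, 200.0, 400.0):
    wd = 2 * math.pi * f
    wa = bilinear_prewarp(wd, T)
    print(f"digital {f:5.0f} Hz -> analog {wa / (2 * math.pi):8.2f} Hz")
```

Designing the analog prototype at the pre-warped frequency wa guarantees that the digital filter cuts off at the intended digital frequency wd.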
Figure 12-9 Warping compensation: the defined digital frequencies ωd are mapped to computed analog frequencies ωc
Bessel filters
The goal of the Bessel approximation for filter design is to obtain a flat delay characteristic in the passband. The delay characteristics of the Bessel approximation are far superior to those of the Butterworth and the Chebyshev approximations; however, the flat delay is achieved at the expense of the stopband attenuation, which is even lower than that for the Butterworth. The poor stopband characteristics of the Bessel approximation make it impractical for most filtering applications.
Bessel filters have sloping pass and stop bands and a wide transition width resulting in a cutoff frequency that is not well defined.
The transfer function is given by

H(s) = d0 / Bn(s)    Eqn 12-26

where Bn(s) is an nth order Bessel polynomial (Eqn 12-27) and

d0 = (2n)! / (2ⁿ n!)    Eqn 12-28
Butterworth filters
These are characterized by the response being maximally flat in the pass band
and monotonic in the pass band and stop band. Maximally flat means as many
derivatives as possible are zero at the origin. The squared magnitude response
of a Butterworth filter is
|H(ω)|² = 1 / (1 + (ω/ωc)^{2n})    Eqn 12-29

where n is the order of the filter. The transfer function of this filter can be determined by evaluating equation 12-29 at s = jω

|H(jω)|² = H(s)H(−s) = 1 / (1 + (s/jωc)^{2n})    Eqn 12-30
Butterworth filters are all-pole filters, i.e. the zeros of H(s) are all at s = ∞. They have magnitude 1/√2 when ω/ωc = 1, i.e. the magnitude response is down 3 dB at the cutoff frequency.
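Eqn 12-29 can be evaluated directly. The short example below (illustrative only; the orders n = 4 and n = 10 match the figure that follows) confirms that the response is exactly 3 dB down at ωc for every order, while higher orders roll off faster beyond it:

```python
import math

def butterworth_mag2(w, wc, n):
    """Squared magnitude of an nth order Butterworth low pass (Eqn 12-29)."""
    return 1.0 / (1.0 + (w / wc) ** (2 * n))

wc = 1.0
for n in (4, 10):
    db = 10 * math.log10(butterworth_mag2(wc, wc, n))
    print(f"n = {n:2d}: |H|^2 at wc = {butterworth_mag2(wc, wc, n)} -> {db:.2f} dB")

# Beyond the cutoff, the higher order filter attenuates more:
print(butterworth_mag2(2.0, wc, 10) < butterworth_mag2(2.0, wc, 4))
```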
Squared magnitude response |H|² of Butterworth filters of order n = 4 and n = 10, 3 dB down at ωc.
Chebyshev filters

The squared magnitude response of a Chebyshev filter is

|H(ω)|² = 1 / (1 + ε² Cn²(ω))    Eqn 12-31

where Cn(ω) are the Chebyshev polynomials and ε is the parameter related to the ripple in the pass band as shown below for n odd and even.
The equi-ripple pass band lies between 1 and 1/√(1 + ε²), with the band edge behavior differing for n odd and n even.
For the same loss requirements, the Chebyshev approximation usually requires
a lower order than the Butterworth approximation, but at the expense of an
equi-ripple passband. Therefore, the transition width of a Chebyshev filter is
narrower than for a Butterworth filter of the same order.
The increased stopband attenuation is achieved by changing the approximation conditions in that band, thus minimizing the maximum deviation from the ideal flat characteristics. The stopband loss keeps increasing at the maximum possible rate of 6 × order dB/octave.
Inverse Chebyshev filters

|H(ω)|² = ε² Cn²(ωr/ω) / (1 + ε² Cn²(ωr/ω))    Eqn 12-32

where Cn(ω) are the Chebyshev polynomials, ε is the pass band ripple parameter and ωr is the lowest frequency where the stop band loss attains a specified value. These parameters are illustrated below for n odd and even.
Responses for n odd and n even, with the stop band edge ωr marked.
For the same loss requirements, the Inverse Chebyshev approximation usually
requires a lower order than the Butterworth approximation, but at the expense
of an equi-ripple stopband.
The increased passband flatness is achieved by changing the approximation
conditions in that band thus minimizing the maximum deviation from the ideal
flat characteristics.
The squared magnitude response of an elliptic (Cauer) filter, equi-ripple in both pass band and stop band for n odd and even, is

|H(ω)|² = 1 / (1 + ε² Rn²(ω, L))    Eqn 12-33

where Rn is the nth order Chebyshev rational function and L is a parameter describing the stop band ripple.
To obtain the required frequency response from the low pass prototype, s is replaced as follows:

low pass:   s → s/ωc
high pass:  s → ωc/s
band pass:  s → (s² + ωu ωl) / (s(ωu − ωl))
band stop:  s → s(ωu − ωl) / (s² + ωu ωl)

and the analog prototype is mapped to the digital domain using

sT/2 = (1 − z⁻¹) / (1 + z⁻¹)    Eqn 12-34
The final result is a set of filter coefficients a and b, stored in vectors of length n+1, where n is the order of the filter. A facility, described below, enables you to determine the optimum order of a filter required for a particular design.
Design specification in terms of the passband ripple δ1 (pass band between 1 and 1 − δ1), the attenuation δ2, the pass band edge ωp, the stop band edge ωs and the transition width ωv. The design parameters are: ripple passband, attenuation, lower frequency, upper frequency and sampling frequency.
The filter can be any one of the types mentioned above and the prototype can be either a Butterworth, Chebyshev type I or type II or a Cauer filter. This process does not apply to the Bessel filter because of the particular condition pertaining to these filters in that the filter order affects the cutoff frequency.
The minimum filter order required is determined from a set of functions described below.

One function relates the pass band and stop band ripple specifications to a filter design parameter d where

d² = ((1 − δ1)⁻² − 1) / (δ2⁻² − 1)
Another parameter relates the pass band cut off frequency ωp, the stop band frequency ωs and the low pass filter transition ratio k where

k = ωp / ωs  (analog)

k = tan(ωp T/2) / tan(ωs T/2)  (digital)
A final function relates the filter order n, the low pass filter transition ratio k and the filter design parameter d. This relationship depends on the type of prototype analog filter.

Butterworth:  n ≥ log(1/d) / log(1/k)

Chebyshev:    n ≥ cosh⁻¹(1/d) / cosh⁻¹(1/k)

Elliptic:     n ≥ ( K(k) K(√(1 − d²)) ) / ( K(d) K(√(1 − k²)) )

where K is the complete elliptic integral of the first kind.
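As a worked example under the formulas reconstructed above (the ripple and edge values are assumed, not from the text), the minimum Butterworth order for a 1 dB passband ripple and 40 dB attenuation with a 2:1 transition can be computed as:

```python
import math

def design_parameter(d1, d2):
    """Design parameter d from pass band ripple d1 and stop band ripple d2."""
    return math.sqrt(((1 - d1) ** -2 - 1) / (d2 ** -2 - 1))

def transition_ratio(wp, ws, T=None):
    """Low pass transition ratio k; with T given, the digital (tan-warped) version."""
    if T is None:
        return wp / ws
    return math.tan(wp * T / 2) / math.tan(ws * T / 2)

def butterworth_order(d, k):
    """Smallest integer n with n >= log(1/d)/log(1/k)."""
    return math.ceil(math.log(1 / d) / math.log(1 / k))

# Example specs: 1 dB pass band ripple, 40 dB attenuation, edges at 1000/2000 rad/s
d1 = 1 - 10 ** (-1 / 20)       # ~0.109
d2 = 10 ** (-40 / 20)          # 0.01
d = design_parameter(d1, d2)
k = transition_ratio(1000.0, 2000.0)
print("required Butterworth order:", butterworth_order(d, k))
```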
12.2.4
12.3
Analysis
This section describes the functions that provide information on the characteristics of filters.
Group delay
The group delay of a set of filters provides a measure of the average delay of a
filter as a function of frequency. The frequency response of a filter is given by
H(z)|_{z=e^{jω}} = H(e^{jω}) = |H(e^{jω})| e^{jθ(ω)}

The phase delay is defined as

τp(ω) = −θ(ω)/ω    Eqn 12-35

and the group delay is defined as the first derivative of the phase

τg(ω) = −dθ(ω)/dω    Eqn 12-36

If the waveform is not to be distorted then the group delay should be constant over the frequency bands being passed by the filter.

For a linear phase, θ(ω) = −τω, and τ is then both the phase delay and the group delay.
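Eqn 12-36 can be approximated numerically by differencing the phase of the frequency response. This illustrative sketch (not from the original text) shows a pure delay giving a constant group delay, and a one-pole IIR filter giving a frequency dependent one:

```python
import cmath

def freqz(b, a, w):
    """Frequency response of H(z) = B(z)/A(z) at z = e^{jw} for coefficients of z^-k."""
    z = cmath.exp(-1j * w)
    num = sum(bk * z ** k for k, bk in enumerate(b))
    den = sum(ak * z ** k for k, ak in enumerate(a))
    return num / den

def group_delay(b, a, w, dw=1e-6):
    """tau_g(w) = -d(theta)/dw, approximated by a central difference (Eqn 12-36)."""
    th1 = cmath.phase(freqz(b, a, w - dw))
    th2 = cmath.phase(freqz(b, a, w + dw))
    return -(th2 - th1) / (2 * dw)

# Pure delay of 3 samples: H(z) = z^-3 -> constant group delay of 3 samples
print(round(group_delay([0, 0, 0, 1], [1], 0.5), 6))

# One-pole IIR H(z) = 1/(1 - 0.5 z^-1): group delay varies with frequency
for w in (0.2, 1.0, 2.5):
    print(f"w = {w}: tau_g = {group_delay([1], [1, -0.5], w):.4f} samples")
```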
12.4
Applying filters
This section describes how filters can be applied to data.
Direct trace filtering

Implementing this method basically filters the data x according to the filter defined by coefficients a and b to produce the filtered data y.

Zero phase filtering

This option also filters the data using the filter defined by the coefficients a and b, but in such a way as to produce no phase distortion. In the case of FIR filters an exact linear phase is possible since the output is simply delayed by a fixed number of samples, but with IIR filters the phase distortion is very non-linear. If the data has been recorded however and the whole sequence can be replayed, then this problem can be overcome by using the concept of `time reversal'. In effect the data is filtered twice, once in the forwards direction, then in the reverse direction, which removes all the phase distortion but results in the magnitude effect of the filter being squared.
If x(n) = 0 when n < 0, then the z transform of the time reversed sequence is

Z{x(−n)} = Σ n=−∞..0 x(−n) z⁻ⁿ    Eqn 12-37

which, with the substitution u = −n, becomes

Σ u=0..∞ x(u) zᵘ

So if X(z) = Z{x(n)}, then Z{x(−n)} = X(z⁻¹).
Time reversal filtering can be realized using the method shown in Figure 12-12.
Figure 12-12 Time reversal filtering: the input x(n) is time reversed to give a(n) = x(−n), filtered to give f(n), time reversed again to give b(n) = f(−n), and filtered once more to give the zero phase output y(n)
12.5
References
Chapter 13
Harmonic tracking
13.1
Introduction
There are a number of circumstances when it is necessary to track periodic components (orders) when the signal of interest is buried in noise, or the rotational speed is changing rapidly. Indeed some effects only manifest themselves when the rate of change of frequency is high. In these situations, real time analog and digital filters have limited resolution due to transients and excessive processing requirements. The Kalman filter however is able to accurately track signals of a known structure concealed in a confusion of noise and other periodic components of unknown structure.

An important characteristic of the Kalman filter is that it is non-stationary. It functions well at high slew rates, because the system model used does not presume either fixed time or frequency content, but adapts itself automatically as the system itself is changing. This ability to derive the system model for each time sample in the recording (within certain user-defined constraints) frees it from the usual time/frequency resolution constraint encountered with the traditional frequency transformations.
- fine spectral resolution of the orders (i.e. 0.01 Hz) obtained after just a few measurement samples (not even one cycle of the fundamental component),
- no phase distortion.
In order to use the Kalman filter the following conditions must apply -
13.2
Theoretical background
The application of the Kalman filter to track harmonic components involves two stages.
1 Accurate determination of the Rpm
If you want to track an order, then you must provide the corresponding
Rpm/time trace. Your Rpm may have been determined using a Tacho signal
which results in a pulse train or a swept sine function in which case you will
need to convert it to a Rpm/time function.
2 The tracking of the specified waveform
Section 13.2.2 describes the mathematical background to the operation of the
tracking function.
Some practical considerations are discussed in section 13.3.
13.2.1
13.2.2
Waveform tracking
The Kalman filtering method involves setting up and solving a pair of equa
tions known as the Structural and the Data equations.
Eqn. 13-1
Eqn. 13-2
Eqn. 13-3
ε(n) is a deterministic but unknown term which allows for deviations from the true stationary wave.

It is also useful to define Sε(n) as the standard deviation of the non-homogeneity of the structural equation.

The signal y(n) contains both the signal that matches the structural equation as well as noise and other periodic components.
Eqn. 13-4
where η(n) contains noise and periodic components at frequencies other than the target signal.

Once again Sη(n) is defined as the standard deviation of the nuisance element of the data equation.
Eqn. 13-5
The error in equation 13-5 is made isotropic by applying a weighting factor r(n)
which is defined as the ratio of the standard deviations of the errors in the
structural and data equations.
r(n) = Sε(n) / Sη(n)    Eqn. 13-6
Eqn. 13-7
The weighting function r(n) expresses the degree of confidence between the
structural equation and data equation, or, the certainty of the presence of orders
in the data. This function shapes the nature of the Kalman filter and influences
its tracking characteristics. A small value for r(n) leads to a filter that is highly
discriminating in frequency, but which takes time to converge. Conversely, fast
convergence with low frequency resolution is achieved by choosing a large r(n).
When applied to all observed time points Equation 13-7 provides a system of
overdetermined equations which may be solved using standard least squares
techniques.
13.3
Practical considerations
This section considers some practical characteristics of the Kalman filter and the
parameters that influence them.
Frequency resolution
In principle the Kalman filter is capable of tracking sinusoidal components of any frequency up to half the sample frequency. In practice however, it has been found that the ability to distinguish between two closely spaced sine waves is inversely proportional to the total observation time. As a consequence, the observation time should be equal to the inverse of the minimum frequency spacing required between components.
Filter characteristics
It was mentioned above that the weighting r(n) used in Equation 13-7 can be used to influence the nature of the tracking filter. This weighting can be adjusted through the specification of a harmonic confidence factor which is defined as the inverse of the weighting factor.

HC = 1/r(n) = Sη(n) / Sε(n)    Eqn. 13-8
Applying a high value implies confidence in the harmonic (structural data) and
assumes that the error in your measured data is high. In this case the filter will
be narrow so that it is highly discriminating in frequency. This is obtained at
the cost of time to converge in amplitude. Applying a low value implies that
the error in the measured data is low and consequently a wider filter can be
used which while less discriminating in frequency has the advantage that the
amplitude converges more quickly.
The three Kalman filters shown below are characterized by different harmonic
confidence factors which influence the width of the filter.
Figure 13-1 Kalman filter shapes for harmonic confidence factors HC = 50, 100 and 200
Bandwidth characteristics
Equation 13-7 shows that the weighting function, r(n), which is the inverse of
the harmonic confidence factor, can be different for every time point. This
means that the bandwidth of the filter can vary as a function of the frequency
or order being tracked.
Using a frequency defined bandwidth means that at low Rpm values, a number of orders will be encompassed by the filter range.
Figure 13-2 Defining the filter bandwidth in terms of frequency and amplitude
Tracking closely spaced order signals with a high slew rate requires sampling at a high frequency over a long period, which imposes a heavy computational effort. However, if you consider the significant slew rate encountered during the deceleration of gas turbines, 75 Hz/sec over 5 seconds, from the above this implies a sample rate of 750 Hz. It can be seen therefore that such an extreme slew rate does not impose any realistic limitation on the sample rate.
Chapter 14
Counting and histogramming
14.1
Introduction
In fatigue analysis, real life measurements of mechanical or thermal loads are
used to assess and predict the damage inflicted by such loads over the life time
of a product. Figure 14-1 shows such measurements made on a vehicle part
over a period of around 5 minutes (330 seconds).
Figure 14-1 Acceleration (g) measured on a vehicle part as a function of time (s), ranging between −0.4 and 0.4 g
Figure 14-2
Stage 1, counting

The data is scanned for the occurrence of one of the events listed above. This in effect reduces the full time history to a set of mechanical or thermal load events.

Stage 2, histogramming

This involves dividing the counted occurrences into classes where for each event, its number of occurrences is specified.
14.2
14.2.1
Figure 14-3
Figure 14-4 Histograms of the number of occurrences of minima and maxima per level
14.2.2
Figure 14-5 Histogram of the number of occurrences of extrema per level
Peak counts and level cross counts are closely related. The number of positive crossings of a certain level is equal to the number of peaks above that level minus the number of valleys above it. This implies that a level cross count can be derived from a peak-valley count.

A level crossing count is typically initiated by specifying a grid on top of the signal to determine the levels. The grid can be specified in ordinate units or as a percentage of the ordinate range. The resulting histograms for the above signal when up, down and both types of crossings are counted are shown below.
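The counting described above can be sketched in a few lines of Python (an illustration, not the product's implementation; the signal and grid values are arbitrary):

```python
def level_cross_count(signal, levels, direction="up"):
    """Count crossings of each grid level: 'up' (+), 'down' (-) or 'both'."""
    counts = {}
    for lev in levels:
        up = sum(1 for a, b in zip(signal, signal[1:]) if a < lev <= b)
        down = sum(1 for a, b in zip(signal, signal[1:]) if a >= lev > b)
        counts[lev] = {"up": up, "down": down, "both": up + down}[direction]
    return counts

x = [0, 2, -1, 1, -2, 0.5, -0.5, 2, 0]
grid = [-1.5, -0.5, 0.5, 1.5]
print("up crossings:  ", level_cross_count(x, grid, "up"))
print("down crossings:", level_cross_count(x, grid, "down"))
```

Note that for a signal that starts and ends at the same level, the up and down counts of each level are equal, which is why a single triangle of the histogram usually suffices.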
14.2.3
Figure 14-6 Level crossing histograms for up (+) and down (−) crossings
Figure 14-7 Counting the individual ranges in the signal
Figure 14-8 Histogram of the number of occurrences per range
Counting of range pairs

The counting of single ranges (usually indicated as a range count) is both simple and straightforward, but sensitive to small variations of the signal. Thus in the analysis of the left hand signal illustrated in Figure 14-9, single range counting would result in a large number of relatively small ranges.

Figure 14-9

If this signal were passed through a filter, suppressing the small load variations, the resulting signal would reveal a count of only one very large range. As a consequence the two analysis results are completely different and the method is very sensitive to small signal variations.

The range-pair counting method overcomes this sensitivity. Rather than splitting up the signal into consecutive ranges, it is interpreted in terms of a "main" signal variation (or range) with a smaller cycle (range pair) superimposed on it.
If a pair of extremities are separated by a range that is less than the defined
range of interest (R), then they are `filtered out' of the range count.
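The filtering-out of small range pairs can be sketched as follows. This is an illustrative simplification (the exact production algorithm may differ): the signal is reduced to its turning points, and adjacent extreme pairs whose range is below R are deleted, leaving the main variation.

```python
def turning_points(signal):
    """Reduce a sampled signal to its sequence of local extremes (peaks and valleys)."""
    tp = [signal[0]]
    for x in signal[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # still rising/falling: extend the current excursion
        else:
            tp.append(x)
    return tp

def range_filter(extremes, R):
    """Delete adjacent extreme pairs whose range is below R (the small range pairs),
    stepping backwards after each deletion to re-examine the joined neighbours."""
    out = list(extremes)
    i = 0
    while i + 1 < len(out):
        if abs(out[i + 1] - out[i]) < R:
            del out[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return out

x = [0, 5, 4, 6, 1, 2, 0, 7]
tp = turning_points(x)
print("turning points:        ", tp)
print("after range filter R=2:", range_filter(tp, 2))
```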
14.3
14.3.1
From-to counting
Such a "combined" event can be the occurrence of a peak at level j followed by a valley at level i. As an example, consider the combination of a valley at level A followed by a peak at level C as illustrated in Figure 14-11.
Figure 14-11 Signal with levels A, B, C and D and numbered extremes
In this example, the from-to sequence (1→2) is counted separately from the sequences (3→4) and (11→12), although the ranges involved are identical (C−A = D−B).

The result of such "from-to" counting can be presented in a so-called Markov matrix A[i,j]. The element aij gives the number of peaks at level j followed by a valley at level i. The matrix of results of counting the events in Figure 14-11 are shown below.
The Markov matrix has the From levels j as columns and the To levels i as rows, with additional columns for the peak and valley counts at each level.
The lower left triangle of the Markov matrix contains the positive from-to events, the upper right triangle summarizes the negative transitions. The additional separate columns contain the counting results for peaks and valleys at a particular level. These results are easily obtained from the triangles of the Markov matrix.
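Building such a matrix from a sequence of extremes is straightforward. A minimal sketch (illustrative only; the level grid and extreme sequence are invented for the example) classes each extreme to the nearest grid level and counts the from-to transitions:

```python
def markov_matrix(extremes, levels):
    """Count from-to transitions between successive extremes classed to the nearest
    grid level. A[i][j] counts transitions from level j to level i."""
    def cls(x):
        return min(range(len(levels)), key=lambda k: abs(levels[k] - x))
    n = len(levels)
    A = [[0] * n for _ in range(n)]
    for a, b in zip(extremes, extremes[1:]):
        A[cls(b)][cls(a)] += 1
    return A

levels = [1, 2, 3, 4]          # levels A, B, C, D
ext = [1, 3, 2, 4, 1, 3]       # A -> C -> B -> D -> A -> C
A = markov_matrix(ext, levels)
for row in A:
    print(row)
```

Upward transitions (valley to peak) have i > j and so land in the lower left triangle, downward ones in the upper right, matching the convention described above.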
14.3.2
Range-mean counting

Another example of a two-dimensional counting method results in the so-called range-mean matrix. The variation or range (i−j) is associated with its corresponding mean value (i+j)/2.
The ranges D−B and C−A are counted together with their corresponding mean values.
The result is a matrix giving the number of events for each combination of range and mean.
14.3.3
A sequence of extremes e0 to e6 is processed with range filter R.
At the end of the second phase, a "residue" of peaks and valleys is left which is analyzed according to the single range principle. It can be shown that this residue has a specific shape, namely a diverging part followed by a converging part.

Example

The following example shows how the range-pair range method operates. Consider the time signal shown below.
A time signal with extremes S1 to S8: the range pairs (S4,S5) and (S7,S8) are counted and deleted first.
Counting a range-pair implies deleting the counted extremes from the signal. "Stepping backwards", the extremes S1, S2, S3 and S6 are now considered and another pair (S2,S3) is found.
The extremes remaining after the counted range pairs have been deleted.
From the remaining four extremes, no "pairs" can be subtracted. This forms the residue which is further counted as single "from-to" ranges.

Further considerations

The result of the range pair-range counting depends on the length of the data record being analyzed at one time, because the largest range counted will be between the lowest valley and the highest peak. This largest variation is often referred to as the `half load cycle'. If the lowest valley occurs near the beginning of a very long load cycle, and the highest peak near the end, you should consider whether it makes physical sense to combine such occurrences, so remote in time, into one cycle.

The counting method is insensitive to the size of the range filter applied. The only effect of increasing the range filter size from R to 3R, for example, is that all elements in a from-to counting for which |from − to| < 3R become zero. In other words, the choice of the range filter size is not critical.
14.4
References
[1]
[2]
[3] van Dijk, C.M., "Statistical load data processing", 6th ICAF Symposium, Miami, Florida, USA, May 1971.
[4]
[5] Watson, P., "Cycle counting and fatigue damage", SEE Symposium of 12th February 1975, Journal of the Society of Environmental Engineers, September 1976.
Part IV
Analysis and design

Chapter 15  Estimation of modal parameters . . . 219
Chapter 16  Operational modal analysis . . . 267
Chapter 17  Running modes analysis . . . 281
Chapter 18  Modal validation . . . 293
Chapter 19  Rigid body modes . . . 309
Chapter 20  Design . . . 321
Chapter 21  Geometry concepts . . . 357
Chapter 15
Estimation of modal parameters
15.1
Figure 15-1 A modal model: the relation between the inputs and outputs of a structure is described in terms of frequencies, dampings and mode shapes.

hij(jω) = Σ k=1..N [ rijk/(jω − λk) + r*ijk/(jω − λ*k) ]    Eqn 15-1
hij(jω) = Σ k=1..N [ rijk/(jω − λk) + r*ijk/(jω − λ*k) ]    Eqn 15-2

where

hij(jω) = FRF between the response (or output) degree of freedom i and the reference (or input) DOF j
k
Eqn 15-3
where
dk
k
or
k k nk j nk 1 2k
Eqn 15-4
where
nk
k
Equation 15-5 shows that the residue can be proven to be the product of three terms

rijk = ak vik vjk    Eqn 15-5
where

vik = mode shape coefficient of mode k at DOF i
vjk = mode shape coefficient of mode k at DOF j
ak = scaling constant for mode k

Note that the mode shape coefficients can be either real (normal mode shapes) or complex. If the mode shapes are real, the scaling constant can be expressed as,

ak = 1 / (2j mk ωdk)    Eqn 15-6

where

mk = modal mass of mode k
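Equations 15-2 to 15-6 can be combined to synthesize an FRF from modal parameters. This illustrative sketch (the natural frequency, damping and unit modal mass are assumed example values) builds a single real mode and confirms the expected behavior: static compliance 1/(m ωn²) at ω = 0 and an amplitude peak near the damped natural frequency.

```python
import cmath
import math

def frf(w, modes):
    """hij(jw) = sum_k r_k/(jw - lam_k) + conj(r_k)/(jw - conj(lam_k))  (Eqn 15-2)."""
    s = 1j * w
    return sum(r / (s - lam) + r.conjugate() / (s - lam.conjugate())
               for r, lam in modes)

# One mode: wn = 10 rad/s, 2% critical damping, unit modal mass
wn, zeta, m = 10.0, 0.02, 1.0
wd = wn * math.sqrt(1 - zeta ** 2)
lam = -zeta * wn + 1j * wd          # Eqn 15-4
a = 1.0 / (2j * m * wd)             # Eqn 15-6 (real mode shape, v_i = v_j = 1)
modes = [(a, lam)]                  # residue r = a * v_i * v_j  (Eqn 15-5)

for w in (0.0, 5.0, 9.99, 15.0):
    print(f"w = {w:6.2f}: |h| = {abs(frf(w, modes)):.5f}")
```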
15.2
Types of analysis
This section discusses some general principles to be considered when performing a modal analysis.

15.2.1

Figure 15-2 The frequency band of analysis between ωmin and ωmax
Under this assumption, the FRF equation 15-2 can be simplified to equation 15-7. This is assuming the data to have the dimension of displacement over force.

hij = Σ k [ rijk/(jω − λk) + r*ijk/(jω − λ*k) ]    Eqn 15-7
where the sum now runs only over the modes between ωmin and ωmax. It is possible to compensate for the modes in the neighborhood of this band, by introducing so called upper and lower residual terms into the equation.

hij = Σ k [ rijk/(jω − λk) + r*ijk/(jω − λ*k) ] + urij − lrij/ω²    Eqn 15-8
where

urij = upper residual term (residual stiffness) used to approximate modes at frequencies above ωmax
lrij = lower residual term (residual mass) used to approximate modes at frequencies below ωmin

Upper and lower residuals are illustrated in Figure 15-3.
Figure 15-3 Upper residual urij (stiffness line) and lower residual lrij (mass line) on a displacement/force FRF
In the neighborhood of a single mode the FRF can be approximated by

hij ≈ rijk / (jω − λk)    Eqn 15-9
15.2.2
If you consider the impulse response model

hij(t) = Σ k=1..N [ rijk e^{λk t} + r*ijk e^{λ*k t} ]    Eqn 15-10
you will see that the pole values λk are independent of both the response and the reference DOFs. In other words the pole value λk is a characteristic of the system and should be found in any function that is measured on the structure. When applying parameter estimation techniques, one of two strategies can be employed: making local or global estimates.
Local estimates
Global estimates
15.2.3
[H(t)] = Σ k=1..N ( [Rk] e^{λk t} + [R*k] e^{λ*k t} )    Eqn 15-11

where

[H] = (No, Ni) matrix with hij as elements
[Rk] = (No, Ni) residue matrix of mode k
Equation 15-5 can be used to express the residue matrix in factored form,

[Rk] = ak {V}k <V>k    Eqn 15-12

where

{V}k = No vector (column) with mode shape coefficients at the output DOFs
<V>k = Ni vector (row) with mode shape coefficients at the input DOFs
If DOFs i and j are both output and input DOFs then the above equation implies Maxwell-Betti reciprocity,

rijk = rjik    Eqn 15-13

This assumption is not essential however since the residue matrix can be expressed in a more general form,

[Rk] = {V}k <L>k    Eqn 15-14

where <L>k is a row vector of modal participation factors for mode k.
Using the factored form of the residue matrix, equation 15-11 can be written as,

[H(t)] = Σ k=1..N ( {V}k <L>k e^{λk t} + {V}*k <L>*k e^{λ*k t} )    Eqn 15-15
If just the data between any output DOF and all input DOFs are considered then

<H>i = Σ k=1..N ( vik <L>k e^{λk t} + v*ik <L>*k e^{λ*k t} )    Eqn 15-16

where

<H>i = Ni vector of data between output DOF i and all input DOFs.
It is essential in the model of equation 15-16 that both the poles and the modal participation factors are independent of the output DOF. In other words in this formulation the characteristics of the system are the poles λk and the terms

<L>k e^{λk t}    Eqn 15-17

Consider two highly coupled modes with almost identical poles (Eqn 15-18),

λ1 ≈ λ2 = λ

so that the response data relative to input DOF j can be written as

{H}j = ( {V}1 l1j + {V}2 l2j ) e^{λt} + ...    Eqn 15-19
The latter equation shows that in the response data relative to an input DOF j, a
combination of the coupled modes is observed and not the individual modes.
The combination coefficients for the modes are the modal participation factors
l1j and l2j .
The response data relative to another input DOF l, is expressed by an equation similar to equation 15-19.

{H}l = ( {V}1 l1l + {V}2 l2l ) e^{λt} + ...    Eqn 15-20
The only difference between these last two equations is the modal participation factors l1l and l2l. If they are linearly independent of the modal participation factors for input j, then the modes will appear in a different combination in the response data relative to input l. As a multiple input parameter estimation technique analyses data relative to several inputs simultaneously, and the modal participation factors are identified, it is possible to detect highly coupled or repeated modes.
15.2.4
hij,n = Σ k=1..N [ rijk/(jωn − λk) + r*ijk/(jωn − λ*k) ]    Eqn 15-21

where

hij,n = samples of data in the measured range.
Figure 15-4 The analysis frequency band (ωmin to ωmax) lies within the measurement band of hij
The analysis frequency band includes only three modes whereas the measurement band includes five. If the data is transformed from frequency to time domain, then the time increment between samples will be determined by the analysis frequency band and not the measurement band. If the frequency band of analysis is bounded by ωmax and ωmin then Δt is determined from

Δt = 2π / (2(ωmax − ωmin))    Eqn 15-22
hij,n = Σ k=1..N ( rijk e^{λk nΔt} + r*ijk e^{λ*k nΔt} )    Eqn 15-23

or

hij,n = Σ k=1..N ( rijk zk^n + r*ijk (z*k)^n )    Eqn 15-24

where

zk = e^{λk Δt}    Eqn 15-25
Time domain parameter estimation methods are based on the model defined by equation 15-24. They analyze hij,n to estimate zk. λk is then calculated from equation 15-25. Note however that this calculation is not unique since

zk = e^{(λk + jm2π/Δt)Δt} = e^{λk Δt}    Eqn 15-26

This implies that no poles outside the frequency band 2π/Δt can be identified. In other words, with a time domain parameter estimation method, all estimated poles are to be found in the frequency band of analysis (ωmin, ωmax). This may cause problems in estimating modal parameters if the data in the frequency band of analysis is strongly influenced by modes outside this band (residual effects). Since with frequency domain methods λk is estimated directly, no such limitation arises. A frequency domain technique may therefore sometimes be preferred over a time domain technique for analyzing data over a narrow frequency band, where residual effects are important.
15.2.5
Eqn 15-27
with
M S, C S, K S
{f}
{lp }
Eqn 15-28
with
M f, C f, K f matrices describing the pressure-volume acceleration
2{lf }
pdS
Eqn 15-29
Sb
{l f}
x dS
Eqn 15-30
Sb
KS Kc
0 Kf
0
C Mf
x
p
f
.
q
Eqn 15-31
xj
pi
| q. 0 . | fi0
qi
fj i
Eqn 15-32
15.3
Application

Method                        Estimates                    DOF     Domain  Estimates  Inputs
Peak picking                  frequency, damping           single  freq    local      single
Mode picking                  mode shapes                  single  freq    local      single
Circle fitting                frequency, damping,          single  freq    local      single
                              mode shapes
Complex Mode                  frequency, damping,          multi   freq    global     single or
Indicator Function            mode shapes                                             multiple
Least Squares                 frequency, damping, modal    multi   time    global     single or
Complex Exponential           participation factors                                   multiple
Least Squares                 mode shapes                  multi   freq    global     single or
Frequency Domain                                                                      multiple
Frequency domain Direct       frequency, damping, modal    multi   freq    global     single or
Parameter identification      participation factors                                   multiple

Table 15.1
Selection of a method
A guide on which parameter estimation technique to adopt is outlined below. Details on all the methods are given in the following sections.
SDOF
Single degree of freedom curve fitters are rough and ready and will give you a quick impression of the most dominant modes (frequency, damping and mode shapes) influencing a structure under test. As such they are useful in checking the measurement setup and can help assess:

- whether the accelerometers are correctly labelled with their node and direction;

For this purpose it is recommended to identify real modes since these are the easiest to interpret when displayed.
The circle fitter gives the most accurate estimates of the SDOF techniques, but
may create large errors on nodal points of the mode shapes.
Complex MIF
This method can be used in the same way as the SDOF techniques to give you
an idea of the most dominant modes and check the test setup. It has the
advantage that multiple input FRFs can be used and the mode shape estimates
are of a higher quality. Furthermore, it can extract a modal model that includes
the most dominant modes in a particular frequency band.
Time domain MDOF
This is the most general purpose parameter estimation technique and is probably the standard tool used in modal analysis. It provides a complete and accurate modal model from MIMO FRFs. Its major weakness seems to be when analyzing heavily damped systems where the damping is greater than 5%, such as in the case of a fully equipped car.
Frequency domain MDOF
The Frequency Domain Direct Parameter technique provides similar results to the Time domain technique described above in terms of accuracy, but is generally slower. It is weak when dealing with lightly damped systems (damping less than 0.3%) but fortunately performs better on heavily damped ones, thus complementing the other MDOF technique. Since it operates in the frequency domain it is able to analyze FRFs with an unequally spaced frequency axis.
15.3.1
Peak picking
Peak picking is a single DOF method to make local estimates of frequency and damping. The method is based on the observation that the system response goes through an extremum in the neighborhood of the natural frequencies.

For example, on a frequency response function (FRF) the real part will be zero around the natural frequency (minimum coincident part), the imaginary part will be maximal (peak quadrature) and the amplitude will also be maximal (peak amplitude). The frequency value where this extremum is observed is called the resonant frequency ωr and is a good estimate of the natural frequency of the mode ωnk for lightly damped systems.
A corresponding estimate of the damping can be found with the 3 dB rule. The frequency values ω1 and ω2 on both sides of the peak of the FRF, at which the power is half the peak value (amplitude 3 dB down), are introduced in the formula in equation 15-33 to yield the critical damping ratio. The method is also illustrated in Figure 15-5 below. ω1 and ω2 are also called half power points.

ζ = (ω2 − ω1) / (2ωr)    Eqn 15-33
Figure 15-5 Half power points ω1 and ω2, 3 dB below the peak amplitude at ωr
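The 3 dB rule can be exercised on a simulated FRF. This sketch (illustrative only; the SDOF parameters are assumed example values) locates the half power points on a fine frequency grid and recovers the damping ratio from Eqn 15-33:

```python
import math

def half_power_damping(w1, w2, wr):
    """Critical damping ratio from the half power points (Eqn 15-33)."""
    return (w2 - w1) / (2 * wr)

def sdof_amp(w, wn, zeta):
    """Receptance amplitude of a unit mass single DOF system."""
    return 1.0 / abs(complex(wn * wn - w * w, 2 * zeta * wn * w))

# Simulated SDOF FRF, scanned on a fine grid around the resonance:
wn, zeta = 20.0, 0.03
grid = [wn * (0.9 + i * 1e-5) for i in range(20001)]    # 0.9*wn .. 1.1*wn
amps = [sdof_amp(w, wn, zeta) for w in grid]
peak = max(amps)
wr = grid[amps.index(peak)]
above = [i for i, A in enumerate(amps) if A >= peak / math.sqrt(2)]
w1, w2 = grid[above[0]], grid[above[-1]]
print(f"wr = {wr:.3f}, w1 = {w1:.3f}, w2 = {w2:.3f}")
print(f"estimated zeta = {half_power_damping(w1, w2, wr):.4f} (true {zeta})")
```

On real data the half power points fall between spectral lines, which is exactly the low resolution error discussed next.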
Since the curve fitter locates the resonance frequency on a spectral line, significant errors can be introduced if the FRF has a low frequency resolution and the peaks of modes fall between two spectral lines. This can be compensated for by extrapolating the slopes on either side of the picked line to determine the amplitude of the FRF more precisely.

It may be necessary to deal with the situation when one of the half power points is not found. This may arise when the frequency of one mode is close to that of another mode, or it is near to the ends of the measured frequency range.
Note!
Peak picking is a single DOF method: it is therefore only suitable for data with
well separated modes.
As this method yields local estimates, it requires only one data record to obtain
frequency and damping values for all modes. However, if several data records
are available, it may be that different records identify different modes.
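As a sketch of the procedure, the peak pick and half power damping estimate can be applied to a synthetic single mode FRF. The pole values below are illustrative assumptions, not data from this chapter:

```python
import numpy as np

# Synthetic SDOF receptance FRF: h(w) = 1 / (wn^2 - w^2 + 2j*zeta*wn*w)
wn, zeta = 10.0, 0.02                     # assumed natural frequency and damping
w = np.linspace(5.0, 15.0, 20001)
h = 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)

amp = np.abs(h)
ipk = int(np.argmax(amp))                 # peak amplitude -> resonant frequency
wr = w[ipk]

# Half power points: amplitude 3 dB down, i.e. peak / sqrt(2)
half = amp[ipk] / np.sqrt(2.0)
w1 = w[:ipk][amp[:ipk] >= half][0]        # first point above half power, left side
w2 = w[ipk:][amp[ipk:] >= half][-1]       # last point above half power, right side

zeta_est = (w2 - w1) / (2.0 * wr)         # the 3 dB rule of Eqn 15-33
print(wr, zeta_est)
```

For a lightly damped mode like this one, the estimates come out close to the assumed values of 10 and 0.02.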
15.3.2
Mode picking
If you assume that the modes are uncoupled and lightly damped, the modal amplitude can be computed from the peak quadrature or peak amplitude of the FRF. With this assumption, the data in the neighborhood of the resonant frequency can be approximated by

    hij,n ≈ rijk / (jωn − λk)    Eqn 15-34

so that at the resonant frequency the modal amplitude is

    |hij,k| ≈ |rijk| / σk    Eqn 15-35

Note that from the modal amplitude a residue or mode shape estimate is obtained by multiplying by the modal damping σk.
To use the Mode picking method you must have an estimate of σk. This estimate can be obtained with the Peak picking method (see section 15.3.1) or other techniques.
The Mode Picking method is obviously quite sensitive to frequency shifts in the data. If for example the resonant frequency of a mode in a data record is shifted a few spectral lines with respect to the frequency that is used as resonant frequency for that mode, then the modal amplitude would be erroneously picked. To accommodate situations where frequency shifts occur, you need to specify an allowed frequency shift around the resonant frequencies ωk that are used to calculate the modal amplitudes. Rather than picking the modal amplitude at the resonant frequencies, the method now scans a band around each modal frequency for each data record. The maximum amplitude in this band is used to determine the modal amplitude and thus the mode shape coefficient.
Mode picking allows you to make a very quick determination of a modal model. The accuracy of this model however depends on how well the assumptions of the method were applicable to the data.
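The band scan can be sketched as follows; the pole, the residues per response DOF and the band width are all assumptions for illustration:

```python
import numpy as np

# Two response FRFs sharing one mode; the residues differ per response DOF
wn, sigma = 10.0, 0.1                       # assumed pole: lam = -sigma + j*wn
lam = -sigma + 1j * wn
w = np.linspace(8.0, 12.0, 4001)
residues = {1: 2.0, 2: -1.5}                # assumed r_ijk for response DOFs 1, 2
frf = {i: r / (1j * w - lam) + np.conj(r) / (1j * w - np.conj(lam))
       for i, r in residues.items()}

wr_guess = 10.05                            # resonant frequency, slightly off
band = 0.2                                  # allowed frequency shift
sel = (w > wr_guess - band) & (w < wr_guess + band)

shape = {}
for i, h in frf.items():
    modal_amp = np.max(np.abs(h[sel]))      # scan the band for the peak amplitude
    shape[i] = modal_amp * sigma            # residue estimate: amplitude x damping
print(shape)
```

Even though the guessed resonant frequency is a little off, scanning the band still picks the true peak, and multiplying by the damping recovers the residue magnitudes (2.0 and 1.5).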
15.3.3
Circle fitting
The Circle fitting method is based on estimating a circle in the complex plane through data points in a band around a selected mode. The method was originally developed by Kennedy and Pancu for lightly damped systems under the single DOF assumption. In the band around a mode, the data can be approximately described by

    hij,n ≈ rijk / (jωn − λk) + r*ijk / (jωn − λ*k)    Eqn 15-36

which, lumping the contribution of the conjugate pole and of neighboring modes into a complex constant R + jI, can be written as

    hij,n ≈ (R + jI) + (U + jV) / (σ + j(ωn − ωd))    Eqn 15-37

with phase angle

    φ = arctan(V / U)

Figure 15-6 In the complex plane (Re(h) against Im(h)) the data trace a circle with diameter proportional to √(U^2 + V^2), displaced from the origin by the offset (R, I) and rotated over the angle φ; the damped natural frequency ωd lies at the point of maximum angular sweep rate along the circle.
Having determined the natural frequency and assuming a lightly damped system, the damping is given by equations 15-38 to 15-40: the damping factor follows from the rate at which the phase angle φ sweeps through resonance,

    σ = −1 / (dφ/dω) at ω = ωd    Eqn 15-38

from which the critical damping ratio is obtained as

    ζ = σ / √(σ^2 + ωd^2)    Eqn 15-40
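One simple way to estimate such a circle is an algebraic least squares (Kasa) fit; this particular formulation is an illustrative assumption, not the Kennedy-Pancu procedure itself:

```python
import numpy as np

# Single-mode term: near resonance 1/(sigma + j(w - wn)) traces a circle
# of diameter 1/sigma in the complex plane (assumed pole values)
wn, sigma = 10.0, 0.1
lam = -sigma + 1j * wn
w = np.linspace(9.7, 10.3, 121)
h = 1.0 / (1j * w - lam)

x, y = h.real, h.imag
# Kasa fit: minimize || x^2 + y^2 + D*x + E*y + F || over D, E, F
A = np.column_stack([x, y, np.ones_like(x)])
b = -(x**2 + y**2)
(D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
cx, cy = -D / 2.0, -E / 2.0                 # circle center
radius = np.sqrt(cx**2 + cy**2 - F)
print(2 * radius)                           # diameter ~ 1/sigma = 10
```

On noise-free data the fitted diameter equals 1/σ exactly, so the circle geometry itself already carries the damping information.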
15.3.4
Complex mode indicator function (CMIF)
The FRF matrix can be written as a partial fraction expansion over the modes,

    [H(ω)] = Σ (r = 1 to 2N) {ψ}r Qr {L}r^T / (jω − λr)    Eqn 15-41

Or in matrix form as

    [H(ω)] = [Ψ] [diag( Qr / (jω − λr) )] [L]^T    Eqn 15-42
where
[H(ω)] = the FRF matrix of size Ni by No
[Ψ] = the mode shape matrix of size No by 2N
Qr = the scaling factor of mode r
At each spectral line the FRF matrix can also be decomposed by a singular value decomposition

    [H(ω)] = [U][S][V]^H    Eqn 15-43

where
[U] = the left singular matrix corresponding to the matrix of mode shape vectors
[S] = the diagonal singular value matrix
[V] = the right singular matrix corresponding to the matrix of modal participation vectors
In comparing equations 15-42 and 15-43, the mode shape and modal participation vectors in equation 15-42 are, through the singular value decomposition, scaled to be unitary vectors and the mass matrix in equation 15-43 is assumed to be an identity matrix, so that the orthogonality of modal vectors is still satisfied.
For any one mode, the natural frequency is the one where the maximum singular value occurs.
The Complex Mode Indicator Function is defined as the eigenvalues solved from the normal matrix, which is formed from the FRF matrix ([H]^H [H]) at each spectral line.

    [H]^H [H] = [V][S]^2[V]^H    Eqn 15-44

    CMIFk(ω) = λk(ω)    Eqn 15-45

where
λk(ω) = the kth eigenvalue of the normal FRF matrix at frequency ω
In practice the [H]^H [H] matrix is calculated at each spectral line and the eigenvalues are obtained. The CMIF is a plot of these values on a log scale as a function of frequency. The same number of CMIFs as there are references can be obtained. Distinct peaks indicate modes and their corresponding frequency, the damped natural frequency of the mode. This is illustrated in Figure 15-7.
Peaks in the CMIF function can be searched for automatically whilst taking into account criteria that are used to eliminate spurious peaks due to noise or measurement errors.
Figure 15-7 CMIF plotted on a log scale against frequency; distinct peaks mark the modes.
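The CMIF computation is one singular value decomposition per spectral line. The toy system below (poles, shapes and participation factors) is an assumption for illustration:

```python
import numpy as np

# FRF matrix H(w) of size No x Ni for a 2-mode, 2-reference system
No, Ni = 4, 2
rng = np.random.default_rng(0)
poles = np.array([-0.05 + 5j, -0.08 + 9j])     # assumed poles
shapes = rng.standard_normal((No, 2))          # mode shapes (columns)
parts = rng.standard_normal((Ni, 2))           # modal participation (columns)

w = np.linspace(1.0, 12.0, 1101)
H = np.zeros((len(w), No, Ni), dtype=complex)
for r, lam in enumerate(poles):
    R = np.outer(shapes[:, r], parts[:, r])    # residue matrix for mode r
    for k, wk in enumerate(w):
        H[k] += R / (1j * wk - lam) + R.conj() / (1j * wk - lam.conjugate())

# CMIF: singular values of H at each spectral line (the eigenvalues of
# [H]^H [H] in Eqn 15-44 are the squared singular values)
cmif = np.array([np.linalg.svd(Hk, compute_uv=False) for Hk in H])

# Local maxima of the first CMIF sit near the damped natural frequencies
ipk = [k for k in range(1, len(w) - 1)
       if cmif[k, 0] > cmif[k - 1, 0] and cmif[k, 0] > cmif[k + 1, 0]]
print(w[ipk])
```

The detected peak frequencies land near the imaginary parts of the assumed poles (5 and 9 rad/s), which is exactly how the CMIF plot is read.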
When the frequencies have been selected, equations 15-43 and 15-44 can be used to yield the complex conjugate of the modal participation factors [V], and the as yet unscaled mode shape vectors [U].
The unscaled mode shape vectors and the modal participation factors are used to generate an enhanced FRF for each mode (r), defined by

    HEr(ω) = {U}r^H [H(ω)] {V}r    Eqn 15-46

Since the mode shape vectors and modal participation factors are normalized to unitary vectors by the singular value decomposition, the enhanced FRF is actually the decoupled single mode response function

    HEr(ω) ≈ Qr / (jω − λr)    Eqn 15-47
A single degree of freedom method (such as the circle fitter technique) can now
be applied to improve the accuracy of the natural frequency estimate and then
to extract damping values and the scaling factor for the mode shape.
Figure 15-8 A CMIF (log scale against frequency) together with the amplitude of the enhanced FRF of one mode.
One CMIF can be calculated for each reference DOF. They can be sorted in terms of the magnitude of the eigenvalues. They can all be plotted as a function of frequency as shown in the example in Figure 15-9.
Figure 15-9 First and second CMIF (CMIF_1 and CMIF_2) plotted on a log scale against frequency.
When two CMIF curves approach each other closely they may cross over, so that above the crossing the mode shapes represented by CMIF_1 and CMIF_2 are interchanged. This can be detected with a cross over check based on the Modal Assurance Criterion: comparing the singular vectors on either side of the crossing (a, b), values such as MAC(1a,2b) ≈ 1 and MAC(2a,2b) ≈ 0 indicate that the curves have switched, while MAC(2a,2b) ≈ 1 and MAC(1a,2b) ≈ 0 indicate that they have not.
Peak picking can be facilitated by using tracked CMIFs. This alters the display of the CMIFs: when the mode shapes represented by the two CMIFs are switched, the CMIFs are also switched. This is determined by the cross over check described above.
Figure 15-10 Example of first and second order tracked CMIFs (log scale against frequency)
15.3.5
Least squares complex exponential (LSCE)
An impulse response can be written as a sum of damped complex exponentials over the modes,

    hij,n = Σ (k = 1 to N) ( rijk e^(λk nΔt) + r*ijk e^(λ*k nΔt) )    Eqn 15-48

These exponentials are the characteristic solutions of a finite difference equation with constant coefficients,

    hij,n + a1 hij,n−1 + . . . + a2N hij,n−2N = 0    Eqn 15-49

so that the terms e^(λk Δt), together with their complex conjugates (Eqn 15-50), satisfy the associated characteristic equation

    Σ (l = 0 to 2N) al e^(−λk lΔt) = 0, with a0 = 1    Eqn 15-51

Turning the reasoning around therefore, one could first try to estimate the coefficients in equation 15-49 using all available data. Estimates of the complex exponential coefficients λk can then be found by solving equation 15-51. In terms of base numbers, equation 15-48 becomes

    hij,n = Σ (k = 1 to N) ( rijk zk^n + r*ijk z*k^n ), with zk = e^(λk Δt)    Eqn 15-52

Instead of damped complex exponentials, the characteristics are now power series with base numbers zk.
244
The finite difference equation can then be written as

    hij,n = −( a1 hij,n−1 + . . . + a2N hij,n−2N )    Eqn 15-53

and, once the coefficients al are known, the base numbers zk follow as the roots of the characteristic polynomial

    z^2N + a1 z^(2N−1) + . . . + a2N = 0    Eqn 15-54

Since the coefficients al are global characteristics, equation 15-53 can be written for every time sample of every impulse response and stacked into one overdetermined system

    [ h11,2N−1      . . .  h11,0         ]   [ a1  ]       [ h11,2N      ]
    [ h11,2N        . . .  h11,1         ]   [ a2  ]       [ h11,2N+1    ]
    [    .                    .          ] x [  .  ]  = −  [    .        ]
    [ hij,n−1       . . .  hij,n−2N      ]   [  .  ]       [ hij,n       ]
    [    .                    .          ]   [  .  ]       [    .        ]
    [ hN0Ni,Nt−1    . . .  hN0Ni,Nt−2N   ]   [ a2N ]       [ hN0Ni,Nt    ]
        Eqn 15-55

where
Nt = last available time sample
N0 = number of response DOFs
Ni = number of input DOFs
We can write this in a simpler manner
The least squares solution is found from the normal equations

    [ r1,1   r1,2   . . .  r1,2N  ]   [ a1  ]       [ r1,0  ]
    [ r2,1   r2,2   . . .  r2,2N  ]   [ a2  ]       [ r2,0  ]
    [   .      .             .    ] x [  .  ]  = −  [   .   ]
    [ r2N,1    .    . . .  r2N,2N ]   [ a2N ]       [ r2N,0 ]
        Eqn 15-56

whose elements are inner products of the impulse response data, summed over all response DOFs, input DOFs and time samples

    rk,l = Σ (i = 1 to N0) Σ (j = 1 to Ni) Σ (n) ( hij,n−k hij,n−l )    Eqn 15-57
Building this covariance matrix is the first stage in applying the Least Squares Complex Exponential method. This phase is usually the most time consuming since all the available data is used to build the inner products expressed by equation 15-57.
Note that after solving equation 15-56 all that is required to calculate the estimates of modal frequency and damping is to substitute the estimated coefficients in equation 15-54 and to solve for zk.
Assuming, for example, a model with one mode yields, with corresponding least squares error ε1,

    [ r1,1  r1,2 ]   [ a1 ]       [ r1,0 ]
    [ r2,1  r2,2 ] x [ a2 ]  = −  [ r2,0 ]
and a model with two modes yields

    [ r1,1  r1,2  r1,3  r1,4 ]   [ a1 ]       [ r1,0 ]
    [ r2,1  r2,2  r2,3  r2,4 ]   [ a2 ]       [ r2,0 ]
    [ r3,1  r3,2  r3,3  r3,4 ] x [ a3 ]  = −  [ r3,0 ]
    [ r4,1  r4,2  r4,3  r4,4 ]   [ a4 ]       [ r4,0 ]
with corresponding least squares error ε2, and so on. Now if a model is assumed with a number of modes equal to the number of modes that is present in the data, then the corresponding least squares error should be significantly smaller than the error for models with fewer modes.
A diagram that plots the least squares error for an increasing number of modes is called the least squares error chart. Figure 15-11 shows a typical diagram if data is analyzed for a system with 4 modes (and 4 modes are observable from the data!).
Noise on the data may cause the error diagram to show a significant drop at a certain number of modes, followed by a continued decrease of the error as the number of modes is increased. The problem now is to determine how many extra modes, or so called computational modes, are to be considered to compensate for the noise on the data so that the best estimates of modal frequency and damping can be obtained. This problem is also illustrated in Figure 15-11.
Figure 15-11 Least squares error against number of modes, without and with noise on the data.
To determine the optimal number of modes you could try to compare frequency and damping estimates that are calculated from models with various numbers of modes. Physical intuition would lead you to expect that estimates of frequency and damping corresponding to true structural modes should recur (in approximately the same place) as the number of modes is increased. Computational modes will not reappear with identical frequency and damping. A diagram that shows the evolution of frequency and damping as the number of modes is increased is called a Stabilization diagram. The optimal number of modes can then be seen as those modes for which the frequency and damping values of the physical modes do not change significantly. In other words, those which have stabilized.
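The essence of such a diagram can be sketched with LSCE-style fits of increasing order; all numerical values below are illustrative assumptions:

```python
import numpy as np

# Impulse response of a system with two assumed modes (7 Hz and 13 Hz)
dt = 0.01
n = np.arange(500)
true_poles = [-0.5 + 2j * np.pi * 7, -0.8 + 2j * np.pi * 13]
h = sum((np.exp(p * dt * n) + np.exp(np.conj(p) * dt * n)).real for p in true_poles)

def lsce_poles(h, order):
    # h[k+order] + a1*h[k+order-1] + ... + a_order*h[k] = 0, solved in LS sense
    rows = len(h) - order
    A = np.array([[h[k + order - 1 - j] for j in range(order)] for k in range(rows)])
    b = -h[order:order + rows]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    z = np.roots(np.concatenate(([1.0], a)))
    z = z[np.abs(z) > 1e-12]
    return np.log(z) / dt                      # continuous-time pole estimates

# Compare pole estimates from models of increasing order: physical poles
# recur, computational poles do not
prev = lsce_poles(h, 4)
for order in (6, 8, 10):
    cur = lsce_poles(h, order)
    stable = [p for p in cur if p.imag > 0 and
              min(abs(p - q) for q in prev) < 0.05 * abs(p)]
    prev = cur
freqs = sorted(p.imag / (2 * np.pi) for p in stable)
print(freqs)
```

At every order the poles near 7 Hz and 13 Hz reappear in approximately the same place and are flagged as stabilized, while any extra computational poles wander.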
Figure 15-12 A stabilization diagram: the number of modes in the model is plotted against frequency, overlaid on an FRF amplitude. At each model order a symbol marks every identified pole (for instance o for a new pole, and v, f, d and s for poles whose vector, frequency, damping or all of these have stabilized); vertical runs of s symbols indicate physical modes.
Example
Let two data records be measured on a system, both shown in Figure 15-13.
Figure 15-13 The two measured records h11 and h21 as a function of time t.
Let four data samples be measured of which the values are listed in the Table
below.
    n    |  0 |  1 |  2 |  3
    h11  |  1 |  0 | -1 |  0
    h21  |  0 |  1 |  0 | -1
Consider a model for 1 mode (N = 1). Equations 15-55 and 15-56 become respectively

    [  0   1 ]              [  1 ]
    [ -1   0 ]   [ a1 ]     [  0 ]
    [  1   0 ] x [ a2 ]  =  [  0 ]
    [  0   1 ]              [  1 ]

    [ 2  0 ]   [ a1 ]     [ 0 ]
    [ 0  2 ] x [ a2 ]  =  [ 2 ]

The solution is therefore a1 = 0, a2 = 1. Now equation 15-54 is used to calculate zk and so λk,

    z^2 + 1 = 0,  z = ±j

The frequency and damping values follow from z = e^(λΔt):

    z = +j :  σ = 0,  ω = π / (2Δt)
    z = −j :  σ = 0,  ω = −π / (2Δt)

The solution indicates a mode with a period 4Δt and zero damping. This is compatible with the trend of the curves as shown in Figure 15-13.
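The arithmetic of this worked example can be checked numerically (Δt taken as 1 for convenience):

```python
import numpy as np

# The four samples of the two records from the table above
records = {"h11": [1, 0, -1, 0], "h21": [0, 1, 0, -1]}

# One-mode model (N = 1): h[n+2] + a1*h[n+1] + a2*h[n] = 0 for both records
A = np.array([[h[n + 1], h[n]] for h in records.values() for n in range(2)])
b = np.array([-h[n + 2] for h in records.values() for n in range(2)])
a, *_ = np.linalg.lstsq(A, b, rcond=None)   # normal equations give a1=0, a2=1

dt = 1.0
z = np.roots([1.0, a[0], a[1]])             # z^2 + a1*z + a2 = 0  ->  z = +-j
lam = np.log(z) / dt                        # sigma = 0, omega = pi/(2*dt)
period = 2 * np.pi / abs(lam[0].imag)
print(a, z, period)                         # period of 4*dt, zero damping
```

The least squares solve reproduces a1 = 0, a2 = 1, and the recovered pole has zero real part with a period of 4Δt, as derived by hand above.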
For multiple reference (polyreference) data, the row of impulse responses between response DOF i and all references can be written as

    {H}i,n = Σ (k = 1 to N) ( vik {L}k^T e^(λk nΔt) + v*ik {L}*k^T e^(λ*k nΔt) )    Eqn 15-58

where
{H}i = Ni vector (row) of IRs between output DOF i and all input DOFs
{L}k = vector of modal participation factors for mode k. If Ni reference DOFs are assumed then {L}k is of dimension Ni
vik = the mode shape coefficient of mode k at response DOF i
Note that in this model, frequency, damping and modal participation factors are independent of the particular response DOF. It should therefore be possible to estimate these coefficients using all the available data simultaneously. In terms of base numbers,

    {H}i,n = Σ (k = 1 to N) ( vik {L}k^T zk^n + v*ik {L}*k^T z*k^n ), with zk = e^(λk Δt)    Eqn 15-59

It can be proved that if the data can be described by equation 15-59, it can also be described by the following model
    {H}i,n + {H}i,n−1 [A1] + . . . + {H}i,n−p [Ap] = 0    Eqn 15-60

    {L}k^T zk^n [I] + {L}k^T zk^(n−1) [A1] + . . . + {L}k^T zk^(n−p) [Ap] = {0}^T    Eqn 15-61

    p Ni ≥ 2N    Eqn 15-62
(The proof of this follows from basic calculus along the same lines as for the Least Squares Complex Exponential in section 15.3.5.)
Equation 15-60 represents, in matrix notation, a coupled set of Ni finite difference equations with constant coefficients. The coefficients A1 . . . Ap are therefore matrices of dimension (Ni x Ni).
The condition expressed by equation 15-61 states that the terms {L}k^T zk^n are characteristic solutions of this system of finite difference equations. As equation 15-59 is a superposition of 2N of such terms, it is essential that the number of characteristic solutions of this system of equations, p Ni, at least equals 2N, as expressed by equation 15-62.
Note finally, that if data for each reference DOF is treated individually, i.e. Ni = 1, then equations 15-60 and 15-61 simplify to equations 15-53 and 15-54. Thus the least squares complex exponential method is a special case of the multiple input least squares complex exponential method.
Writing equation 15-60 for all available time samples and all response DOFs again yields an overdetermined system for the coefficient matrices

    [ {H}1,p−1     . . .  {H}1,0      ]   [ [A1] ]       [ {H}1,p     ]
    [    .                   .        ]   [ [A2] ]       [    .       ]
    [ {H}i,n−1     . . .  {H}i,n−p    ] x [   .  ]  = −  [ {H}i,n     ]
    [    .                   .        ]   [ [Ap] ]       [    .       ]
    [ {H}N0,Nt−1   . . .  {H}N0,Nt−p  ]                  [ {H}N0,Nt   ]
        Eqn 15-63

The least squares solution is again found from the normal equations, built from a covariance matrix with block elements
    [R]k,l = Σ (i = 1 to N0) Σ (n = p to Nt) ( [H]i,n−k^T [H]i,n−l )    Eqn 15-64

    [ [R]1,1  [R]1,2  . . .  [R]1,p ]   [ [A1] ]       [ [R]1,0 ]
    [ [R]2,1  [R]2,2  . . .  [R]2,p ]   [ [A2] ]       [ [R]2,0 ]
    [    .       .              .   ] x [   .  ]  = −  [    .   ]
    [ [R]p,1     .    . . .  [R]p,p ]   [ [Ap] ]       [ [R]p,0 ]
        Eqn 15-65
The order (p) of the finite difference equation is related to the number of modes in the data by equation 15-62. It is preferable that this be determined by the method itself. As the coefficients of the finite difference equation are solved for in a least squares sense, this can be done by observing the least squares error as a function of the assumed order. As an order is reached such that the model can describe as many modes as are present in the data, the error should drop considerably.
Due to the condition expressed by equation 15-62 there is no linear relation between the number of modes that can be described by the model and the order of the model. The relation between the number of modes, the order of the model and the number of reference DOFs is listed in Table 15.2. It can be seen that a model of order 8 can describe 11 or 12 modes if data for 3 inputs are analyzed simultaneously. In the error diagrams therefore the same least squares error is shown for 11 and 12 modes.
As for the Least Squares Complex Exponential method, a stabilization diagram can again be created to determine the optimal number of modes. As well as comparing frequency and damping values calculated from models of consecutive order it is now also possible to compare the stabilization of modal participation factors. In section 15.2.3, the modal participation factors were shown to be proportional to the mode shape coefficients at the reference DOFs. They also represent a physical characteristic of the structure like the frequency and damping. Therefore, the values corresponding to structural modes should also stabilize as the order of the model is increased. This additional criterion adds much to the readability of the stabilization diagram and to the ability to distinguish computational modes from physical modes.
Additionally, the modal participation factors can be used by themselves to identify physical modes. If they are normalized with respect to the largest, the values should all be approximately real, in phase or in anti-phase, for structural modes.
Table 15.2 Model order required to describe a given number of modes, for 1 to 6 reference DOFs (p Ni ≥ 2N)

    modes | Ni=1 | Ni=2 | Ni=3 | Ni=4 | Ni=5 | Ni=6
      1   |   2  |   1  |   1  |   1  |   1  |   1
      2   |   4  |   2  |   2  |   1  |   1  |   1
      3   |   6  |   3  |   2  |   2  |   2  |   1
      4   |   8  |   4  |   3  |   2  |   2  |   2
      5   |  10  |   5  |   4  |   3  |   2  |   2
      6   |  12  |   6  |   4  |   3  |   3  |   2
      7   |  14  |   7  |   5  |   4  |   3  |   3
      8   |  16  |   8  |   6  |   4  |   4  |   3
      9   |  18  |   9  |   6  |   5  |   4  |   3
     10   |  20  |  10  |   7  |   5  |   4  |   4
     11   |  22  |  11  |   8  |   6  |   5  |   4
     12   |  24  |  12  |   8  |   6  |   5  |   4
     13   |  26  |  13  |   9  |   7  |   6  |   5
     14   |  28  |  14  |  10  |   7  |   6  |   5
     15   |  30  |  15  |  10  |   8  |   6  |   5
     16   |  32  |  16  |  11  |   8  |   7  |   6
     17   |  34  |  17  |  12  |   9  |   7  |   6
     18   |  36  |  18  |  12  |   9  |   8  |   6
     19   |  38  |  19  |  13  |  10  |   8  |   7
     20   |  40  |  20  |  14  |  10  |   8  |   7
     21   |  42  |  21  |  14  |  11  |   9  |   7
     22   |  44  |  22  |  15  |  11  |   9  |   8
     23   |  46  |  23  |  16  |  12  |  10  |   8
     24   |  48  |  24  |  16  |  12  |  10  |   8
     25   |  50  |  25  |  17  |  13  |  10  |   9
     26   |  52  |  26  |  18  |  13  |  11  |   9
     27   |  54  |  27  |  18  |  14  |  11  |   9
     28   |  56  |  28  |  19  |  14  |  12  |  10
     29   |  58  |  29  |  20  |  15  |  12  |  10
     30   |  60  |  30  |  20  |  15  |  12  |  10
     31   |  62  |  31  |  21  |  16  |  13  |  11
     32   |  64  |  32  |  22  |  16  |  13  |  11
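Each table entry follows from equation 15-62: the minimum order for N modes with Ni references is the smallest integer p with p·Ni ≥ 2N. A one-line check:

```python
import math

def min_order(n_modes, n_inputs):
    # Smallest p such that p * Ni >= 2N (Eqn 15-62)
    return math.ceil(2 * n_modes / n_inputs)

print(min_order(12, 3), min_order(11, 3))   # both 8, as noted in the text
```

This reproduces the observation above that a model of order 8 with 3 inputs describes 11 or 12 modes.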
Example
To clarify the method, consider again the example discussed on page 248. Let the example system satisfy reciprocity so that h12 is also equal to h21. The vector [h11 h12] then represents the data between response DOF 1 and reference DOFs 1 and 2.
Considering a model for 1 mode (so p = 1, as Ni = 2), equations 15-55 and 15-56 become respectively
Part IV
253
With {H}n = [h11,n h12,n], equation 15-60 of order p = 1 reads {H}n + {H}n−1 [A1] = {0}^T, and stacking the available samples gives

    [  1   0 ]               [  0   1 ]
    [  0   1 ] x [A1]  =  −  [ -1   0 ]
    [ -1   0 ]               [  0  -1 ]

so that

    [A1] = [ 0  -1 ]
           [ 1   0 ]

The characteristic values z then follow from

    det( z [I] + [A1] ) = z^2 + 1 = 0,  z = ±j

so that again σ = 0 and ω = ±π / (2Δt), with corresponding left eigenvectors (modal participation factors)

    {L}^T ∝ [ ±j  1 ]  for  z = ±j

Notice that the solution for the frequency and damping is the same as found with the Least Squares Complex Exponential (see page 249). In addition you also find an estimate of the modal participation factors. For this example they indicate that there should be a phase difference of 90° in the system response between excitation from reference DOFs 1 and 2, as h11 is a cosine and h12 a sine. This estimate seems to be correct.
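The polyreference arithmetic can likewise be verified numerically (using one of several possible sign conventions):

```python
import numpy as np

# Rows are {H}n = [h11_n, h12_n] for n = 0..3
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])

# {H}n + {H}n-1 [A1] = 0  ->  least squares for the 2x2 coefficient matrix
A1, *_ = np.linalg.lstsq(H[:-1], -H[1:], rcond=None)

# Characteristic values z from det(z[I] + [A1]) = 0, i.e. eigenvalues of -A1
z = np.linalg.eigvals(-A1)
print(A1)               # [[0, -1], [1, 0]]
print(sorted(z.imag))   # the same +-j poles as in the single-reference example
```

The least squares solve is exact here, and the eigenvalue step recovers the undamped pole pair ±j directly from the 2x2 coefficient matrix.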
15.3.6
Least squares frequency domain (LSFD)
In the frequency domain, the FRF can be written as a sum over the modes

    hij(ω) = Σ (k = 1 to N) ( rijk / (jω − λk) + r*ijk / (jω − λ*k) )    Eqn 15-66

If estimates of the modal frequency and damping are available, then the residues appear linearly as unknowns in this model.
In practice, modes below and above the analyzed frequency band also contribute to the measured FRF. Taking these contributions into account with residual terms, the model becomes

    hij(ω) ≈ Σ (k = 1 to N) ( rijk / (jω − λk) + r*ijk / (jω − λ*k) ) − lrij / ω^2 + urij    Eqn 15-67

where
urij = an upper residual term used to approximate modes at frequencies above ωmax
lrij = a lower residual term used to approximate modes at frequencies below ωmin
These are illustrated in Figure 15-3. Note that the residues as well as the lower and upper residuals are local characteristics; in other words, they depend on the particular response and reference DOF.
The Least Squares Frequency Domain method is based on the model expressed by equation 15-67. Least squares estimates of residues, lower and upper residuals are calculated by analyzing all data values in a selected frequency range.
For multiple reference data the corresponding time domain model is

    {H}i(t) = Σ (k = 1 to N) ( vik {L}k^T e^(λk t) + v*ik {L}*k^T e^(λ*k t) )    Eqn 15-68

which transforms to the frequency domain, with residual terms, as

    {H}i(ω) ≈ Σ (k = 1 to N) ( vik {L}k^T / (jω − λk) + v*ik {L}*k^T / (jω − λ*k) ) − {LR}i / ω^2 + {UR}i    Eqn 15-69

where
{UR}i = upper residuals between response DOF i and all reference DOFs, vector of dimension Ni
{LR}i = lower residuals between response DOF i and all reference DOFs, vector of dimension Ni
The multiple input LSFD method is based on equation 15-69.
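Because the unknowns enter linearly once the poles are fixed, a single linear least squares solve recovers residues and residual terms. The pole and the "true" values below are assumptions for the sketch:

```python
import numpy as np

# With the pole known, residues and residuals enter Eqn 15-67 linearly
lam = -0.1 + 10j                      # known pole (from a time-domain method)
r_true, lr_true, ur_true = 2.0 - 0.5j, 3.0, 0.01
w = np.linspace(6.0, 14.0, 401)
h = (r_true / (1j * w - lam) + np.conj(r_true) / (1j * w - np.conj(lam))
     - lr_true / w**2 + ur_true)

# Unknowns: Re(r), Im(r), lr, ur; h is linear in each of them
base = 1.0 / (1j * w - lam) + 1.0 / (1j * w - np.conj(lam))   # multiplies Re(r)
basej = 1j / (1j * w - lam) - 1j / (1j * w - np.conj(lam))    # multiplies Im(r)
A = np.column_stack([base, basej, -1.0 / w**2, np.ones_like(w)])

# Solve in real arithmetic by stacking real and imaginary parts
Ar = np.vstack([A.real, A.imag])
br = np.concatenate([h.real, h.imag])
x, *_ = np.linalg.lstsq(Ar, br, rcond=None)
print(x)    # ~ [2.0, -0.5, 3.0, 0.01]
```

On noise-free data the solve is exact; with measured data the same linear system simply becomes an overdetermined least squares problem over the selected frequency range.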
15.3.7
Frequency domain direct parameter identification (FDPI)
Theoretical background
The basis of the FDPI method is the second order differential equation for mechanical structures

    [M]{x''(t)} + [C]{x'(t)} + [K]{x(t)} = {f(t)}    Eqn 15-70

When transformed into the frequency domain, this equation can be reformulated in terms of measured FRFs

    ( −ω^2 [I] + jω [A1] + [A0] ) [H(ω)] = [B]    Eqn 15-71

where
ω = frequency variable
and [A0], [A1] and [B] are system matrices estimated directly from the data. The poles and vectors then follow from the eigenvalue problem of the companion matrix

    [  [0]    [I]  ]
    [ −[A0] −[A1]  ]    Eqn 15-72

This will yield the diagonal matrix [Λ] of poles and a matrix of eigenvectors [Θ]. It will become clear from the following section that the matrix [Θ] thus obtained is not equal to the matrix of mode shapes, although it is related to it.
In a final step, the modal participation factors are estimated from another least squares problem, using the obtained [Λ] and [Θ] matrices.
Data reduction
Prior to estimating the system matrix, all available data are condensed via a projection on their principal components. For all response stations, a maximum of Nm principal components are first calculated and then analyzed. The obtained matrix represents the modal matrix for this set of fictitious response stations.
The data reduction procedure offers the following advantages:
Modes outside the analysis band are accounted for by residual terms of the form

    ω^2 [C2] + ω [C1] + [C0] + [C−1] / ω + [C−2] / ω^2    Eqn 15-73

The presence of these residual terms will influence the estimates for frequency, damping and mode shapes (as well as the modal participation factors for multiple input analysis).
Determining the optimum number of modes
As with the Least Squares Complex Exponential (LSCE) method, a least squares error chart can be built to determine the optimal number of modes in the selected frequency band. Because of the principal component projection, this chart may look somewhat different. For small models, only the first (most important) principal data are used, and the global error will decrease drastically. As more and more principal components are included by estimating more modes, their information becomes less important, which may distort the least squares error chart.
A more reliable tool for estimating the optimal number of modes for the FDPI technique is the singular values diagram. As an alternative to the error diagram, and to some extent to the stabilization diagram too, the rank of the calculated covariance matrix can be determined. The rank of the matrix is also a good indication of the optimal number of modes to be used in the analysis.
The rank of the matrix can be determined using a singular value decomposition. A diagram showing the normalized singular values in ascending order is called a singular values diagram: the rank of the matrix is determined at the point where the singular values become significantly smaller compared to the previous values.
When building a stabilization diagram (see the LSCE method, page 247), the same data are described by models of increasing order. An updating procedure is implemented to save calculation time.
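The rank decision from a singular values diagram can be sketched as follows; matrix sizes and the noise level are assumptions:

```python
import numpy as np

# A 20x20 covariance-like matrix of rank 4, plus a small noise floor
rng = np.random.default_rng(2)
modes = rng.standard_normal((20, 4))
cov = modes @ np.diag([5.0, 3.0, 2.0, 1.0]) @ modes.T
cov += 1e-8 * rng.standard_normal((20, 20))

s = np.linalg.svd(cov, compute_uv=False)
s = s / s[0]                                   # normalized singular values

# Rank = position of the largest drop between consecutive values (log scale)
gaps = np.diff(np.log10(s))
rank = int(np.argmin(gaps)) + 1
print(rank)   # 4
```

The drop of several orders of magnitude between the fourth and fifth normalized singular value is exactly what the singular values diagram makes visible.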
Pseudo DOFs for small measurement sets
Due to the type of identification algorithm, the FDPI technique can only estimate as many modes in the model as there are measurement Degrees of Freedom. This means that normally

    Nm ≤ N0

However, using a similar approach as for the time domain LSCE method, it is possible to create so-called "pseudo" Degrees of Freedom from the measurements that are available, thus generating enough "new" measurements to allow a full identification on as few as one measurement.
it is very fast (no least squares solution is required, as for the LSFD method)
If the mode shape expansion method is not employed then the LSFD technique is used to estimate mode shapes.
Normal modes
From the meaning of the matrices [A0] and [A1] and the eigenvalue problem (15-72), it is possible to estimate damped (generally complex) mode shapes, or undamped real normal modes.
Normal modes can be identified via the FDPI technique by solving an eigenvalue problem for the reduced mass and stiffness matrices only

    [M]^(−1) [K] {φn} = ωn^2 {φn}    Eqn 15-74

This eigenvalue problem is very much related to the one that is solved by FEM software packages that ignore the damping contribution in a system. This is an entirely different approach to the one that is used to estimate real modes via the LSFD technique. The latter technique estimates the real-valued mode shape coefficients that curve-fit the data set in a best least squares sense (proportional damping assumed), while the FDPI method uses an FEM-like approach.
Damping values are computed by applying a circle-fitter to enhanced FRFs for each mode. The enhanced FRFs are calculated by projecting the principal FRFs on the reduced mode shapes.
15.4
Maximum likelihood estimation
One motivation for this method is that the polyreference LSCE estimator does not always work well when the number of references (inputs) is larger than 3, for example.
15.4.1
Theoretical aspects
A scalar matrix-fraction description, better known as a common-denominator model, will be used. The Frequency Response Function (FRF) between output o and input i is modeled as

    H^oi(f) = Noi(f) / D(f)    Eqn 15-75

for i = 1, . . . , Ni and o = 1, . . . , No,
with numerator polynomials Noi(f) = Σ (j = 0 to n) Ωj(f) Boij and common denominator polynomial D(f) = Σ (j = 0 to n) Ωj(f) Aj, where the Ωj(f) are the polynomial basis functions and n is the polynomial order.
260
Replacing the model H^oi(f) in equation 15-75 by the measured FRF Hoi(f) gives, after multiplication with the denominator polynomial,

    Σ (j = 0 to n) Ωj(f) Boij − Σ (j = 0 to n) Ωj(f) Hoi(f) Aj ≈ 0    Eqn 15-76

for i = 1, . . . , Ni, o = 1, . . . , No and f = 1, . . . , Nf.
Note that equation 15-76 can be multiplied with a weighting function Woi(f). The quality of the estimate can often be improved by using an adequate weighting function.
As the elements in equation 15-76 are linear in the parameters, they can be reformulated as

    [ X1   0   . . .   0       Y1     ]   [ B1     ]
    [ 0    X2  . . .   0       Y2     ]   [ B2     ]
    [ .    .           .       .      ] x [ .      ]  ≈ 0    Eqn 15-77
    [ 0    0   . . .  XN0Ni   YN0Ni   ]   [ BN0Ni  ]
                                          [ A      ]

with

    A = [ A0 ]        Boi = [ Boi0 ]
        [ A1 ]              [ Boi1 ]
        [ .  ]              [ .    ]
        [ An ]              [ Boin ]

The matrix in equation 15-77 has Nf No Ni rows and (n+1)(No Ni + 1) columns (with Nf >> n, where n is the order of the polynomials). Because every element in equation 15-76 has been weighted with Woi(f), the Xk's in equation 15-77 can all be different.
The ML equations
Assuming the different FRFs to be uncorrelated, the (negative) log-likelihood function reduces to

    lML(θ) = Σ (o = 1 to No) Σ (i = 1 to Ni) Σ (f = 1 to Nf) | Hoi(f) − H^oi(f, θ) |^2 / var{Hoi(f)}    Eqn 15-78
This cost function is minimized iteratively with a Gauss-Newton algorithm: at each step the linearized least squares problem is solved and θm+1 = θm + δθm, with rm = r(θm) the residual vector and Jm = ∂r(θ)/∂θ evaluated at θm. The uncertainty on the estimates then follows from the Cramér-Rao lower bound

    CRLB ≈ [ Jm^H Jm ]^(−1)    Eqn 15-79

with Jm the Jacobian matrix evaluated in the last iteration step of the Gauss-Newton algorithm. As one is mainly interested in the uncertainty on the resonance frequencies and damping ratios, only the covariance matrix of the denominator coefficients is in fact required.
Hence, it is not necessary to invert the full matrix to obtain the uncertainty on the poles (or on the resonance frequencies and the damping ratios).
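The Gauss-Newton update and the [J^H J]^(−1) covariance estimate can be sketched on a toy damped-cosine fit (the model, noise level and starting values are assumptions, not the FRF model of this section):

```python
import numpy as np

# Data: damped cosine with assumed true parameters [0.3, 4.0] plus noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
y = np.exp(-0.3 * t) * np.cos(4.0 * t) + 0.01 * rng.standard_normal(t.size)

theta = np.array([0.25, 4.05])                # starting values [sigma, omega]
for _ in range(20):
    s, o = theta
    model = np.exp(-s * t) * np.cos(o * t)
    r = y - model                             # residual vector r(theta_m)
    # Jacobian J = d r / d theta (two columns: d/d sigma and d/d omega)
    J = np.column_stack([t * model,
                         t * np.exp(-s * t) * np.sin(o * t)])
    delta = np.linalg.solve(J.T @ J, J.T @ r) # Gauss-Newton step
    theta = theta - delta

# Covariance estimate ~ [J^T J]^-1, scaled by the residual variance
cov = np.linalg.inv(J.T @ J) * (r @ r) / (t.size - 2)
print(theta, np.sqrt(np.diag(cov)))
```

The iteration converges to the assumed parameters, and the diagonal of the scaled [J^T J]^(−1) matrix gives the standard deviations on the estimates, mirroring the CRLB reasoning above.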
15.5
Residual compensation modes
(Figure: the exact FRF is approximated by the superposition of the modal FRF, a static compensation term and a dynamic compensation term.)
The upper residual matrix is decomposed with a singular value decomposition

    [R upper residual] = [U][S][V]^T

The mode shape values of the static compensation mode (ψ) are related to the left singular vectors, the singular values and the frequency value (ω0)

    ψj = Uj √(sj ω0)

The participation factor values (L) can be derived from the mode shape values (ψ) and the modal mass mr

    Lj = ψj / (2 mr ω0)

or, equivalently, from the right singular vectors

    Lj = Vj √(sj / ω0) / 2
Chapter 16
Operational modal analysis
16.1
16.2
Theoretical aspects
This section describes the mathematical background to the methods used to identify modal parameters from operational data.
Over recent years, several modal parameter estimation techniques have been proposed and studied for modal parameter extraction from output-only data. These include
Auto-Regressive Moving Average models (ARMA)
the Natural Excitation Technique (NExT)
stochastic subspace methods.
The Natural Excitation Technique (NExT)
The underlying principle of the NExT technique is that correlation functions between the responses can be expressed as a sum of decaying sinusoids. Each decaying sinusoid has a damped natural frequency and damping ratio that is identical to the one of the corresponding structural mode. Consequently, conventional modal parameter techniques such as the polyreference Least-Squares Complex Exponential (LSCE) can be used for output-only system identification.
Stochastic subspace methods
With the subspace approach, first a reduced set of system states is derived, and then a state space model is identified. From the state space model, the modal parameters are derived. The terminology "subspace" comes mainly from control theory; it is a "family name" which groups methods that use Singular Value Decomposition in the identification process.
Two subspace techniques, referred to as the Balanced Realization (BR) and the Canonical Variate Analysis (CVA), are provided.
16.2.1
Stochastic subspace identification
The responses are modeled by a discrete-time stochastic state space model

    {xk+1} = [A]{xk} + {wk}
    {yk} = [C]{xk} + {vk}    Eqn 16-1

where {xk} is the state vector, {yk} the vector of measured responses, and {wk} and {vk} are zero-mean noise sequences.
For p and q large enough, the matrices [A] and [C] are respectively the state space matrix and the output matrix. Along with this model, the observability matrix [Op] of order p and the controllability matrix [Cq] of order q are defined:

    [Op] = [ [C]           ]  ;  [Cq] = [ [G]  [A][G]  . . .  [A]^(q−1)[G] ]
           [ [C][A]        ]
           [   .           ]
           [ [C][A]^(p−1)  ]
        Eqn 16-2

where [G] = E[{xk+1}{yk}^T] and E[.] denotes the expectation operator. The matrices [Op] and [Cq] are assumed to be of rank 2Nm, where Nm is the number of system modes.
The eigenvalue decomposition of [A] yields the discrete-time poles μr, from which the continuous-time poles follow as

    λr = ln(μr) / Δt    Eqn 16-3

    λr , λ*r = −σr ± jωr    Eqn 16-4

where σr is the damping factor and ωr the damped natural frequency of the r-th mode.
The damping ratio ζr of the r-th mode is given by

    ζr = σr / √(σr^2 + ωr^2)    Eqn 16-5
The mode shape {φ}r of the r-th mode at the sensor locations is the observed part of the system eigenvector {ψ}r of [A], given by the following equation

    {φ}r = [C]{ψ}r    Eqn 16-6

The extracted mode shapes cannot be mass-normalized, as this requires the measurement of the input force.
The identification starts from the output correlation matrices

    [Rk] = E[ {yn+k} {yn}^T ]    Eqn 16-7

which are gathered in a block Hankel matrix

    [Hp,q] = [ [R1]   [R2]   . . .  [Rq]     ]
             [ [R2]   [R3]   . . .  [Rq+1]   ]
             [   .      .             .      ]
             [ [Rp]   [Rp+1] . . .  [Rp+q−1] ]
        Eqn 16-8

Direct computation of the [Rk] from the model equations leads to the following factorization property

    [Hp,q] = [Op][Cq]    Eqn 16-9
Let [W1] and [W2] be two user-defined invertible weighting matrices of size pNresp and qNresp, respectively. Pre- and post-multiplying the Hankel matrix with [W1] and [W2] and performing a SVD decomposition on the weighted Hankel matrix gives the following

    [W1][Hp,q][W2] = [ [U1] [U2] ] [ [S1] [0] ] [ [V1] [V2] ]^T    Eqn 16-10
                                   [ [0]  [0] ]

where [S1] contains n non-zero singular values in decreasing order, the n columns of [U1] are the corresponding left singular vectors and the n columns of [V1] are the corresponding right singular vectors.
On the other hand, the factorization property of the weighted Hankel matrix results in

    [W1][Hp,q][W2] = ( [W1][Op] ) ( [Cq][W2] )    Eqn 16-11

From equations 16-10 and 16-11, it can be easily seen that the observability matrix can be recovered, up to a similarity transformation, as

    [Op] = [W1]^(−1) [U1] [S1]^(1/2)    Eqn 16-12

and similarly for the controllability matrix (Eqn 16-13). The state space matrix [A] then follows from the shift structure of the observability matrix

    [O↑p−1] = [Op−1][A]    Eqn 16-14

where [Op−1] is the matrix obtained by deleting the last block row of [Op] and [O↑p−1] is the matrix shifted up by one block row.
Different choices of weighting will lead to different stochastic subspace identification methods. Two particular choices for the weighting matrices give rise to the Balanced Realization and the Canonical Variate Analysis methods.
For the Balanced Realization method the weighting matrices are identity matrices,

    [W1] = [I] ;  [W2] = [I]    Eqn 16-15

so no weighting is involved.
For the Canonical Variate Analysis, block Toeplitz matrices of output correlations are formed,

    [L1] = [ [R0]     [R1]^T   . . .  [Rp−1]^T ]   and similarly [L2] of order q
           [ [R1]     [R0]     . . .     .     ]
           [   .        .                .     ]
           [ [Rp−1]   [Rp−2]   . . .  [R0]     ]
        Eqn 16-16

and the weighting matrices are chosen as

    [W1] = [L1]^(−1/2)    Eqn 16-17

    [W2] = [L2]^(−1/2)    Eqn 16-18
With this weighting, the singular values in equation 16-10 correspond to the
so-called canonical angles. A physical interpretation of the CVA weighting is
that the system modes are balanced in terms of energy. Modes which are less
well excited in operational conditions might be better identified.
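A Balanced Realization sketch on exact correlation matrices (one mode, two outputs; all numerical values are assumptions) shows the Hankel build, the SVD and the pole recovery:

```python
import numpy as np

# One mode -> 2 states; exact correlations R_k = C A^(k-1) G make the
# factorization H = Op * Cq exact, so the pole is recovered exactly
dt = 0.01
lam = -0.4 + 2j * np.pi * 6                       # assumed continuous-time pole
mu = np.exp(lam * dt)
Ad = np.array([[mu.real, mu.imag], [-mu.imag, mu.real]])   # eigenvalues mu, mu*
C = np.array([[1.0, 0.0], [0.3, 0.7]])            # 2 response stations
G = np.array([[0.5, 0.1], [0.2, 0.9]])            # assumed next-state/output covariance

p = q = 5
R = {k: C @ np.linalg.matrix_power(Ad, k - 1) @ G for k in range(1, p + q)}
H = np.block([[R[i + j + 1] for j in range(q)] for i in range(p)])

# BR weighting: W1 = W2 = I. Keep the n = 2 non-zero singular values
U, s, Vt = np.linalg.svd(H)
n = 2
Op = U[:, :n] * np.sqrt(s[:n])                    # observability matrix estimate

# Shift structure: dropping one block row (2 outputs) gives Op_low * A = Op_up
A_est, *_ = np.linalg.lstsq(Op[:-2], Op[2:], rcond=None)
mu_est = np.linalg.eigvals(A_est)
lam_est = np.log(mu_est) / dt
freqs = sorted(lam_est.imag / (2 * np.pi))
print(freqs, lam_est.real)                        # +-6 Hz, damping factor 0.4
```

Although [A] is only recovered up to a similarity transformation, its eigenvalues, and therefore the modal frequency and damping, are invariant, which is why the method works without knowing the states themselves.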
When only a subset of the responses is retained as references, the correlation matrices are estimated from the measured data as

    [Rk] = Σ (m) {ym+k} {ym,ref}^T    Eqn 16-19

with {y,ref} the vector of reference responses (equations 16-20 to 16-22 then restate the Hankel matrix and its factorization in terms of these reference-based correlations).
16.2.2
The NExT technique
The correlation matrices can be decomposed in terms of the system poles

    [Rk] = Σ (r = 1 to Nm) ( {φ}r {L}r^T μr^k + {φ}*r {L}*r^T μ*r^k )    Eqn 16-23

where μr = e^(λr Δt) and {L}r is a column vector of Nref constant multipliers which are constant for all response stations for the r-th mode.
(Note that in conventional modal analysis, these constant multipliers are the modal participation factors.)
The combinations of complex exponentials and constant multipliers, μr^k {L}r^T and μ*r^k {L}*r^T, are a solution of the following matrix finite difference equation of order t

    μr^k {L}r^T [I] + μr^(k−1) {L}r^T [F1] + . . . + μr^(k−t) {L}r^T [Ft] = {0}^T    Eqn 16-24

where [F1] . . . [Ft] are coefficient matrices with dimension Nref x Nref.
In case the system has Nm physical modes, the order t in equation 16-24 should theoretically be equal to 2Nm/Nref in order to find the 2Nm characteristic poles. In practice, over-specification of the model order will be needed.
Since the correlation functions are a linear combination of the characteristic solutions of equation 16-24, μr^k {L}r^T or μ*r^k {L}*r^T, they are also a solution of that equation. Hence,

    [Rk][I] + [Rk−1][F1] + . . . + [Rk−t][Ft] = 0    Eqn 16-25
Equation 16-25, which uses all response stations simultaneously, enables a global least squares estimate of the coefficient matrices [F1] . . . [Ft]. The overdetermination is also achieved by considering all available or selected time intervals. Once the coefficient matrices are known, equation 16-24 can be reformulated into a generalized eigenvalue problem resulting in Nref·t eigenvalues μr, yielding estimates for the system poles λr and the corresponding left eigenvectors {L}r^T.
The outputs which function as references have to be chosen in such a way that they contain all of the relevant modal information. In fact, the selection of output-reference channels is similar to choosing the input-reference locations in a traditional modal test.
The auto- and crosspowers between the response stations can be modeled as

    Xmn(jω) = Σ (r = 1 to Nm) ( Amn,r / (jω − λr) + A*mn,r / (jω − λ*r) + Bmn,r / (−jω − λr) + B*mn,r / (−jω − λ*r) )    Eqn 16-26

where Xmn(jω) is the crosspower between the m-th response station and the n-th response station serving as a reference.
In case of autopowers (m = n), Amn,r equals Bmn,r. The residue Amn,r is proportional to the m-th component of the mode shape {φ}r and the residue Bmn,r is proportional to the n-th component of the mode shape {φ}r. Consequently, by fitting the crosspowers between all response stations and one reference station, the complete mode shape can be derived.
The power spectra fitting step offers the advantage that not all responses need to be included in the time-domain parameter extraction scheme and that, consequently, mode shapes of a large number of response stations can easily be processed by consecutively fitting the spectra. Additionally, it provides a graphical quality check by overlaying the actual test data with the synthesized data. In comparison with modal FRF synthesis, it can be observed in equation 16-26 that two additional terms as a function of −jω need to be included for a correct synthesis of the auto- and crosspowers, which are assumed to be estimated on the basis of the FFT and segment averaging. If Xmn(jω) were not calculated with the FFT segment averaging approach, but as the FFT of the correlation function between response m and response n estimated using equation 16-19, the last two terms in equation 16-26 could be neglected.
16.2.3
Chapter 17
17.1
Animating the system's wire-frame model can lead to a better understanding of these phenomena. It makes it possible to show each motion (or acceleration) level at the corresponding DOF, in a cyclic manner. Because of the external resemblance of the animated representation of the vector quantity {X} to the mode shape vector {V}, the vector {X} is called a running mode, or an operational deflection shape.

These running modes must be interpreted entirely differently from modal modes. Running modes only reflect the cyclic motion of each DOF under specific operational conditions, and at a specific frequency. Using a modal model based on displacement/force frequency response functions {H}, the displacement running mode {X} can be described as follows.
$$\{X_i(\omega_p)\} = \{H_{i1}(j\omega_p)\}F_1(\omega_p) + \{H_{i2}(j\omega_p)\}F_2(\omega_p) + \dots + \{H_{im}(j\omega_p)\}F_m(\omega_p) \qquad \text{Eqn 17-1}$$

$$\{X_i(\omega_p)\} = \sum_{k=1}^{2N}\frac{V_{ik}V_{1k}}{j\omega_p-\lambda_k}\,F_1(\omega_p) + \dots + \sum_{k=1}^{2N}\frac{V_{ik}V_{mk}}{j\omega_p-\lambda_k}\,F_m(\omega_p) \qquad \text{Eqn 17-2}$$
where
i = the DOF counter
ω_p = the particular angular frequency
F_j(ω_p) = the force input spectrum at DOF j
m = the number of acting forces
The above equation clearly shows that running modes:
17.2
simultaneously
17.2.1
Transmissibility functions
When the response signals are related to the reference by simply dividing each
response signal frequency spectrum by the reference frequency spectrum, the
result is the transmissibility function (T)
$$T_{ij}(\omega) = \frac{X_i(\omega)}{X_j(\omega)} \qquad \text{Eqn 17-3}$$
$$T_{ij}(\omega) = \frac{G_{ij}(\omega)}{G_{jj}(\omega)} \qquad \text{Eqn 17-4}$$

$$\gamma^2_{ij}(\omega) = \frac{\left|G_{ij}(\omega)\right|^2}{G_{ii}(\omega)\,G_{jj}(\omega)} \qquad \text{Eqn 17-5}$$
The coherence function expresses the linear relationship between the two response signals of the measured system. This coherence function is expected to be high, since both responses are caused by the same acting forces. In practice, however, it can be low for the same reasons as those affecting the measurement of FRFs, that is to say due to a low signal-to-noise ratio for one or both of the signals, bad signal conditioning, etc.

Another interesting reason why the coherence between two measured signals may be low can be derived from equation 17-1 when it is substituted in equation 17-3. The linear relationship (and hence the coherence) will vary as a function of the weighting factors F_j(ω), for example because of changing operating conditions during the averaging process. High coherence function values in the frequency regions of interest therefore indicate both a high quality of the measurement signals and stationary operating conditions.
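A minimal sketch of equations 17-4 and 17-5, estimating the transmissibility and coherence between two response channels from segment-averaged auto- and crosspowers. The signals, filter taps and segment sizes below are invented illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, nseg, nfft = 1024, 64, 256
t = np.arange(nseg * nfft) / fs
force = rng.standard_normal(t.size)            # common acting force
x_i = np.convolve(force, [1.0, 0.5], 'same')   # response at DOF i
x_j = np.convolve(force, [0.8, -0.2], 'same')  # reference response at DOF j

def spectra(a, b):
    # segment-averaged crosspower G_ab = mean(A * conj(B)) over segments
    A = np.fft.rfft(a.reshape(nseg, nfft), axis=1)
    B = np.fft.rfft(b.reshape(nseg, nfft), axis=1)
    return (A * B.conj()).mean(axis=0)

G_ij, G_ii, G_jj = spectra(x_i, x_j), spectra(x_i, x_i), spectra(x_j, x_j)
T_ij = G_ij / G_jj                                 # transmissibility (Eqn 17-4)
coh = np.abs(G_ij) ** 2 / (G_ii.real * G_jj.real)  # coherence (Eqn 17-5)
```

Because both responses derive from the same force and the system is noise-free, the coherence comes out near unity at all spectral lines, as the text predicts for stationary conditions.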
Absolutely scaled running mode coefficients for each DOF i can be obtained by multiplying the transmissibility spectra by the RMS value of the reference autopower spectrum

$$RMS_j(\omega) = \sqrt{G_{jj}(\omega)} \qquad \text{Eqn 17-6}$$

$$\{X_i(\omega)\} = T_{ij}(\omega)\,RMS_j(\omega) \qquad [\mathrm{m}] \qquad \text{Eqn 17-7}$$

$$\{\dot X_i(\omega)\} = j\omega\,\{X_i(\omega)\} \qquad [\mathrm{m/s}] \qquad \text{Eqn 17-8}$$

$$\{\ddot X_i(\omega)\} = (j\omega)^2\,\{X_i(\omega)\} \qquad [\mathrm{m/s^2}] \qquad \text{Eqn 17-9}$$
17.2.2
Crosspower spectra
When it can be assumed that the operating conditions are not going to change while measuring all response signals, then it is possible to measure just crosspower spectra between each response DOF i and a certain reference DOF j

$$G_{ij}(\omega) = X_i(\omega)\,X^*_j(\omega) \qquad \text{Eqn 17-10}$$
Absolutely scaled running modes can, in this case, be obtained again by means
of the autopower spectrum of the reference station j
$$\{X_i(\omega)\} = \frac{G_{ij}(\omega)}{\sqrt{G_{jj}(\omega)}} \qquad \text{Eqn 17-11}$$
When displacements were measured, the running mode coefficients will have units of displacement. Equations 17-8 and 17-9 can be used to derive velocity or acceleration values.
17.3
Note!
17.3.1
Each one of the above scaling methods may change the units of the scaled running mode. The scaling factor's units will be incorporated into the mode shape coefficient units, which were initially obtained from the measurement data.
17.4
Interpretation of results
A set of functions exists that is designed to assess the validity of modes. These include the Modal Scale Factor, the Modal Assurance Criterion and Modal decomposition.
$$MSF_{jlk} = \frac{\{V_{jk}\}^t\{V_{lk}\}^*}{\{V_{lk}\}^t\{V_{lk}\}^*} \qquad \text{Eqn 17-12}$$

$$MAC_{jlk} = \frac{\left|\{V_{jk}\}^t\{V_{lk}\}^*\right|^2}{\left(\{V_{jk}\}^t\{V_{jk}\}^*\right)\left(\{V_{lk}\}^t\{V_{lk}\}^*\right)} \qquad \text{Eqn 17-13}$$
Modal decomposition
When a modal model for the same DOFs is available for a measured object, it is possible to compare modal and running modes and to track down resonance phenomena causing a particular running mode to become predominant. This is termed Modal decomposition. By decomposing each running mode into a linear combination of the modal modes, it becomes clear whether or not a running mode originates primarily from a resonance phenomenon.

The modal modes form what is termed the `basis' group of modes. The running modes are in a separate group that is to be decomposed. The following formula applies.
$$\{X_i(\omega_0)\} = a_1\{V_1\} + a_2\{V_2\} + \dots + a_n\{V_n\} + Rest \qquad \text{Eqn 17-14}$$
where
X_i is the i-th mode of the group to be decomposed (running modes)
V_i is the i-th mode of the basis group (modal modes)
a_i are the scaling coefficients needed to satisfy the above equation.
The scaling coefficients are rescaled relative to the maximum value.
$$\{X_i(\omega_0)\} = \frac{a_1}{a_{max}}\,100\%\,\{V_1\} + \dots + \frac{a_n}{a_{max}}\,100\%\,\{V_n\} + Rest \qquad \text{Eqn 17-15}$$

$$a_{max} = \max_i |a_i| \;\hat=\; 100\% \qquad \text{Eqn 17-16}$$
Note!
Take care when interpreting these values, since the resemblance of the modal and the running mode may be purely coincidental. A running mode at 56 Hz will have no connection with a modal mode at 200 Hz, even if they look alike.
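The decomposition of equations 17-14 and 17-15 can be sketched as a least-squares fit; the basis vectors and running mode below are invented example values:

```python
import numpy as np

# Decompose a running mode into a linear combination of modal modes
# (Eqn 17-14), then rescale the coefficients relative to the maximum
# so the largest one reads 100% (Eqn 17-15).
V = np.array([[1.0, 0.2],
              [0.5, -0.8],
              [0.1, 0.4],
              [0.9, 0.3]])                  # basis group: two modal modes
X = 2.0 * V[:, 0] + 0.5 * V[:, 1] + 0.01    # running mode + small "Rest"

a, *_ = np.linalg.lstsq(V, X, rcond=None)   # scaling coefficients a_i
rest = X - V @ a                            # residual term "Rest"
a_pct = 100.0 * a / np.abs(a).max()         # rescaled relative to the maximum
print(np.round(a_pct, 1))
```

A dominant coefficient (100%) on one basis mode indicates that the running mode originates primarily from that resonance; a large residual would indicate the opposite.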
Chapter 18
Modal validation
18.1
Introduction
A number of means are available to validate the accuracy of modal models: frequencies, damping values, mode shapes and modal participation factors. These tools are described in the sections that follow.

Some validation procedures allow you to convert the complex mode shape vectors to normalized ones. Normalized mode shapes are obtained from the amplitudes of the complex mode shape coefficients after a rotation over their weighted mean phase angle in the complex plane.
18.2
$$h_{ij}(\omega) = \sum_{k=1}^{N}\left[\frac{r_{ijk}}{j\omega-\lambda_k}+\frac{r^*_{ijk}}{j\omega-\lambda^*_k}\right] \qquad \text{Eqn 18-1}$$

$$[H(\omega)] = \sum_{k=1}^{N}\left[\frac{[R_k]}{j\omega-\lambda_k}+\frac{[R_k]^*}{j\omega-\lambda^*_k}\right] \qquad \text{Eqn 18-2}$$
where [R_k] represents the matrix of residues. When Maxwell's reciprocity principle holds for the tested structure, this residue matrix is symmetric and can be rewritten as

$$[R_k] = a_k\,\{V_k\}\{V_k\}^t \qquad \text{Eqn 18-3}$$
The ratio between two residue elements on the same row i but in two different columns j and l can be computed as

$$\frac{r_{ij,k}}{r_{il,k}} = \frac{v_{jk}}{v_{lk}} = MSF_{jlk} \qquad \text{Eqn 18-4}$$
This ratio MSF_jlk is called the Modal Scale Factor between columns l and j of mode k. Although this ratio should be independent of the row index i (the response station), a least squares estimate has to be computed for it when more than one output station residue coefficient is available

$$MSF_{jlk} = \frac{\sum_i r_{ij,k}\,r^*_{il,k}}{\sum_i r_{il,k}\,r^*_{il,k}} \qquad \text{Eqn 18-5}$$
$$MAC_{jlk} = \frac{\left|\sum_i r_{ij,k}\,r^*_{il,k}\right|^2}{\left(\sum_i r_{ij,k}\,r^*_{ij,k}\right)\left(\sum_i r_{il,k}\,r^*_{il,k}\right)} \qquad \text{Eqn 18-6}$$
If a linear relationship exists between the two complex vectors {R_jk} and {R_lk}, the MSF is the corresponding proportionality constant between them and the MAC value will be near to one. If they are linearly independent, the MAC value will be small (near zero), and the MSF is not very meaningful.

In a more general way, the MAC concept can be applied to two arbitrary complex vectors. This is useful for comparing two arbitrarily scaled mode shape vectors, since similar mode shapes have a high MAC value.
Modal Scale Factors and Modal Assurance Criterion values can be used to compare two modal models, for example models obtained from two different modal parameter estimation processes on the same test data. When comparing mode shapes, the MAC values for corresponding modes should be near 100% and the MSF between corresponding residue vectors (mode shapes, scaled by the modal participation factors) should be close to unity. When multiple inputs were used, this MSF can be calculated for each input, while the corresponding MAC will be the same for all of them.
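The MAC of equation 18-6, applied to two arbitrary complex vectors, can be sketched as follows; the example vectors are invented:

```python
import numpy as np

def mac(v1, v2):
    # Modal Assurance Criterion between two complex vectors:
    # |v1^H v2|^2 / ((v1^H v1)(v2^H v2)); 1 for proportional vectors.
    num = np.abs(np.vdot(v1, v2)) ** 2
    return num / (np.vdot(v1, v1).real * np.vdot(v2, v2).real)

v = np.array([1.0 + 0.1j, -0.5, 0.3j, 0.8])
print(mac(v, 3.3j * v))                        # proportional: MAC of 1
print(mac(v, np.array([0.0, 1.0, 0.0, 0.0])))  # dissimilar: small MAC
```

Because the scaling factor cancels in numerator and denominator, the MAC is insensitive to how either mode shape is scaled, which is what makes it usable for comparing arbitrarily scaled shapes.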
A second application for the MAC value is derived from the orthogonality of mode shape vectors when weighted by the mass matrix:

$$\{V_k\}^t[M]\{V_l\} = \begin{cases} m_k & \text{when } k = l \\ 0 & \text{otherwise} \end{cases} \qquad \text{Eqn 18-7}$$

where m_k represents the modal mass for mode k.
18.3
Mode participation
The relative importance of different modes in a certain frequency band can be
investigated using the concept of modal participation. For each mode, the sum
of all residue values for a specific reference expresses that mode's contribution
to the response. At the same time these sums can be added over all references,
to evaluate the importance of each mode.
Note!
These evaluations are only meaningful when the same response and reference
stations are included for all modes.
Comparing the residue sums for one mode at all the references evaluates the reference point selection for that mode: the reference with the highest residue sum is the best one to excite that mode.

When these sums are added together for all references, the importance of the modes themselves is evaluated. The mode with the highest result is the most important one.

Finally, the sums of residues can be added for all modes. Comparison of these results between different inputs allows you to evaluate the selection of reference stations in a global sense for all modes.
18.4
Reciprocity of FRFs
Reciprocity of FRFs means that measuring the response at DOF i while exciting at DOF j gives the same result as measuring the response at DOF j while exciting at DOF i. This is expressed mathematically as

$$h_{ij}(\omega) = h_{ji}(\omega) \qquad \text{Eqn 18-8}$$
This means that the FRF matrix is symmetric. Note that this property is inherently assumed when performing hammer impact testing to measure FRFs or impulse responses.
Writing the FRF matrix in terms of the mode shape vectors {V_k} and the modal participation vectors {L_k}

$$[H(\omega)] = \sum_{k=1}^{N}\left[\frac{\{V_k\}\{L_k\}^t}{j\omega-\lambda_k}+\frac{\{V_k\}^*\{L_k\}^{*t}}{j\omega-\lambda^*_k}\right] \qquad \text{Eqn 18-9}$$

it becomes clear that, when this matrix is symmetric, the roles of the mode shape vectors and the modal participation vectors can be switched. Making abstraction of the absolute scaling of the residues, this property can be expressed as follows.
For a reciprocal test structure, the modal participation factors should be proportional to the mode shape coefficients at the input stations.

Using this proportionality between mode shapes and modal participation factors, reciprocity can be checked for each mode when data for more than one input station has been used for the modal parameter estimation.
$$RSF = \frac{\sum_{i=1}^{n} v^*_i\,l_i}{\sum_{i=1}^{n} v^*_i\,v_i}$$

where v_i are the mode shape coefficients and l_i the modal participation factors at the n input stations.
18.5

$$\lambda_k = -\zeta_k\,\omega_{nk} + j\,\omega_{dk} \qquad \text{Eqn 18-10}$$

$$h_{ij}(\omega) = \sum_{k=1}^{N}\left[\frac{v_{ik}\,v_{jk}}{2j\,\omega_{dk}\,m_k\,(j\omega-\lambda_k)}-\frac{v^*_{ik}\,v^*_{jk}}{2j\,\omega_{dk}\,m_k\,(j\omega-\lambda^*_k)}\right] \qquad \text{Eqn 18-11}$$

where
m_k = the modal mass of mode k
ω_dk = ω_nk √(1 − ζ_k²) = the damped natural frequency of mode k
ζ_k = the critical damping ratio of mode k
ω_nk = the undamped natural frequency of mode k
At this point, it should be pointed out that equation 18-11 contains N more parameters than equation 18-1, i.e. one more parameter per mode. This is due to the fact that residues are scaled quantities whereas the modal vectors are determined within a scaling factor only. In equation 18-11 the modal mass values play the role of the scaling constants. It is clear that the value of the modal mass depends on the scaling scheme that was used to obtain the numerical values of the modal vector amplitudes.

When the residues of a proportionally damped structure are known, equations 18-10 and 18-11 can therefore be used to compute the modal mass and the modal vector amplitudes once a scaling method is proposed. Indeed, residues, modal vectors and modal mass are related by the following equation.
$$r_{ijk} = \frac{v_{ik}\,v_{jk}}{2j\,\omega_{dk}\,m_k} \qquad \text{Eqn 18-12}$$
To compute the amplitudes of one modal vector and the corresponding modal mass from a set of residues with respect to a given input location j, you need one additional equation, since the set of equations that can be written for all output locations i in the form of equation 18-12 is underdetermined: N equations in N + 1 unknowns are obtained. This last equation will actually determine the scaling of the modal vector.

Note that an eigenvector determines only a direction in the state space and has no absolutely scaled amplitude, while a residue has a magnitude with physical meaning. The scaling of the eigenvectors will determine the modal mass. Modal stiffness is determined as the modal mass multiplied by the natural frequency squared. Modal damping is twice the modal mass multiplied by the natural frequency and the damping ratio.
Unity mass
In this case the mode shapes and participation factors are scaled such
that the modal mass (mk ) in equation 18-12 is equal to 1.
Unity stiffness
In this case the mode shapes and participation factors are scaled such
that the modal stiffness (k_k = m_k ω_nk²) is scaled to 1.
Unity modal A
In this case the mode shapes and participation factors are scaled such
that the scaling factor (ak ) is scaled to 1. This scaling factor is indepen
dent of the DOFs.
Unity length
In this case the mode shapes and participation factors are scaled such
that the squared norm of the vector v_ik is scaled to unity:

$$\sum_{i=1}^{N_0} v_{ik}^2 = 1$$
Unity maximum
In this case the mode shapes and participation factors are scaled such
that the coefficient v_ik is scaled to 1, where i is the DOF with the largest
mode shape amplitude.
Unity component
In this case the mode shapes and participation factors are scaled such
that the coefficient v_ik is scaled to 1, where i is any DOF selected by the user.
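Two of the scaling schemes above can be sketched in a few lines; the mode shape values are invented:

```python
import numpy as np

# Unity length and unity maximum scaling of a (real) mode shape vector.
v = np.array([0.3, -1.2, 0.7, 2.0])

v_len = v / np.linalg.norm(v)          # unity length: sum(v_ik^2) = 1
v_max = v / v[np.abs(v).argmax()]      # unity maximum: coefficient at the
                                       # largest-amplitude DOF becomes 1
print(v_len, v_max)
```

In a full implementation the modal participation factors would be rescaled by the inverse factor at the same time, so that the residues of equation 18-12 stay unchanged.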
18.6
Mode complexity
When a mass is added to a mechanical structure at a certain measurement point, the damped natural frequencies of all modes will shift downwards. This theoretical characteristic forms the basis of a criterion for the evaluation of estimated mode shape vectors.

For each response station, the sensitivity of each natural frequency to a mass increase at that station can be calculated and should be negative. A quantity called the "Mode Overcomplexity Value" (MOV) is defined as the (weighted) percentage of the response stations for which a mass addition indeed decreases the natural frequency for a specific mode,
$$MOV_k = \frac{\sum_{i=1}^{N_0} w_i\,a_{ik}}{\sum_{i=1}^{N_0} w_i}\times 100\% \qquad \text{Eqn 18-13}$$

where
w_i = a weighting factor for response station i
a_ik = 1 if a mass addition at station i decreases the natural frequency of mode k, and 0 otherwise
This MOV index should be high (near 100%) for high quality modes. If this index is low, the considered mode shape vector is either computational or wrongly estimated. It is called "overcomplex", which means that the phase angle of some modal coefficients exceeds a reasonable limit.

However, if this MOV is low for all modes for a specific input station (say, below 10%), this might indicate that the excitation force direction was wrongly entered while measuring the FRFs for that input station. This error may be corrected by changing the signs of the modal participation factors for all modes for that particular input.
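The weighted percentage of equation 18-13 can be sketched as follows; the weights and mass sensitivities per station are invented example values:

```python
import numpy as np

# Mode Overcomplexity Value for one mode k: the weighted percentage of
# response stations where a mass addition lowers the natural frequency.
w = np.array([1.0, 0.5, 2.0, 1.0])         # weighting factor per station
sens = np.array([-0.2, -0.1, 0.05, -0.3])  # d(frequency)/d(mass) per station

a = (sens < 0).astype(float)               # 1 where mass addition lowers freq
mov = 100.0 * (w * a).sum() / w.sum()
print(mov)
```

Here one of the four stations has a (physically implausible) positive sensitivity, so the MOV drops below 100%, flagging possible overcomplexity of that mode shape coefficient.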
18.7
18.8
Comparison of models
When you have two groups of modes representing the same modal space, you can compare the two groups. The comparison concerns the damped frequencies, the damping values, the modal phase collinearities and the MAC values of the two groups. This is a useful way of comparing, for example, sets of modes generated from the same data but using different estimation techniques.
18.9
$$\{X\} = [H]\{F\} \qquad \text{Eqn 18-14}$$

Removing the brackets from the notation, equation 18-14 can be split into real and imaginary parts

$$X_r + jX_i = \left(H_r + jH_i\right)\left(F_r + jF_i\right) \qquad \text{Eqn 18-15}$$
For real normal modes, the structural response must lag the excitation forces by 90°. Therefore, when the structure is excited at the correct frequency according to one of these modes (modal tuning), the contribution of the real part of the response vector X to its total length must become minimal. Mathematically this can be formulated in the following minimisation problem
$$\min_{\{F\},\,\{F\}^t\{F\}=1}\;\frac{X_r^t X_r}{X_r^t X_r + X_i^t X_i} \qquad \text{Eqn 18-16}$$
Substituting the expression 18-15 for the real and imaginary parts of the response into this expression yields

$$\min_{\{F\},\,\{F\}^t\{F\}=1}\;\frac{F^t\,H_r^t H_r\,F}{F^t\left(H_r^t H_r + H_i^t H_i\right)F} \qquad \text{Eqn 18-17}$$
This quotient is minimised by the smallest eigenvalue of the generalized eigenvalue problem

$$\left[H_r^t H_r\right]\{F\} = \lambda\left[H_r^t H_r + H_i^t H_i\right]\{F\} \qquad \text{Eqn 18-18}$$
The square matrices H_r^t H_r and H_i^t H_i have as many rows and columns as the number of input or reference locations that were used to create them (i.e. the number of columns of the FRF matrix that were measured). The primary Mode Indicator Function is now constructed from the smallest eigenvalue of expression 18-18 at each spectral line. It exhibits noticeable local minima at the frequencies where real normal modes exist. A second MIF can be constructed using the second smallest eigenvalue of 18-18 for each spectral line. It will contain noticeable local minima if the structure has repeated modes. This can be repeated for all other eigenvalues of equation 18-18. The number of functions that can be constructed is equal to the number of eigenvalues, which is the same as the number of input stations. From these functions, you can then deduce the multiplicity of each of the normal modes.
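A sketch of the Mode Indicator Functions built from the eigenvalue problem of equation 18-18; the synthetic FRF matrix (3 outputs, 2 inputs, two proportionally damped modes) is an invented example, not measured data:

```python
import numpy as np

def mode_indicator_functions(H):
    # H has shape (n_freq, n_outputs, n_inputs). At every spectral line the
    # sorted eigenvalues of (Hr^t Hr) F = lambda (Hr^t Hr + Hi^t Hi) F are
    # collected; column 0 is the primary MIF, column 1 the secondary, ...
    mifs = []
    for Hf in H:
        Hr, Hi = Hf.real, Hf.imag
        A = Hr.T @ Hr
        B = A + Hi.T @ Hi
        lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
        mifs.append(lam)               # eigenvalues lie between 0 and 1
    return np.array(mifs)

w = np.linspace(1.0, 30.0, 300)
poles = np.array([-0.5 + 15j, -0.8 + 25j])
v1 = np.array([1.0, 0.5, 0.2])
v2 = np.array([0.2, -1.0, 0.8])
R1 = np.outer(v1, v1[:2]) / (2j * 15)      # residues for inputs at DOFs 0, 1
R2 = np.outer(v2, v2[:2]) / (2j * 25)
jw = 1j * w[:, None, None]
H = (R1 / (jw - poles[0]) + np.conj(R1) / (jw - np.conj(poles[0]))
     + R2 / (jw - poles[1]) + np.conj(R2) / (jw - np.conj(poles[1])))

mif = mode_indicator_functions(H)
print(w[mif[:, 0].argmin()])   # primary MIF dips near a normal-mode frequency
```

At the resonances the response of each real normal mode is (nearly) purely imaginary, so the numerator matrix of 18-18 becomes small in that force direction and the primary MIF shows a sharp local minimum there.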
18.10
Summation of FRFs
An important indication of the accuracy of the natural frequency estimates is their coincidence with resonance peaks in the FRF measurements. These resonance peaks can be enhanced by a summation of all available data, either of the real or of the imaginary parts.

Graphically comparing this summation of FRFs with the values of the natural frequencies of the modes in a display module can be useful. Problems like missing modes, erroneous frequency estimates or resonances shifting because of mass loading by the transducers can easily be detected this way.
18.11
Synthesis of FRFs
The FRFs that you have obtained from a modal model can be synthesized in a number of ways. Scaled mode shapes (i.e. mode shapes and modal participation factors) have to be available for at least one input station for which a mode shape coefficient is also available. Using the Maxwell-Betti reciprocity principle between inputs and outputs (section 18.4), it is however possible to calculate the FRF between any two measurement stations.
$$correlation = \frac{\left|\sum_i S_i\,M^*_i\right|^2}{\left(\sum_i S_i\,S^*_i\right)\left(\sum_i M_i\,M^*_i\right)} \qquad \text{Eqn 18-19}$$
with
Si = the complex value of the synthesized FRF at spectral line i
Mi = the complex value of the measured FRF at spectral line i
The LS error is the least squares difference normalized to the synthesized values.

$$LS\;error = \frac{\sum_i\left(S_i-M_i\right)\left(S_i-M_i\right)^*}{\sum_i S_i\,S^*_i} \qquad \text{Eqn 18-20}$$
A listing of the FRFs for which the correlation is lower than a specified percentage, or which exhibit an error higher than a specified percentage, provides useful information on the quality of the synthesized FRFs.
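Equations 18-19 and 18-20 can be sketched directly; the one-mode FRF and the perturbation used as "measured" data are invented example values:

```python
import numpy as np

def frf_correlation(S, M):
    # correlation between synthesized (S) and measured (M) FRFs (Eqn 18-19)
    num = np.abs(np.sum(S * np.conj(M))) ** 2
    return num / (np.sum(S * np.conj(S)).real * np.sum(M * np.conj(M)).real)

def frf_ls_error(S, M):
    # least squares difference normalized to the synthesized values (Eqn 18-20)
    d = S - M
    return np.sum(d * np.conj(d)).real / np.sum(S * np.conj(S)).real

w = np.linspace(1.0, 20.0, 200)
S = 1.0 / (1j * w - (-0.3 + 8j))      # synthesized FRF (one mode)
M = S * 1.02 + 0.001                  # "measured": small scale error + bias
print(frf_correlation(S, M), frf_ls_error(S, M))
```

A correlation near 100% together with a small LS error indicates a good match between synthesized and measured FRFs; listing the FRFs that fail either threshold localizes the poorly fitted measurement stations.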
Chapter 19
19.1
Figure 19-1
In theory, 2 excitations and 6 responses are needed for the calculations. Practical tests show that the best results are obtained when at least 6 excitations (e.g. 2 nodes in 3 directions) and 12 responses are measured.
Reference axis system
All the rigid body properties are calculated relative to a reference axis system. The reference axis system is defined by the three coordinate values of its origin and three Euler angles representing its rotation.
Specification of the frequency band
Rigid body properties are calculated in a global (least squares) sense over a specified frequency band between the last rigid body mode and the first deformation mode (see Figure 19-1).
Mass line value
The "mass line" value, which is needed for the calculations, can be derived from the measured FRFs in three ways:
1) When the rigid body modes and deformation modes are sufficiently spaced, the amplitude values (with the sign of the real part) of the original, unchanged measured FRFs can be used. In this case there is no need to have the deformation modes available for the rigid body mode analysis.
2) When the spacing between rigid body modes and deformation modes is not sufficient, the FRFs have to be corrected. In this case the influence of the first deformation modes, if significant, can be subtracted from the original FRFs. The amplitude values (with the sign of the real part) of the synthesized FRFs are used.
3) If accurate measured FRFs are not available in the frequency range directly above the rigid body modes, then lower residual terms which lie in a frequency band containing the first deformation modes can be used. Residual terms can be determined from a modal analysis. Lower residuals represent the influence of the modes below the deformation modes, and are therefore representative of the rigid body modes.
1.1
Coordinate transformation
If the nodes corresponding to the response DOFs used do not have global directions, or when a reference (not coincident with the global origin) is specified, then a rotation of the measured accelerations to the global/reference axis system is needed.
All three directions (+X, +Y, +Z) are required. For the three measured (local) accelerations of output node "o":
$$\{\ddot X\}_g = [T]_o^{-1}\,\{\ddot X\}_l \qquad \text{Eqn 19-1}$$

where {Ẍ}_g is the global acceleration vector, {Ẍ}_l the measured local acceleration vector and [T]_o the rotation matrix (global to local) of node "o". Towards the reference axis system:

$$\{\ddot X\}_r = \left([T]_r\,[T]_o^{-1}\right)\{\ddot X\}_l \qquad \text{Eqn 19-2}$$

where [T]_r is the rotation matrix (global to local) of node "r".
1.2
System of equations
For all spectral lines of the selected band, for all response nodes P, Q,...
and for all inputs 1, 2, ... under consideration
$$\begin{Bmatrix}
\ddot X_{1Px} & \ddot X_{2Px} & \cdots\\
\ddot X_{1Py} & \ddot X_{2Py} & \cdots\\
\ddot X_{1Pz} & \ddot X_{2Pz} & \cdots\\
\ddot X_{1Qx} & \ddot X_{2Qx} & \cdots\\
\ddot X_{1Qy} & \ddot X_{2Qy} & \cdots\\
\ddot X_{1Qz} & \ddot X_{2Qz} & \cdots\\
\vdots & \vdots &
\end{Bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & Z_P & -Y_P\\
0 & 1 & 0 & -Z_P & 0 & X_P\\
0 & 0 & 1 & Y_P & -X_P & 0\\
1 & 0 & 0 & 0 & Z_Q & -Y_Q\\
0 & 1 & 0 & -Z_Q & 0 & X_Q\\
0 & 0 & 1 & Y_Q & -X_Q & 0\\
\vdots & & & & &
\end{bmatrix}
\begin{Bmatrix}
\ddot X_{1g,x} & \ddot X_{2g,x} & \cdots\\
\ddot X_{1g,y} & \ddot X_{2g,y} & \cdots\\
\ddot X_{1g,z} & \ddot X_{2g,z} & \cdots\\
\ddot\theta_{1g,x} & \ddot\theta_{2g,x} & \cdots\\
\ddot\theta_{1g,y} & \ddot\theta_{2g,y} & \cdots\\
\ddot\theta_{1g,z} & \ddot\theta_{2g,z} & \cdots
\end{Bmatrix}
\qquad \text{Eqn 19-3}$$

where X_P, Y_P and Z_P are the global coordinates of node P (or the coordinates towards the reference axis system).
This over-determined system of equations (the number of output DOFs is higher than or equal to 6) is solved for each spectral line in a least squares sense. In this way, at each spectral line, the reference acceleration matrix is found. Furthermore, a general solution of the reference acceleration matrix over the total frequency band is calculated by solving, in a least squares sense, the global set of equations containing all outputs and all spectral lines.
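The least-squares solution of the rigid-body system of equation 19-3 can be sketched for one input and one spectral line; the node coordinates and the assumed reference motion are invented example values:

```python
import numpy as np

def rigid_body_rows(X, Y, Z):
    # Three rows of the Eqn 19-3 coefficient matrix for one node:
    # acceleration of a point on a rigid body, a = a_ref + alpha x r.
    return np.array([[1, 0, 0,  0,  Z, -Y],
                     [0, 1, 0, -Z,  0,  X],
                     [0, 0, 1,  Y, -X,  0]], float)

nodes = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.5), (0.3, 0.7, 0.2)]
A = np.vstack([rigid_body_rows(*n) for n in nodes])   # 12 equations, 6 unknowns

ref_true = np.array([0.1, -0.2, 0.3, 0.05, -0.01, 0.02])  # assumed ref motion
acc = A @ ref_true                          # simulated measured accelerations
ref, *_ = np.linalg.lstsq(A, acc, rcond=None)
print(np.round(ref, 6))
```

With more than 6 response DOFs the system is over-determined and `lstsq` returns the least-squares estimate of the 3 translational and 3 rotational reference accelerations; repeating this per spectral line (and once globally over the band) mirrors the procedure described above.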
2.1
Coordinate transformation
For input force 1 in the local X-direction of node "i":
$$\{F_1\} = [T]_i^{-1}\begin{Bmatrix}1.0\\0.0\\0.0\end{Bmatrix} \qquad \text{Eqn 19-4}$$

where [T]_i is the rotation matrix (global to local) of node "i". When the reference "r" is not coincident with the global origin:

$$\{F_1\} = \left([T]_r\,[T]_i^{-1}\right)\begin{Bmatrix}1.0\\0.0\\0.0\end{Bmatrix} \qquad \text{Eqn 19-5}$$

where [T]_r is the rotation matrix (global to local) of reference node "r".
Similar equations are used when the input has the Y-direction or Z-direction.
2.2
System of equations
For all inputs 1, 2, ...

$$\begin{Bmatrix}F_{1g,x}\\F_{1g,y}\\F_{1g,z}\\M_{1g,x}\\M_{1g,y}\\M_{1g,z}\end{Bmatrix}
=
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
0 & -Z_1 & Y_1\\
Z_1 & 0 & -X_1\\
-Y_1 & X_1 & 0
\end{bmatrix}\{F_1\} \qquad \text{Eqn 19-6}$$
Per input and per spectral line, the forces and moments are related to the reference accelerations, the mass m, the center of gravity and the inertia values by

$$\begin{aligned}
F_{g,x} &= m\,a_{g,x} + m\left(\ddot\theta_y Z_{cog} - \ddot\theta_z Y_{cog}\right)\\
F_{g,y} &= m\,a_{g,y} + m\left(\ddot\theta_z X_{cog} - \ddot\theta_x Z_{cog}\right)\\
F_{g,z} &= m\,a_{g,z} + m\left(\ddot\theta_x Y_{cog} - \ddot\theta_y X_{cog}\right)\\
M_{g,x} &= \left(Y_{cog}F_{g,z} - Z_{cog}F_{g,y}\right) + \ddot\theta_x I_{xx} - \ddot\theta_y I_{xy} - \ddot\theta_z I_{xz}\\
M_{g,y} &= \left(Z_{cog}F_{g,x} - X_{cog}F_{g,z}\right) + \ddot\theta_y I_{yy} - \ddot\theta_x I_{xy} - \ddot\theta_z I_{yz}\\
M_{g,z} &= \left(X_{cog}F_{g,y} - Y_{cog}F_{g,x}\right) + \ddot\theta_z I_{zz} - \ddot\theta_x I_{xz} - \ddot\theta_y I_{yz}
\end{aligned} \qquad \text{Eqn 19-7}$$

X_cog, Y_cog and Z_cog are the global coordinates of the center of gravity.
I_xx, I_yy, I_zz are the moments of inertia towards the global axis system.
I_xy, I_yz, I_xz are the products of inertia towards the global axis system.

This set of equations can be solved in two steps. First, the coordinates of the center of gravity can be solved from the first three equations (per reference). Afterwards, these values can be filled in to the last equations to solve the moments and products of inertia.
Step 1
For each input and for each spectral line, and for each input over the total band:

$$\begin{Bmatrix}F_{g,x} - m\,a_{g,x}\\F_{g,y} - m\,a_{g,y}\\F_{g,z} - m\,a_{g,z}\end{Bmatrix}
=
m\begin{bmatrix}0 & -\ddot\theta_z & \ddot\theta_y\\ \ddot\theta_z & 0 & -\ddot\theta_x\\ -\ddot\theta_y & \ddot\theta_x & 0\end{bmatrix}
\begin{Bmatrix}x_{cog}\\y_{cog}\\z_{cog}\end{Bmatrix} \qquad \text{Eqn 19-8}$$
Step 2
For each input and for each spectral line, and for each input over the total band:

$$\begin{Bmatrix}M_{g,x} - \left(y_{cog}F_{g,z} - z_{cog}F_{g,y}\right)\\M_{g,y} - \left(z_{cog}F_{g,x} - x_{cog}F_{g,z}\right)\\M_{g,z} - \left(x_{cog}F_{g,y} - y_{cog}F_{g,x}\right)\end{Bmatrix}
=
\begin{bmatrix}\ddot\theta_x & 0 & 0 & -\ddot\theta_y & 0 & -\ddot\theta_z\\ 0 & \ddot\theta_y & 0 & -\ddot\theta_x & -\ddot\theta_z & 0\\ 0 & 0 & \ddot\theta_z & 0 & -\ddot\theta_y & -\ddot\theta_x\end{bmatrix}
\begin{Bmatrix}I_{xx}\\I_{yy}\\I_{zz}\\I_{xy}\\I_{yz}\\I_{xz}\end{Bmatrix} \qquad \text{Eqn 19-9}$$
If desired, only the second set of equations is solved. In this case the coordinates of the center of gravity are presumed to be known and specified by the user.
$$\{L_g\} = [A]\{\Omega_g\} \qquad \text{Eqn 19-10}$$

{L_g} is the vector of total impulse towards the global (reference) axis system.
[A] is the (symmetrical) matrix of inertia.
{Ω_g} is the vector of angular velocity.

This leads to an eigenvalue problem, where
the eigenvalues I_1, I_2, I_3 are the 3 principal moments of inertia;
the eigenvectors {e_1}, {e_2}, {e_3} are the directions of the 3 principal axes of inertia.
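The eigenvalue problem above can be sketched directly with a symmetric eigensolver; the inertia values of the example matrix are invented:

```python
import numpy as np

# Principal moments of inertia = eigenvalues of the symmetric inertia matrix;
# principal axes = its eigenvectors. Products of inertia enter with a minus
# sign in the standard inertia-tensor convention assumed here.
Ixx, Iyy, Izz = 4.0, 6.0, 9.0
Ixy, Iyz, Ixz = 0.5, -0.3, 0.2
A = np.array([[ Ixx, -Ixy, -Ixz],
              [-Ixy,  Iyy, -Iyz],
              [-Ixz, -Iyz,  Izz]])

I_principal, axes = np.linalg.eigh(A)   # eigenvalues ascending, axes as columns
print(np.round(I_principal, 3))
```

`eigh` exploits the symmetry of the inertia matrix, so the principal moments come out real and the principal axes mutually orthogonal, as physics requires.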
19.2
19.2.1
Limitations
Calculating the rigid body motion for a part of the structure (for example one single component) can sometimes prove a little awkward. The component will indeed move as a rigid body, but is not constrained to remain connected to the rest of the structure. When this is applied to the tail wing of an airplane, for example, the wing may rotate about a horizontal axis through the middle of the wing but may no longer be connected to the fuselage at its base. The same may happen to the engine block of a car, which may become disconnected from its supports when a rigid body motion is applied to it.
19.2.2
r fr x
2I
where
Chapter 20
Design
This chapter discusses the three types of analysis that can be performed to determine the effect of design changes on the modal behavior of a structure. These are
Sensitivity
Modification prediction
Forced response
20.1

$$r_{mjk} = a_k\,\phi_{mk}\,\phi_{jk} \qquad \text{Eqn 20-1}$$

$$r_{jjk} = a_k\,\phi_{jk}\,\phi_{jk} \qquad \text{Eqn 20-2}$$

$$r_{njk} = a_k\,\phi_{nk}\,\phi_{jk} \qquad \text{Eqn 20-3}$$

where
r_mjk, r_njk = the residues for response DOFs m and n with reference DOF j
φ_mk, φ_nk, φ_jk = the scaled mode shape coefficients of mode k

$$\frac{r_{mjk}\,r_{njk}}{r_{jjk}} = \frac{\left(a_k\phi_{mk}\phi_{jk}\right)\left(a_k\phi_{nk}\phi_{jk}\right)}{a_k\phi_{jk}\phi_{jk}} = a_k\,\phi_{mk}\,\phi_{nk} \qquad \text{Eqn 20-4}$$
Mode shape coefficients need only be available for the Degrees Of Freedom which are affected by the structural changes.

The information used to obtain this scaling is: poles, (unscaled) mode shapes and modal participation factors for a number of reference stations. The required scaled mode shape coefficients can be obtained from this information as follows. For N_i points for which output data are also available (i.e. driving points), a vector of complex modal participation factors L_kj for each mode k can be built:
$$\lfloor L\rfloor_k = \lfloor L_1\; L_2\; \dots\; L_{N_i}\rfloor_k \qquad \text{Eqn 20-5}$$
$$\{W\}_k = \begin{Bmatrix}W_1\\W_2\\\vdots\\W_N\end{Bmatrix}_k \qquad \text{Eqn 20-6}$$
The residues R_k are defined as the product of mode shapes and modal participation factors:

$$[R]_k = \{W\}_k\,\lfloor L\rfloor_k \qquad \text{Eqn 20-7}$$
The scaled mode shapes {V}_k used in the theoretical derivation of the previous chapter are related to the unscaled mode shapes {W}_k via a complex scaling factor γ_k for each mode:

$$\{V\}_k = \gamma_k\,\{W\}_k \qquad \text{Eqn 20-8}$$
From the definition of the residues, these mode shapes are scaled such that

$$[R]_k = \{W\}_k\,\lfloor L\rfloor_k = \{V\}_k\{V\}_k^t \qquad \text{Eqn 20-9}$$

$$\gamma_k^2 = \frac{W^*_{1k}L_{1k} + W^*_{2k}L_{2k} + \dots + W^*_{N_ik}L_{N_ik}}{W^*_{1k}W_{1k} + W^*_{2k}W_{2k} + \dots + W^*_{N_ik}W_{N_ik}} \qquad \text{Eqn 20-10}$$

In the special case where only one input is considered, i.e. only one set of residues is available, the scaling factor becomes

$$\gamma_k^2 = \frac{L_{1k}}{W_{1k}} \qquad \text{Eqn 20-11}$$
The scaling of equation 20-8 actually converts the generally valid modal model of mode shape vectors W and modal participation factors L to a model of scaled mode shape vectors V, in which the modal participation factors are absorbed via equation 20-10. Obviously some information is lost by removing the scaling factors L from the model, and as a consequence the resulting model is only valid for reciprocal structures with a symmetric FRF matrix. The calculation of the scaling factor γ_k according to equation 20-10 is in fact the best compromise in a least squares sense to approximate a non-reciprocal modal model by a reduced reciprocal one.
20.2
Sensitivity
An experimental modal analysis of a structure results in a dynamic model in terms of modal parameters. The qualitative information contained in this model can be used to identify dynamic problems, for example by animation of the mode shapes. Through physical insight and expertise, structural modifications can be proposed to overcome specific dynamic problems.

For structures with complex dynamic behavior, predictions about the effect of physical changes on modal parameters are usually very difficult - if not impossible - to make. When unsatisfactory dynamic behavior is detected or suspected, the designer can use trial and error procedures to try out a number of modifications, but there is no guarantee that any of these attempts will yield satisfying results. On the other hand, numerical techniques can be employed which use the quantitative results of a modal test to evaluate the effects of structural changes.

These structural changes can be imposed by modifying the physical characteristics of the structure in terms of its inertia, stiffness and damping. A Sensitivity analysis allows you to see how changes in these physical characteristics affect particular modes at various points on the structure. It computes only the sensitivity of the modal model to structural alterations, and does not involve actually applying any changes. A Sensitivity analysis provides you with the means of determining the points where such modifications will have the most effect.
20.2.1
$$H_{ij}(\omega) = \sum_{k=1}^{2N}\frac{r_{ijk}}{j\omega-\lambda_k} \qquad \text{Eqn 20-12}$$
$$\frac{\partial H_{ij}}{\partial P} = \sum_{k=1}^{2N}\frac{\partial r_{ijk}/\partial P}{j\omega-\lambda_k} + \sum_{k=1}^{2N}\frac{r_{ijk}}{\left(j\omega-\lambda_k\right)^2}\,\frac{\partial\lambda_k}{\partial P} \qquad \text{Eqn 20-13}$$

$$\frac{\partial[H]}{\partial P} = -[H]\,\frac{\partial[Q]}{\partial P}\,[H] \qquad \text{Eqn 20-14}$$

Using this equation and the theory of adjoined matrices, equation 20-13 can be rewritten in the form

$$\frac{\partial H_{ij}}{\partial P} = -\{H_{ic}\}^t\,\frac{\partial[Q]}{\partial P}\,\{H_{cj}\} \qquad \text{Eqn 20-15}$$
$$\frac{\partial H_{ij}}{\partial P} = -\left\{\sum_{k=1}^{2N}\frac{r_{ick}}{j\omega-\lambda_k}\right\}^t\,\frac{\partial[Q]}{\partial P}\,\left\{\sum_{k=1}^{2N}\frac{r_{cjk}}{j\omega-\lambda_k}\right\} \qquad \text{Eqn 20-16}$$
Splitting up equation 20-16 into partial fractions, and identifying the corresponding terms of equation 20-13, gives the sensitivities for frequency (20-17) and mode shape (20-18).

$$\frac{\partial\lambda_k}{\partial P} = -\frac{1}{r_{ijk}}\,\{r_{ick}\}^t\,\frac{\partial[Q]}{\partial P}\,\{r_{cjk}\} \qquad \text{Eqn 20-17}$$

$$\frac{\partial r_{ijk}}{\partial P} = -\{r_{ick}\}^t\,\frac{\partial[Q]}{\partial P}\sum_{\substack{m=1\\ m\neq k}}^{2N}\frac{\{r_{cjm}\}}{\lambda_k-\lambda_m} - \{r_{cjk}\}^t\,\frac{\partial[Q]}{\partial P}\sum_{\substack{m=1\\ m\neq k}}^{2N}\frac{\{r_{icm}\}}{\lambda_k-\lambda_m} \qquad \text{Eqn 20-18}$$
So from equations 20-17 and 20-18, the residues r_ick and r_cjk for each DOF c
that is influenced by the structural change are required in order to calculate the
sensitivity to that change. Even if not all the residues are available, the
Maxwell-Betti reciprocity principle can be used to calculate the required
values. It allows the residue r_ick to be derived for any reference DOF c when
the residues for DOFs i and c are available for an arbitrary reference j, on
condition that the driving point residue r_jjk is also available. The driving point
residue is also required if the mode shapes are to be correctly scaled.
From the general formula of equation 20-18, it is now possible to calculate the
sensitivity value of a mode shape coefficient for DOF i when a structural
change is considered for the parameter P, which will affect DOFs a and b. The
corresponding scaled mode shape coefficients for each mode in the modal
model are required. From the definition of the dynamic stiffness matrix Q, the
three specific cases of P being a mass, a linear spring (stiffness) or a viscous
damper can be considered.
Mass
This is the case where P is a mass at a specific DOF a. Equations 20-17 and
20-18 are then simplified to
   ∂λ_k/∂m_a = −λ_k² φ_ak²          Eqn 20-19

   ∂φ_ik/∂m_a = −λ_k φ_ak² φ_ik
              − λ_k² φ_ak Σ_{m=1, m≠k}^{2N} φ_am φ_im / (λ_k − λ_m)          Eqn 20-20
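The mass-sensitivity relation of equation 20-19 can be checked numerically on a small example. The following is only a sketch, assuming NumPy and SciPy are available; the 2-DOF system values are invented for the demonstration, and the scaling v = ψ/√(2jω) converts mass-normalized undamped mode shapes to the residue scaling (unity modal A) used in these formulas.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative undamped 2-DOF system (values are assumptions for the demo).
M = np.diag([2.0, 1.0])
K = np.array([[5.0e4, -2.0e4],
              [-2.0e4, 2.0e4]])

def poles_and_scaled_shapes(M, K):
    """Poles lambda_k = j*omega_k and residue-scaled shapes
    v = psi / sqrt(2j*omega), i.e. unity-modal-A scaling for [C] = [0]."""
    w2, psi = eigh(K, M)              # mass-normalized: psi.T @ M @ psi = I
    om = np.sqrt(w2)
    return 1j * om, psi / np.sqrt(2j * om)

lam, v = poles_and_scaled_shapes(M, K)
k, a = 0, 1                           # mode k, mass added at DOF a (0-based)

# Eqn 20-19: d(lambda_k)/d(m_a) = -lambda_k^2 * phi_ak^2
sens = -lam[k]**2 * v[a, k]**2

# Finite-difference check: perturb the mass at DOF a and re-solve.
dm = 1e-6
Mp = M.copy(); Mp[a, a] += dm
lam_p, _ = poles_and_scaled_shapes(Mp, K)
fd = (lam_p[k] - lam[k]) / dm
print(abs(sens - fd) / abs(fd))       # small relative error
```

The same check can be repeated for the spring and damper cases of equations 20-21 and 20-23.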
Stiffness
This is the case where P is a linear spring between DOFs a and b. Equations
20-17 and 20-18 are then simplified to
   ∂λ_k/∂k_ab = −(φ_ak − φ_bk)²          Eqn 20-21

   ∂φ_ik/∂k_ab = −(φ_ak − φ_bk)
               Σ_{m=1, m≠k}^{2N} (φ_am − φ_bm) φ_im / (λ_k − λ_m)          Eqn 20-22
Damping
This is the case where P is a viscous damper between DOFs a and b. Equations
20-17 and 20-18 then become
   ∂λ_k/∂c_ab = −λ_k (φ_ak − φ_bk)²          Eqn 20-23

   ∂φ_ik/∂c_ab = −(φ_ak − φ_bk)² φ_ik / 2
               − λ_k (φ_ak − φ_bk)
               Σ_{m=1, m≠k}^{2N} (φ_am − φ_bm) φ_im / (λ_k − λ_m)          Eqn 20-24
The imaginary parts of equations 20-19, 20-21 and 20-23 are used to compute
the sensitivities of the damped natural frequencies. The corresponding real
parts express the sensitivities of the damping factors or exponential decay rates.
20.3
Modification prediction
This section describes the use of a dynamics modification theory to predict the
effect of structural modifications on a mechanical structure's modal parameters.
These modifications can take the form of local mass, stiffness and/or damping
changes, or FEM-like rod, truss, beam or plate reinforcements. In addition to
local modifications, a substructure assembly theory allows you to predict the
modal model for a structure that consists of an assembly of substructures.
Modification prediction allows you to evaluate:
- the effect of any number and type of connections between any number of
  substructures (only if installed)
- the dynamics of small scale models, built up from lumped
  mass-spring-dashpot elements
Such an analysis avoids time-consuming experimental trial and error
procedures of modifying prototypes or scale models of mechanical structures,
measuring and analyzing the dynamic behavior and evaluating the effects of
these modifications.
20.3.1
Mathematical background
The starting point for the structural modification and substructure theory is the
modal model described in section 15.1.
The first section of this theoretical background deals with the coupling and
modification of substructures using flexible coupling and general viscous
damping. It continues with the cases of rigid coupling and flexible coupling
with proportional damping.
Modal models for the assembly of substructures with flexible coupling and
viscous damping
Modal models of substructures
Consider two structures, 1 and 2. They obey the following equations of motion
in the Laplace domain :
   s²[M1]{x1} + s[C1]{x1} + [K1]{x1} = {f1}          Eqn 20-25

   s²[M2]{x2} + s[C2]{x2} + [K2]{x2} = {f2}          Eqn 20-26
The matrices Mi , Ci and Ki are the mass, damping and stiffness matrices of the
structure 1 or 2 corresponding to the subscript i. General viscous damping is
allowed. The system matrices are symmetric. The displacement vectors are
{x1 } and {x2 }, and the force vectors {f1 } and {f2 } respectively.
The modal parameters for substructure 1 will first be derived in a general way.
For substructure 2 the same method can be used but will not be entirely
repeated.
The transformation to decouple the equations of motion can be found by
adding a set of dummy equations (Duncan's method):

   s[M1]{x1} − s[M1]{x1} = {0}          Eqn 20-27

Combining this dummy equation with equation 20-25 gives the first-order form

   s[A1]{y1} + [B1]{y1} = {p1}          Eqn 20-28

where

   [A1] = | 0    M1 |      [B1] = | −M1   0  |
          | M1   C1 |             |  0    K1 |

   {y1} = | s{x1} |        {p1} = | {0}  |          Eqn 20-29
          |  {x1} |               | {f1} |

The eigenvalue problem associated with equation 20-28 yields an eigenvector
matrix V1 which diagonalizes both system matrices,

   [V1]^t [A1] [V1] = [a1]   ,   [V1]^t [B1] [V1] = [b1]          Eqn 20-30

and which defines a transformation to modal coordinates {q1}:

   {y1} = [V1]{q1}          Eqn 20-31

Using expressions 20-29 and 20-30 in the equation of motion 20-28, after
premultiplication with the transpose of V1 and substitution with expression
20-31, one obtains the equations of motion in modal coordinates for
substructure 1:

   s[a1]{q1} + [b1]{q1} = [V1]^t{p1}          Eqn 20-32
It can be seen that the equations of motion in modal space are uncoupled.
The same procedure can be repeated for substructure 2, yielding a diagonal
eigenvalue matrix Λ2 and an eigenvector matrix V2. The eigenvector matrix V2
defines a transformation to modal coordinates {q2}. The equations of motion
for substructure 2 in modal space are:

   s[a2]{q2} + [b2]{q2} = [V2]^t{p2}          Eqn 20-33
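Duncan's state-space formulation can be sketched numerically (assuming NumPy and SciPy are available; the matrix values are invented for the demonstration). The eigenvectors of the symmetric pencil (A1, B1) diagonalize both system matrices, which is exactly why the modal equations of motion decouple as in equation 20-32:

```python
import numpy as np
from scipy.linalg import eig

# Substructure 1: a generally (non-proportionally) damped 2-DOF system
# (invented values).
M1 = np.diag([1.0, 0.5])
C1 = np.array([[4.0, -1.0], [-1.0, 2.0]])
K1 = np.array([[3.0e4, -1.0e4], [-1.0e4, 1.0e4]])

n = len(M1); Z = np.zeros((n, n))

# Duncan form: s*A1*{y1} + B1*{y1} = {p1}, with {y1} = {s*x1; x1}
A1 = np.block([[Z, M1], [M1, C1]])
B1 = np.block([[-M1, Z], [Z, K1]])

# (lambda*A1 + B1){y} = {0}  ->  generalized eigenproblem -B1 y = lambda A1 y
lam, V1 = eig(-B1, A1)

# Both symmetric system matrices are diagonalized by V1 (transpose, not
# conjugate-transpose, orthogonality), so the modal equations decouple.
a1 = V1.T @ A1 @ V1
b1 = V1.T @ B1 @ V1
off = lambda X: np.max(np.abs(X - np.diag(np.diag(X)))) / np.max(np.abs(X))
print(off(a1), off(b1))    # both ~ 0
```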
Substructure assembly
The system matrices of both substructures can be merged to give a structure
composed of two dynamically independent substructures. For this assembled
structure one can easily derive the modal parameters since they are the same as
those of the two substructures but gathered in one eigenvalue matrix and one
eigenvector matrix.
More explicitly, this substructuring yields the following system matrices:

   [A] = | A1  0  |      [B] = | B1  0  |
         | 0   A2 |            | 0   B2 |

and

   {y} = | {y1} |        {p} = | {p1} |          Eqn 20-34
         | {y2} |              | {p2} |
   s[A]{y} + [B]{y} = {p}          Eqn 20-35
It can be verified that the matrices of equation 20-35 are diagonalized by the
eigenvector matrix V composed as follows :
   [V] = | V1  0  |          Eqn 20-36
         | 0   V2 |

   [Λ] = | Λ1  0  |          Eqn 20-37
         | 0   Λ2 |

   [a] = | a1  0  |   ,   [b] = | b1  0  |          Eqn 20-38
         | 0   a2 |             | 0   b2 |

where

   {q} = | {q1} |
         | {q2} |
An expression of the type of equation 20-33, using the eigenvector and
eigenvalue matrices, yields:

   s[a]{q} + [b]{q} = [V]^t{p}          Eqn 20-39
A close look at the matrix of eigenvectors V shows that the two substructures 1
and 2 are still dynamically independent. Indeed, any force at any point of one
substructure will not induce any motion at any point of the other substructure.
The two substructures can now be connected with flexible connections
modelled as springs and dampers. With the connection matrices Kc and Cc,
equation 20-35 becomes:
   s([A] + [Ac]){y} + ([B] + [Bc]){y} = {p}          Eqn 20-40

where, with the state vector ordered as {y} = {s x1; x1; s x2; x2},

   [Ac] = | 0   0    0   0   |      [Bc] = | 0   0    0   0   |
          | 0   Cc   0  −Cc  |             | 0   Kc   0  −Kc  |
          | 0   0    0   0   |             | 0   0    0   0   |
          | 0  −Cc   0   Cc  |             | 0  −Kc   0   Kc  |          Eqn 20-41
These changes can be brought together in system matrices for the
modifications:

   [ΔA] = |  0    ΔM1   0    0   |      [ΔB] = | −ΔM1   0     0     0  |
          | ΔM1   ΔC1   0    0   |             |  0    ΔK1    0     0  |
          |  0     0    0   ΔM2  |             |  0     0   −ΔM2    0  |
          |  0     0   ΔM2  ΔC2  |             |  0     0     0    ΔK2 |          Eqn 20-42
It is clear from the matrices of the previous expression that the modifications
do not couple the substructures; they only modify each substructure separately.
When the modifications of expression 20-42 are added to the system equation
of the connected structure (Eqn. 20-40), one obtains the final equation in
physical coordinates
   s([A] + [Ac] + [ΔA]){y} + ([B] + [Bc] + [ΔB]){y} = {p}          Eqn 20-43
Part IV
333
Chapter 20 Design
Transforming to modal coordinates with {y} = [V]{q} and premultiplying by
[V]^t gives the equation of motion of the modified structure in the modal space
of the original substructures:

   s[Am]{q} + [Bm]{q} = [V]^t{p}          Eqn 20-44

where

   [Am] = [a] + [V]^t[Ac][V] + [V]^t[ΔA][V]
   [Bm] = [b] + [V]^t[Bc][V] + [V]^t[ΔB][V]
The matrices Am and Bm for the modified structure can again be diagonalized
by a general eigenvalue decomposition. When the new eigenvalues and
eigenvectors are represented by Λ' and W, one has:

   [W]^t[Am][W] = [a']          Eqn 20-45
   [W]^t[Bm][W] = [b']          Eqn 20-46

Consider then the transformation:

   {q} = [W]{q'}          Eqn 20-47

The transformation matrices V and W can be combined in one matrix V':

   [V'] = [V][W]          Eqn 20-48

which then gives the following transformation equation:

   {y} = [V']{q'}          Eqn 20-49

The natural frequencies and the damping factors can be found as the imaginary
and real parts respectively of the eigenvalues in Λ'. The mode shapes are the
columns of the matrix V'.
Zero damping
In the case of no damping, [C] = [0], the following eigenvalue problem is to be
solved, with eigenvalues λ_r² and eigenvectors {ψ}_r:

   (s²[M] + [K]){X} = {0}          Eqn 20-51

This system has purely imaginary poles, occurring in complex conjugate pairs:

   λ_1 = jω_1 , ... , λ_N = jω_N          Eqn 20-52

   λ*_1 = −jω_1 , ... , λ*_N = −jω_N          Eqn 20-53

The modal vectors are real, and are called normal modes (phase 0° or 180°).
The equation of motion can be diagonalized, based on the orthogonality of the
modal vectors. Transformation to modal coordinates leads to an equation of
motion with diagonal system matrices, being the modal mass and modal
stiffness matrices:

   [Ψ] = [{ψ_1} ... {ψ_N}]          Eqn 20-54

   [Ψ]^t[M][Ψ] = [m]   ,   [Ψ]^t[K][Ψ] = [k]          Eqn 20-55

   {X} = [Ψ]{q}          Eqn 20-56

   (s²[m] + [k]){q} = {0}          Eqn 20-57

   ω_r² = k_r / m_r          Eqn 20-58
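A short numerical illustration of the undamped case (assuming NumPy and SciPy; the 3-DOF values are invented): the mass-normalized modes diagonalize [M] and [K] into the modal mass and stiffness matrices of equation 20-55, and the poles are purely imaginary.

```python
import numpy as np
from scipy.linalg import eigh

# Undamped 3-DOF chain: [C] = [0] gives real normal modes.
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[4.0, -2.0, 0.0],
              [-2.0, 3.0, -1.0],
              [0.0, -1.0, 1.0]]) * 1.0e4

w2, Psi = eigh(K, M)          # eigenvalues omega_r^2, real mode shapes {psi}_r

# Orthogonality: modal mass and stiffness matrices are diagonal.
m_modal = Psi.T @ M @ Psi     # = identity for mass-normalized shapes
k_modal = Psi.T @ K @ Psi     # = diag(omega_r^2)

# Purely imaginary poles in complex conjugate pairs:
poles = 1j * np.sqrt(w2)
print(np.allclose(m_modal, np.eye(3), atol=1e-8))   # True
print(np.allclose(k_modal, np.diag(w2), atol=1e-5)) # True
```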
Proportional damping
In the case of proportional damping, the damping system matrix is a linear
combination of the mass system matrix and the stiffness system matrix:

   [C] = α[M] + β[K]          Eqn 20-59

The undamped modal vectors then also diagonalize the damping matrix:

   [Ψ]^t[C][Ψ] = [c] = α[m] + β[k]          Eqn 20-60

   λ_r , λ*_r = −ζ_r ω_r ± j ω_r √(1 − ζ_r²)          Eqn 20-61

The complex poles are solved from the real eigenvalues (ω_n) and the damping
factors (α, β). When more than two original modes are taken into account (in
practical cases, this is always the case), the damping factors can be solved in a
least squares way from the modal masses, modal stiffnesses and modal
damping factors.
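The proportionally damped poles of equation 20-61 can be cross-checked against a full state-space solution. A sketch (NumPy and SciPy assumed; the system values and the α, β coefficients are invented):

```python
import numpy as np
from scipy.linalg import eigh, eig

M = np.diag([1.0, 2.0])
K = np.array([[3.0e4, -1.0e4], [-1.0e4, 2.0e4]])
alpha, beta = 2.0, 1.0e-5
C = alpha * M + beta * K                 # proportional damping

# The undamped modes diagonalize [C] as well, so per mode
# c_r = alpha*m_r + beta*k_r; with mass-normalized shapes m_r = 1, k_r = w_r^2.
w2, Psi = eigh(K, M)
om = np.sqrt(w2)
zeta = (alpha + beta * w2) / (2.0 * om)  # modal damping ratios

# Upper-half-plane poles: lambda_r = -zeta_r*w_r + j*w_r*sqrt(1 - zeta_r^2)
poles = -zeta * om + 1j * om * np.sqrt(1.0 - zeta**2)

# Cross-check against the full state-space (Duncan) solution
n = len(M); Z = np.zeros((n, n))
A = np.block([[Z, M], [M, C]])
B = np.block([[-M, Z], [Z, K]])
lam = eig(-B, A, right=False)
print(np.sort_complex(lam[lam.imag > 0]))
print(np.sort_complex(poles))            # identical poles
```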
Modal synthesis
Only mass and stiffness coupling modifications, [ΔM] and [ΔK], and not
damping coupling modifications, can be applied. The equations of motion of
the coupled system are

   (s²([M] + [ΔM]) + [K] + [ΔK]){X} = {0}          Eqn 20-62

In modal space:

   (s²([m] + [Ψ]^t[ΔM][Ψ]) + [k] + [Ψ]^t[ΔK][Ψ]){q} = {0}          Eqn 20-63

Solving this eigenvalue problem in modal space yields new eigenvectors [Q_r],
from which the modified mode shapes follow by the modal transformation:

   [Ψ]_m = [Ψ][Q_r]          Eqn 20-64
Rigid coupling
The above theory relates to flexible coupling, but it is also possible to place
constraints on DOFs connecting substructures to create rigid coupling between
them, or to constrain a single DOF, thus fixing it rigidly to `ground'. In this
case the restrained DOFs will have zero displacement.
Constraints on the physical degrees of freedom,

   [R]{Y} = {0}          Eqn 20-66

together with the modal transformation

   {Y} = [Ψ]{q}          Eqn 20-67

yield constraints in modal space:

   [R][Ψ]{q} = [T]{q} = {0}          Eqn 20-68

Partitioning the modal coordinates into a dependent set {q_d} and an
independent set {q_i} gives

   [[T_d][T_i]] | {q_d} | = {0}          Eqn 20-69
                | {q_i} |

   {q_d} = −[T_d]⁻¹[T_i]{q_i}          Eqn 20-70

   | {q_d} | = | −[T_d]⁻¹[T_i] | {q_i} = [T']{q_i}          Eqn 20-71
   | {q_i} |   |      [I]      |
When the eigenvalues and the eigenvectors in the independent modal
coordinates q_i are solved, the dependent modal coordinates q_d of the
eigenvectors can be calculated. In a last step, the mode shapes in physical
coordinates are found by the inverse modal transformation.
Constraints can be defined in the same way as other structural modifications.
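The constraint elimination of equations 20-69 to 20-71 is a few lines of linear algebra. A sketch (NumPy assumed; the constraint itself is an invented example):

```python
import numpy as np

# One scalar constraint on three modal coordinates: q1 + 2*q2 - q3 = 0.
T = np.array([[1.0, 2.0, -1.0]])
nd = T.shape[0]                       # number of dependent coordinates

Td, Ti = T[:, :nd], T[:, nd:]         # partition [T] = [[Td][Ti]]
# qd = -Td^{-1} Ti qi, stacked with the identity into a reduction matrix T'
Tred = np.vstack([-np.linalg.solve(Td, Ti), np.eye(T.shape[1] - nd)])

# Any {q} = T' {qi} satisfies the constraint exactly:
qi = np.array([0.7, -1.3])
q = Tred @ qi
print(T @ q)                          # ~ [0.]
```

Constraining a DOF to ground corresponds to a constraint row that selects a single coordinate.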
20.3.2

20.3.3
A truss element between two nodes is translated into elementary mass and
stiffness modifications. The longitudinal stiffness is related to a 6 by 6 stiffness
matrix for 6 Degrees Of Freedom (3 for each node). This matrix is obtained by
projecting the longitudinal stiffness along each of the 3 coordinate axes.
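That projection can be written down directly. A sketch of the 6 by 6 truss stiffness matrix (NumPy assumed; `truss_stiffness` and its arguments are illustrative names, not from the text):

```python
import numpy as np

def truss_stiffness(node_a, node_b, EA):
    """6x6 stiffness of a truss (axial-only) element between two 3D nodes,
    obtained by projecting the longitudinal stiffness EA/L onto the global
    axes via the direction cosines."""
    d = np.asarray(node_b, float) - np.asarray(node_a, float)
    L = np.linalg.norm(d)
    n = d / L                         # direction cosines
    kaa = (EA / L) * np.outer(n, n)   # 3x3 projection of the axial stiffness
    return np.block([[kaa, -kaa], [-kaa, kaa]])

K = truss_stiffness([0.0, 0.0, 0.0], [1.0, 1.0, 0.0], EA=2.1e7)
print(K.shape)                        # (6, 6)
```

A rigid-body translation of both nodes produces no force, which is a quick sanity check on the matrix.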
[Figure: beam element with its two end nodes, the reference node (r), and the
moments of inertia Ib and It]
The reference node, together with the two end nodes, defines the so-called
reference plane. The moments of inertia for bending are defined in two
directions:
Ib for bending in the reference plane
Ip for bending in a plane perpendicular to the reference plane
The 2 end nodes have six Degrees Of Freedom each: 3 translations and 3
rotations. A beam element can therefore transmit six forces to another beam
element: 3 translational forces and 3 moments. For end nodes that are not
connected to another beam only the translational forces can be transmitted, as
for example in the case of a stand-alone beam. In the same way, beams that are
positioned on a straight line (collinear beams) will not be subjected to torsion.
The number of divisions along the first side, between c1 and c2 (a)
The number of divisions along the second side, between c2 and c3 (b)
[Figure: plate with corner nodes c1 to c4, mesh divisions a and b along its
sides, and thickness t]
When a plate is defined with a and b divisions along its two sides, a mesh of (a
x b) rectangles is created as shown in the diagram. As the corner nodes already
exist this means that ((a+1).(b+1) - 4) new nodes are generated.
If there are connection nodes defined then the mesh point situated closest to a
connection node is replaced by that node.
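The node generation rule can be sketched as follows (NumPy assumed; the function name and corner layout are illustrative):

```python
import numpy as np

def plate_mesh_nodes(c1, c2, c3, a, b):
    """Nodes of an (a x b) rectangular mesh spanned by corners c1, c2, c3
    (c4 follows from the parallelogram).  The four corners already exist,
    so (a+1)*(b+1) - 4 nodes are newly generated."""
    c1, c2, c3 = (np.asarray(c, float) for c in (c1, c2, c3))
    u = (c2 - c1) / a                 # step along the side c1-c2
    v = (c3 - c2) / b                 # step along the side c2-c3
    nodes = [c1 + i * u + j * v for i in range(a + 1) for j in range(b + 1)]
    return np.array(nodes)

nodes = plate_mesh_nodes([0, 0, 0], [4, 0, 0], [4, 2, 0], a=4, b=2)
print(len(nodes), len(nodes) - 4)     # 15 mesh points, 11 new nodes
```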
The plate so defined should comply with the following conditions (1):
- the mesh elements should not deviate too much from a rectangular form,
  i.e. each corner angle should be approximately 90°
(1) The calculation of the mass and stiffness matrices of a plate membrane described here is based on the plate
theory of Mindlin.
Each of the corner nodes of the mesh elements has 6 Degrees Of Freedom - 3
translations and 3 rotations - and so can transmit six forces to another mesh
element. This is also the case between elements of different plate membranes,
as long as they are connected either at a corner or at a common connection node.
The parameters m, k and c of this SDOF system are designed such that the
motion of the coupling point in the direction of this absorber is decreased
(damped) as much as possible for a certain frequency, typically at resonance.
[Figure: tuned absorber mass m attached through spring k and damper c;
attachment-point motion x_a·e^(jωt), relative motion x_r·e^(jωt)]

If the motion of the coupling point in the direction of the absorber is designated
by x_a and the frequency to be damped by f (= ω/2π), then the following
formulae apply for the equations of motion of m (x_r is the relative
displacement between the absorber's mass and the attachment point).
   −mω²(x_r + x_a) + jωc·x_r + k·x_r = 0          Eqn 20-72

   x_r = mω² x_a / (−mω² + jωc + k)          Eqn 20-73

The force transmitted to the attachment point is

   F = (k + jωc) x_r          Eqn 20-74

   F = (k + jωc) mω² x_a / (−mω² + jωc + k)          Eqn 20-75

so that the absorber acts on the structure as an equivalent mass

   m_eq = (k + jωc) m / (−mω² + jωc + k)          Eqn 20-76

   F = m_eq ω² x_a          Eqn 20-77
It can be shown that if no damping is used (c = 0), the mass and stiffness of the
absorber can be designed such that the vibration of the attachment point is
eliminated entirely (x_a = 0). This happens if the natural frequency of the
absorber equals the forcing frequency ω.
The most practical application of a tuned absorber is the reduction of vibration
levels at a resonance frequency ω_n. In this case, the absorber's own natural
frequency for optimal tuning is

   ω_na = √(k/m) = ω_n / (1 + μ)          Eqn 20-78
where μ is the ratio between the absorber's mass and the "equivalent" mass of
the system at resonance:

   μ = m / m_eq          Eqn 20-79

The optimal damping ratio is

   ζ = c / (2√(km)) = √( 3μ / (8(1 + μ)³) )          Eqn 20-80
From equations 20-78, 20-79 and 20-80 the physical parameters m, c and k of
the attached absorber can be computed if the following values are known:
m_eq   the equivalent mass (see further)
ω_n    the resonance frequency at which the vibration levels are to be reduced
The equivalent mass of the system for a certain mode can be obtained as
follows:

   m_eq = 1 / (2jω_d · V_i²)          Eqn 20-81

where
V_i   is the scaled mode shape coefficient of the mode to be tuned at the
      attachment point
ω_d   is the damped natural frequency of that mode
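Equations 20-78 to 20-80 translate directly into a small design routine. A sketch (NumPy assumed; the function name and input values are invented):

```python
import numpy as np

def design_absorber(m_eq, f_n, mu):
    """Absorber parameters from Eqns 20-78..20-80:
    mass ratio mu = m/m_eq, optimal tuning w_na = w_n/(1+mu),
    optimal damping ratio zeta = sqrt(3*mu / (8*(1+mu)**3))."""
    m = mu * m_eq                               # Eqn 20-79
    w_na = 2.0 * np.pi * f_n / (1.0 + mu)       # Eqn 20-78
    k = m * w_na**2                             # from w_na = sqrt(k/m)
    zeta = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu)**3))  # Eqn 20-80
    c = zeta * 2.0 * np.sqrt(k * m)
    return m, c, k

m, c, k = design_absorber(m_eq=12.0, f_n=50.0, mu=0.1)
print(m, c, k)
```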
20.3.3.9 Constraints
Physical constraints can be defined between separate DOFs or between one
DOF and itself.
Defining a constraint between two separate DOFs applies a rigid coupling
between them. Defining a constraint between a DOF and itself effectively fixes
it to `ground'.
20.3.4
Numerical problems
The eigenvalue problem mentioned above, which is to be solved for the
modified system, can be subject to numerical problems. These can arise from
two sources.
20.3.5
Units of scaling
In order to obtain correct modification prediction results, it is absolutely
necessary to maintain a correct scaling of the original modal model using a
consistent unit set.
The scaled mode shapes of the original structure have a physical dimension
related to the measurement data from which they were extracted by modal
parameter estimation techniques. Since this modal model is a valid description
for the relation between input forces and response displacements, the applied
modifications should be defined in a unit set which is consistent for these
quantities. The same rule applies to the interpretation of the resulting modal
model.
Erroneous results are bound to occur when the original mode shape vectors are
not scaled correctly. This might arise because of an incorrect definition of the
reference point for the data (wrong driving point residue), not using the correct
transducer sensitivity or calibration factors for the experimental FRFs (force as
well as response transducers), or the use of an inconsistent unit set during the
modal test or analysis phase. These errors may cause an entirely wrong
transformation of the applied physical modifications to the modal space, and a
small mass modification, for example, may grow out of proportion because of
this bad scaling.
[Figure: main plate meshed into elements 1-4 with corner nodes 1-4]

[Figure: beam cross-section with torsional moment of inertia It, bending
moment of inertia Ib, and G = shear modulus]
From the geometrical properties of the beam the user can calculate the
cross-sectional area and the different moments of inertia. Tables listing the
characteristics of various beam types can be found in standard references.
Figure 20-1 Stiffening rib orientation and local co-ordinate system (axes 1, 2
and 3, between nodes n1 and n2)
3 Construction of the element matrices for each beam element.
An element stiffness (full) and mass (diagonal) matrix can be built from the
relations between the 6 forces and 6 Degrees Of Freedom at each end node
(u1, v1, w1 and the rotations θ1, ψ1, φ1 for node 1; u2, v2, w2 and θ2, ψ2, φ2
for node 2). These relations partition each element matrix into translation
submatrices [T] and rotation submatrices [R].
4 Assembly of the element matrices into matrices for the complete beam, by
overlapping the partitions of the nodes shared by consecutive elements.

Figure 20-3 Assembly of element matrices
5 Perform a static condensation (see below) of the rotational DOFs.
6 Add the condensed matrices to the system matrices and continue the
calculation procedure as for other (lumped) modifications.
Remarks:
- It is important to keep in mind that the basic assumption in beam-bending
  analysis is that a plane section originally normal to the neutral axis remains
  plane during deformation. This assumption is true provided that the ratio of
  beam length to beam height is greater than 2. Furthermore, shear effects do
  not contribute to the elements of the stiffness matrix.
Static condensation
Static condensation in a dynamic analysis is based upon the assumption that
the mass at some Degrees Of Freedom can be neglected without a significant
loss of accuracy of the dynamic model in the frequency range of interest. More
explicitly, for the beam elements in the application of interest, consider the
rotational Degrees Of Freedom to be without mass. The assembled mass and
stiffness matrices of the entire beam can then be partitioned as follows,
   [K] = | K_TT  K_TR |      [M] = | M_TT  [0] |          Eqn 20-82
         | K_RT  K_RR |            | [0]   [0] |

where the subscripts T and R denote the translational and the (massless)
rotational Degrees Of Freedom respectively.
The modal parameters describing the dynamic behavior of this structure are
then obtained by solving the following eigenvalue problem,

   | K_TT  K_TR | {V_T}  =  ω² | M_TT  [0] | {V_T}          Eqn 20-83
   | K_RT  K_RR | {V_R}        | [0]   [0] | {V_R}

From the bottom half of equation 20-83 a relation between the translational
and the rotational DOFs is derived,

   [K_RT]{V_T} + [K_RR]{V_R} = {0}          Eqn 20-84

which can be solved to express the rotational DOFs in terms of the
translational ones,

   {V_R} = −[K_RR]⁻¹[K_RT]{V_T}          Eqn 20-85

Substituting this relation in the top half of equation 20-83 yields the reduced
eigenvalue problem

   [K_T]{V_T} = ω²[M_TT]{V_T}          Eqn 20-86

with

   [K_T] = [K_TT] − [K_TR][K_RR]⁻¹[K_RT]          Eqn 20-87
The matrices [K_T] and [M_TT] of equation 20-86 are used to dynamically
model the beam structure. The model will only be valid in the frequency range
where the mass effects of the rotational DOFs are negligible. Mass effects only
contribute significantly to the dynamic behavior around and above those
resonances where they are capable of storing a considerable amount of kinetic
energy.
Note that [K_T] as expressed in equation 20-87 can only be computed if [K_RR]
is non-singular. The stiffness matrix is singular if rigid body motion is possible.
The rigid body mode of a beam along its longitudinal axis is not naturally
eliminated by constraining its three translational DOFs, so causing in general a
first order singularity. With such configurations it will not be possible to store
torsional deformation energy in the beam; therefore the corresponding
off-diagonal elements of the assembled stiffness matrix can be neglected and
the diagonal elements made relatively small. In this way the matrix becomes
invertible and the predicted dynamic behavior will reflect the inability to store
torsional deformation energy in the beam. This operation will, however, not be
necessary when the beam is two or three dimensional, as in such cases rigid
body motion through rotation around one of the axes is no longer possible.
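The condensation of equations 20-82 to 20-87 (often called Guyan reduction) is compact in code. A sketch with invented partitioned matrices (NumPy and SciPy assumed):

```python
import numpy as np
from scipy.linalg import eigh

# Partitioned matrices: translational (T) and massless rotational (R) DOFs.
# Values are invented for the demonstration.
KTT = np.array([[2.0, -1.0], [-1.0, 2.0]]) * 1.0e4
KTR = np.array([[1.0, 0.0], [0.0, -1.0]]) * 1.0e3
KRR = np.array([[5.0, 1.0], [1.0, 4.0]]) * 1.0e2
MTT = np.diag([1.0, 1.5])

# Eqn 20-87: statically condensed stiffness (K_RT = K_TR^t by symmetry)
KT = KTT - KTR @ np.linalg.solve(KRR, KTR.T)

# Eqn 20-86: reduced eigenproblem in the translational DOFs only
w2, VT = eigh(KT, MTT)

# Eqn 20-85: recover the rotational part of each mode shape
VR = -np.linalg.solve(KRR, KTR.T @ VT)
print(np.sqrt(w2) / (2 * np.pi))    # natural frequencies in Hz
```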
20.4
Forced response
Experimental modal analysis results in a dynamic model described by the
modal parameters: damped natural frequency, exponential decay rate and
scaled mode shapes (residues). These modal parameters provide valuable
insight into the dynamic behavior of a structure. Problem areas can be
identified by animating the mode shapes, and the relative importance of the
mode shapes can be assessed by comparing their amplitudes.
In most cases, however, the designer is less interested in the dynamic
characteristics themselves than in knowing how the structure is going to
behave under normal operating conditions. The important point to determine is
the response of the structure to the forces it will experience in operation. The
modes of vibration which seem to be the most important in the modal model
may well not dominate the response if conditions are such that they are not
excited.
The Forced response functions enable you to answer these questions by
determining the response of the modal model to known force spectra.
20.4.1
The response spectrum at a DOF is obtained by summing the contributions of
the force spectra at all No input DOFs:

   {X(ω)} = [H(ω)]{F(ω)}          Eqn 20-88

   X_i(ω) = Σ_{j=1}^{No} H_ij(ω) F_j(ω)          Eqn 20-89
This means that the response at DOF i can be written as a linear combination
of the applied forces, each weighted by the corresponding FRF between input
DOF j and output DOF i. These frequency dependent weighting factors
describe the dynamic flexibility between two degrees of freedom i and j of a
mechanical structure.
When the modal model for that structure is available, e.g. from modal test data
or finite element calculations, the FRF can be modelled as given by
   H_ij(ω) = Σ_{k=1}^{2N} r_ijk / (jω − λ_k)          Eqn 20-90

   X_i(ω) = Σ_{j=1}^{No} Σ_{k=1}^{2N} r_ijk F_j(ω) / (jω − λ_k)          Eqn 20-91
Even if not all the residues are available, the Maxwell-Betti reciprocity
principle can be used to calculate the required values. Equation 20-4 allows the
residue r_ick to be derived for any reference DOF c when the residues for DOFs
i and c are available for an arbitrary reference j, on condition that the driving
point residue r_jjk is also available. The driving point residue is also required if
the mode shapes are to be correctly scaled.
Equation 20-91 represents the response at all DOFs to all forces with a
contribution from all modes. The contribution of each mode is given by -
   f_k(ω) = 1/(jω − λ_k) Σ_{j=1}^{No} v_jk F_j(ω) = p_k(ω)/(jω − λ_k)
                                                   for mode k = 1 to N

   f_k(ω) = 1/(jω − λ*_k) Σ_{j=1}^{No} v*_jk F_j(ω)
                           for the complex conjugate modes, k = N+1 to 2N          Eqn 20-92

The response for each DOF, taking into account the contribution of each mode,
is then given by

   X_i(ω) = Σ_{k=1}^{N} v_ik f_k(ω) + Σ_{k=N+1}^{2N} v*_ik f_k(ω)          Eqn 20-93
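Equations 20-90 to 20-93 can be evaluated directly from a modal model. A sketch (NumPy assumed; the poles and scaled mode shapes are invented values):

```python
import numpy as np

# Modal model: upper-half-plane poles lam_k and scaled shapes v[i, k] for
# N = 2 modes at 2 DOFs; the conjugate modes are added explicitly below.
lam = np.array([-2.0 + 150.0j, -5.0 + 420.0j])
v = np.array([[0.3 + 0.02j, 0.25 - 0.01j],
              [0.4 - 0.03j, -0.35 + 0.02j]])

def response(omega, F):
    """X_i(w) = sum_j sum_k r_ijk F_j / (jw - lam_k), with r_ijk = v_ik*v_jk
    and the complex conjugate modes included."""
    X = np.zeros(v.shape[0], complex)
    for k in range(v.shape[1]):
        p = v[:, k] @ F                     # modal participation p_k
        pc = np.conj(v[:, k]) @ F           # participation of conjugate mode
        X += v[:, k] * p / (1j * omega - lam[k])
        X += np.conj(v[:, k]) * pc / (1j * omega - np.conj(lam[k]))
    return X

F = np.array([1.0, 0.0])                    # unit force at DOF 1 only
print(abs(response(150.0, F)))              # excitation near first resonance
print(abs(response(600.0, F)))              # far from resonance: much smaller
```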
Chapter 21
Geometry concepts
21.1

Figure 21-1 [Structure model built from nodes, surfaces and connections]
Note that the definition of nodes and meshes for acoustic measurements is
described in the "Acoustic" documentation.
21.2
Nodes
A node is defined by its location and its orientation.
Location
The location of a node in the 3D space is defined by a set of 3 real numbers
known as the coordinates. Coordinates are always defined relative to a
reference coordinate system.
The reference coordinates are normally shown along with the model in the
display window. The origin of the global coordinate system is the origin of the
3D space that contains the test structure, and the global symmetry of the
structure should be considered when defining this.
The reference coordinate system can be either Cartesian, cylindrical or
spherical.
Figure 21-2 Coordinate systems: right-handed Cartesian (x, y, z), cylindrical
(r, θ, z) and spherical (ρ, θ, φ), with example points (2, 45°, ...) in cylindrical
and (3, 45°, 55°) in spherical coordinates
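Converting cylindrical or spherical coordinates to Cartesian for display is straightforward; a sketch (NumPy assumed; the angle conventions shown are one common choice, and the example values are taken from the figure):

```python
import numpy as np

def cylindrical_to_cartesian(r, theta_deg, z):
    """(r, theta, z) with theta measured from the X axis, in degrees."""
    t = np.radians(theta_deg)
    return np.array([r * np.cos(t), r * np.sin(t), z])

def spherical_to_cartesian(rho, theta_deg, phi_deg):
    """(rho, theta, phi): theta from the X axis in the XY plane,
    phi measured down from the Z axis (polar angle), in degrees."""
    t, p = np.radians(theta_deg), np.radians(phi_deg)
    return rho * np.array([np.sin(p) * np.cos(t),
                           np.sin(p) * np.sin(t),
                           np.cos(p)])

print(cylindrical_to_cartesian(2.0, 45.0, 1.0))
print(spherical_to_cartesian(3.0, 45.0, 55.0))
```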
Orientation
Nodal orientation is defined using a Cartesian coordinate system. In many
applications the orientation of the node defines the measurement directions.
Figure 21-3 [Nodal coordinate system axes at a node]
The origin of the nodal coordinate system coincides with the node's location. If
the principal axes of the nodal coordinate system are not coincident with the
measurement directions, in either a positive or a negative sense, then the
difference must be defined with Euler angles.
Euler angles
Three Euler angles are used to define the orientation of one coordinate system
relative to a reference coordinate system with the same origin.
The first angle, θxy (Euler XY), is a rotation about the Zr axis of the reference
system (positive from the Xr axis to the Yr axis). This generates a first
intermediate system indicated by a single quote ' on the axis labels.
"xz
The second angle "xz (Euler XZ) is a rotation
about the y' axis of the first intermediate system.
(Positive from the x' axis to the z' axis). This gen
erates a second intermediate system, indicated
by two quotes " on the axis labels.
360
z'
z"
y'y''
x"
+
"xz
Yr
x'
x'
The third angle, θyz (Euler YZ), is a rotation about the x'' axis of the second
intermediate system (positive from the y'' axis to the z'' axis). This yields the
final axis orientation X, Y, Z.
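The three rotations compose into a single direction-cosine matrix; since each rotation is about an axis of the current intermediate system, they multiply in order. A sketch (NumPy assumed; the positive senses follow the text):

```python
import numpy as np

def euler_rotation(xy_deg, xz_deg, yz_deg):
    """Direction-cosine matrix from the three Euler angles:
    theta_xy about Zr (Xr toward Yr), theta_xz about y' (x' toward z'),
    theta_yz about x'' (y'' toward z'').  Intrinsic rotations compose by
    right-multiplication."""
    a, b, c = np.radians([xy_deg, xz_deg, yz_deg])
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(b), 0.0, -np.sin(b)],   # positive: x toward z
                   [0.0, 1.0, 0.0],
                   [np.sin(b), 0.0,  np.cos(b)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(c), -np.sin(c)],
                   [0.0, np.sin(c),  np.cos(c)]])
    return Rz @ Ry @ Rx

R = euler_rotation(90.0, 0.0, 0.0)
print(np.round(R @ [1.0, 0.0, 0.0], 6))   # Xr rotated onto Yr
```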
Degrees of freedom