Key parameters in dynamic systems often change during their life cycle due to repair and
replacement of parts or environmental changes. This paper presents a new approach to
account for these changes by updating the system models. Current iterative methods
developed to solve the model updating problem rely on minimisation techniques to find
the set of model parameters that yield the best match between experimental and analytical
responses. These minimisation procedures require considerable computation time, making
the existing techniques infeasible for some applications, such as an adaptive control
scheme that corrects the model parameters as the system changes. The proposed approach
uses frequency domain data and a neural network to estimate the updated parameters
quickly, yielding a model representative of the measured data. Besides control-related
applications, this may also be of use for manufacturing systems, where parameters change
during operation requiring repeated updates of the nominal model. Numerical simulations
and experimental results show that the neural network updating method (NNUM) has
good accuracy and generalisation properties, and it is therefore a suitable alternative for
the solution of the model updating problem of this class of systems.
© 1998 Academic Press Limited
1. INTRODUCTION
A mathematical model of a structure is verified by building a prototype of the structure,
testing it, and comparing experimental and analytical responses. Since these responses
often do not agree, the mathematical model must be modified until a good agreement is
achieved. A reliable mathematical model allows the engineer to investigate variations of
the original design and choose the best option. It also allows the design of
high-performance control laws based on an accurate model of the structure. The process
of modifying the mathematical model in order to achieve a good agreement with the
measured data is called model updating. Model updating differs from system identification
in the sense that updating requires a good initial model of the structure and can yield
mathematical models that are physically realisable, while system identification techniques
neither require nor yield a physically plausible model of the structure.
Model updating of mechanical systems has been an active field of research during the
past 15 years [1]. Among the many techniques developed to solve the model updating
problem, the ones that result in physically plausible models rely on minimisation
techniques to find the set of model parameters that yield the best match between
experimental and analytical responses. The computational time required by the
    X = [ H1(ω1)  H2(ω1)  . . .  HN(ω1)
          H1(ω2)  H2(ω2)  . . .  HN(ω2)
          H1(ω3)  H2(ω3)  . . .  HN(ω3) ] ,   and   Y = [ k1  k2  . . .  kN
                                                          c1  c2  . . .  cN ] ,
where H(ωi) is the frequency response function evaluated at ωi. The number of rows of
X is the number of neurons in the input layer (NI), while the number of rows of Y is the
number of neurons in the output layer (NO). The number of columns of X and Y is the
number Nt of sample pairs used to train the network, and the number of neurons in
the hidden layer, NH, is equal to or smaller than Nt.
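The training scheme described above can be sketched as follows. This is a minimal radial basis function network, not the paper's code: columns of X are inputs (sampled FRFs), columns of Y are the corresponding model parameters, one Gaussian hidden neuron is centred on each training sample (NH = Nt), and the output weights are found by a pseudo-inverse. The data, spread constant, and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
NI, NO, Nt = 3, 2, 30          # input, output, and training-sample counts (assumed)
X = rng.normal(size=(NI, Nt))  # stand-in for sampled FRF columns H_j(w_i)
Y = rng.normal(size=(NO, Nt))  # stand-in for model parameters (e.g. k_i, c_i)

spread = 0.5                   # Gaussian spread constant (assumed)
centres = X                    # one hidden neuron per training sample (NH = Nt)

def hidden(Xin):
    # squared Euclidean distance between each input column and each centre
    d2 = ((Xin[:, None, :] - centres[:, :, None]) ** 2).sum(axis=0)
    return np.exp(-d2 / (2.0 * spread ** 2))   # NH x N activation matrix

H = hidden(X)                  # Nt x Nt for the training set
W = Y @ np.linalg.pinv(H)      # output weights via the pseudo-inverse H†
Y_hat = W @ hidden(X)          # network output on the training inputs
```

After training, updated parameters for a new measured input column `x_new` are simply `W @ hidden(x_new)`, which is what makes evaluation cheap relative to training.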
The accuracy of a trained network is verified by solving the direct problem N times with
a new randomly generated set of model parameters. The responses corresponding to these
N sets of model parameters are input to the network and the corresponding updated
e_p^avg = (1/N) Σ_{k=1}^{N} |y_{p,k}^a − y_{p,k}^d| / |y_{p,k}^d| ,   p = 1, 2, . . . , NO,   (3)
where e_p^avg is the mean absolute error in the estimation of the pth parameter, y^a
is the actual output of the network, and y^d is the desired output. This measure
indicates how much error, on average, can be expected from the network when
estimating model parameters.
Maximum absolute error, defined as

e_p^max = max_k |y_{p,k}^a − y_{p,k}^d| / |y_{p,k}^d| ,   k = 1, 2, . . . , N,   p = 1, 2, . . . , NO.   (4)
Various issues have to be addressed when training a network. The most relevant to this
work are given below.
(1) The choice of training data. The mapping constructed by the network reflects the data
presented to it; therefore, it is essential that the training data represents the mapping
being approximated. The training data should represent the largest possible range
of input data. However, in view of equation (2), care should be taken to avoid
vectors that are very close to each other since this will result in a numerical
ill-conditioning of H †. The technique being proposed assumes that the engineer
knows the boundaries within which the model parameters vary. It was found during
this research that a random generation (within the assumed boundaries of variation)
of the model parameters used to train the network yields the best results. The
training data should reflect the probability distribution of the model parameters
being updated, if this information is known. This is often the case when material
or geometric properties, or environmental conditions, are known to vary about a
mean value with a certain variance.
The type of data used for training is also very important. It should be sensitive to
changes in the model parameters, and, if possible, be such that ambiguities are
avoided. For example, input data that lead to the case where two similar input
vectors result in very different outputs, or where two very different input vectors
result in similar outputs, should be avoided.
(2) Measuring the distance between two vectors proved to be one of the most important
issues in developing this technique. The solution to this problem involves the
normalisation and weighting of the different positions of the input vector, as
discussed later. The difficulty arises when the different rows of the input matrix
have very different sensitivities to changes in the model parameters, or when they
differ in orders of magnitude. This problem can be illustrated using the Euclidean
norm
d = ( Σ_{i=1}^{NI} (x_i − c_i)² )^{1/2} .   (5)

If

x = ( x̄1 ± s1 , x̄2 ± s2 )ᵀ   and   c = ( c1 , c2 )ᵀ ,

where x̄1 and x̄2 are the average values of the first and second rows of x, and s1 and
s2 are the expected variations of those rows, then the distance is given by

d = ( (x̄1 − c1 ± s1)² + (x̄2 − c2 ± s2)² )^{1/2} .   (6)

If x̄1 ≫ c1 or if s1 ≫ s2, the distance of the first row will dominate the distance measure.
This should be avoided since it is assumed that the whole input vector is relevant
to the updating process.
(3) The order of magnitude of the elements of the weighting matrix should be checked.
High orders of magnitude are a symptom of either improper distance measurement
or choice of input data. Either case can result in a matrix H such that its entries
are all numerically very similar to each other, causing the elements of W to have
high orders of magnitude in order to enhance the difference among the elements of
H. High orders of magnitude in W yield poor generalisation characteristics, since
the output of the hidden layer is greatly enhanced by W, and any small deviation
in H will greatly affect Y.
(4) The choice of output values also affects the performance of the network. The output
vector should be such that its elements have approximately the same order of
magnitude, to avoid numerical ill-conditioning of W. One way to avoid this problem
is to use the logarithm of the model parameters as the output of the network.
Another way is to use multiplying factors of the updated parameter, for example,
0.8k, 1.0k, and 1.2k.
Most of the remarks above seem obvious, but they are often not taken into account, leading
to failure when using a neural network as a function approximator or classifier.
3. PROPOSED METHOD
The proposed technique fills a gap in the existing literature by addressing the model
updating problem of structures having parameters that change with time. Most of the
existing model updating methods use minimisation techniques to search for the solution,
solving the direct problem many times during the procedure but not making any use of
these intermediary solutions. The key difference between the technique proposed here and
existing techniques is that the solutions of the direct problem, that are discarded by existing
techniques, are used by the neural network to build a map between frequency response
functions and model parameters. If the model parameters change later in time, this map
can be used to update these parameters without solving the direct problem again. The
updated parameters generate an updated model consistent with the experimental data and
that can be used to adapt the control scheme acting on a system, or as the new model of
a part as circumstances may require during a manufacturing process. Frequency response
functions are chosen over modal data because it is difficult to identify modal parameters
in structures with high modal density and high damping.
A radial basis function neural network is used to approximate the map between
frequency domain responses and model parameters. The goal is to use analytical frequency
responses to train the network and then use it to generate the parameters being updated
based on measured frequency responses. It is assumed that the parameters being updated
are expected to vary between known boundary values. Combinations of these parameters
are generated and the frequency responses of each combination are calculated and used
to train the network to reproduce the corresponding parameter combination [6]. Instead
of using frequency responses directly to train the network, first their real and imaginary
components are integrated over selected frequency intervals and then the values of these
integrals are used to train the network. This procedure is explained in Section 3.6, and
is done in order to avoid ambiguities arising from using Minkowski-like distance measures,
a necessary step when using RBFNN. Also, integrating the response minimises the effect
of low-mean noise present in the signal. The computational load necessary to train the
network is of order N³, where N is the number of parameter combinations used to train
the network, while the load required to generate the parameters being updated is of order
N. Most of the computational load required to update the parameters is transferred to
the training phase, so fast estimates can be generated by the trained network for online
updating.
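The input-feature construction described above can be sketched as follows: the real and imaginary components of an FRF are integrated over selected frequency intervals, and the integral values, not the raw FRF samples, form the network input. The one degree of freedom receptance, its parameters, and the intervals below are illustrative assumptions.

```python
import numpy as np

w = np.linspace(1.0, 100.0, 2000)          # frequency grid, rad/s
m, c, k = 1.0, 2.0, 900.0                  # assumed 1-DOF mass, damping, stiffness
H = 1.0 / (k - m * w**2 + 1j * c * w)      # receptance FRF (resonance near 30 rad/s)

def trapz(y, x):
    # trapezoidal rule, kept explicit for clarity
    return float(((y[1:] + y[:-1]) * np.diff(x)).sum() / 2.0)

intervals = [(20.0, 40.0), (40.0, 60.0)]   # assumed intervals around the resonance
features = []
for lo, hi in intervals:
    sel = (w >= lo) & (w <= hi)
    features.append(trapz(H.real[sel], w[sel]))   # integral of the real part
    features.append(trapz(H.imag[sel], w[sel]))   # integral of the imaginary part
x = np.array(features)                      # one input column for the network
```

Integrating smooths zero-mean noise and produces a short, stable feature vector, which is what makes the distance measure in the RBF network unambiguous.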
The issues of normalisation and choice of the spread constants, choice of input data used
to train the neural network, influence of the number of parameters being updated and
measured FRFs in the accuracy of the method, and the analysis of the computational load
involved in using this method are addressed next.
Figure 2. Amplitudes (m/N) of the FRFs of three one degree of freedom systems, sampled at ω1, ω2, and ω3 (samples R1–R3, C1–C3, and L1–L3).
analysing Fig. 2, where the FRFs of three different one degree of freedom systems are
sampled at ω1, ω2, and ω3. The vectors for each sampled FRF are
R = ( R1 , R2 , R3 )ᵀ ,   C = ( C1 , C2 , C3 )ᵀ ,   and   L = ( L1 , L2 , L3 )ᵀ .
Measuring the Euclidean distance between R and C and between L and C yields
where rm is the scaling factor for the mth row of the input matrix. The choice of the weight
factors depends on engineering knowledge of the system at hand. The response of a
cantilevered beam is used to illustrate typical results and the use of weight factors.
As shown in Fig. 4, the accuracy in updating the damping present in the second mode
is lower than that of the first mode. Choosing r1 = r2 = 0.5 as the weights of the
integrals of the real and imaginary components of the first mode results in better
accuracy for the damping of the second mode (Fig. 5). There is always a trade-off
when using weight factors: a higher accuracy for the second frequency range leads to a
lower accuracy for the first frequency range, and there is an optimum point beyond which
there is a generalised loss of accuracy in the updating process. This is seen for r1 = r2 = 0.2
(Fig. 6).
Figure 3. Frequency intervals (shaded areas) used to integrate the real and imaginary components of the
frequency response function of a cantilevered beam, showing nominal (——) and boundary (– – –) FRFs.
Figure 7. Diagram of the truck suspension modeled with three degrees of freedom.
Figure 8. (a) Generalisation characteristics of the network trained to update one parameter (m3 ) of the truck
suspension example, and (b) worst-case approximation. ——, Exact; – – –, updated.
Figure 9. Generalisation characteristics of the network trained to update (a) k2 and (b) k3 of the truck
suspension example, and (c) worst-case approximation. ——, Exact; – – –, updated.
Figure 10. Generalisation characteristics of the network trained to update four parameters of the truck
suspension example: (a) k2 , (b) k3 , (c) b2 , (d) b3 , (e) worst-case approximation. ——, Exact; – – –, updated.
Figure 11. Generalisation characteristics of the network trained with one FRF to update five parameters of
the truck suspension example. (a) m3 , (b) k2 , (c) k3 , (d) b2 , (e) b3 , (f) worst-case approximation. ——, Exact; – – –,
updated.
flops2 = 8 × NI × Nt × N,
where NI is the number of input neurons, Nt is the number of training samples, and N
is the number of samples presented to the network. In the training phase N = Nt .
Step 3 involves calculating the exponential of each term of the matrix of distances. The
computational load is given by
flops3 = NH × N,
where NH is the number of neurons in the hidden layer (in the present work NH = Nt ),
and N is the number of samples presented to the network. As mentioned above, in the
training phase N = Nt .
Step 4 involves calculating the pseudo-inverse of the matrix obtained in step three and
multiplying it by the target vectors. Considering the case when NH = Nt , the number of
operations necessary to invert a matrix is
flopsi = 2 × Nt³ ,
Figure 12. Generalisation characteristics of the network trained with two FRFs to update five parameters of
the truck suspension example. (a) m3 , (b) k2 , (c) k3 , (d) b2 , (e) b3 , (f) worst-case approximation. ——, Exact; – – –,
updated.
Figure 13. Generalisation characteristics of the network trained with three FRFs to update five parameters
of the truck suspension example. (a) m3 , (b) k2 , (c) k3 , (d) b2 , (e) b3 , (f) worst-case approximation. ——, Exact;
– – –, updated.
flopsm = 2 × NO × Nt² ,
Adding the results above, the number of operations necessary to train a network,
excluding the generation of the training set, is approximately given by
It has been established in this work that the updating process is fast. The computational
load involved in the updating process, assuming that one sample is input to the trained
network and the parameters are updated based on this input, is
flopse = Nt × (NI + 2NO + 1). (10)
For example, a network trained with 800 samples to solve the truck problem, which has
six inputs and five outputs, and that was verified with 5000 samples, takes about 1 billion
floating point operations to be trained, 250 billion operations to be verified, and 13 600
floating point operations to update the parameters after being trained. A personal
computer capable of performing 30 × 10⁶ operations per second can therefore train the
neural network in less than a minute and verify it in about 2 h, while the parameter
updating takes only a fraction of a second to compute.
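The quoted operation counts for the truck example can be checked with a few lines of arithmetic. This is not the paper's code; it simply evaluates equation (10) and the pseudo-inverse cost flopsi = 2 × Nt³ given earlier for the stated problem sizes.

```python
NI, NO, Nt = 6, 5, 800                  # inputs, outputs, training samples (from the text)

flops_update = Nt * (NI + 2 * NO + 1)   # equation (10): cost of one parameter update
assert flops_update == 13_600           # matches "13 600 floating point operations"

flops_pinv = 2 * Nt ** 3                # dominant training cost (pseudo-inverse)
assert flops_pinv == 1_024_000_000      # matches "about 1 billion" operations
```

The pseudo-inverse dominates training, which is why the cost is cubic in the number of training samples while a single update is only linear.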
4. NUMERICAL EXAMPLE
Recently, Yang and Brown [5] presented a technique for handling damping when
updating a model and used an interesting and challenging example to test their technique.
This system is a 15 degree of freedom lumped parameter system with high modal density
(Fig. 14). Their paper updates the parameters m11 through m15 and k1 and k6 for two
different damping configurations: low and high damping. The high damping case
corresponds to damping coefficients seven times higher than those used for the low
Figure 14. Diagram of the test case used by Yang and Brown [5].
Table 2
Yang's example, nominal values

Parameter       Nominal value
m1 to m10       0.01175 kg
m11 to m15      0.00059 kg
k1 to k18       175127 N/m
k19             192639 N/m
k20             157614 N/m
k21             210152 N/m
k22             140101 N/m
c1 to c12       17.5127 Ns/m
c13 to c22      1.75127 Ns/m
damping case. The response of a nominal system is calculated and used as the 'measured'
response. The parameters being updated are then modified, simulating a mismodelling of
the structure, and a new response is calculated. The goal is to use the 'measured' response
to update the mismodelled system. The parameters obtained by the updating procedure are
then compared to the ones used to generate the 'measured' response and a good
agreement is verified. The nominal parameters for this model are shown in Table 2.
Here, the mass parameters are assumed to vary 20% about the nominal values according
to a normal distribution centred at the nominal value with variance σm² = 9.2 × 10⁻⁶ N/m².
The stiffness parameters are assumed to vary 30% about the nominal values, also according
to a normal distribution centred at the nominal values, with variance σk² = 0.4 (kg/m)².
Following the suggestion given in the original paper, transfer functions between inputs at
co-ordinates 1 and 2 and the displacements of masses 1, 6, 11, 12, 13, 14, and 15 are used
to update the model. Figure 15 shows the frequency intervals selected for updating:
[82.1, 138.9], [189.4, 227.3], [233.6, 328.4], and [328.4, 524.1] rad/s. Note that some high
frequency modes are not considered in the updating procedure. The accuracy of the updated response
Figure 15. Frequency interval used to integrate the FRFs and train the network to update Yang’s example.
Lower figures show zoomed FRF. ——, Lower boundary; – – –, upper boundary.
Figure 16. Generalisation characteristics of the network trained with 528 sample pairs to update the low
damping variation of Yang’s example. (a)–(e) m11 to m15, respectively; (f) k1 , (g) k6 ; (h) worst-case approximation.
——, Exact; – – –, updated.
of the high-frequency modes depends on the accuracy of the structural model and
on the accuracy of the parameter estimate produced by the updating procedure.
The network’s generalisation characteristics for both damping cases are shown in
Figs 16 and 17, and the error measures for the two cases are summarized in Table 3.
Contrary to most existing techniques, the increase in damping does not pose any
extra difficulty to the proposed technique. The accuracy of the NNUM is
corroborated further by the estimation errors, mostly below 5%, as verified in Figs 16
and 17. Furthermore, these figures show that the responses of the high-frequency
modes not used in the updating procedure are accurate even for the worst-case
estimation.
Based on the figures presented in [5], the method proposed by Yang and Brown
needs about eight iterations to reach an error of 5%. The accuracy achieved by the
NNUM is similar to that achieved by Yang and Brown, with the advantage
of being able to update the model quickly many times.
Figure 17. Generalisation characteristics of the network trained with 528 sample pairs to update the high
damping variation of Yang’s example. (a)–(e) m11 to m15, respectively; (f) k1 , (g) k6 ; (h) worst-case approximation.
——, Exact; – – –, updated.
Table 3
Error measures for the two different variations of Yang's problem

                Nt = 528, low c                    Nt = 528, high c
Parameter   e_avg(%)  e_max(%)  3σ(%)        e_avg(%)  e_max(%)  3σ(%)
m11         1.25      5.74      3.45         1.38      4.85      3.55
m12         0.79      5.68      2.42         1.01      3.16      2.03
m13         1.07      6.07      3.56         1.45      4.54      3.73
m14         1.31      6.74      4.51         1.40      5.77      4.41
m15         1.95      13.25     7.79         1.31      5.06      3.98
k1          0.31      1.52      0.69         1.05      2.47      1.55
k6          0.56      1.94      1.39         1.16      3.12      2.08
5. EXPERIMENTAL EXAMPLE
The last example is a flexible frame designed to study and actively control the vibrations
of solar panels [11, 12]. The frame consists of rectangular and circular thin-walled
aluminum tubes connected by octagonal aluminum elements (Figs 18 and 19). Each tube
is pinned and bolted into the connection element to avoid slippage. The frame is attached
to a concrete block that serves as ground, provided the level of excitation is kept low
enough that the block's natural frequencies are not excited.
The structure is modeled with beam and single-point mass elements, resulting in a model
with 96 active degrees of freedom (Fig. 20). The assumptions in the model are that the
octagonal joints behave as single-point masses, implying that the joint is a rigid member
Figure 20. Finite element mesh for the flexible frame experiment.
Table 4
Physical parameters of the components of the flexible frame

Circular beams      Rectangular beams    Joints       Pin and bolt
E = 69 × 10⁹ Pa     E = 69 × 10⁹ Pa      m = 9.7 g    m ≈ 3 g
ρ = 2710 kg/m³      ρ = 2710 kg/m³
R = 3.2 mm          b = 2.86 cm
r = 2.54 mm         h = 3.2 mm
and that its rotary inertia is negligible, and that the beams meet at the centre of the
octagonal joints, resulting in longer beams in the finite element (FE) model. This causes
an increase of mass in the FE model and a reduction of its stiffness, since the coefficients
of the stiffness matrix are inversely proportional to the length of the beam. These
assumptions lead to a decrease of the natural frequencies of the FE model. The rotary
inertia and applied torque to the bolts are neglected in the model. The masses in the FE
model are determined by measurement. One joint was disassembled and its components
(pins, bolts, and octagonal connection element) weighed, and the average values obtained
for this joint are used for all remaining joints. Nodes N1, N6, N11, and N16 of the finite
element model have all degrees of freedom constrained, and elements E38 and E39 in
Fig. 20 are rectangular aluminum beams used in previous research to actively control the
frame’s vibration. Since the structure is very lightly damped, proportional damping is
assumed (C = aM + bK). The physical and geometric parameters of the frame are listed
in Table 4.
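The proportional damping assumption C = aM + bK can be written out directly. The matrices below are two degree of freedom stand-ins, not the frame's 96-DOF matrices, and the coefficients are the nominal a and b values assumed for the frame.

```python
import numpy as np

a, b = 1.0, 8.0e-6                                 # nominal damping coefficients
M = np.diag([2.0, 3.0])                            # stand-in mass matrix (kg)
K = np.array([[5.0e4, -2.0e4],
              [-2.0e4, 4.0e4]])                    # stand-in stiffness matrix (N/m)

C = a * M + b * K                                  # proportional (Rayleigh) damping
```

Because C is a linear combination of M and K, it is diagonalised by the same undamped mode shapes, which is what makes this assumption convenient for lightly damped structures.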
The objective of this experiment is to verify how the proposed method behaves when
presented with noisy, experimental data. Frequency responses are experimentally obtained
by exciting the frame at node N12 with an impact hammer and measuring accelerations
at nodes N3 and N20 with two accelerometers (Figs 18 and 20). The details of the
experimental set-up are presented in Appendix A. The measured and predicted undamped
FE responses of the accelerometer located at node N20 are shown in Fig. 21, along with
Figure 21. Experimental data obtained from the flexible frame experiment: (a) coherence and (b) frequency
responses between N20 and N12. ——, Measured; –·–·–·, initial FE model.
Figure 22. Frequency intervals used to integrate the FRF and train the network to update the model of the
flexible frame. Lower figures show zoomed frequency responses. ——, Lower boundary; – – –, upper boundary.
its coherence information. Data collected from the accelerometer located at node N20 is
used because its signal had the highest quality. As expected, the finite element model has
natural frequencies lower than those measured experimentally.
The FE model needs to be updated in order to predict the behaviour of the structure
accurately. The model is adjusted by updating the a and b damping coefficients and by
finding an equivalent length (L) for the beams to compensate for the added mass and
reduced stiffness. In addition to these three parameters, the mass of the bolt + pin
combination is also updated. This is necessary because the pins are of different lengths and
the bolts are not identical. The frequency intervals selected are [65, 86], [86, 114],
and [190, 260] rad/s (Fig. 22). The low-frequency modes were not used because of the low
measured coherence in these modes.
The equivalent length is used to scale the mass and stiffness matrices of the beam
elements as follows. The coefficients of the mass matrix are of the form CAρLn, where C
is a constant, A is the cross-sectional area, ρ is the mass density, and Ln is the length of
the beam element. Based on this, the mass matrix of each beam element is multiplied by
L/Ln, where L is the equivalent length used to correct the FE model. The coefficients of
the stiffness matrix are inversely proportional to the length of the element. For scaling
purposes, the coefficient 12EI/Ln³ is
Table 5
Statistical characteristics assumed for the variables used in updating the frame model

Variable   Mean value    Variance     Minimum value   Maximum value
L          27 cm         1.3 cm²      25 cm           29 cm
mbp        3.2 g         0.9 g²       2.0 g           4.4 g
a          1.0           0.55         0.2             1.8
b          8.0 × 10⁻⁶    4 × 10⁻⁶     2 × 10⁻⁶        14 × 10⁻⁶
Figure 23. Generalisation characteristics of the network trained to update four parameters of the flexible frame
model. (a) L, (b) Mbp , (c) a, (d) b, (e) worst-case approximation. ——, Exact; – – –, updated.
used because the most important modes of the beams in the frequency range being
considered are primarily bending. The stiffness matrix of each beam element is multiplied
by (Ln/L)³. Since the mass of the initial FE model is higher than the real mass and since
the stiffness coefficients are lower than the real ones, it is expected, based on the expressions
above, that the equivalent length will be shorter than its nominal value. This would result
in the reduction of the mass density and in an increase of the apparent Young’s modulus,
making the updated model closer to the real structure.
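The equivalent-length correction above can be sketched in a few lines: each beam element's mass matrix is scaled by L/Ln and its stiffness matrix by (Ln/L)³, following the coefficient forms CAρLn and 12EI/Ln³. The element matrices and lengths below are stand-ins, not the frame's actual values.

```python
import numpy as np

Ln = 0.27   # nominal element length, m (illustrative)
L = 0.26    # assumed updated equivalent length (shorter, as expected)

Me = np.eye(4) * 1.0e-3     # stand-in element mass matrix
Ke = np.eye(4) * 1.0e5      # stand-in element stiffness matrix

Me_upd = Me * (L / Ln)       # mass decreases when L < Ln
Ke_upd = Ke * (Ln / L) ** 3  # stiffness increases when L < Ln
```

This reproduces the qualitative effect described in the text: a shorter equivalent length removes spurious mass and recovers stiffness lost by meshing to the joint centres.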
The parameters being updated are assumed to vary according to Gaussian distributions
with characteristics shown in Table 5. The equivalent length is L, the mass of the
combination bolt + pin is mbp , and a and b are the coefficients of the proportional damping
Table 6
Error measures of the network trained to update the flexible frame model

            L       mbp      a        b
e_avg(%)    1.11    11.7     10.5     54.9
e_max(%)    8.19    108      231      476
3σ(%)       4.84    28.12    49.22    256.25
Figure 24. Measured (——), initial (– – –) and updated (- · - ·) responses of the flexible frame experiment. As
can be seen, there is a very significant improvement in the model's predicted response.
model used in the model. The network is trained with 1100 sample pairs and verified with
5000 samples. The results in Fig. 23 and Table 6 show that the NNUM updates the
parameters L and mbp accurately, but not the damping parameters a and b. This is
explained by the low sensitivity of the response with respect to these parameters in the
expected interval of variation. The parameter combinations that result in large errors are
the ones very close to the assumed boundaries of parameter variations and therefore not
representative of the majority of cases.
Figure 24 shows the response of the model updated using the measured data. The
parameters updated based on the measured FRF are L = 27.29 cm, mbp = 2.63 g, a = 0.77,
and b = 5.1 × 10−6.
This example demonstrates the versatility and accuracy of the NNUM when used
to update a complex structure using experimental data. Global equivalent parameters
are used to update the structure because the large number of uncertainties does not
allow for localised updating of parameters. This makes the case more challenging,
since all the effects are averaged, making it more difficult for the NNUM to update the
model.
6. CONCLUSIONS
Existing model updating literature lacks tools to update systems quickly once they are
in use. The NNUM solves this problem by using a neural network to represent the
mapping between the system’s response and the system’s parameters. The fidelity of this
mapping is a function of many variables. The data used to train the neural network must
reflect the expectations of how the parameters vary; all the problems solved in this paper
assume a normal distribution with known mean and variance, but any probability
density function can be used. The expressions for the estimated computational load
show that the updating of parameters is a fast procedure and that most of the computation
is performed when training the network. Fast updating of the model is the main advantage
of this method. The main drawback of this approach is the intensive computational load
required to train the neural network. Another drawback is the lack of mathematical tools
to assure convergence and accuracy of the network’s estimation as a function of the
number of inputs and training pairs. These drawbacks, however, did not prevent the
solution of practical numerical and experimental problems. In studying these problems,
trade-offs between accuracy and computational load have been analysed, leading to
general guidelines to be followed when using this technique. When used to update large
models, typical of finite element analysis, the proposed technique should perform
similarly to the examples shown here. Its computational load will be mostly a function of
how the FRF components are computed and of the number of parameters being updated,
since this will determine the number of training samples necessary for a good
approximation of the solution.
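As an illustration of how such a training set could be assembled, the following Python sketch draws parameter combinations from a normal distribution with known mean and variance and pairs each sample with the corresponding frequency response. The single-DOF receptance and all numerical values (stiffness and damping statistics, frequency range) are hypothetical stand-ins for the models used in the paper, not the paper's own data.

```python
import numpy as np

rng = np.random.default_rng(0)

def frf(freqs, k, c, m=1.0):
    """Receptance FRF of a single-DOF system: H(w) = 1 / (k - m w^2 + j c w)."""
    w = 2.0 * np.pi * freqs
    return 1.0 / (k - m * w ** 2 + 1j * c * w)

# Sample parameter combinations from a normal distribution with known
# mean and variance, as assumed for the training data in the paper.
n_samples = 200
ks = rng.normal(1000.0, 50.0, n_samples)   # hypothetical stiffness statistics
cs = rng.normal(2.0, 0.2, n_samples)       # hypothetical damping statistics

freqs = np.linspace(1.0, 10.0, 100)
H = np.array([frf(freqs, k, c) for k, c in zip(ks, cs)])

# Training pairs: FRF-derived inputs -> parameter targets.
X = np.hstack([H.real, H.imag])            # network inputs (one row per sample)
y = np.column_stack([ks, cs])              # network targets
```

In this form the expensive step is generating the FRFs and training on (X, y); estimating updated parameters afterwards is a single forward pass, which is the source of the speed advantage discussed above.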
Real and imaginary components of the frequency response are integrated over selected
frequency intervals and used to provide information about the system to the network. This
choice of input data solves the problem of ambiguity when measuring the distance between
two vectors. It also reduces the influence of zero-mean measurement noise on the
estimation of parameters. The issues of normalisation and weighting of the input data were
analysed and general guidelines on how to proceed were given. The effect of the number
of training samples, the number of parameters being updated, and the number of FRFs used
to train the network on the accuracy of the estimates was analysed and illustrated with examples. The
accuracy increases with the number of training samples and the number of FRFs used in
the procedure. For a constant training set size, the accuracy decreases with the number
of parameters being updated. In general, the smallest possible number of parameters
should be updated, and the network should be trained using the maximum possible number
of FRFs and parameter combinations. The network performs very well when the
parameters being updated vary within the assumed boundaries of variation, and it can be
applied to non-linear systems by modifying the input data to include forward and
backward frequency responses.
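The integrated-component inputs described above can be sketched as follows; the number of bands and the single-DOF receptance (with assumed k = 1000 N/m, c = 2 N s/m, m = 1 kg) are illustrative choices, not the values used in the paper.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integral of samples y over abscissae x."""
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

def frf_features(freqs, H, n_bands=8):
    """Integrate the real and imaginary FRF components over equal-width
    frequency intervals to form the network's input vector."""
    edges = np.linspace(freqs[0], freqs[-1], n_bands + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (freqs >= lo) & (freqs <= hi)
        feats.append(trapezoid(H.real[m], freqs[m]))
        feats.append(trapezoid(H.imag[m], freqs[m]))
    return np.asarray(feats)

# Illustrative single-DOF receptance over a 1-20 Hz band.
freqs = np.linspace(1.0, 20.0, 400)
w = 2.0 * np.pi * freqs
H = 1.0 / (1000.0 - w ** 2 + 1j * 2.0 * w)
x = frf_features(freqs, H)   # 16 inputs: one (real, imag) pair per band
```

Because each input is an integral over a band rather than a point value, zero-mean noise tends to average out, which is the noise-reduction property noted above.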
ACKNOWLEDGEMENTS
MJA gratefully acknowledges the support of the Brazilian National Council for
Scientific and Technological Development (CNPq) and the Research Support Foundation
of the State of São Paulo (FAPESP). DJI acknowledges the support of the Samuel Herrick
Endowment and the Air Force Office of Scientific Research.
REFERENCES
1. M. I. Friswell and J. E. Mottershead 1995 Finite Element Model Updating in Structural
Dynamics. Kluwer Academic Publishers.
2. S. Sastry and M. Bodson 1989 Adaptive Control: Stability, Convergence, and Robustness.
Englewood Cliffs, NJ: Prentice Hall.
3. K. J. Åström and B. Wittenmark 1989 Adaptive Control. Reading, MA: Addison-Wesley.
4. Z. P. Szewczyk and P. Hajela 1993 Transactions of the Canadian Society for Mechanical
Engineering 17, 567–584. Neural network based selection of dynamic system parameters.
5. M. Y and D. B 1996 Proceedings of the 14th IMAC, 576–584. An improved procedure
for handling damping during finite element model updating.
6. M. J. Atalla 1996 Model updating using neural networks. PhD thesis, Virginia Polytechnic
Institute and State University.
7. A. Cichocki and R. Unbehauen 1993 Neural Networks for Optimization and Signal Processing.
John Wiley.
8. B. Kosko 1992 Neural Networks and Fuzzy Systems. Englewood Cliffs, NJ: Prentice Hall.
9. D. E. Rumelhart and J. L. McClelland 1986 Parallel Distributed Processing. Cambridge, MA:
M.I.T. Press.
10. J. Park and I. W. Sandberg 1991 Neural Computation 3, 246–257. Universal approximation
using radial-basis-function networks.
11. D. J. Leo and D. J. Inman 1993 Smart Materials and Structures 2, 82–95. Modeling and control
simulations for a slewing frame containing self-sensing active members.
12. J. J. Dosch, D. J. Inman and E. Garcia 1992 Journal of Intelligent Material Systems and
Structures 3, 166–185. A self-sensing piezoelectric actuator for collocated control.
APPENDIX A
The flexible frame is clamped to a concrete block that serves as ground for the
experiment. The structure is excited with an impact hammer at node N12 and two
accelerometers collect data at nodes N3 and N20 (Fig. 20). Data is collected from the
hammer and the accelerometers and input to a Tektronix Fourier analysis (Fig. 18). The
Fourier analyser collects the data from the hammer and accelerometers, filtering them
above 50 Hz, and computes the frequency response function between the signal of each
accelerometer and the signal from the impact hammer. The window used when acquiring
the data is the boxcar window. The software used in the Instrument Program (IP) provided
with the Fourier analyser, and a Hanning window is used when acquiring the data. The
hardware used in this experiment is as follows.
• Two accelerometers with the following characteristics:
Brand, Kistler;
Model, 8630B50;
Range, ±50 g;
Sensitivity at 100 Hz, 99.3 mV/g;
Resonant frequency, 22 kHz;
Weight, 7.5 g.
• One Fourier analyser with the following characteristics:
Brand, Tektronix;
Model, 2630;
A/D converter accuracy, 12 bits.
• One impact hammer with the following characteristics:
Brand, Kistler;
Model, 9722A500;
Threshold, 0.009 kg rms;
Sensitivity, 10 mV/N;
Resonant frequency, 70 kHz.
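The frequency response function computed by the analyser can be emulated with an H1 estimate: the averaged cross-spectrum between input and output divided by the averaged auto-spectrum of the input, evaluated over Hanning-windowed segments. The sketch below is a minimal numpy implementation under assumed conditions; the simulated "structure" is just a three-tap moving average, and the sampling rate and segment length are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def h1_frf(x, y, fs, nperseg=256):
    """H1 FRF estimate: averaged cross-spectrum Sxy over auto-spectrum Sxx,
    computed over Hanning-windowed, non-overlapping segments."""
    win = np.hanning(nperseg)
    n_seg = len(x) // nperseg
    Sxx = np.zeros(nperseg // 2 + 1)
    Sxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for i in range(n_seg):
        X = np.fft.rfft(x[i * nperseg:(i + 1) * nperseg] * win)
        Y = np.fft.rfft(y[i * nperseg:(i + 1) * nperseg] * win)
        Sxx += (X.conj() * X).real
        Sxy += X.conj() * Y
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, Sxy / Sxx

# Simulated input (hammer) and output (accelerometer) signals; the
# "structure" here is a simple moving average, purely illustrative.
fs = 128.0
x = rng.standard_normal(8192)
y = np.convolve(x, [0.5, 0.3, 0.2], mode="same")
freqs, H = h1_frf(x, y, fs)
```

Averaging over segments suppresses the effect of noise on the output channel, which is why the H1 form is the usual choice for impact testing of this kind.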