
Functional Networks in Structural Engineering

S. Rajasekaran
Professor of Infrastructure Engineering, PSG College of Technology, Coimbatore 641004, Tamil Nadu, India. E-mail: sekaran@hotmail.com

Abstract: In this paper, functional networks (FNs), proposed by Castillo as an alternative to neural networks, are discussed. Unlike neural networks, the functions are learned instead of the weights. In general, the topology is selected based on data, on domain knowledge (properties of the function such as associativity, commutativity, and invariance), or on a combination of the two. The object of this paper is to show the application of some functional network architectures to model and predict the behavior of structural systems that are otherwise modeled in terms of differential or difference equations or in terms of neural networks. In this paper, four examples in structural engineering and one example in mathematics are discussed. The results obtained by functional networks are compared with those obtained by neural networks for the first four examples, and it is shown that functional networks are more efficient and powerful and take much less computer time than conventional neural networks such as the back-propagation network.

DOI: 10.1061/(ASCE)0887-3801(2004)18:2(172)

CE Database subject headings: Beams; Neural networks; Vibration analysis; Space frames; Functional analysis.

Introduction

Artificial neural networks (ANNs) are inspired by the behavior of the brain; they consist of one or several layers of neurons, or computing units, connected by links. ANNs have been recognized as a powerful tool for learning and reproducing systems in various fields of application. One of the main properties of neural networks is their ability to learn from data. The number of hidden layers and the number of neurons in each hidden layer are selected by trial and error until a good fit to the data is obtained. Functional networks (FNs) do not suffer from this drawback.

Functional networks were introduced by Castillo (Castillo 1998; Castillo et al. 2000a), Gomez (Castillo and Ruiz-Cobo 1992), and Castillo et al. (1998, 2000b) as a powerful alternative to ANNs. Unlike neural networks (Adeli and Huang 1996), functional networks use domain knowledge in addition to data knowledge. The network's initial topology is derived from modeling the properties of the real world. Once this topology is available, functional equations allow one to obtain a much simpler equivalent topology. Although functional networks can also deal with data alone, the class of problems for which functional networks are most convenient is the class in which both sources of knowledge, domain and data, are available. Castillo et al. (2000a) have applied FNs to two structural engineering examples: (1) prediction of the shear, moment, slope, and deflection of a beam; and (2) the deformation response of a vibrating mass.

The object of this paper is to show the application of some functional network architectures in modeling and predicting the behavior of structural systems that are otherwise modeled in terms of differential or difference equations or in terms of neural networks. In this paper, four examples in structural engineering and one example in mathematics are discussed. The results obtained by functional networks are compared with those obtained by neural networks for the first four examples, and it is shown that functional networks are more efficient and powerful and take much less computer time than neural network methods such as the back-propagation network.

Functional networks are introduced in the next section, together with their general methodology, including the selection of the initial topology and the learning methods. In the subsequent section, the associativity functional network is introduced and the methodology is explained. Next, the method is applied to five examples: (1) a multimodal function; (2) deflection of a beam; (3) the weight of space trusses; (4) forced vibration of a spring mass damper system; and (5) a simply supported beam subjected to a given loading. Finally, some conclusions are drawn.

Functional Networks

The main property of the neural network (NN) is its ability to learn from data by using structural and parametric learning methods. In NNs, the learning process is achieved by estimating the connection weights that minimize the error function (Pao 1989). Functional networks (Castillo et al. 2000a) are a more generalized version of neural networks, bringing together domain knowledge and data. There is no restriction to neural functions in functional neurons; arbitrary functions are allowed. Another important property of functional networks is the possibility of dealing with functional constraints of the model. The functional network uses two types of learning: (1) structural learning; and (2) parametric learning. In structural learning, the initial topology of the network is arrived at based on some properties available to the designer, and a simplification to a simpler architecture is made using a functional equation. In parametric learning, the neuron functions are estimated by considering combinations of shape functions.

A functional network consists of the following elements (Fig. 1):
1. Storing units:
   • One layer of input storing units. This layer contains the input data X1, X2, X3, etc.
   • Intermediate layer units storing intermediate information, f4 and f5. These units evaluate a set of input values coming from the previous layer and deliver a set of output values to the next layer.
   • A layer of output units, f6.
2. A layer of computing units, f1, f2, f3: a neuron in a computing unit evaluates a set of input values coming from the previous layer.
3. A set of directed links: the functions are not arbitrary but are determined by the structure of the network, such as x7 = f4(x4, x5, x6), as explained in Fig. 1.

Fig. 1. Functional network

In addition to the data, information about other properties of the function, such as associativity, commutativity, and invariance, is used in selecting the final network. In a given functional network the neural functions are arbitrary, whereas in neural networks they are sigmoidal, linear, radial basis, or other fixed functions. In functional networks, the functions in which the weights are incorporated are learned; in neural networks, the weights are learned. In some functional networks, the learning method leads to a global minimum in a single step. Neural networks work well if the input and output data are normalized in the range 0-1, but in functional networks there is no such restriction. It can be pointed out that neural networks are special cases of functional networks.

The following eight-step procedure is used for working with functional networks (FNs):
• Step 1: Statement of the problem
• Step 2: Initial topology
• Step 3: Simplification of the initial topology using functional equations
• Step 4: Arrival at the conditions that must hold for uniqueness
• Step 5: Data collection
• Step 6: Parametric learning by considering the linear combination of shape functions
• Step 7: Model validation
• Step 8: If step 7 is satisfactory, the model is ready to be used.

The learning method of a functional network consists of obtaining the neural functions based on a set of data D = (I_i, O_i), i = 1, 2, ..., n. The learning process is based on minimizing the Euclidean norm of the error function, given by

E = \frac{1}{2} \sum_{i=1}^{n} (O_i - F(i))^2   (1)

The approximate neural function f_i(x) may be arranged as

f_i(x) = \sum_{j=1}^{m} a_{ij} \phi_{ij}(x)   (2)

where φ = shape functions: algebraic expressions (1, x, x², ..., xⁿ), trigonometric functions such as [1, sin(x), cos(x), sin(2x), cos(2x), sin(3x), cos(3x)], or exponential functions. The associative optimization function may lead to a system of linear or nonlinear algebraic equations.

Associativity Functional Network

Assume that for two inputs x1 and x2, the output x3 is given. We can construct a functional network as shown in Fig. 2 using the functions f1, f2, and f3 as

f_s(x_s) = \sum_{i=1}^{m_s} a_{si} \phi_{si}   (3)

for s = 1, 2, where m_s can be of any order.

Fig. 2. Associativity functional network

φ_si can be polynomial, trigonometric, exponential, or any admissible function; herein these are called shape functions. In this example, we use only polynomial expressions such as ⟨1, x, x², x³, ...⟩. The function f3 can be expressed as

f_3(x_3) = \sum_{i=1}^{2} a_{3i} \phi_{3i}   (4)

From the input functions, we can construct

\hat{f}_3(x_3) = f_1(x_1) + f_2(x_2)   (5)

Then the error in the jth data point is given by

e_j = f_1(x_{1j}) + f_2(x_{2j}) - f_3(x_{3j})   (6)

The error can be written in matrix form as

e_j = \langle 1, x_{1j}, x_{1j}^2, \ldots, 1, x_{2j}, x_{2j}^2, \ldots, -1, -x_{3j} \rangle \, \{a_{11}, a_{12}, a_{13}, \ldots, a_{21}, a_{22}, a_{23}, \ldots, a_{31}, a_{32}\}^T   (7)

or

e_j = \langle b_j \rangle \{a\}   (8)

The sum of the squares of the errors over all the data is given by


E = \sum_{j=1}^{n_{data}} e_j^T e_j = \langle a \rangle \left( \sum_{j=1}^{n_{data}} \{b_j\} \langle b_j \rangle \right) \{a\} = \langle a \rangle [A] \{a\}   (9)

To ensure uniqueness of the solution, assuming initial values {x_k0} and function values α_k, we must have

f_k(x_{k0}) = \sum_{i=1}^{m_k} a_{ki} \phi_{ki}(x_{k0}) = \alpha_k   (10)

and, writing this in matrix form, we get

\langle a \rangle \begin{bmatrix} \{\phi_{10}\} & \{0\} & \{0\} \\ \{0\} & \{\phi_{20}\} & \{0\} \\ \{0\} & \{0\} & \{\phi_{30}\} \end{bmatrix} - \langle \alpha_1 \; \alpha_2 \; \alpha_3 \rangle = 0   (11)

or

\langle a \rangle [\Phi_0] - \langle \alpha \rangle = 0   (12)

Using the Lagrange multiplier technique, we define an augmented function as

R = E + \langle a \rangle [\Phi_0] \{\lambda\} - \langle \lambda \rangle \{\alpha\}   (13)

or

R = \langle a \rangle [A] \{a\} + \langle a \rangle [\Phi_0] \{\lambda\} - \langle \lambda \rangle \{\alpha\}   (14)

We want to minimize R; thus

\frac{\partial R}{\partial \{a\}} = 2[A]\{a\} + [\Phi_0]\{\lambda\} = 0; \qquad \frac{\partial R}{\partial \{\lambda\}} = [\Phi_0]^T \{a\} - \{\alpha\} = 0   (15)

or

\begin{bmatrix} 2[A] & [\Phi_0] \\ [\Phi_0]^T & [0] \end{bmatrix} \begin{Bmatrix} \{a\} \\ \{\lambda\} \end{Bmatrix} = \begin{Bmatrix} \{0\} \\ \{\alpha\} \end{Bmatrix}   (16)

or

[G]\{u\} = \{v\}   (17)

Note that the matrix [G] is symmetric. Once we solve for the unknowns {u}, then for any given x_1i, x_2i one can write

\hat{f}_3(x_{3i}) = f_1(x_{1i}) + f_2(x_{2i}) = a_{31} + a_{32} x_{3i}   (18)

or

x_{3i} = \left( \hat{f}_3(x_{3i}) - a_{31} \right) / a_{32}   (19)

If higher-order functions are assumed for f_3(x_{3i}), nonlinear equations have to be solved for x_3i using the bisection or Newton-Raphson method. This is time consuming; hence, for all the problems considered in this paper, only a first-order function has been assumed for f_3(x_{3i}).
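The whole parametric-learning step therefore reduces to assembling and solving the symmetric system of Eq. (16). The sketch below is a minimal illustration of that solve, not code from the paper: NumPy is assumed, the helper name and argument defaults are illustrative assumptions, and polynomial shape functions ⟨1, x, x², ...⟩ are used as in Eq. (7).

```python
import numpy as np

def fit_associativity_fn(x1, x2, x3, m=3, x0=0.2, alpha=0.2):
    """Fit f1(x1) + f2(x2) = f3(x3) with polynomial shape functions.

    f1 and f2 use m polynomial terms (1, x, ..., x^(m-1)); f3 is first
    order (a31 + a32*x3), as assumed in the paper.  Returns the
    coefficient vector a = (a11..a1m, a21..a2m, a31, a32).
    """
    x1, x2, x3 = map(np.asarray, (x1, x2, x3))
    # Row <b_j> of Eq. (7): shape functions of x1 and x2, minus those of x3.
    B = np.column_stack([x1**p for p in range(m)] +
                        [x2**p for p in range(m)] +
                        [-np.ones_like(x3), -x3])
    A = B.T @ B                                   # matrix [A] of Eq. (9)
    # Uniqueness constraints of Eq. (10): f_k(x0) = alpha for k = 1, 2, 3.
    nc = 2 * m + 2
    Phi0 = np.zeros((nc, 3))
    Phi0[0:m, 0] = [x0**p for p in range(m)]      # phi_1 evaluated at x0
    Phi0[m:2*m, 1] = [x0**p for p in range(m)]    # phi_2 evaluated at x0
    Phi0[2*m:, 2] = [1.0, x0]                     # phi_3 evaluated at x0
    # Symmetric system of Eq. (16): [[2A, Phi0], [Phi0^T, 0]] {a; lam} = {0; alpha}
    G = np.block([[2 * A, Phi0], [Phi0.T, np.zeros((3, 3))]])
    v = np.concatenate([np.zeros(nc), alpha * np.ones(3)])
    u = np.linalg.solve(G, v)                     # Eq. (17): [G]{u} = {v}
    return u[:nc]                                 # drop the Lagrange multipliers
```

Prediction then follows Eqs. (18) and (19): evaluate f1 and f2 at the new inputs and invert the first-order f3.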
Numerical Examples

Example 1: Multimodal Function

The object of this example is to show how a functional network can be applied to predict the values of a multimodal function and to arrive at a simple equation in place of the triple sinc function. For the learning of the FN to be possible, some data are needed. Consider, for example, that we have available data generated such that, for a given x and y, z is calculated from the multimodal (triple sinc) function given by

f(x,y) = g(x - 0.225,\, y - 0.275) + g(x - 0.775,\, y - 0.575) + g(x - 0.35,\, y - 0.725)   (20)

where

g(x,y) = 1.25 \, \frac{\sin\!\left( 40\sqrt{x^2 + y^2} \right)}{40\sqrt{x^2 + y^2}} - 40\sqrt{x^2 + y^2}   (21)

According to step 1, the statement of the problem is given. Because there are two inputs and one output, the initial topology is chosen as shown in Fig. 2. There is no further simplification possible for this problem, as noted in step 3. To ensure uniqueness according to step 4, the initial values for x, y, and z are assumed as 0.2, and the function values α are chosen as 0.2. Step 5 is the data collection stage, and the data are obtained from Eq. (20). Fig. 3 depicts the shape of the function g(x,y), which has an obvious global maximum; an eventually decaying sine oscillation forms the symmetric relief of a parabolic hill.

Fig. 3. Multimodal function surface (Example 1)
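The data collection of step 5 can be reproduced directly from Eqs. (20) and (21). A minimal sketch, assuming the form of Eq. (21) as reconstructed above, a 21 × 21 sampling grid on the unit square (the paper does not state the grid density), and the fit_associativity_fn helper from the earlier sketch:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def g(x, y):
    # Eq. (21); the small epsilon guards the removable singularity at r = 0.
    r = 40.0 * np.sqrt(x**2 + y**2) + 1e-12
    return 1.25 * np.sin(r) / r - r

def f(x, y):
    # Eq. (20): the triple sinc (multimodal) function.
    return (g(x - 0.225, y - 0.275) + g(x - 0.775, y - 0.575)
            + g(x - 0.35, y - 0.725))

# Training grid on the unit square (grid density is an assumption).
xg, yg = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21))
x, y, z = xg.ravel(), yg.ravel(), f(xg, yg).ravel()

a = fit_associativity_fn(x, y, z, m=3)   # quadratic f1, f2, as in Eq. (22)
z_fit = (P.polyval(x, a[0:3]) + P.polyval(y, a[3:6]) - a[6]) / a[7]  # Eq. (19)
print("average absolute error:", np.mean(np.abs(z_fit - z)))
```

The printed error plays the role of the step 7 validation discussed next.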


Parametric learning in step 6 is carried out by considering the linear combination of the shape functions using Eq. (18). The values obtained from the functional network are compared with the actual values in Fig. 4. The maximum, minimum, and average absolute errors using the functional network were found to be 11.67, 0.015, and 2.48%, respectively. The correlation coefficient between the actual values and the values obtained by the functional network is 0.9898. This is step 7, where the model is validated.

Fig. 4. Comparison of z values predicted by functional networks with actual values (Example 1)

The same problem was also solved using a back-propagation neural network with two input neurons, five hidden neurons (selected arbitrarily), and one output neuron, with a momentum factor of 0.9, a learning rate of 0.6, and a sigmoidal gain of 1. The network parameters, such as the number of hidden neurons, the learning rate, and the momentum factor, can be optimized using a genetic algorithm, but this is beyond the scope of the paper. The back-propagation network (BPN) was trained for 1,000 iterations, taking 3 min on a Pentium III computer, whereas the time taken by the functional network was less than 5 s. The values obtained by the BPN are also plotted in Fig. 4 for comparison. The maximum, minimum, and average absolute errors were found to be 15.72, 0.005, and 4.35%, respectively. The correlation coefficient between the actual values and the values obtained by the neural network is 0.964. It is clear that the functional network predicts values very close to the actual values for almost all of the data, and the correlation is quite good. Moreover, the complex equation for z can now be simplified and given as

z = \frac{1}{-0.01091} \left\{ 0.35481 - 1.0048x + 1.1537x^2 + 0.4283 - 1.40725y + 1.32993y^2 + 0.1978 \right\}   (22)
Example 2. Deflection of a Simply Supported Beam

The objective of this example is to show how functional networks can be applied to find the deflection of a simply supported beam with the moments at various points as inputs, without resorting to the strength of materials approach. A simply supported beam of span 0.8 m and EI = 0.02498 kN·m² (Ramaswamy et al. 2002), as shown in Fig. 5, is divided into eight equal parts, and a unit load is successively applied at points 2-4, generating the bending moment patterns given in Table 1. The computed central deflections corresponding to these bending moment patterns may also be found in Table 1. Considering the bending moment patterns as the inputs and the computed central deflections as the output, our aim is to apply functional networks to learn to predict the deflection at the center corresponding to a given test input pattern.

Fig. 5. Functional network applied to beam problem (Example 2)

Table 1. Inputs (Bending Moments M2-M8 at Points 2-8) and Outputs (Deflections at Center) for Beam Problem (Example 2)

| Load at point | M2 | M3 | M4 | M5 | M6 | M7 | M8 | Computed | Functional | Neural (author) | Neural (10) |
| 2 | 8.75E-2 | 7.50E-2 | 6.25E-2 | 5.00E-2 | 3.75E-2 | 2.50E-2 | 1.25E-2 | 1.57E-1 | 1.57E-1 | 1.57E-1 | 1.77E-1 |
| 3 | 7.50E-2 | 1.50E-1 | 1.25E-1 | 1.00E-1 | 7.50E-2 | 5.00E-2 | 2.50E-2 | 2.93E-1 | 2.93E-1 | 2.93E-1 | 2.99E-1 |
| 4 | 6.25E-2 | 1.25E-1 | 1.88E-1 | 1.50E-1 | 1.13E-1 | 7.50E-2 | 3.75E-2 | 3.90E-1 | 3.90E-1 | 3.90E-1 | 3.99E-1 |
| 5 | 5.00E-2 | 1.00E-1 | 1.50E-1 | 2.00E-1 | 1.50E-1 | 1.00E-1 | 5.00E-2 | 4.27E-1 | 4.27E-1 | 4.27E-1 | 4.29E-1 |
| 3+5 | 1.25E-1 | 2.50E-1 | 2.75E-1 | 3.00E-1 | 2.25E-1 | 1.50E-1 | 7.50E-2 | 7.20E-1 | 7.20E-1 | 7.05E-1 | 6.84E-1 |

The architecture was similar to the one shown in Fig. 2, with seven inputs and one output. This problem was solved with initial values of 1 for x_i and 0.2 for α, and the deflections obtained by the functional network are compared with those obtained by the writer using a back-propagation neural network. The back-propagation network consisted of seven input neurons, 10 hidden neurons (chosen arbitrarily), and one output neuron. The network was trained with a momentum factor of 0.9, a learning rate of 0.6, and a sigmoidal gain of 1 for 5,000 iterations, using the first four training patterns only. Once the weights were determined, the network was used to infer the result for the fifth pattern. The maximum and minimum absolute errors obtained by the BPN were 1.45 and 0.034, respectively. Papadrakakis and Lagaros (Ramaswamy et al. 2002) also used a back-propagation neural network for the same problem; the maximum and average errors they obtained were 13.07 and 4.5%, respectively, which may be due to the fact that the BPN they chose may not have been near optimal.

The values obtained by both the functional and the neural networks are shown in Table 1. In this problem, only first-order functions were used for both the inputs and the output, and the error was zero in the functional network. This is not surprising, because the deflection at the center of the beam (from the strength of materials approach) is a linear function of the moments (for a uniform step size), which agrees with Eq. (23). The computer time for the BPN was 28 s, whereas for the FN it was only 3 s on a Pentium III computer. Again, the functional network is found to be superior to the back-propagation neural network with reference to computer time.

The deflection output δ for the simply supported beam can be given in terms of the moments at the seven points 2-8 in equation form as

\delta = \left[ \sum_{i=1}^{7} (a_{i1} + a_{i2} I_i) - a_{o1} \right] \Big/ a_{o2}   (23)

where the undetermined coefficients for both the inputs and the output are given in Table 2.

Table 2. Undetermined Coefficients for Both Input and Output (Example 2)

| Coefficient | I1 | I2 | I3 | I4 | I5 | I6 | I7 | Output |
| a_i1 | 0.595 | 0.574 | 1.368 | 3.139 | 3.32 | -1.65 | -5.99 | 1.75 |
| a_i2 | -0.395 | -0.774 | -1.168 | -2.939 | -3.12 | 1.85 | 6.19 | -1.95 |
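Evaluating the trained network of Eq. (23) then requires only the Table 2 coefficients. A short sketch (NumPy assumed; the coefficients below are reproduced only to the precision printed in Table 2, so the returned value is an approximation of the tabulated deflections):

```python
import numpy as np

# Table 2: rows a_i1, a_i2 for the seven moment inputs, plus (a_o1, a_o2).
A1 = np.array([0.595, 0.574, 1.368, 3.139, 3.32, -1.65, -5.99])
A2 = np.array([-0.395, -0.774, -1.168, -2.939, -3.12, 1.85, 6.19])
AO1, AO2 = 1.75, -1.95

def center_deflection(moments):
    """Eq. (23): central deflection from the bending moments at points 2-8."""
    m = np.asarray(moments, dtype=float)
    return (np.sum(A1 + A2 * m) - AO1) / AO2

# Moment pattern for a unit load at point 2 (first row of Table 1).
print(center_deflection([8.75e-2, 7.50e-2, 6.25e-2, 5.00e-2,
                         3.75e-2, 2.50e-2, 1.25e-2]))
```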


Example 3. Predicting the Weight of Space Trusses Using Functional Networks

The object of this example is to show how functional networks can be applied to a practical problem: predicting the weight of a truss for minimum weight design. Optimization of space trusses to a minimum weight is imperative for success in competitive bidding. Recent developments in nontraditional optimization techniques such as genetic algorithms, evolution strategies, and simulated annealing offer a powerful means of optimizing the topology, configuration, and member sizes of space trusses. In size optimization, the cross-sectional areas of the members are normally chosen as the design variables. The objective function, which is the weight, is to be minimized under certain behavioral constraints on stresses and displacements. Thirteen double-layer space trusses with a topology of diagonal top chords over square bottom chords were considered by Papadrakakis and Lagaros (Ramaswamy et al. 2002), with the following input data: (1) aspect ratio = shorter span/longer span (B/L) = 0.93; (2) depth/shorter span (D/b) = 0.05; (3) depth/grid dimension (D/G) = 0.71; and (4) average column spacing/span (S/B) = 0.5. The one output was the weight of the space truss divided by the total load acting on the structure (W/P). The total load acting on the space truss was 2,583.28 kN.

In this example, there were four inputs, as discussed previously, and one output, W/P. The architecture of the functional network is similar to the one shown in Fig. 2, with four inputs and one output. The network was trained using the available input and output data for 12 space trusses (out of the thirteen available) designed in the past, with initial values of 1 for x_i and 0.2 for α. A quadratic function is used for all four input data and a linear function is used for the output data. The data are shown in Table 3.

Table 3. Prediction of Weight of Space Truss by Functional and Neural Networks (Example 3)

| Number | B/L | D/L | D/G | S/B | Actual W/P | Functional | Neural (10) | Percent error in functional | Percent error in neural (10) |
| 1 | 2.50E-1 | 8.00E-2 | 7.10E-1 | 3.30E-1 | 7.33E-2 | 7.36E-2 | 0.073 | 0.4092 | 1.43 |
| 2 | 3.00E-1 | 5.00E-2 | 7.10E-1 | 2.20E-1 | 1.08E-1 | 1.07E-1 | 0.0745 | 0.925 | 30.85 |
| 3 | 5.00E-1 | 9.00E-2 | 7.10E-1 | 4.00E-1 | 6.40E-2 | 6.59E-2 | 0.07 | -2.96 | -9.51 |
| 4 | 5.60E-1 | 7.00E-2 | 7.10E-1 | 2.10E-1 | 7.08E-2 | 7.06E-2 | 0.074 | 0.282 | -5.84 |
| 5 | 6.50E-1 | 6.00E-2 | 7.10E-1 | 3.10E-1 | 7.30E-2 | 7.09E-2 | 0.0730 | 2.87 | -0.02 |
| 6 | 6.70E-1 | 1.30E-1 | 5.00E-1 | 5.40E-1 | 5.24E-2 | 5.29E-2 | 0.063 | -0.95 | -21.0 |
| 7 | 7.10E-1 | 7.00E-2 | 7.10E-1 | 1.70E-1 | 6.92E-2 | 6.75E-2 | 0.077 | 2.45 | -11.47 |
| 8 | 7.70E-1 | 6.00E-2 | 6.10E-1 | 2.40E-1 | 6.79E-2 | 7.21E-2 | 0.0738 | -6.18 | -8.66 |
| 9 | 8.20E-1 | 7.00E-2 | 6.10E-1 | 2.70E-1 | 6.28E-2 | 5.85E-2 | 0.0734 | 6.84 | -17.01 |
| 10 | 8.80E-1 | 7.00E-2 | 5.00E-1 | 3.70E-1 | 5.67E-2 | 5.76E-2 | 0.0694 | -1.58 | -22.5 |
| 11 | 9.30E-1 | 5.00E-2 | 7.10E-1 | 2.00E-1 | 7.73E-2 | 7.99E-2 | 0.0794 | -3.36 | -2.81 |
| 12 | 1.00E+0 | 8.00E-2 | 5.00E-1 | 5.00E-1 | 5.37E-2 | 5.22E-2 | 0.0677 | 2.79 | -26.23 |
| 13 | 9.30E-1 | 5.00E-2 | 7.10E-1 | 5.00E-1 | 7.48E-2 | 7.52E-2 | 0.0711 | -0.534 | 4.95 |

The first twelve data sets were used for learning and the last for inferring, and the computer time was less than 5 s. The same example was also trained using a back-propagation neural network with four input nodes, six hidden nodes (chosen arbitrarily), and one output node. A momentum factor of 0.9, a learning rate of 0.6, and a sigmoidal gain of one were used, and the network was trained for 5,000 iterations, taking 50 s of computer time on a Pentium III with 366 MHz speed and 64 MB of RAM. It can be seen from Table 3 and Fig. 6 that the functional network outperforms the neural network in execution time. It is observed that the maximum, minimum, and average absolute errors in the FN are 6.74, 1.14, and 2.46%, respectively, as compared with a maximum error of 3.02% obtained by the writer's BPN, whereas very high errors, up to 30.85%, were obtained with the BPN of Papadrakakis and Lagaros (Ramaswamy et al. 2002).

Fig. 6. Comparison of W/P (actual) with functional and neural networks (Example 3)

The weight of the space truss is given in terms of the four inputs as

\frac{W}{P} = \left[ \sum_{i=1}^{4} (a_{i1} + a_{i2} I_i + a_{i3} I_i^2) - a_{o1} \right] \Big/ a_{o2}   (24)

The undetermined parameters for the third-order inputs are given in Table 4.

Table 4. Undetermined Parameters for Both Inputs and Output (Example 3)

| Coefficient | I1 | I2 | I3 | I4 | Output |
| a_i1 | 0.199 | -0.8521 | 0.2887 | 0.2207 | -0.215 |
| a_i2 | 0.00518 | -0.0281 | -0.3497 | -0.0234 | 0.0157 |
| a_i3 | -0.0112 | -0.0247 | 0.5945 | 0.0714 | — |
| a_i4 | 0.00642 | 1.105 | -0.3335 | -0.0686 | — |


Example 4. Vibrating Mass Example

The objective of this example is to show how FNs can be applied to model a difference equation. Consider the system shown in Fig. 7, which contains a mass M, a spring of stiffness K, and a damper with coefficient C, with an external load F applied. The differential equation of the system is written as

M\ddot{u} + C\dot{u} + Ku = F(t)   (25a)

where

\ddot{u} = \frac{d^2 u}{dt^2}; \qquad \dot{u} = \frac{du}{dt}   (25b)

Fig. 7. Spring mass damper

Table 5 shows the observed displacements of the system.

Table 5. Observed Displacements of System (Example 4)

| t | u | t | u | t | u | t | u |
| 0.0 | 0.2 | 0.04 | 0.354 | 0.08 | 0.482 | 0.12 | 0.554 |
| 0.16 | 0.556 | 0.2 | 0.476 | 0.24 | 0.310 | 0.28 | 0.072 |
| 0.32 | -0.226 | 0.36 | -0.554 | 0.4 | -0.882 | 0.44 | -1.18 |
| 0.48 | -1.416 | 0.52 | -1.562 | 0.56 | -1.6 | 0.6 | -1.52 |
| 0.64 | -1.32 | 0.68 | -1.014 | 0.72 | -0.62 | 0.76 | -0.172 |
| 0.8 | 0.302 | 0.84 | 0.762 | 0.88 | 1.172 | 0.92 | 1.504 |
| 0.96 | 1.734 | 1.0 | 1.848 | 1.04 | 1.842 | 1.08 | 1.732 |
| 1.12 | 1.508 | 1.16 | 1.224 | 1.2 | 0.90 | 1.24 | 0.568 |
| 1.28 | 0.26 | 1.32 | 0.002 | 1.36 | -0.186 | 1.4 | -0.290 |
| 1.44 | -0.312 | 1.48 | -0.26 | 1.52 | -0.148 | 1.56 | 0.0 |

Using the approach of Castillo (1998), the solution of a kth-order differential equation with constant coefficients in u(t) can be obtained as

u_{j+k} = \sum_{i=1}^{k} a_i u_{i+j-1} + \sum_{i=k+1}^{k+m} a_i \phi_{i-k}   (26)

where m = number of basic or shape functions. Eq. (25) leads to periodic functions; hence, the shape functions can be assumed in terms of trigonometric functions as

\langle \phi_1, \phi_2, \phi_3, \ldots, \phi_7 \rangle = \langle 1, \sin t, \cos t, \sin 2t, \cos 2t, \sin 3t, \cos 3t \rangle   (27)

The parameters a_1, a_2, ..., a_9 can be determined by minimizing the Euclidean norm of the error as

E = \sum_{j=1}^{n-k} \left[ \bar{u}_{j+k} - \sum_{i=1}^{k} a_i u_{i+j-1} - \sum_{i=k+1}^{k+m} a_i \phi_{i-k} \right]^2   (28)

For a second-order equation such as Eq. (25), and assuming m = 5 (number of time steps = 40), we can write (dt = the time step)

\begin{Bmatrix} u_3 \\ u_4 \\ \vdots \\ u_{i+1} \\ \vdots \\ u_{41} \end{Bmatrix} =
\begin{bmatrix}
u_1 & u_2 & 1 & \sin(0) & \cos(0) & \sin(0) & \cos(0) \\
u_2 & u_3 & 1 & \sin(dt) & \cos(dt) & \sin(2\,dt) & \cos(2\,dt) \\
\vdots \\
u_{i-1} & u_i & 1 & \sin((i-2)\,dt) & \cdots \\
\vdots \\
u_{39} & u_{40} & 1 & \sin(38\,dt) & \cos(38\,dt) & \sin(76\,dt) & \cos(76\,dt)
\end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \\ \vdots \\ a_7 \end{Bmatrix}   (29)

Using the principle of least squares (Schilling and Harris 2000), we get

\{u\}_{j+k} = [T]\{a\}   (30a)

or

[T]^T \{u\}_{j+k} = \{y\} = [T]^T [T] \{a\} = [G]\{a\}   (30b)

\{a\} = [G]^{-1} \{y\}   (30c)

Once {a} is determined, using Eq. (26) we can predict the displacements, as shown in Fig. 8. The values of a obtained for the problem are given in Table 6.

Table 6. Values of a for Example 4

| a1 | a2 | a3 | a4 | a5 | a6 | a7 | a8 | a9 |
| -0.14452 | 1.0793 | -421.94 | 445.2 | 480.72 | -290.56 | -22.201 | 45.771 | -36.481 |
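The fit of Eqs. (29) and (30) is a small normal-equations solve. The following is a minimal sketch, not the paper's code (NumPy is assumed and the helper name is illustrative): with k = 2 past displacements and the seven trigonometric shape functions of Eq. (27), it returns nine parameters, as in Table 6.

```python
import numpy as np

def fit_difference_fn(u, dt, k=2, nharm=3):
    """Least-squares fit of Eq. (26) with the shape functions of Eq. (27).

    Builds the rectangular system of Eq. (29) and solves the normal
    equations of Eq. (30): [T]^T[T]{a} = [T]^T{u}_{j+k}.  With k = 2 and
    nharm = 3 this yields nine parameters a1..a9, as in Table 6.
    """
    u = np.asarray(u, dtype=float)
    n = len(u)
    t = np.arange(n - k) * dt                      # time of u_j in each row
    cols = [u[j:n - k + j] for j in range(k)]      # columns u_j, u_{j+1}
    cols.append(np.ones(n - k))                    # phi_1 = 1
    for h in range(1, nharm + 1):
        cols += [np.sin(h * t), np.cos(h * t)]     # sin(ht), cos(ht)
    T = np.column_stack(cols)
    return np.linalg.solve(T.T @ T, T.T @ u[k:])   # Eq. (30c)

# Example use with the Table 5 record sampled at dt = 0.04 s:
# u = np.array([0.2, 0.354, 0.482, ...])  # remaining Table 5 values
# a = fit_difference_fn(u, 0.04)
# Future displacements then follow by marching Eq. (26) forward.
```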
A back-propagation neural network was also used by the writer, with three input neurons, u_{i-2}, u_{i-1}, and u_i, one output neuron, u_{i+1}, and five hidden neurons. A learning rate of 0.1 and a momentum factor of 0.2 with a sigmoidal gain of 1 were used, and the network was trained for 25,000 iterations, taking 6 min on a Pentium III, whereas the time taken by the functional network was only 10 s. Fig. 8 shows the comparison of the displacements predicted by the functional network and the neural network with the actual values. The maximum absolute error obtained by the BPN was 6.4%. A back-propagation neural network (BPN) was also used by Castillo (Castillo et al. 2000a,b); this required 20,000 iterations, with maximum absolute errors of 0.085 (for three input neurons with a bias neuron and three hidden neurons) and 0.036 (for three input neurons with a bias neuron and six hidden neurons). The functional network contained only seven parameters (m = 5), as compared with the BPN, which contained 13 and 25 parameters for three and six hidden neurons, respectively. The correlation coefficient between the values obtained using the functional network and the actual values was 0.999719, whereas the correlation coefficient between the values obtained using the neural network and the actual values was 0.9927.

Fig. 8. Comparison of predicted displacement by functional networks and neural networks with actual values (Example 4)


Example 5. Beam Subjected to Lateral Load

In this example, it is shown that functional network architectures can be efficiently applied to model and predict the behavior of systems originally stated in terms of differential or difference equations.

Step 1: Statement
The differential equations of the beam shown in Fig. 9 may be written as

V'(x) = q(x); \quad m'(x) = V(x); \quad \theta'(x) = m(x)/EI; \quad w'(x) = \theta(x)   (31)

Fig. 9. Simply supported beam

The aim of this example is to construct an appropriate functional network and find the shear, moment, slope, and deflection for a beam with different boundary conditions based on the data for a simply supported beam. A similar problem (with different span, loading, and boundary conditions) has been solved by Castillo et al. (2000a) using the least squares method and minimizing the error. They obtained a Vandermonde-type matrix, which was poorly conditioned. Herein we consider an alternative way to find a least squares fit, avoiding linear algebraic equations altogether by using orthonormal polynomials.

In Eq. (31), EI denotes the flexural rigidity of the beam, and q, V, m, θ, and w = load (upward positive), shear (left up and right down positive), moment (sagging positive), rotation (counterclockwise positive), and deflection (upward positive) at any section x from the left end of the beam. Assuming the beam is divided into ns divisions, each element having width u, and considering the equilibrium and compatibility of the beam element, we can write

F = V(x+u) = V(x) + A(x,u)   (32a)

G = m(x+u) = m(x) + V(x)u + B(x,u)   (32b)

H = \theta(x+u) = \theta(x) + \frac{1}{EI}\left( m(x)u + V(x)u^2/2 \right) + C(x,u)   (32c)

R = w(x+u) = w(x) + \theta(x)u + \frac{1}{EI}\left( m(x)u^2/2 + V(x)u^3/6 \right) + D(x,u)   (32d)

where

A(x,u) = \int_x^{x+u} q(s)\,ds   (33a)

B(x,u) = \int_x^{x+u} q(s)(x+u-s)\,ds   (33b)

C(x,u) = \int_x^{x+u} B(x, s-x)\,ds   (33c)

D(x,u) = \int_x^{x+u} C(x, s-x)\,ds   (33d)

Step 2: Network for the Problem
The final network for the beam problem is arrived at by considering the preceding equations and is shown in Fig. 10; further simplification of the functional network is not possible.

Fig. 10. Functional network (Example 5)

Step 3: Data Collection
The vectors of shear (V), moment (m), rotation (θ), and deflection (w) are measured in a simply supported beam corresponding to a load unknown to the analyst. For the simulation of the data, we assumed a beam length of 4 m, EI = 5.17, a uniformly distributed load of 0.02 kN on the left half span of the beam, and a uniformly distributed load of 0.03 kN on the right half span, and we used the structural analysis package SAP 2000 (SAP 2000 2000).
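As a check, the simply supported data of this step can also be generated from Eqs. (32) and (33) themselves, without a structural analysis package, by marching the state (V, m, θ, w) from node to node. In the sketch below (NumPy assumed), each element carries a constant load, so the integrals of Eq. (33) evaluate in closed form; the element count and the downward (negative) load signs are assumptions of this illustration, and the load terms enter exactly as written in Eq. (32):

```python
import numpy as np

def march(V0, th0, q, du, EI, m0=0.0, w0=0.0):
    """Propagate the beam state with Eqs. (32a)-(32d), node by node.

    q[i] is the (constant) distributed load on element i, so the load
    integrals of Eq. (33) reduce to A = q*du, B = q*du**2/2,
    C = q*du**3/6, and D = q*du**4/24.  Returns nodal arrays V, m,
    theta, w at the ns + 1 stations.
    """
    V, m, th, w = [V0], [m0], [th0], [w0]
    for qi in q:
        A, B, C, D = qi*du, qi*du**2/2, qi*du**3/6, qi*du**4/24
        Vx, mx, thx, wx = V[-1], m[-1], th[-1], w[-1]
        V.append(Vx + A)                                          # Eq. (32a)
        m.append(mx + Vx*du + B)                                  # Eq. (32b)
        th.append(thx + (mx*du + Vx*du**2/2)/EI + C)              # Eq. (32c)
        w.append(wx + thx*du + (mx*du**2/2 + Vx*du**3/6)/EI + D)  # Eq. (32d)
    return np.array(V), np.array(m), np.array(th), np.array(w)

# Assumed discretization: 40 elements of 0.1 m over the 4 m span, with
# downward loads on the left and right half spans; the initial values
# are those of the first row of Table 8 for the simply supported case.
q = np.array([-0.02]*20 + [-0.03]*20)
V, m, th, w = march(V0=0.045, th0=-0.0125, q=q, du=0.1, EI=5.17)
```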


Step 4: Learning
Here, we are interested in reproducing the set of data of several beams for a given u (the load is assumed to be kept constant) and different boundary conditions. The usual way of finding A, B, C, and D is to find the coefficients of the least squares polynomial by solving Gx = y. However, this method suffers from an important numerical drawback: the Vandermonde-type matrix G is poorly conditioned, particularly for large values of m. Fortunately, there is an alternative way to find the least squares fit that avoids a linear algebraic system altogether. The key is to use orthonormal polynomials, which do not suffer from ill conditioning. The functions A, B, C, and D in Eq. (33) can be approximated in terms of orthogonal polynomials, as given in Schilling and Harris (2000):

A(x,u) = \sum_{i=1}^{m} a_i \phi_i(x_k); \quad B(x,u) = \sum_{i=1}^{m} b_i \phi_i(x_k); \quad C(x,u) = \sum_{i=1}^{m} c_i \phi_i(x_k); \quad D(x,u) = \sum_{i=1}^{m} d_i \phi_i(x_k)   (34)

where the orthogonal function φ_{i+1} can be determined from

\phi_{i+1}(x) = (x - p_i)\phi_i - q_i \phi_{i-1} \quad \text{for } i \geq 2   (35)

This three-term recurrence relation in Eq. (35) can generate orthogonal polynomials, and this is a very important property. For i = 1, φ_1(x) = 1 and φ_2(x) = (x - p_1); for any other i, the coefficients needed to generate the remaining polynomials can be determined by induction. The coefficients p_i and q_i are determined as

p_i = \frac{\langle x\phi_i(x) \cdot \phi_i(x) \rangle}{\langle \phi_i(x) \cdot \phi_i(x) \rangle}   (36a)

q_i = \frac{\langle x\phi_i(x) \cdot \phi_{i-1}(x) \rangle}{\langle \phi_{i-1}(x) \cdot \phi_{i-1}(x) \rangle}   (36b)

There will be 4m unknowns, and to estimate these 4m parameters we have to minimize the function

E = \frac{1}{2} \sum_{n=1}^{ns} \left\{ (F - V(x+u))^2 + (G - m(x+u))^2 + (H - \theta(x+u))^2 + (R - w(x+u))^2 \right\}   (37)

A least squares fit is used for minimizing the function; i.e., find [S] as

[S] = [\{S_1\} \ \{S_2\} \ \{S_3\} \ \{S_4\}]   (38)

where the vectors {S_i} are defined as

\{S_1\} = \{ V(x+u) - V(x) \}   (39a)

\{S_2\} = \{ m(x+u) - m(x) - V(x)u \}   (39b)

\{S_3\} = \{ EI(\theta(x+u) - \theta(x)) - m(x)u - V(x)u^2/2 \}   (39c)

\{S_4\} = \{ EI(w(x+u) - w(x) - \theta(x)u) - m(x)u^2/2 - V(x)u^3/6 \}   (39d)

The constants a, b, c, and d in Eq. (34) may be obtained as

a_j = \frac{\sum_{k=1}^{ns} S_{1k}\phi_{jk}}{\sum_{k=1}^{ns} \phi_{jk}\phi_{jk}}; \quad b_j = \frac{\sum_{k=1}^{ns} S_{2k}\phi_{jk}}{\sum_{k=1}^{ns} \phi_{jk}\phi_{jk}}; \quad c_j = \frac{\sum_{k=1}^{ns} S_{3k}\phi_{jk}}{\sum_{k=1}^{ns} \phi_{jk}\phi_{jk}}; \quad d_j = \frac{\sum_{k=1}^{ns} S_{4k}\phi_{jk}}{\sum_{k=1}^{ns} \phi_{jk}\phi_{jk}}   (40)

This method effectively eliminates the need to solve an ill-conditioned linear algebraic system. By using orthonormal polynomials, the coefficient matrix for a, b, c, and d becomes diagonal, and each component of a, b, c, and d is uncoupled from the others; hence, each can be solved for independently, as in Eq. (40). The use of orthonormal polynomials provides a numerically well-conditioned way to find both the least squares polynomial (when m < n) and the interpolating polynomial (when m = n). Of course, higher-order interpolating polynomials are rarely used, because cubic splines provide a smoother fit between samples, particularly near the ends of the data interval. Hence, this method is recommended rather than the polynomial expression suggested by Castillo et al. (2000b). For the problem under consideration, using m = 5, the values of a, b, c, and d are obtained as given in Table 7.

Table 7. Unknown Coefficients for Orthogonal Polynomial (Example 5)

| i | a | b | c | d |
| 1 | -0.50E-2 | -0.50E-3 | -0.33285E-4 | -0.18169E-5 |
| 2 | -0.7518E-3 | -0.75188E-4 | -0.30865E-5 | 0.20700E-5 |
| 3 | 0.2812E-17 | 0.44448E-18 | 0.17998E-5 | 0.59987E-6 |
| 4 | 0.28043E-3 | 0.28043E-4 | 0.32382E-5 | -0.16953E-5 |
| 5 | -0.50655E-7 | -0.26805E-18 | -0.22242E-6 | 0.21483E-5 |
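The recurrence of Eqs. (35)-(36) and the uncoupled projections of Eq. (40) take only a few lines. A minimal sketch (NumPy assumed; the array x holds the ns sample stations and S one of the vectors of Eq. (39)):

```python
import numpy as np

def orthogonal_polys(x, m):
    """Tabulate phi_1..phi_m at the sample points x via Eq. (35).

    The three-term recurrence, with p_i and q_i from Eqs. (36a)-(36b),
    keeps the basis orthogonal over the samples, so no linear system
    (and no ill-conditioned Vandermonde matrix) is ever formed.
    """
    x = np.asarray(x, dtype=float)
    phi = [np.ones_like(x)]                               # phi_1 = 1
    p1 = (x @ phi[0]) / (phi[0] @ phi[0])
    phi.append(x - p1)                                    # phi_2 = x - p_1
    for i in range(1, m - 1):
        pi = ((x * phi[i]) @ phi[i]) / (phi[i] @ phi[i])          # Eq. (36a)
        qi = ((x * phi[i]) @ phi[i - 1]) / (phi[i - 1] @ phi[i - 1])  # Eq. (36b)
        phi.append((x - pi) * phi[i] - qi * phi[i - 1])   # Eq. (35)
    return np.column_stack(phi)

def project(S, phi):
    """Eq. (40): uncoupled least-squares coefficients for one S vector."""
    return (phi.T @ np.asarray(S)) / np.sum(phi * phi, axis=0)

# a = project(S1, orthogonal_polys(x, 5))  # plays the role of column a in Table 7
```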
Fig. 11 shows the predicted vectors of shear, moment, slope, and deflection of the beam using the functional network model discussed previously. Because the approximate functions in Eq. (34) depend only on the applied load, they are valid for any boundary condition.


Fig. 11. Inferred values for beam with different boundary conditions (Example 5): (a) simply supported beam; (b) propped cantilever beam; (c) cantilever beam

Thus, once the coefficients in Eq. (40) have been obtained for a particular boundary condition, the functional network can be used to predict the shear, moment, rotation, and deflection for any other beam with different boundary conditions, provided the load remains unchanged. For the same problem, we consider the left end pinned (m = 0; w = 0) and vary the values of V(0) and θ(0) by trial and error until the other boundary conditions [θ(L) = 0; w(L) = 0] are satisfied. This is very easily carried out using the Microsoft Excel package (Microsoft 1997). Fig. 11(b) shows the predicted values of shear, moment, slope, and deflection for the propped cantilever beam. Similarly, Fig. 11(c) shows the predicted values of shear, moment, slope, and deflection for a cantilever beam, assuming the left end fixed and the right end free. The values obtained by the functional network are compared with those from SAP 2000 (SAP 2000 2000); the values at five sections of the beam are given in Table 8. The correlation coefficient is almost equal to 1, and the accuracy is quite good.

Table 8. Comparison of Values Obtained by Functional Network with SAP 2000 (Example 5)

| Section x/L | Shear (SAP) | Moment (SAP) | Rotation (SAP) | Deflection (SAP) | Shear (FN) | Moment (FN) | Rotation (FN) | Deflection (FN) |
| 0.00 | 0.045 | 0.000 | -0.0125 | 0.00 | 0.045 | 0.000 | -0.0125 | 0.00 |
| 0.25 | 0.025 | 0.035 | -0.00883 | -0.0113 | 0.025 | 0.035 | -0.00883 | -0.0113 |
| 0.50 | 0.005 | 0.050 | -0.0003 | -0.016 | 0.0036 | 0.050 | -0.00030 | -0.0161 |
| 0.75 | -0.025 | 0.040 | 0.0088 | -0.0117 | -0.0245 | 0.0399 | 0.00885 | -0.0116 |
| 1.00 | -0.055 | 0.000 | 0.0132 | 0.00 | -0.055 | 0.000 | 0.0132 | 0.00 |
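The trial-and-error search over V(0) and θ(0) described above is a two-parameter shooting problem. Because Eq. (32) is linear in the state, the far-end slope and deflection are affine in the initial values, so a single 2 × 2 solve can replace the spreadsheet iteration. The sketch below reuses the march helper from the earlier illustration; this reduction is the illustration's own shortcut, not the paper's procedure:

```python
import numpy as np

def shoot(q, du, EI):
    """Choose V(0), theta(0) so that theta(L) = 0 and w(L) = 0.

    One base run plus two unit perturbations of the initial values give
    the sensitivities of the end conditions, and a 2x2 linear solve then
    yields the initial shear and slope directly (left end pinned, so
    m(0) = 0 and w(0) = 0 via the march defaults).
    """
    def ends(V0, th0):
        V, m, th, w = march(V0, th0, q, du, EI)
        return np.array([th[-1], w[-1]])           # want both equal to zero

    r0 = ends(0.0, 0.0)
    J = np.column_stack([ends(1.0, 0.0) - r0,      # sensitivity to V(0)
                         ends(0.0, 1.0) - r0])     # sensitivity to theta(0)
    V0, th0 = np.linalg.solve(J, -r0)
    return V0, th0
```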

Conclusions

Functional networks, as introduced by Castillo et al. (Castillo and Ruiz-Cobo 1992; Castillo 1998), have proved to be a powerful alternative to standard neural networks.


In functional networks, neural functions are learned rather than weights, as in the BPN. In this paper, a general methodology to build and work with functional networks, a network-based alternative to the neural network paradigm, is illustrated. For four examples, both functional networks and a conventional BPN have been applied to solve the problem, and the results are compared. In this paper, only a conventional BPN has been used for comparison with the FN. It is to be noted that there are other, more sophisticated network types, such as sequential learning neural networks (SLNN) with a single hidden neuron, simplified fuzzy ARTMAP (SFAM), and micro ARTMAP, with different training algorithms and hybrid schemes, that may be more accurate than the BPN, and it will be interesting to see how the FN compares with them. In the last example, it is seen that, if the loading remains unchanged, functional networks, provided proper boundary conditions are applied, will lead to the solution of a problem with assumed boundary conditions. Thus, four structural engineering applications and one mathematics application have illustrated the method of functional networks and demonstrated their power.

Acknowledgments

The writer thanks the management and Dr. S. Vijayarangan, Principal, of the PSG College of Technology for providing the facilities needed to carry out the work reported in this paper. Sincere thanks are due to the All India Council for Technical Education for granting the Emeritus Fellowship during the course of this work. The writer thanks the anonymous reviewers for their suggestions to improve the standard of this manuscript.

Notation

The following symbols are used in this paper:
A = matrix;
a = undetermined parameters;
B = width of space truss;
b = undetermined parameters;
b = shorter span;
C = damping coefficient;
c = undetermined parameter;
D = depth of space truss;
d = undetermined parameter;
E = Euclidean error norm;
F = force;
f = function;
g = function;
g = grid dimension;
H = function;
I = input;
K = stiffness;
L = longer span;
l = span;
M = mass;
m = moment;
ns = number of divisions;
O = output;
P = term in recurrence equation;
p = load;
q = uniformly distributed load;
s = spacing of columns;
t = time;
u = displacement;
V = shear force;
W = weight of space truss;
w = deflection;
α = constant;
δ = deflection;
θ = slope; and
φ = shape function.

References

Adeli, H., and Huang, S. (1996). Machine learning: Neural networks, genetic algorithms, and fuzzy systems, Wiley, New York.
Castillo, E. (1998). "Functional networks." Neural Process. Lett., 7, 151-159.
Castillo, E., Cobo, A., Gutierrez, J. M., and Pruneda, E. (1998). An introduction to functional networks with applications, Kluwer, Boston.
Castillo, E., Cobo, A., Gutierrez, J. M., and Pruneda, E. (2000a). "Functional networks: A new network-based methodology." Comput. Aided Civ. Infrastruct. Eng., 15, 90-106.
Castillo, E., Gutierrez, J. M., Cobo, A., and Castillo, C. (2000b). "Some learning methods in functional networks." Comput. Aided Civ. Infrastruct. Eng., 1, 427-439.
Castillo, E., and Ruiz-Cobo, R. (1992). Functional equations in science and engineering, Marcel Dekker, New York.
Microsoft Excel package users manual. (1997). Microsoft, Redmond, Wash.
Pao, Y. (1989). Adaptive pattern recognition and neural networks, Addison-Wesley, Reading, Mass.
Ramaswamy, G. S., Eekhout, M., and Suresh, G. R. (2002). "Chapter 9: Optimization techniques." Analysis, design, and construction of steel space frames, Thomas Telford, London, 173-208.
SAP 2000: Structural analysis package users manual. (2000). Computers and Structures, Berkeley, Calif.
Schilling, R. J., and Harris, S. L. (2000). Applied numerical methods for engineers using MATLAB and C, Brooks/Cole Thomson Learning, Florence, Ky.
