network algorithm
Bang-cheng Zhang1,2, Jian-qiao Lin2, Zhen-chen Chang1*, Xiao-jing Yin2, Zhi Gao3
1. Changchun Railway Vehicles Co., Ltd, Changchun 130012, Jilin, China
E-mail: changzhenchen@cccar.com.cn
2. School of Mechatronic Engineering, Changchun University of Technology, Changchun 130012, Jilin, China
E-mail: zhangbangcheng@ccut.edu.cn
ljq_nemoy@163.com
yinxiaojing2011@163.com
3. Institute of Technology, Changchun University of Technology, Changchun 130012, Jilin, China
E-mail: gaozhi@ccut.edu.cn
Abstract: In order to improve the accuracy of multi-sensor data fusion, a new data fusion algorithm based on the BP neural network is proposed, which can prevent the network from failing to converge and improve the performance of the network. The calibration data used as the experimental data were collected on a sensor integrated test stand with Pt100 temperature sensors. The simulation results show that the improved BP neural network achieves better data-fusion accuracy for Pt100 temperature sensor data than the standard BP neural network. The proposed algorithm can be applied in multi-sensor data fusion.
Key Words: Multi-sensor, BP neural network, Data fusion, Matlab simulation
This work is supported by the Foundation of the Department of Education of Jilin Province of China under Grant 2014117, and the Foundation of the Department of Science and Technology of Jilin Province of China under Grant 20150204073GX.
Zhen-chen Chang, corresponding author. E-mail: changzhenchen@cccar.com.cn
978-1-4673-9714-8/16/$31.00 ©2016 IEEE    3842
where ω_mi(n) is the connection weight from the input layer to the hidden layer, ω_ij(n) is the connection weight from the hidden layer to the output layer, and v_Ii(n) and u_Jj(n) are the input and output of the hidden layer, respectively.
output of the hidden layer respectively. f ( ) is the which is the weight of the ( n 1) moment value. In order
activation function, the error of the output layer of the j to make the effect of the extreme value of the time,
neuron can be written as: should be less than .The change direction of the
e j ( n ) = d j ( n ) vJj ( n ) (2) connecting weights of the two moments is considered, this
can be used to improve the convergence rate for the higher
The total error of the network is obtained as follow: learning rate coefficient. The "inertia effect" is also stronger,
1 J the ability to inhibit the emergence of the network has also
e ( n ) = e 2j ( n ) (3)
2 j =1 been enhanced. The addition of momentum method not
only considers the effect of the error in the gradient, but
where d j ( n ) is the network expected output. also considers the variation tendency of the error surface, it
When the error is propagated along the network, the allows the neglect of small changes in the network. From
connection weights ( ij ) of the hidden layer and the output the stability of the learning process, the momentum term of
this method is equivalent to the damping term which
layer are adjusted, according to the steepest descent
decreases the oscillation of the learning process improving
method, calculate the error on ij gradient, let gradient is the convergence and finding a better solution.
e ( n )
, and then along the direction of reverse to adjust 3 IMPROVED BP ALGORITHM FOR
ij ( n ) MULTI SENSOR DATA FUSION
the connection weight.
3.1 Data preprocessing
Finally get the correction of the connection weight:
In solving practical problems in BP network, the
ij ( n ) = e j ( n ) v Ii ( n ) (4)
network always comes up a situation that the network does
When the error is propagated forward, the connection not converge or converge slowly. It is necessary to improve
weight ( mi ) is adjusted between the input layer and the the convergence rate of the network before the data enter
hidden layer: the network. There is not have a generally accepted method
of preprocessing, the most frequently used methods are the
mi ( n ) = Ii ( n ) x ( n ) (5) most value method, the sum value method, the peak value
where is the learning rate, is the local gradient. In BP method ,etc. Because of this paper is only for a temperature
network, the data is transmitted from the input layer to the parameter, in meeting the accuracy requirements, in order
output layer through the hidden layer. When the connection to improve the performance of network, the temperature
weights of the network are training, the weights of the data is preprocessed by the method of the maximum value
network are corrected along the direction of the error from method, and the formula is obtained as follows:
the input layer to the output layer through the hidden layer. x xmin
With the continuous learning, the error is gradually xi = i (8)
xmax xmin
reduced, and the training will not stop until the error is
reduced to a predetermined request, or the iteration where xi and xi are untreated and treated data respectively,
numbers achieve a predetermined number of learning.
xmax xmin are the maximum and minimum value before
2.2 Additional momentum method the data processing, the processed data input to the network
The momentum BP method is introduced into the and training.
momentum factor (0 < < 1) based on the standard BP 3.2 BP neural network design
algorithm, it makes the connection weights correction BP network can include one or more hidden layers, but
quantity that has certain inertia: it has been proved that the network can achieve arbitrary
( n ) = (1 ) e ( n ) + ( n 1) (6) nonlinear mapping by appropriate increase of the number of
nodes, so that the single hidden layer can satisfy the
Due to the increase of a factor (n 1) , which
requirements of most of the application. Because the test
means the connection weight update direction and sample is less, it can be used in three layers including one
magnitude are related with not only the calculated gradient, input layer, one hidden layer and one output layer.
but also related to the direction and magnitude of the last The number of nodes in the input layer depends on the
update, this factor adding to update the weight has a certain dimension of the input vector. Because the input of the
inertia, certain anti shock ability and speed up the network is the value of the 4 Pt100 temperature sensors, the
convergence ability. number of the input layer nodes is 4.
The improved algorithm of the momentum BP method There is no ideal analytic formula for the number of
is used in this paper, the corresponding connection weight nodes in the hidden layer. The usual method is to use
correction formula is obtained as follows: empirical formula to estimate the value of the formula
( n ) = (1 ) e ( n ) + ( n 1) + ( n 2 ) (7) M = log 2 n , n is the input layer neuron number, which is
This method is based on the ordinary added 2 of the number of hidden layer neurons.
momentum ( n 1) , and then add a term ( n 2 ) , Since this paper is to fuse the data from multiple
temperature sensors, the output layer neuron number is 1.
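The paper gives no implementation, so the following is a minimal numpy sketch of the two-term momentum update of Eq. (7) for the 4-input, single-hidden-layer network of Section 3.2. The sigmoid hidden activation, the linear output node, and all hyperparameter values are assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, d, eta=0.1, alpha=0.5, beta=0.2, epochs=3000, tol=1e-4):
    """Batch BP training with the improved momentum rule of Eq. (7).

    X : (N, 4) normalized sensor readings, d : (N, 1) calibration targets.
    beta < alpha, as the paper requires.
    """
    n_in = X.shape[1]
    n_hid = int(np.log2(n_in)) + 2          # M = log2(n) plus 2 (Sec. 3.2)
    W1 = rng.normal(0.0, 0.5, (n_in, n_hid))   # input -> hidden  (omega_mi)
    W2 = rng.normal(0.0, 0.5, (n_hid, 1))      # hidden -> output (omega_ij)
    dW1_1 = np.zeros_like(W1); dW1_2 = np.zeros_like(W1)  # Δω(n-1), Δω(n-2)
    dW2_1 = np.zeros_like(W2); dW2_2 = np.zeros_like(W2)
    for _ in range(epochs):
        h = sigmoid(X @ W1)                 # hidden-layer output
        y = h @ W2                          # linear output node
        e = d - y                           # Eq. (2): e_j = d_j - v_Jj
        if 0.5 * np.sum(e ** 2) / len(X) < tol:   # Eq. (3) total error
            break
        # g1, g2 are the negative gradients of the total error (steepest descent)
        g2 = h.T @ e / len(X)
        g1 = X.T @ ((e @ W2.T) * h * (1.0 - h)) / len(X)
        # Eq. (7): current gradient step plus two momentum terms
        dW2 = (1 - alpha - beta) * eta * g2 + alpha * dW2_1 + beta * dW2_2
        dW1 = (1 - alpha - beta) * eta * g1 + alpha * dW1_1 + beta * dW1_2
        W2 = W2 + dW2; W1 = W1 + dW1
        dW2_2, dW2_1 = dW2_1, dW2           # shift the momentum history
        dW1_2, dW1_1 = dW1_1, dW1
    return W1, W2
```

The two stored corrections Δω(n−1) and Δω(n−2) are shifted each iteration, so the update always blends the fresh gradient step with the directions of the two previous moments.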
3.3 Basic steps of data fusion for the improved BP algorithm
Step (1): The training data x_i are normalized according to the preprocessing of Section 3.1 to obtain x̄_i; x̄_i is taken as the input of the neural network, and the network output and the network error e(n) are calculated with the current connection weights;
Step (2): Determine whether the error is less than the preset value. If it is not, the error is propagated back along the network, and the connection weights Δω_ij(n) between the output layer and the hidden layer and Δω_mi(n) between the hidden layer and the input layer are modified; if it is, the algorithm exits and the training is completed;
Step (3): Calculate the output of the network with the modified connection weights, then repeat steps (2) and (3);
Step (4): When the error e(n) or the number of learning iterations meets the predetermined requirement, the network training is done.

Fig 1. Network training flow chart

The network is built from the Pt100 temperature sensor training data and test data, which are obtained from the comprehensive test bed. 15 sets of data are simulated in Matlab. The simulation data are shown in the following table:

Table 1. Sensor data
Num   Sensor1   Sensor2   Sensor3   Sensor4   Calibration value
 1     12.62     11.28     12.41     11.97     12
 2     12.81     12.84     13.06     12.89     13
 3     14.25     15.83     14.55     15.29     15
 4     19.82     19.58     18.09     19.41     19
 5     21.26     21.91     20.19     21.51     21
 6     23.19     24.31     24.64     23.55     24
 7     25.55     25.07     26.38     26.35     26
 8     29.09     29.69     28.63     29.31     30
 9     32.91     32.86     32.90     31.325    33
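The maximum-value preprocessing of Eq. (8) applied to the Table 1 readings can be sketched as follows. Whether x_max and x_min are taken globally or per sensor is not specified in the paper; this illustration uses the global extremes of the whole data set.

```python
import numpy as np

# Table 1 readings: rows = measurement points 1..9, columns = Sensor1..Sensor4.
readings = np.array([
    [12.62, 11.28, 12.41, 11.97],
    [12.81, 12.84, 13.06, 12.89],
    [14.25, 15.83, 14.55, 15.29],
    [19.82, 19.58, 18.09, 19.41],
    [21.26, 21.91, 20.19, 21.51],
    [23.19, 24.31, 24.64, 23.55],
    [25.55, 25.07, 26.38, 26.35],
    [29.09, 29.69, 28.63, 29.31],
    [32.91, 32.86, 32.90, 31.325],
])

def max_value_normalize(x):
    """Eq. (8): x_bar = (x - x_min) / (x_max - x_min), mapping data to [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

x_bar = max_value_normalize(readings)
print(x_bar.min(), x_bar.max())   # the smallest reading maps to 0.0, the largest to 1.0
```

The normalized array x_bar is what would be fed to the 4-input network of Section 3.2.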
Fig 3. Comparison of the RMS value of network training error (RMS error vs. iteration number, 0-3000)

From Fig 2 and Fig 3 it can be seen that the error convergence rate of the improved momentum BP method is fast and its RMS error value is small, so the network performance of the improved momentum BP method is better than that of the standard BP method.
The test data are then put into the trained network and fused; the fusion results are depicted in Fig 4.

Fig 4. Data fusion comparison chart (temperature/°C vs. test point, comparing the standard BP network, the improved BP network and the calibration values)

4 CONCLUSION
The improved data fusion algorithm based on the BP neural network can improve the accuracy of data acquisition effectively. From the point of view of the error between the training results and the expectation, when the improved momentum BP method is used to train the network, the root mean square value of the error is smaller than with the standard momentum method. From the point of view of data fusion, the fusion error of the improved momentum BP method is smaller, and the fusion accuracy is improved. The network performance is affected by the size of the training data set; training the network with a larger amount of data will make its performance better. When the actual working environment of the sensors becomes worse, the improved momentum method can further improve the network's ability to resist interference.
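As a rough sanity check on the fusion-accuracy comparison above, the RMS error of a naive four-sensor average against the Table 1 calibration values can be computed as follows. This is only a baseline sketch; it does not reproduce the trained-network outputs of Fig 4.

```python
import numpy as np

# Table 1: sensor readings and calibration values for measurement points 1..9.
readings = np.array([
    [12.62, 11.28, 12.41, 11.97],
    [12.81, 12.84, 13.06, 12.89],
    [14.25, 15.83, 14.55, 15.29],
    [19.82, 19.58, 18.09, 19.41],
    [21.26, 21.91, 20.19, 21.51],
    [23.19, 24.31, 24.64, 23.55],
    [25.55, 25.07, 26.38, 26.35],
    [29.09, 29.69, 28.63, 29.31],
    [32.91, 32.86, 32.90, 31.325],
])
calibration = np.array([12, 13, 15, 19, 21, 24, 26, 30, 33], dtype=float)

# Naive fusion baseline: arithmetic mean of the four sensors at each point.
fused = readings.mean(axis=1)
rms = np.sqrt(np.mean((fused - calibration) ** 2))
print(round(rms, 3))   # -> 0.345 (degrees C)
```

Any fusion network worth training should bring the RMS error below this plain-average baseline.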