
The application of multi-sensor data fusion based on the improved BP neural network algorithm
Bang-cheng Zhang1,2, Jian-qiao Lin2, Zhen-chen Chang1*, Xiao-jing Yin2, Zhi Gao3
1. Changchun Railway Vehicles Co., Ltd, Jilin, Changchun, 130012, China
E-mail: changzhenchen@cccar.com.cn
2. School of Mechatronic Engineering, Changchun University of Technology, Jilin, Changchun 130012, China
E-mail: zhangbangcheng@ccut.edu.cn, ljq_nemoy@163.com, yinxiaojing2011@163.com
3. Institute of Technology, Changchun University of Technology, Jilin, Changchun 130012, China
E-mail: gaozhi@ccut.edu.cn
Abstract: To improve the accuracy of multi-sensor data fusion, a new data fusion algorithm based on the BP neural network is proposed, which prevents the network from failing to converge and improves the network's performance. Calibration data collected on a sensor integrated test stand with Pt100 temperature sensors are used as the experimental data. The simulation results show that the improved BP neural network achieves better data fusion accuracy for Pt100 temperature sensor data than the standard BP neural network, so the proposed algorithm can be applied to multi-sensor data fusion.
Key Words: Multi-sensor, BP neural network, Data fusion, Matlab simulation

1 INTRODUCTION

In a complex working environment, with factors such as high pressure, electromagnetic interference and direct interference between sensors, the signal obtained by a sensor detecting a target is submerged in a large amount of noise, uncorrelated signals and clutter. Using multiple sensors to detect the target and fusing the collected data can break through the limitations of single-sensor measurement, avoid the blind area of single-sensor information, and improve measurement accuracy [1].

At present, data fusion methods include the BP neural network, Kalman filtering, D-S evidential theory and fuzzy logic. For dynamic inertial measurement data and machine-frame encoder information, the Kalman filter algorithm was used to compensate the dynamic measuring error and calculation error, so as to accurately locate the pose (position and orientation) of a parallel machine tool's moving platform [3]. The covariance matching relationship based on the Kalman filter was used in the literature [4]: fuzzy reasoning was used to obtain the weights of data fusion, and the state estimates of each sensor's Kalman filter were then fused to obtain the required target information. After analyzing D-S evidential theory for data fusion, D-S evidential theory was applied to data fusion for target recognition with very good results [5]. Since the structure of the BP neural network algorithm is simple and fast, the BP neural network algorithm is used for data fusion in this paper, and an improved BP algorithm is applied to improve the accuracy of data fusion on the basis of the traditional BP neural network.

During the training of a BP neural network, the oscillation phenomenon in network training is not inhibited by a single additive momentum term. A BP neural network algorithm with multiple momentum terms is therefore adopted to improve the convergence speed, accuracy and anti-disturbance ability of the network.

Taking the Pt100 temperature sensor as the test object, this paper adopts the improved BP neural network to fuse data from multiple Pt100 sensors. The experimental data are calibration data of the Pt100 sensors from the sensor integrated test bench, and multi-sensor data fusion with the BP neural network is verified. The experimental results show that the improved BP neural network increases the accuracy of data fusion effectively and can be applied in engineering.

This work is supported by the Foundation of the Department of Education of Jilin Province of China under Grant 2014117, and the Foundation of the Department of Science and Technology of Jilin Province of China under Grant 20150204073GX.
Zhen-chen Chang, corresponding author. E-mail: changzhenchen@cccar.com.cn

978-1-4673-9714-8/16/$31.00 © 2016 IEEE 3842

2 BP NEURAL NETWORK ALGORITHM

2.1 Standard BP algorithm

The connection weights of the standard BP neural network are modified along the reverse direction of the gradient of the error function. When the input data x(n) is transmitted through the network, the output of the j-th neuron of the output layer is:

v_Jj(n) = f(u_Jj(n))    (1)

where u_Jj(n) = Σ_{i=1}^{I} ω_ij(n) v_Ii(n) and v_Ii(n) = Σ_{m=1}^{M} ω_mi(n) x_Mm(n); ω_mi(n) is the connection weight from the input layer to the hidden layer, ω_ij(n) is the connection weight from the hidden layer to the output layer, and v_Ii(n), u_Jj(n) are the input and
output of the hidden layer, respectively. f(·) is the activation function. The error of the j-th neuron of the output layer can be written as:

e_j(n) = d_j(n) − v_Jj(n)    (2)

where d_j(n) is the expected output of the network. The total error of the network is obtained as:

e(n) = (1/2) Σ_{j=1}^{J} e_j²(n)    (3)

When the error is propagated back along the network, the connection weights ω_ij between the hidden layer and the output layer are adjusted according to the steepest descent method: the gradient of the error with respect to ω_ij, ∂e(n)/∂ω_ij(n), is calculated, and the connection weight is adjusted along the reverse of that direction. This finally gives the correction of the connection weight:

Δω_ij(n) = η e_j(n) v_Ii(n)    (4)

As the error propagates further, the connection weights ω_mi between the input layer and the hidden layer are adjusted:

Δω_mi(n) = η δ_Ii(n) x(n)    (5)

where η is the learning rate and δ_Ii is the local gradient. In a BP network the data is transmitted from the input layer through the hidden layer to the output layer. When the connection weights of the network are trained, they are corrected along the direction of the error from the input layer through the hidden layer to the output layer. With continued learning the error is gradually reduced, and training does not stop until the error falls below a predetermined value or the number of iterations reaches a predetermined number of learning times.

2.2 Additional momentum method

The momentum BP method introduces a momentum factor α (0 < α < 1) into the standard BP algorithm, which gives the connection weight correction a certain inertia:

Δω(n) = −(1 − α) η ∂e(n)/∂ω(n) + α Δω(n−1)    (6)

Due to the added term α Δω(n−1), the update direction and magnitude of the connection weight are related not only to the currently calculated gradient but also to the direction and magnitude of the last update. This gives the weight update a certain inertia and anti-shock ability, and speeds up convergence.

An improved momentum BP method is used in this paper; the corresponding connection weight correction formula is:

Δω(n) = −(1 − α) η ∂e(n)/∂ω(n) + α Δω(n−1) + β Δω(n−2)    (7)

This method starts from the ordinary momentum term α Δω(n−1) and adds a further term β Δω(n−2), which weights the correction made at moment (n−2). To limit the effect of values that far back in time, β should be less than α. Because the change direction of the connection weights over two past moments is considered, a higher learning rate coefficient can be used to improve the convergence rate. The "inertia effect" is also stronger, and the ability to inhibit oscillation of the network is enhanced. The additional momentum method not only considers the effect of the error gradient, but also the variation tendency of the error surface, and it allows small changes in the network to be neglected. From the standpoint of the stability of the learning process, the momentum term of this method is equivalent to a damping term which decreases the oscillation of the learning process, improving convergence and finding a better solution.

3 IMPROVED BP ALGORITHM FOR MULTI-SENSOR DATA FUSION

3.1 Data preprocessing

When solving practical problems with a BP network, a situation often arises in which the network does not converge or converges slowly, so it is necessary to improve the convergence rate of the network before the data enter it. There is no generally accepted preprocessing method; the most frequently used methods are the extreme-value method, the sum-value method, the peak-value method, etc. Because this paper deals only with a temperature parameter, and to improve the performance of the network while meeting the accuracy requirements, the temperature data are preprocessed by the maximum-value (min-max) method:

x'_i = (x_i − x_min) / (x_max − x_min)    (8)

where x_i and x'_i are the untreated and treated data respectively, and x_max, x_min are the maximum and minimum values before processing. The processed data are then input to the network for training.

3.2 BP neural network design

A BP network can include one or more hidden layers, but it has been proved that the network can achieve an arbitrary nonlinear mapping by an appropriate increase in the number of nodes, so a single hidden layer satisfies the requirements of most applications. Because the test sample is small, three layers are used here: one input layer, one hidden layer and one output layer.

The number of nodes in the input layer depends on the dimension of the input vector. Because the input of the network is the values of the 4 Pt100 temperature sensors, the number of input layer nodes is 4.

There is no ideal analytic formula for the number of nodes in the hidden layer. The usual method is to estimate it with the empirical formula M = log2 n, where n is the number of input layer neurons; this gives 2 hidden layer neurons.

Since this paper fuses the data from multiple temperature sensors, the number of output layer neurons is 1.
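The preprocessing of Eq. (8) and a single forward pass through the 4-2-1 network described above can be sketched as follows. This is a minimal illustration, not the authors' Matlab code: the sigmoid hidden layer, linear output neuron, small random initial weights, and all function and variable names are my own assumptions.

```python
import numpy as np

def min_max_normalize(x):
    """Eq. (8): scale raw readings into [0, 1] before they enter the network."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, w_hidden, w_out):
    """One pass through a 4-2-1 network: input -> sigmoid hidden -> linear output."""
    v = sigmoid(w_hidden @ x)   # hidden-layer outputs v_I(n), Eq. (1)
    return w_out @ v            # linear output neuron

rng = np.random.default_rng(0)
w_hidden = rng.uniform(-0.6, 0.6, (2, 4))  # input -> hidden weights
w_out = rng.uniform(-0.6, 0.6, (1, 2))     # hidden -> output weights

x = min_max_normalize([12.62, 11.28, 12.41, 11.97])  # first row of Table 1
y = forward(x, w_hidden, w_out)
print(y)
```

Before training, the output is of course arbitrary; training with the momentum updates of section 2.2 is what pulls it toward the calibration value.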

2016 28th Chinese Control and Decision Conference (CCDC) 3843


The candidate transfer functions are the sigmoid function and the linear function. Because the sigmoid function maps the range from negative infinity to positive infinity into (−1, 1) or (0, 1) and has a nonlinear amplification effect, the transfer function of the hidden layer is chosen as the Log-sigmoid function and the transfer function of the output layer is the linear function.

In a BP neural network the weights are determined by iterative updates, and an initial value that is too large or too small affects the performance of the network. Usually the initial weight is defined as a small nonzero value; the empirical range is (−2.4/F, 2.4/F), where F is the number of neurons connected to the input end of the weight. So the initial weights are random numbers over (−0.6, 0.6).

The expected error of network training is 0.01, the learning rate is η = 0.8, the maximum number of learning iterations is 1000, and the momentum factors are α = 0.7 and β = 0.5.

3.3 Basic steps of data fusion with the improved BP algorithm

Step (1): The training data x_i are normalized according to the preprocessing of section 3.1 to obtain x'_i; x'_i is taken as the input of the neural network and, combined with the connection weights, the output of the output layer v_Jj(n) and the network error e(n) are calculated.

Step (2): Determine whether the error is below the set value. If it is not, the error is propagated back along the network, and the connection weights between the output layer and the hidden layer and between the hidden layer and the input layer are modified by Δω_ij(n) and Δω_mi(n); if it is, the network is exited directly and training is completed.

Step (3): Calculate the output of the network using the modified connection weights, then repeat steps (2) and (3).

Step (4): When the error e(n) or the number of learning iterations meets the predetermined requirement, network training is done.

4 DATA SIMULATION

Four Pt100 temperature sensors are installed on the sensor detection test bench and the temperature of the test bed is adjusted. The four sensors are monitored in real time, and the data are stored in a database. The BP neural network training process is shown in figure 1.

[Fig 1. Network training flow chart: Training Data → Training Network → is the error below the set value? No: train again; Yes: apply Test Data → Fusion Result]

The network is constructed from the Pt100 temperature sensor training data and test data, which are obtained from the comprehensive test bed. 15 sets of data are simulated in Matlab. The simulation data are shown in the following table:

Table 1. Sensor data
Num | Sensor1 | Sensor2 | Sensor3 | Sensor4 | Calibration value
1   | 12.62 | 11.28  | 12.41 | 11.97  | 12
2   | 12.81 | 12.84  | 13.06 | 12.89  | 13
3   | 14.25 | 15.83  | 14.55 | 15.29  | 15
4   | 19.82 | 19.58  | 18.09 | 19.41  | 19
5   | 21.26 | 21.91  | 20.19 | 21.51  | 21
6   | 23.19 | 24.31  | 24.64 | 23.55  | 24
7   | 25.55 | 25.07  | 26.38 | 26.35  | 26
8   | 29.09 | 29.69  | 28.63 | 29.31  | 30
9   | 32.91 | 32.86  | 32.90 | 31.325 | 33
10  | 35.92 | 35.35  | 34.06 | 34.23  | 35
11  | 37.31 | 38.51  | 37.87 | 37.99  | 38
12  | 42.94 | 42.486 | 41.73 | 42.91  | 43
13  | 47.91 | 46.78  | 47.53 | 46.68  | 48
14  | 49.97 | 50.31  | 50.59 | 50.17  | 50
15  | 52.60 | 52.04  | 51.87 | 52.44  | 52

This paper simulates temperature data fusion using the standard BP method and the improved BP method. The BP neural network is designed and trained with the method above to obtain the final network. The comparison of RMS error values is given in figure 3.
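Steps (1)-(4) together with the two-momentum-term update of Eq. (7) can be sketched as below. This is a hedged reconstruction, not the authors' Matlab program: the 4-2-1 layout, transfer functions and parameters (η = 0.8, α = 0.7, β = 0.5, target error 0.01, at most 1000 iterations) follow the text, while the vectorised batch implementation, the scaling of the calibration targets, the divergence guard, and all names are my own assumptions.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# First three rows of Table 1: four sensor readings and a calibration value.
X = np.array([[12.62, 11.28, 12.41, 11.97],
              [12.81, 12.84, 13.06, 12.89],
              [14.25, 15.83, 14.55, 15.29]])
d = np.array([12.0, 13.0, 15.0])

# Step (1): min-max preprocessing (Eq. 8); scaling the targets as well is my
# assumption, made so the 0.01 error threshold is meaningful on this scale.
Xn = (X - X.min()) / (X.max() - X.min())
dn = (d - d.min()) / (d.max() - d.min())

rng = np.random.default_rng(1)
W1 = rng.uniform(-0.6, 0.6, (2, 4))  # input -> hidden weights (4-2-1 design)
W2 = rng.uniform(-0.6, 0.6, (1, 2))  # hidden -> output weights (linear output)

eta, alpha, beta = 0.8, 0.7, 0.5
dW1_p = dW1_pp = np.zeros_like(W1)   # Δω(n-1) and Δω(n-2) per layer
dW2_p = dW2_pp = np.zeros_like(W2)

for epoch in range(1000):            # Step (4): cap on the learning times
    V = sigmoid(Xn @ W1.T)           # hidden outputs v_I(n), Eq. (1)
    y = V @ W2.T                     # linear output layer
    e = dn[:, None] - y              # per-sample errors e_j(n), Eq. (2)
    E = 0.5 * float(np.sum(e ** 2))  # total error e(n), Eq. (3)
    if E < 0.01:                     # Step (2): error below the set value
        break
    if not np.isfinite(E):           # guard: with large α+β the update can diverge
        break
    # Backpropagated gradients of E (Eqs. 4-5); linear output => delta = e.
    gW2 = -(e.T @ V)
    delta_h = (e @ W2) * V * (1.0 - V)
    gW1 = -(delta_h.T @ Xn)
    # Eq. (7): negative-gradient step plus two momentum terms.
    dW2 = -(1 - alpha) * eta * gW2 + alpha * dW2_p + beta * dW2_pp
    dW1 = -(1 - alpha) * eta * gW1 + alpha * dW1_p + beta * dW1_pp
    W2 = W2 + dW2
    W1 = W1 + dW1
    dW2_pp, dW2_p = dW2_p, dW2
    dW1_pp, dW1_p = dW1_p, dW1
```

Steps (2) and (3) alternate inside the loop body; convergence on a given seed is not guaranteed, which is exactly the sensitivity to initial weights and momentum factors that section 3.2 discusses.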



[Fig 2. Local amplification of the RMS of network training error (error RMS vs. iteration number over 0-400 iterations; improved BP network vs. standard BP network)]

[Fig 3. Comparison of the RMS value of network training error (error RMS vs. iteration number over 0-3000 iterations; improved BP network vs. standard BP network)]

From Fig 2 and Fig 3 it can be seen that the error convergence rate of the improved momentum BP method is fast and its RMS error value is small, so the network performance of the improved momentum BP method is better than that of the standard BP method.

The test data are put into the trained network and fused; the fusion results are depicted in figure 4.

Table 2. Results of data fusion (/°C)
Number | Standard BP | Improved BP | Calibration value
1  | 12.18 | 12.92 | 12
2  | 12.49 | 13.27 | 13
3  | 15.49 | 15.20 | 15
4  | 20.01 | 19.17 | 19
5  | 22.65 | 21.45 | 21
6  | 23.88 | 24.45 | 24
7  | 26.79 | 26.41 | 26
8  | 29.74 | 29.80 | 30
9  | 31.68 | 32.62 | 33
10 | 34.61 | 34.89 | 35
11 | 37.08 | 37.72 | 38
12 | 43.53 | 42.82 | 43
13 | 47.39 | 47.74 | 48
14 | 50.31 | 50.16 | 50
15 | 51.28 | 51.95 | 52

The average error of the fusion results of the standard momentum method is 0.654, while that of the improved momentum method is 0.299; the fusion accuracy is improved relative to the standard momentum method.

5 CONCLUDING REMARKS

The simulation results show that data fusion based on the BP neural network can improve the accuracy of data acquisition effectively. From the point of view of the error between the training results and the expectation, when the improved momentum BP method is used to train the network, the root mean square error is smaller than with the standard momentum method; from the point of view of data fusion, the fusion error of the improved momentum BP method is small and the fusion accuracy is improved. The network performance is affected by the amount of training data: training the network with a larger amount of data will make its performance better. When the actual working environment of the sensors becomes worse, the improved momentum method can further improve the network's ability to resist interference.
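The reported averages can be checked directly against Table 2; interpreting them as the mean absolute deviation of each fused column from the calibration value (my reading of how they were computed) reproduces both figures:

```python
# Columns transcribed from Table 2.
standard = [12.18, 12.49, 15.49, 20.01, 22.65, 23.88, 26.79, 29.74,
            31.68, 34.61, 37.08, 43.53, 47.39, 50.31, 51.28]
improved = [12.92, 13.27, 15.20, 19.17, 21.45, 24.45, 26.41, 29.80,
            32.62, 34.89, 37.72, 42.82, 47.74, 50.16, 51.95]
calibration = [12, 13, 15, 19, 21, 24, 26, 30, 33, 35, 38, 43, 48, 50, 52]

def mean_abs_error(values, reference):
    """Average absolute deviation of the fused values from the calibration values."""
    return sum(abs(v - r) for v, r in zip(values, reference)) / len(values)

print(round(mean_abs_error(standard, calibration), 3))  # 0.654
print(round(mean_abs_error(improved, calibration), 3))  # 0.299
```

Both values match the paper's reported averages, which supports this reading of the error metric.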

[Fig 4. Data fusion comparison chart (temperature/°C vs. test point, 15 test points; standard BP network, improved BP network and calibration value)]

REFERENCES
[1] M. G. Huang, S. C. Fan, D. Z. Zheng, W. W. Xing. Research progress of multi-sensor data fusion technology[J]. Transducer and Microsystem Technologies, 2010, 03: 5-8+12.
[2] Z. Z. Zeng, Y. N. Wang. Multi-sensor information fusion approach based on the neural network algorithm with orthogonal basis functions[J]. Chinese Journal of Sensors and Actuators, 2007, 06: 1368-1370.
[3] L. Gu, R. G. Guan. Dynamic positioning method for parallel machine based on Kalman filtering data fusion[J]. Chinese Journal of Mechanical Engineering, 2007, 07: 195-201.
[4] X. J. Ding, D. Y. Zhou, C. H. Hu, Q. Wang. A new fuzzy algorithm for fusing data from active/passive radars[J]. Journal of Northwestern Polytechnical University, 2006, 02: 190-194.
[5] J. H. Lan, B. H. Ma, T. Lan, Z. Y. Zhou. D-S evidence reasoning and its data fusion application in target recognition[J]. Journal of Tsinghua University (Science and Technology), 2001, 41(2): 53-55.
[6] Y. J. Yang, S. Liu. Data aggregation in WSN based on SOFM neural network[J]. Chinese Journal of Sensors and Actuators, 2013, 12: 1757-1763.
[7] N. Wang, W. C. Li, Y. Li. Data fusion method based on neural network[J]. OME Information, 2010, 3: 36-42.
[8] G. Yang, S. Li, Z. Y. Chen, J. Xu, Z. Yan. High-accuracy and privacy-preserving oriented data aggregation algorithm in sensor networks[J]. Chinese Journal of Computers, 2013, 01: 189-200.
[9] C. X. Gu, Y. Q. Li, C. D. Gu. Remote sensing image watermarking based on PCA and data fusion[J]. Computer Science, 2012, 07: 290-292.
[10] Y. J. Xiong, M. X. Shen, M. Y. Lu, Y. H. Liu, Y. W. Sun, L. S. Liu. Algorithm of real time data fusion for greenhouse WSN system[J]. Transactions of the Chinese Society of Agricultural Engineering, 2012, 23: 160-166.
[11] Z. Q. Jiao, W. L. Xiong, L. Zhang, B. G. Xu. Multi-sensor data fusion method based on belief degree and its applications[J]. Journal of Southeast University (Natural Science Edition), 2008, S1: 253-257.
[12] W. B. Xie, Y. C. Wang, Y. Q. Zheng. A multi-granularity data fusion based algorithm for line detection[J]. Computer Science, 2007, 09: 213-217.
[13] J. S. Liu, R. H. Li, H. Chang. Multi-sensor data fusion based on correlation function and least square[J]. Control and Decision, 2006, 06: 714-716+720.
[14] Z. W. Sun, J. J. Liu, Z. C. Ji. Agent based D-S data fusion in wireless sensor network[J]. Computer Engineering & Science, 2014, 10: 1919-1924.

