
Applied Soft Computing 11 (2011) 1108–1116


Classification of material type and its surface properties using digital signal processing techniques and neural networks
Nadir N. Charniya a,∗, Sanjay V. Dudul b

a Dept. of Electronics Engineering, B.N. College of Engineering, Pusad 445 215, Maharashtra, India
b P.G. Dept. of Applied Electronics, Sant Gadge Baba Amravati University, Amravati, Maharashtra, India

Article info

Article history: Received 15 December 2006; Received in revised form 21 August 2007; Accepted 27 February 2010; Available online 6 March 2010.
Keywords: Classification; Multilayer Perceptron Neural Network; Signal processing

Abstract

A novel method for the classification of material type and its surface roughness by means of a lightweight plunger probe and an optical mouse is presented in this paper. An experimental prototype was developed in which a plunger-based impact probe bounces, or hops, freely on the plain surface of an object under test. The duration and features of the bouncing signal are related to the material type and its surface properties, and each material has a unique set of such properties. During the bouncing of the probe, a time-varying signal generated by the optical mouse is recorded in a data file on a PC. Dominant, unique features are then extracted using digital signal processing tools to optimize the neural network based classifier used in the system. The classifier is developed using supervised neural network structures. For this, an optimum Multilayer Perceptron Neural Network (MLP NN) model is designed to maximize accuracy under the constraint of minimum network dimension. The conjugate-gradient learning algorithm, which provides a faster rate of convergence, has been found suitable for training the MLP NN. The optimal parameters of the MLP NN model are determined on the basis of various performance measures, including the receiver operating characteristics curve and the classification accuracy on the testing data sets, even after attempting different data partitions. The classification accuracy of the MLP NN is found to be consistently reasonable under rigorous testing with different data partitions.
© 2010 Elsevier B.V. All rights reserved.

∗ Corresponding author. Tel.: +91 07233 249890; fax: +91 07233 246316. E-mail addresses: nncharniya@gmail.com, nrs1234@rediffmail.com (N.N. Charniya), svdudul@gmail.com (S.V. Dudul). 1568-4946/$ – see front matter © 2010 Elsevier B.V. All rights reserved. doi:10.1016/j.asoc.2010.02.010

1. Introduction

This paper presents a novel application of the power of neural networks and signal processing in the implementation of a system for classification of material type and its surface roughness. Surface roughness is a very important parameter for the products of the manufacturing industry, for example in laminating and painting applications and in the micromachining of polycrystalline silicon films. This parameter is also important for meeting surface finish requirements, determination of surface wear, surface metrology, surface topography, etc. Characterization of a material and its surfaces for quality control is an important aspect of many manufacturing processes.

Significant work has already been done on the classification and determination of material types and surface roughness [1–7]. McKerrow and Kristiansen [1] developed a continuous transmission frequency modulated ultrasonic sensing system for the classification of surface roughness. Ultrasonic sensing in air provides range, area, and angle information because the surface geometry determines the characteristics of the echo. The authors introduced the spatial-angle-filter model to explain the impact of surface roughness on the echo, and on the basis of the model obtained a set of features for classifying surfaces. Aoshima et al. [2] proposed an object discrimination system using a neural network with inputs for distance and sensitivity information from an ultrasonic sensor. The system could discriminate four kinds of objects: floor, acrylic plate, foamed polystyrene, and curtain cloth. Zhang et al. [3] used fibre optic sensors for surface roughness and displacement measurement with the aid of neural networks, and proposed the use of compact non-destructive sensors in industry for manufacturing process monitoring and automation. A novel smart tactile sensor that recognizes the nature of surfaces has been demonstrated experimentally by Baglio et al. [4]. The developed system is based on two bimorph piezo-ceramic elements, used both as actuator and sensor, which allow the unknown surface to be stimulated and the response signal to be sensed. Differences in the distribution of energy content in the power spectral densities (PSD) of the response signal were used to develop the recognition system. The sensors could be mounted on the end-effectors of robots for determination of material type. Using the system, six different materials including steel, plexiglass, wood, glass, aluminium, and stone were classified. McMath et al. [5] described an experimental work


on a tactile sensor based pattern recognition system using neural networks. The work presents the recognition of tactile images of letters embossed on wooden blocks. The development of a human-like multi-modal soft finger with the ability to sense the texture of objects such as paper and wood has been described by Tada et al. [6]. The fingertip has silicone rubber layers of different hardness; strain gauges and polyvinylidene fluoride (PVDF) films are randomly distributed in it as tactile sensors. To collect tactile sensor data, the soft fingertip was made to rub various objects with the help of a robotic hand. Variances of the different PVDF films were used as the features for discrimination of textures. Brenner et al. [7] described the effectiveness of neural networks for the classification of metal type and its surface roughness using a Dynamic Touch Sensor (DTS). The use of neural networks for a sensor with dynamic properties is proposed since a network learns how to separate classes of signals from examples. The DTS produces signals based on the vibration induced by a sensor needle sliding across the surface at a fixed velocity and pressure. The spectral energy of the sensor signal was used as the feature for classification. Commercial mice have been successfully modified/tailored and effectively used for various sensing applications such as tactile, pressure, force, position, flow rate, liquid level, slip, and texture sensing. This paper presents an optical mouse and plunger probe based low-cost, portable, novel system for the classification of material type and its surface roughness using a neural network. It investigates classification through the bouncing signal (fading multiple sequential impacts) generated by the release of a lightweight plunger probe onto the contact surface. The authors chose an optical mouse sensor to detect the bouncing of the plunger probe in view of developments such as [8–15].
Ng [8] has described the optical mouse as a cost-effective optical displacement sensor with high linearity and low error; its efficacy in measuring the viscoelastic elongation of polyethylene has been successfully demonstrated. Lott et al. [9] modified an optical mouse to develop a system with sub-millisecond accuracy capable of detecting minute position changes. Akamatsu [10] modified a computer mouse to add tactile and force display. Donatell et al. [11] developed a low-cost device that can be used by patients with lower back pain to both record and provide real-time biofeedback of lumbar position in the midsagittal and frontal planes during exercises at home. Chen [12] invented an economical head-operated computer mouse for people with disabilities, which employs two tilt sensors placed in a headset to determine head position and to function as a simple head-operated computer mouse. Sooyong [13], Bonarini et al. [14], and Palacin et al. [15] used optical mouse sensors for robot path planning since they are inexpensive, reliable, accurate, and very fast. The motivation for using a plunger probe is the outcome of the literature [16,17]. Tong et al. [16] investigated the acoustic characteristics of the impact sound produced by a controlled impactor on tiled walls. This technique was used for non-destructive inspection of bonding defects in tile walls, with the aid of neural networks. The experimental setup developed by Wu and Siegel [17] used a solenoid-driven hammer with an accelerometer and microphones embedded in the hammer head. Subtle differences in hammer impact force and sound features were used to discriminate defective regions of a structure from normal ones. Thus, in earlier investigations, different forms of impact probes have been successfully used for identification of materials or of defects in structures.
In this paper it is shown that the signals due to bouncing or hopping (diminishing oscillations) of a plunger probe on a material possess properties that are an important index for the classification and recognition of material type and its surface characteristics. The signal features of different materials and their surfaces are analyzed and classified. Signal processing techniques have been applied to optimize the artificial neural network adopted in the system. In this investigation, Welch's method of PSD estimation [18] is used as the basic tool for feature extraction to improve the efficiency of the neural network, i.e., to minimize the computing resources (fewer nodes required in the input and hidden layers). An optimal classifier based on different performance measures, using a Multilayer Perceptron Neural Network (MLP NN), has been designed for classification of material type and its surface roughness. The data sets for classification were obtained from the experimental setup developed by the authors. The optimal design of the classifier is investigated using the MLP NN trained with the back-propagation algorithm on the data sets. Back-propagation was tested with several different numbers of hidden units, and incremental results were also obtained (corresponding to how well the different variants of back-propagation performed after a periodic number of epochs). The generalization performance of the network is validated meticulously on the basis of important parameters [19] such as the mean square error (MSE), normalized mean square error (NMSE), confusion matrix, percent classification accuracy (PCLA), and area under the receiver operating characteristics curve (AROC) on the testing instances, even after attempting different data partitions. The receiver operating characteristics (ROC) curve [20–25] enables the user to evaluate a classifier in terms of the trade-offs between sensitivity and specificity. Woods and Bowyer [20] described a method of generating ROC curves for neural networks. Peterson and Coleman [21] used the AROC for the selection of suitable neural network based classifiers in cancer research. Downey et al. [22], Oberti et al. [23], and Alvarenga et al. [24] used the ROC to evaluate the performance of neural classifiers. A neural network based classification scheme has been developed by Azimi-Sadjadi et al.
[25] for classifying underwater mines and mine-like targets from acoustic backscattered signals. In that application, ROC curves were generated to select an excellent classifier.

2. Structure of detection system and its working mechanism

A personal computer based on an Intel Celeron microprocessor running at 1.6 GHz was used for the experiment. The monitor display area was 640 pixels wide by 480 pixels high. The system consists of an optical mouse and a plunger based impact probe. A commercial USB optical mouse featuring the Agilent ADNS-2051 sensor was used in this experimental work. These sensors are inexpensive, accurate, very fast, and reliable [13,14]. The resolution of the mouse is 800 counts per inch while moving at speeds of up to 14 inches per second. A polished steel probe that houses a sensitive spring at the bottom is used as the impact probe. To guide the vertical movement of the probe, an open-ended minimum-friction tube with a hole is used (Figs. 1–3). The probe is inserted in the tube. The optical mouse is fixed on the outer surface of the tube with the mouse sensor over the hole, so that it can detect the movements of the probe. The distance from the mouse lens reference plane to the probe surface is kept at typically 2.4 mm [13]. The sensor in the mouse gets electric power from the computer. During the experiment, the mouse gain was set to 1 and the normal ballistic scaling (a characteristic that increases gain during rapid movements) was turned off. This ensured that cursor movements (x-coordinates) exactly mimicked the probe hopping. The assembly thus formed is fixed on a stand. The bottom tip of the probe is convex, to ensure point contacts with the surface of the plain object and to provide maximum sensitivity during bouncing. An electromagnet is used to hold the probe at a desired distance from the surface of an object under test.

Fig. 1. The system with plunger based impact probe and optical mouse.

Fig. 3. Side view of the system.

The probe is released vertically from a fixed distance (1 inch) onto the plain surface of an object, towards gravity, by de-energizing the electromagnet. The thickness of the plain objects used for classification is kept identical, and the objects under test are placed on the same platform throughout the experimentation. After release, the probe bounces in the tube due to impact and spring action, resulting in multiple point contacts on the surface of the plain object for some time (2–3 s). The relative displacements with respect to time (the bouncing signal) are sensed by the optical mouse and transmitted to the PC through the USB port. The duration and features of the bouncing or hopping signals (Fig. 4) are related to the surface properties, and each material has a unique set of such properties. The signals from different materials and their surfaces are recorded on the PC in data files for feature extraction using signal processing and for the neural network based classification task. A program for cursor position detection and recording in data files has been custom-written in the C language. The feature extraction was carried out using MATLAB and its Signal Processing Toolbox. The Neural Network Toolbox in MATLAB (version 7.0) and NeuroSolutions (version 5.03) were used for the classification.

3. Signal processing

The signal is normalized between −1 and +1 prior to application of the signal processing techniques. The signal between the start point and the end point is extracted, and the length of the signal (the number of samples between the start point and the end point) is obtained. Inspection of Fig. 4 shows that there is apparently very little difference between the traces from two different surfaces of a material. This is normal, and it is the role of the signal processing and the artificial neural network to determine subtle differences in the signals and classify them. A digital signal processing technique is applied to the obtained signal prior to training of the artificial neural network, with the aim of reducing the computational resources of the implemented artificial neural network, i.e., fewer nodes required in the input and hidden layers.
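The normalization and framing steps described above can be sketched in a few lines of numpy; the toy decaying-cosine trace below merely stands in for a recorded bouncing signal and is not the paper's data:

```python
import numpy as np

def normalize_and_frame(x):
    """Scale a bouncing-signal trace to [-1, +1] and split it into three
    equal frames, returning the middle frame (as described in Section 3)."""
    x = np.asarray(x, dtype=float)
    x = x / np.max(np.abs(x))        # normalize between -1 and +1
    length = len(x)                  # signal-length feature
    frame = length // 3
    middle = x[frame:2 * frame]      # the middle frame carries the
                                     # most significant features
    return middle, length

# toy trace standing in for a recorded bouncing signal
t = np.linspace(0, 1, 300)
trace = np.exp(-3 * t) * np.cos(40 * t)
mid, n = normalize_and_frame(trace)
```

The signal length `n` is retained because it is one of the classifier inputs discussed later in the paper.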

Fig. 2. Disassembled view of the system.
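The cursor-recording program mentioned in Section 2 was custom-written in C; the Python sketch below only illustrates the idea, with a fabricated `read_cursor_x()` stub standing in for the real mouse read-out over USB:

```python
import io

def read_cursor_x(t):
    """Stand-in for the real mouse read-out (hypothetical decaying bounce)."""
    return 320 + int(10 * (0.9 ** t))

def record_trace(n_samples, out):
    """Poll the (simulated) cursor x-coordinate n_samples times and write
    one value per line to a data file, as the C recorder does."""
    for t in range(n_samples):
        out.write(f"{read_cursor_x(t)}\n")

buf = io.StringIO()          # a real run would open a file on disk
record_trace(5, buf)
lines = buf.getvalue().splitlines()
```

One x-coordinate per line keeps the data file trivial to load into MATLAB for the feature-extraction stage.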


Fig. 4. Signals from bouncing of probe on polished steel and milled steel surfaces.
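The feature extraction developed next in this section (Welch PSD of the middle frame from eight 50%-overlapping Hamming-windowed segments, then 32 points from the PSD peak decimated to 16 and augmented with the signal length) can be sketched directly from the equations; the segment length and random input below are assumptions for illustration only:

```python
import numpy as np

def welch_psd(x, M, overlap=0.5):
    """Welch PSD per Eqs. (1)-(4): split x into overlapping length-M
    segments, window each, and average the modified periodograms."""
    x = np.asarray(x, dtype=float)
    w = np.hamming(M)
    U = np.sum(w ** 2) / M                            # Eq. (3)
    step = int(M * (1 - overlap))
    periodograms = []
    for s in range(0, len(x) - M + 1, step):          # Eq. (1): segments
        seg = x[s:s + M] * w
        J = np.abs(np.fft.rfft(seg)) ** 2 / (M * U)   # Eq. (2)
        periodograms.append(J)
    return np.mean(periodograms, axis=0)              # Eq. (4)

# toy middle frame; 576 samples give exactly 8 half-overlapping
# segments of 128 samples
rng = np.random.default_rng(0)
frame = rng.standard_normal(576)
pxx = welch_psd(frame, M=128)

# features: 32 points from the PSD peak, decimated to 16,
# plus the signal length -> 17 inputs for the MLP NN
peak = int(np.argmax(pxx))
pts = np.zeros(32)
tail = pxx[peak:peak + 32]
pts[:len(tail)] = tail                                # zero-pad near the edge
features = np.concatenate([pts[::2], [len(frame)]])
```

Averaging the windowed periodograms is what reduces the variance of the estimate relative to a single periodogram of the whole record.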

The signal between the start point and the end point is divided into three equal frames. It was found that the middle frame gave more significant features than the other frames. The PSD of the middle frame is calculated using Welch's method, an improved estimator of the PSD [18,26,27]. The method consists of dividing the time series data into K = N/M segments of M samples (possibly overlapping) each, as defined by Eq. (1):

$$x^{(i)}(n) = x(n + iM - M), \quad 0 \le n \le M - 1, \; 1 \le i \le K \tag{1}$$

The modified periodogram of each segment is computed using Eqs. (2) and (3):

$$J_M^{(i)}(\omega) = \frac{1}{MU} \left| \sum_{n=0}^{M-1} x^{(i)}(n)\, w(n)\, e^{-j\omega n} \right|^2, \quad i = 1, 2, \ldots, K \tag{2}$$

where

$$U = \frac{1}{M} \sum_{n=0}^{M-1} w^2(n) \tag{3}$$

and w(n) is a window function. The spectrum estimate is defined by Eq. (4):

$$\hat{B}_{xx}(\omega) = \frac{1}{K} \sum_{i=1}^{K} J_M^{(i)}(\omega) \tag{4}$$

Averaging the periodograms in Eq. (4) tends to decrease the variance of the estimate relative to a single periodogram estimate of the entire data record. The data is divided into eight segments with 50% overlap between them, and a Hamming window is used to compute the modified periodogram of each segment. The resulting PSD curves for different surface properties are shown in Fig. 5.

Fig. 5. PSD plots for frames of bouncing signals from polished steel and milled steel surfaces.

As a result of applying the PSD, the signal information is more explicit and easier for the user to access than time-domain results, which require all of the signal data points. Classification can be made according to the structure of the spectrum [28] of the signals, because different surfaces have very different spectral structures. On this basis, an empirical decision was made to select the first 32 points starting from the peak value of the PSD plot as the main area of interest on the trace; these 32 points are further decimated to 16 points to reduce the number of inputs to the MLP NN. If the number of points is reduced further, the smoothness of the PSD curve is lost (leading to loss of detailed information) and it leads to error in surface roughness discrimination. Therefore, the first 16 points starting from the maximum value of the PSD curve, along with the length of the signal (17 inputs in total), form the input layer of the neural network. Of the several features extracted, these two were found to be the most significant and dominant in this case.

4. Classification of material type and its surface roughness

4.1. Preparation of training and testing data partitions

The experiment involves the classification of six different plain blocks (identical in thickness): polished steel (PS), milled steel (MS), polished aluminium (PA), milled aluminium (MA), polished hard plastic (PP), and milled hard plastic (MP). Sets of features were collected in a file, 80 records from each object (480 records in total for the 6 plain objects). A typical scatter plot of maximum magnitude of the PSD versus length of signal is shown in Fig. 6. It shows that the clusters are not linearly separable; therefore, a neural network is used for the classification. The scatter plot also makes clear that these features are reliable indicators for the classification of material type and its surface roughness. These 480 sets of features are then used as inputs to the neural networks for the classification of the plain objects. Two different data partitions

Fig. 6. A typical scatter plot for maximum magnitude of PSD versus length of signal.

Table 1. Data partition scheme for the MLP NN.

Data partition            Training instances       Testing instances
Set-1 (normal tagging)    1:240 (240 samples)      241:480 (240 samples)
Set-2 (reverse tagging)   241:480 (240 samples)    1:240 (240 samples)

are used with different tagging orders. In the first case, the first 50% of the samples (1:240) are used for training and the next 50% (241:480) for testing of the classifier. In the second case, the last 50% of the samples (241:480) are used for training and the first 50% (1:240) for testing. Table 1 highlights the data partition schemes employed in order to design a classifier.

4.2. Design of a MLP NN based classifier

The biggest advantage of using the MLP NN resides in its simplicity and the fact that it is well suited for online implementation [29,30]. The MLP NN is based on processing elements (PEs), which compute a non-linear function of the scalar product of the input vector and a weight vector. The neural network design mainly consists of defining the topology (i.e., the arrangement of PEs, connections, and patterns in the neural network) and the architecture (i.e., the selection of the number of PEs in each layer necessary for the specific application of the topology) of the network. In MLP networks, the PEs in a layer are connected to all the PEs in the following layer through unidirectional links represented by connection weights. An MLP NN consists of input, output, and one or more hidden layers with a predefined number of neurons. The neurons in the input layer only act as buffers for distributing the input signals x_i to the neurons in the hidden layer. Each neuron j in the hidden layer sums its input signals x_i after weighting them with the strengths of the respective connections w_{ji} from the input layer, and computes its output y_j as a function f_a of the sum, given by Eq. (5):

$$y_j = f_a\!\left( \sum_i w_{ji} x_i \right) \tag{5}$$

where f_a is one of the activation functions used in the ANN architecture. Training a neural network consists of adjusting the network weights using different learning algorithms. A learning algorithm gives the change Δw_{ji}(t) in the weight of the connection between neurons i and j at time t. The weights are then updated according to Eq. (6):

$$w_{ji}(t+1) = w_{ji}(t) + \Delta w_{ji}(t+1) \tag{6}$$

The MLP requires the determination of the activation functions and the thresholds of the PEs as well as of the connection weights. First, the activation functions and the thresholds are defined by a recursive optimization procedure [31]. Then, the connection weights are computed by means of a learning algorithm; many learning algorithms are available in the literature [19,32,33]. The choice of the number of hidden layers and the number of units in each hidden layer is critical [34]. It has been shown that an MLP network having a single hidden layer can classify a set of points perfectly if they are linearly separable [35–37]. An MLP NN having two hidden layers can generate arbitrary decision regions, which may be non-convex and disjoint; however, it does so at the cost of added computational complexity. The trade-off between accuracy and complexity of the model should be resolved carefully. The choice of the number of hidden layers and the number of units in each hidden layer is made on the basis of rigorous computer-simulation-based experiments. In order to gauge the real performance of the MLP NN, it should be re-trained a number of times with different random initializations of the connection weights [19]. This ensures true learning, helps avoid local minima, and promotes generalization. The performance of the trained NN is validated on different testing data sets that have not been presented to the NN during training.

The conjugate-gradient (CG) method is a second-order optimization method, which may be regarded as intermediate between the method of steepest descent and Newton's method. Use of the CG method is motivated by the desire to accelerate the typically slow rate of convergence experienced with steepest descent, while avoiding the computational requirements associated with the evaluation, storage, and inversion of the Hessian matrix in Newton's method [33,38,39]. It is well known that Newton's method locally approximates the function f by a quadratic function and minimizes the approximated quadratic exactly at each iteration. At iteration k, the function f is approximated by the truncated Taylor series given by Eq. (7):

$$f(x) \approx f(x_k) + (\nabla f(x_k))^T (x - x_k) + \tfrac{1}{2} (x - x_k)^T H(x_k)(x - x_k) \tag{7}$$

This quadratic function is minimized at:

$$x_{k+1} = x_k - H^{-1}(x_k)\, \nabla f(x_k) \tag{8}$$

H in Eq. (8) is the Hessian matrix. Evaluation of the inverse of the Hessian matrix is costly when the dimension of the problem is large. The CG method uses approximations to the true inverse of the Hessian matrix to avoid its direct computation. Among second-order optimization methods, it is widely acknowledged that CG is perhaps the only method applicable to large-scale problems, that is, problems with hundreds or thousands of adjustable parameters [40,41]. The basic steps of the CG back-propagation algorithm are as follows:

(a) Select the first search direction p_0 to be the negative of the gradient g_0:

$$p_0 = -g_0 \tag{9}$$

where

$$g_k \equiv \nabla f(x)\big|_{x = x_k} \tag{10}$$

and f(x) is the error function, i.e., the performance function of the feed-forward neural network.

(b) Take a step according to

$$\Delta x_k = (x_{k+1} - x_k) = \alpha_k p_k \tag{11}$$

by selecting the learning rate α_k to minimize the function along the search direction:

$$x_{k+1} = x_k + \alpha_k p_k \tag{12}$$

where

$$\alpha_k = -\frac{\nabla f(x)^T \big|_{x=x_k}\, p_k}{p_k^T\, \nabla^2 f(x)\big|_{x=x_k}\, p_k} = -\frac{g_k^T p_k}{p_k^T A_k p_k} \tag{13}$$

for the steepest-descent back-propagation method.

(c) Select the next search direction according to

$$p_k = -g_k + \beta_k p_{k-1} \tag{14}$$

where

$$\beta_k = \frac{g_k^T g_k}{g_{k-1}^T g_{k-1}} \tag{15}$$

Eq. (15) was given by Fletcher and Reeves [38].

(d) If the algorithm has not converged, continue from step (b).

A comparative study of various training algorithms for a two-hidden-layer MLP NN with 10 neurons in each hidden layer has been carried out. The comparison of convergence of the learning curves


for the maximum number of epochs set to 1000 is shown in Fig. 7. Step learning, quick-propagation, momentum learning, and CG learning were used for comparison; CG learning was found to converge faster than the other methods for training of the MLP NN. Details of the various training algorithms and their parameters can be found in [19,32,33]. The parameter variations chosen for the MLP NN are depicted in Table 2.

Fig. 7. Comparison of different learning curves for the training of MLP NN.

Table 2. Variable parameters of MLP NN.

Parameter                            Typical range
Number of hidden layers              1–3
Number of hidden neurons             2–40
Learning-rate parameter              0–1
Momentum constant                    0–1
Transfer function in output layer    Tanh, LinTanh, Softmax, Linear
Learning rule                        Momentum, CG, step, quick-propagation

When a NN has been trained, the next step is to evaluate it. The entire data set is usually randomized first. The training data are then split into two partitions; the first partition is used to update the weights of the network, and the second is used to assess its performance. The learning and generalization ability of the estimated NN based classifier is assessed on the basis of performance measures such as the MSE, NMSE, PCLA, and AROC [19]. Nevertheless, for classifiers the PCLA and AROC are the most crucial parameters. The MSE and NMSE are defined by Eqs. (16) and (17), respectively:

$$\mathrm{MSE} = \frac{\sum_{j=0}^{P} \sum_{i=0}^{N} (d_{ij} - y_{ij})^2}{N P} \tag{16}$$

where P = number of output neurons, N = number of exemplars in the data set, y_{ij} = network output for exemplar i at neuron j, and d_{ij} = desired output for exemplar i at neuron j.

$$\mathrm{NMSE} = \frac{P\, N\, \mathrm{MSE}}{\sum_{j=0}^{P} \left( N \sum_{i=0}^{N} d_{ij}^2 - \left( \sum_{i=0}^{N} d_{ij} \right)^{\!2} \right) \Big/ N} \tag{17}$$

The performance of a classifier is measured in terms of classification error; the PCLA is 100 minus the percentage classification error. The ROC enables the user to evaluate a model in terms of the trade-offs between sensitivity and specificity, and is one of the established methods for evaluating the performance of a classifier. Calculating the AROC assesses the performance of the classifier; the AROC ranges from 0.5 for chance to 1.0 for a perfect classifier.

4.3. Experimental determination of the optimal MLP NN classifier

In accordance with the earlier discussion, a three-layer MLP NN is chosen as the classifier. In a rigorous experimental study, the number of hidden neurons in both hidden layers was gradually increased from 2 to 40, and every time the network was run three times with different random weight initializations for 1000 epochs. The variation of PCLA and AROC with the number of neurons in the first hidden layer (hidden-1) and the second hidden layer (hidden-2) for the testing data set is graphed in Figs. 8 and 9, respectively.

Fig. 8. Number of neurons in the first hidden layer versus AROC and PCLA shows maxima at 19 hidden-1 neurons.

The performance of the selected model is optimal for 19 neurons in hidden-1 and 20 neurons in hidden-2 with regard to the PCLA and AROC on the testing data set. Increasing the number of neurons above 19 in hidden-1 and above 20 in hidden-2 did not improve the performance of the classifier. Similarly, increasing the number of hidden layers above two did not improve the performance significantly; on the contrary, training takes more time because of the higher complexity of the classifier. The NN model (17-19-20-6) was then re-trained (three times, with different weight initializations) with different numbers of epochs. The network showed no further improvement in PCLA and AROC on the testing data set after 950 epochs; in fact, performance decreases. It is thus demonstrated that the best network should have 19 neurons in hidden-1 and 20 neurons in hidden-2. In addition, the transfer function of each layer should be the hyperbolic tangent (Tanh), and the network should be trained using

Fig. 9. Number of neurons in the second hidden layer versus AROC and PCLA shows maxima at 20 hidden-2 neurons.

Table 3. Optimal parameters of MLP NN classifier; maximum epochs = 1000, supervised learning, number of inputs = 17.

Parameter              Hidden-1    Hidden-2    Output layer
Processing elements    19          20          6
Transfer function      Tanh        Tanh        Tanh
Learning rule          CG          CG          CG

Table 4. Confusion matrix of MLP NN (17-19-20-6) classifier on testing instances (set-1: normal tagging).

Desired    Output: MP   PP   MA   PA   MS   PS   Percent correct
MP         38           2    0    0    0    0    95.0
PP         2            38   0    0    0    0    95.0
MA         0            0    38   2    0    0    95.0
PA         0            0    2    38   0    0    95.0
MS         0            0    0    0    39   1    97.5
PS         0            0    0    0    2    38   95.0

Table 5. Confusion matrix of MLP NN (17-19-20-6) classifier on testing instances (set-2: reverse tagging).

Desired    Output: MP   PP   MA   PA   MS   PS   Percent correct
MP         39           1    0    0    0    0    97.5
PP         1            39   0    0    0    0    97.5
MA         0            0    39   1    0    0    97.5
PA         0            0    1    39   0    0    97.5
MS         0            0    0    0    39   1    97.5
PS         0            0    0    0    1    39   97.5

CG learning algorithm. The optimal parameter settings for the MLP NN based classifier are displayed in Table 3. The performance of the MLP on the training set is not an unbiased estimate of its performance on the universe of possible inputs, and an independent test set is required to evaluate the network performance after training. The designed classifier is evaluated on the testing instances of set-1 with respect to the confusion matrix, which displays the classification results of the network. There are 240 instances in the testing data set. The classification performance represented in Table 4 for the MLP is encouraging. On this basis, it is concluded that for this pattern-classification problem the use of 19 neurons in hidden-1 and 20 neurons in hidden-2 is adequate. It is also observed that for larger numbers of hidden neurons, though the MSE and NMSE are slightly lower, the average rate of correct classification shows no further improvement; in fact, it is slightly worse. In order to confirm that the proposed configuration of the MLP NN model is consistently capable of near-optimum classification, a different data partition, set-2 (reverse tagging order: interchanging the training and testing data sets), is used to train the classifier. Table 5 portrays the confusion matrix of the MLP classifier on test data set-2, with consistency in the performance. Table 6 displays the various important performance measures of the MLP classifier on the different data sets. The extent to which the classifier is able to correctly classify the exemplars is the most important criterion for its proper evaluation; this is expressed as % correct in the table.
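The bookkeeping behind confusion matrices such as Tables 4 and 5, and the PCLA derived from them, can be reproduced in a few lines of numpy; the class list matches the paper, but the toy labels below are illustrative, not the paper's data:

```python
import numpy as np

classes = ["MP", "PP", "MA", "PA", "MS", "PS"]

def confusion_matrix(desired, output, n_classes):
    """cm[i, j] counts exemplars of desired class i assigned to class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for d, o in zip(desired, output):
        cm[d, o] += 1
    return cm

# toy labels: 4 test exemplars per class, with one MP exemplar
# misclassified as PP
desired = np.repeat(np.arange(6), 4)
output = desired.copy()
output[0] = 1

cm = confusion_matrix(desired, output, 6)
per_class = 100.0 * np.diag(cm) / cm.sum(axis=1)   # percent correct per class
pcla = 100.0 * np.trace(cm) / cm.sum()             # overall PCLA
```

The diagonal of `cm` holds the correctly classified exemplars, so the PCLA is simply the trace divided by the total number of test exemplars.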
Table 6 Performance measures of MLP NN based classier on different data sets. Performance MP PP

The other performance measures, MSE and NMSE, are shown as a matter of record. It is seen from Table 6 that the MLP classifier gives good performance even on the different testing data sets, which is desirable, and that it provides consistent classification accuracy when different data partitions are used for training and testing. ROC analysis is one of the best ways to evaluate a classifier, as it expresses the trade-off between sensitivity and specificity. For the specified data sets, the classification performance is assessed by calculating the area under the ROC curve (AROC); for a perfect classifier, the AROC must approach unity. Fig. 10 demonstrates the ROC curve for the MLP NN based classifier on the testing instances (data partition: set-1), with sensitivity (detections) plotted against 1 - specificity (false alarms). From the graph, the classifier produces 95% correct detections at 4.95% false alarms. The specificity is therefore (1 - false alarms) x 100% = 95.05%. The AROC comes to about 0.973. In view of these facts, it may be inferred that the chosen configuration of the network is capable of operating as a successful classifier, with performance that is consistently good and independent of the specific data partition chosen for training the MLP NN model.
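The ROC quantities discussed above can be reproduced with a short numpy routine. This is a sketch for a single target-versus-non-target split (the paper's per-class curves are the multi-class analogue); the scores and labels below are invented toy data.

```python
import numpy as np

def roc_points(scores, labels):
    # Sweep the decision threshold from high to low and accumulate
    # detections (true positives) and false alarms (false positives).
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(labels)[order]
    tp = np.cumsum(y == 1)
    fp = np.cumsum(y == 0)
    tpr = tp / max(tp[-1], 1)          # sensitivity (detections)
    fpr = fp / max(fp[-1], 1)          # 1 - specificity (false alarms)
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def area_under_roc(fpr, tpr):
    # Trapezoidal estimate of the AROC; approaches 1 for a perfect classifier.
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))

# Toy scores: target exemplars tend to receive higher scores.
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
fpr, tpr = roc_points(scores, labels)
aroc = area_under_roc(fpr, tpr)
```

Reading the detection rate off such a curve at a chosen false-alarm rate gives exactly the sensitivity/specificity trade-off quoted in the text (95% detections at 4.95% false alarms).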

Performance        MP        PP        MA        PA        MS        PS

(a) Performance for training set, samples 1-240 (240 samples)
MSE                0.04798   0.04991   0.04956   0.04998   0.05155   0.04898
NMSE               0.05762   0.05781   0.05943   0.05987   0.05877   0.05883
Percent correct    100       100       100       100       100       100

(b) Performance for testing set-1 (normal tagging)
MSE                0.05385   0.05353   0.05433   0.05455   0.05333   0.05323
NMSE               0.17197   0.17717   0.17672   0.17917   0.17618   0.17952
Percent correct    95.00     97.50     97.50     97.50     97.50     95.00

(c) Performance for testing set-2 (reverse tagging)
MSE                0.05233   0.05323   0.05334   0.05343   0.05323   0.05445
NMSE               0.17262   0.17482   0.17632   0.17482   0.17635   0.17756
Percent correct    95.00     95.00     97.50     95.00     97.50     97.50
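The MSE and NMSE entries tabulated above follow the usual definitions; a minimal sketch, assuming NMSE is the MSE normalized by the variance of the desired output (a common simulator convention; the paper does not spell out its exact normalization).

```python
import numpy as np

def mse(desired, output):
    d, y = np.asarray(desired, dtype=float), np.asarray(output, dtype=float)
    return float(np.mean((d - y) ** 2))

def nmse(desired, output):
    # Assumed convention: MSE divided by the variance of the desired
    # signal, so NMSE = 1 corresponds to always predicting the mean.
    return mse(desired, output) / float(np.var(np.asarray(desired, dtype=float)))

# Toy check with tanh-style class targets (invented values).
desired = [1.0, -1.0, 1.0, -1.0]
output = [0.9, -0.8, 0.7, -1.0]
error = mse(desired, output)        # (0.01 + 0.04 + 0.09 + 0) / 4 = 0.035
normalized = nmse(desired, output)  # variance of desired is 1, so also 0.035
```

Under this convention an NMSE well below 1, as in all three panels of the table, indicates the network explains most of the variance of the target outputs.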


Careful inspection of the confusion matrices and the performance-measure tables shows that this configuration of the MLP NN based model successfully classifies the features of the bouncing signals, leading to classification of the material type and its surface roughness. The performance and accuracy of the system could be increased further by dedicated, precise fabrication of the detection system in place of the assembled prototype used at present.


Fig. 10. ROC curve for the MLP NN based classifier on the test data set.

Careful inspection of Tables 4-6 as well as Fig. 10 shows that the MLP NN based classifier satisfies the essential qualities and tests of a near-perfect (near-optimal) classifier, up to the end-user's expectations.

5. Conclusion

This paper presents successful detection of bouncing signals from the surfaces of different plain objects of identical thickness, using a neural network based sensor system comprising a plunger probe and an optical mouse. The novel system provides low-cost, non-destructive classification of material type and its surface properties: since it relies on optical sensing, it obviates the need for an analog-to-digital converter. The main constraint of the system is the delay of 2-3 s in classification imposed by its mechanism. Unlike analog sensors, the sensor has good immunity against noise and baseline drift. The digital signal processing applied to the signal prior to the artificial neural network successfully reduced the computational resources required. As a classifier, the best three-layered MLP NN (17-19-20-6), with Tanh transfer functions in the hidden layers, performs efficiently and works as an excellent classifier for the task under study. When the trained classifier is examined on the testing instances, it produces maximum and minimum surface-roughness classification accuracies of 97.5% and 95%, respectively; the classification accuracy for the type of material is 100%, and the AROC is close to unity. Even after changing the data partition to the reverse tagging order, the classifier consistently maintains its efficient performance, and it has performed as a near-optimal classifier even after repeating the simulation experiments a number of times on different data partitions. The ROC analyses show a reasonable trade-off between specificity and sensitivity: the classifier has a specificity of 95.05% at a sensitivity of 95%, which is indeed encouraging. The performance of the classifier was at its best when the CG algorithm was used for supervised learning, and with the Tanh activation function for the neurons of the output layer. This is expected, because for classification the output processing elements must be non-linear in order to generate arbitrarily complex decision regions.


