T.W. Liao / Engineering Applications of Artificial Intelligence 23 (2010) 74–84
Article history: Received 24 May 2007; received in revised form 10 May 2008; accepted 30 September 2009; available online 28 October 2009.

Abstract
Feature extraction and feature selection are two important issues in sensor-based condition monitoring of any engineering system. In this study, acoustic emission signals were first collected during grinding operations, next processed by autoregressive modeling or discrete wavelet decomposition for feature extraction, and then the best feature subsets were found by three different feature selection methods: two proposed ant colony optimization (ACO)-based methods and the well-known sequential forward floating selection method. Posing monitoring as a classification problem, the evaluation was carried out by the wrapper approach with four different algorithms serving as the classifier. Empirical test results are presented to illustrate the effectiveness of the feature extraction and feature selection methods.

© 2009 Elsevier Ltd. All rights reserved.
Keywords: Acoustic emission; Wheel condition monitoring; Feature extraction; Autoregressive model; Discrete wavelet decomposition; Feature selection; Ant colony optimization; Sequential forward floating selection
1. Introduction

Grinding is a major finishing process used to fabricate engineering objects to the specified dimensions and surface roughness. Grinding operations are carried out with grinding wheels as tools, and the tool condition changes from sharp to dull as the grinding operation progresses. A dull wheel is inefficient, draws higher power, and affects the quality of the ground workpiece. Close monitoring of tool condition is therefore very important to grinding, as it is to other machining operations. In current industrial practice, tool condition is often monitored by the operator overseeing the operation.
Efforts have been made in the past to develop sensor-based grinding wheel condition monitoring systems to improve the efficiency and integrity of grinding operations. In their keynote paper on grinding process monitoring, Tonshoff et al. (2002) reviewed the sensors used to monitor both the micro- and macro-topography of the active abrasive layer of the grinding wheel, which are recognized as the dominant features of the grinding process. A brief review of past publications is given in chronological order below. Mokbel and Maksoud (2000) analyzed raw AE signals using the fast Fourier transform and compared the AE spectral amplitudes of diamond wheels with different bond types and grit sizes. Warkentin and Bauer (2003) analyzed wheel wear using the grinding forces recorded at different depths of cut when surface grinding mild steel with an aluminum oxide wheel. For depths of 10, 20, and 30 μm, the average normal and tangential forces show a trend with a slight positive slope. For depths of 40 and 50 μm, the data show a negative slope and a piecewise negative slope, respectively.
Hosokawa et al. (2004) developed a practical wheel surface condition monitoring system in which the characteristics of the wheel surface are discriminated on the basis of the dynamic frequency spectrum signals of the grinding sound using a neural network technique. The system was successfully tested in the plunge grinding of carbon steel with a vitrified-bonded alumina wheel and in the plunge grinding of hardened die steel with a resinoid-bonded CBN wheel. Kwak and Ha (2004) showed that the grinding force signal in surface plunge grinding of STD11 with an alumina wheel could be better processed by wavelet de-noising than by FFT filtering. To detect the wheel dressing time clearly, the approximation coefficients A4 of the Daubechies wavelet transform were used because they were suitable for detecting a sudden signal change in the time or frequency domain. Liao et al. (2006) extracted features from acoustic emission signals collected at 1 MHz in grinding alumina with a resin-bonded diamond wheel and then applied an adaptive genetic clustering algorithm to the extracted features in order to distinguish different states of grinding wheel condition. Liao et al. (2007) proposed another clustering method, which involves first extracting features from acoustic emission signals based on discrete wavelet decomposition following a moving window approach, next generating a distance (dissimilarity) matrix using HMM, and then applying a clustering algorithm to obtain the clustering results. Table 1 summarizes the previous studies on sensor-based grinding wheel condition monitoring.
Feature extraction and feature selection are two important issues in sensor-based tool health monitoring. Feature extraction is a process that transforms the original sensory signal into a set of representative features.
Table 1. Summary of studies on sensor-based grinding wheel condition monitoring.
Reference | Tool state(s) | Grinding conditions | Signal(s) used | Signal analysis | Features | Prediction/detection algorithm | Test results
Furutani et al. (2002) | Wheel loading and dulling | Fixed | Hydrodynamic pressure | FFT | Spectral amplitude | None used | Not available
Hosokawa et al. (2004) | Wheel wear | Fixed | Sound | FFT | Frequency spectrum | Neural network | 80–100%
Hwang et al. (2000) | Wheel wear | Fixed | AE | Power spectral density | - | None used | Not available
Kwak and Ha (2004) | Wheel loading | Varied dressing feed | Force | Daubechies wavelet transform | Approximation coefficients | None used | Not available
Lezanski (2001) | Wheel wear | Three depths of cut | Forces, vibration, and AE | - | Statistical and spectral | Neuro-fuzzy model | 83.3% classification accuracy
Mokbel and Maksoud (2000) | Generated by using different grinding/truing speed ratios | Fixed | AE | Fast Fourier transform | Spectral amplitudes | None used | Not available
Liao et al. (2006) | Wheel wear | Varied | AE | Discrete wavelet decomposition | Wavelet energy | Adaptive genetic clustering | 76.7%
Liao et al. (2007) | Wheel wear | Varied | AE | Discrete wavelet decomposition | Wavelet energy | Hidden Markov model + clustering | 75–100% depending upon the grinding condition
2. Feature extraction

For an autoregressive (AR) model of order p, the model coefficients $\phi_1, \ldots, \phi_p$ satisfy the Yule-Walker equations

$$
\begin{aligned}
r_1 &= \phi_1 r_0 + \phi_2 r_1 + \phi_3 r_2 + \cdots + \phi_p r_{p-1},\\
r_2 &= \phi_1 r_1 + \phi_2 r_0 + \phi_3 r_1 + \cdots + \phi_p r_{p-2},\\
&\;\;\vdots\\
r_p &= \phi_1 r_{p-1} + \phi_2 r_{p-2} + \phi_3 r_{p-3} + \cdots + \phi_p r_0,
\end{aligned}
$$

where the autocorrelation of the signal y at lag t is estimated by

$$
r_t = \frac{\sum_{i=t+1}^{N} (y_i - \bar{y})(y_{i-t} - \bar{y})}{\sum_{i=1}^{N} (y_i - \bar{y})^2},
$$

where N is the size of the sample and $\bar{y}$ its mean. Plugging the estimated autocorrelations $r_t$ into the Yule-Walker equations, the system is solved to obtain the estimates of the AR model coefficients. Only the model coefficients are used as the features for subsequent classification; the intercept and the slope are not used.
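To make the procedure concrete, the estimation can be sketched in a few lines of Python; the study's own implementation was in Matlab, and the order p = 20 used here is inferred from the twenty AR features (x1–x20) that appear in Figs. 3 and 4 rather than stated explicitly.

```python
import numpy as np

def ar_features(window, p=20):
    # Estimate AR(p) coefficients of one signal window via the
    # Yule-Walker equations; the coefficients are the feature vector.
    # p = 20 is inferred from the twenty AR features in Figs. 3-4.
    y = np.asarray(window, dtype=float)
    y = y - y.mean()                       # de-mean; the intercept is not used
    N = len(y)
    # sample autocorrelations r_0 ... r_p
    r = np.array([np.dot(y[t:], y[:N - t]) for t in range(p + 1)])
    r = r / r[0]
    # Toeplitz system: R[i, j] = r_|i-j|, solve R * phi = (r_1 ... r_p)
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])       # phi_1 ... phi_p
```

De-meaning the window first is consistent with the remark above that the intercept is not kept as a feature.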
3. Feature selection

Three feature selection methods, namely two ACO-based methods and sequential forward floating selection (SFFS), are used to remove the irrelevant and redundant features that potentially exist in the two different feature vectors extracted by the two methods described in the previous section. Details of the ACO-based methods and a brief summary of the SFFS method are given in Sections 3.1 and 3.2, respectively, below.
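For orientation, SFFS (Pudil et al., 1994) alternates a greedy forward step with conditional backward "floating" steps. The following minimal Python sketch assumes a generic evaluate_error wrapper that returns the cross-validated classification error of a candidate subset; the names and structure are illustrative, not the authors' code.

```python
def sffs(features, evaluate_error):
    # Sequential forward floating selection: add the single best feature,
    # then drop features as long as doing so yields a better subset of
    # that size than previously seen. Returns (error, subset).
    selected, best, remaining = [], {}, set(features)
    while remaining:
        # forward step: add the feature that minimizes the wrapper error
        u = min(remaining, key=lambda f: evaluate_error(selected + [f]))
        selected.append(u)
        remaining.remove(u)
        err = evaluate_error(selected)
        k = len(selected)
        if k not in best or err < best[k][0]:
            best[k] = (err, list(selected))
        # backward (floating) steps: conditional exclusion
        improved = True
        while improved and len(selected) > 2:
            improved = False
            errs = [(evaluate_error([f for f in selected if f != v]), v)
                    for v in selected]
            err, v = min(errs)
            k = len(selected) - 1
            if k not in best or err < best[k][0]:
                selected.remove(v)
                remaining.add(v)
                best[k] = (err, list(selected))
                improved = True
    return min(best.values())
```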
3.1. The ACO-based methods

3.1.1. Traits
(1) It employs a colony of artificial ants that work cooperatively to find good solutions. Each artificial ant is a simple agent that mimics the path-finding behavior of real ants observed in real ant colonies. Although each artificial ant can build a feasible solution, high-quality solutions are the result of cooperation among the individuals of the whole colony. In the context of feature selection, each artificial ant is assigned the task of finding feasible feature subsets, and the whole colony works together to find the optimal feature subsets. The size of the colony, or the number of artificial ants, is a parameter that needs to be specified, similar to the population size, or the number of chromosomes, in a genetic algorithm.
(2) Each artificial ant builds its solution by applying a stochastic local search policy. The stochastic local search policy is guided both by the ant's private information, which contains the memory of the ant's past actions, and by publicly available pheromone trails and a priori problem-specific local information. In the context of feature selection, an ant's private information contains the features that have been selected so far in building a feature subset.

The stochastic local search policy used in the sequential search strategy follows that used by Sivagaminathan and Ramakrishnan (2007) (called the state transition rule in their paper), which is similar to the pseudo-random-proportional rule used in the traveling salesman problem (Dorigo et al., 1999). Specifically, an ant chooses a feature u from the unselected feature set U at iteration t according to the following probability:
$$
p_u(t) =
\begin{cases}
1 & \text{if } q < q_0 \text{ and } u = \arg\max_{u \in U}\{\tau_u(t)\,[\eta_u]^\beta\},\\[2pt]
0 & \text{if } q < q_0 \text{ and } u \neq \arg\max_{u \in U}\{\tau_u(t)\,[\eta_u]^\beta\},\\[2pt]
\dfrac{\tau_u(t)\,[\eta_u]^\beta}{\sum_{u \in U} \tau_u(t)\,[\eta_u]^\beta} & \text{if } q \geq q_0,
\end{cases}
$$

where q is a random variable uniformly distributed over [0, 1], $q_0 \in [0, 1]$ is the exploitation probability factor, $\tau_u(t)$ is the pheromone level of feature u at iteration t, $\eta_u$ is the a priori local information of feature u, and $\beta$ determines its relative importance. The random search strategy instead accepts or rejects a randomly drawn feature u according to

$$
p_u(t) =
\begin{cases}
1 & \text{if } q_1 < q_0 \text{ and } \dfrac{\tau_u(t)\,[\eta_u]^\beta}{\sum_{u \in U} \tau_u(t)\,[\eta_u]^\beta} > q_2, \text{ or if } q_1 \geq q_0 \text{ and } q_3 > 0.5,\\[4pt]
0 & \text{if } q_1 < q_0 \text{ and } \dfrac{\tau_u(t)\,[\eta_u]^\beta}{\sum_{u \in U} \tau_u(t)\,[\eta_u]^\beta} \leq q_2, \text{ or if } q_1 \geq q_0 \text{ and } q_3 \leq 0.5.
\end{cases}
$$

In the above equation, q1, q2, and q3 are all random variables uniformly distributed over [0, 1]; all other notations are identical to those defined in the first rule.
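Operationally, the two rules can be sketched as follows; tau and eta are plain dictionaries keyed by feature, and the uniform heuristic desirability and beta = 1 are assumptions rather than reported settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_feature_acos(U, tau, eta, beta=1.0, q0=0.5):
    # Sequential-search state transition rule: with probability q0,
    # exploit by taking the unselected feature maximizing tau*eta^beta;
    # otherwise sample a feature proportionally to tau*eta^beta.
    U = list(U)
    w = np.array([tau[u] * eta[u] ** beta for u in U])
    if rng.random() < q0:
        return U[int(np.argmax(w))]                  # exploitation
    return U[rng.choice(len(U), p=w / w.sum())]      # biased exploration

def accept_feature_acor(u, U, tau, eta, beta=1.0, q0=0.5):
    # Random-search variant: a randomly drawn feature u is accepted when
    # its normalized weight beats a uniform draw q2 (if q1 < q0), or on
    # a fair coin flip q3 (if q1 >= q0).
    w = sum(tau[v] * eta[v] ** beta for v in U)
    p = tau[u] * eta[u] ** beta / w
    q1, q2, q3 = rng.random(3)
    return (q1 < q0 and p > q2) or (q1 >= q0 and q3 > 0.5)
```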
(3) To cooperate, artificial ants communicate by stigmergy, similar to real ants, by depositing pheromone on the ground while walking. The shorter the path between the food source and the nest, the sooner pheromone is deposited by the real ants, which in turn leads to more ants taking the shorter path. Meanwhile, the pheromone deposited on the longer path receives few or no new deposits and evaporates over time. Artificial ants deposit an amount of pheromone that is a function of the quality of the solution found. Unlike with real ants, the timing for updating pheromone trails is often problem dependent. Our methods update pheromone trails only after a solution is generated.
Our pheromone updating policy is modified from that used by Sivagaminathan and Ramakrishnan (2007). For this policy the elitist strategy is adopted; i.e., only the best ant is allowed to update pheromone trails. For any feature u in the best feature subset found in iteration t, denoted by X0(t), its pheromone level $\tau_u$ is updated by applying the global updating rule

$$
\tau_u(t+1) = (1 - e)\,\tau_u(t) + \frac{e}{\min E(t) + e}, \qquad \forall u \in X^0(t),
$$

where e is the pheromone evaporation parameter and min E(t) is the lowest classification error found in iteration t.
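A sketch of this elitist update, under the reconstruction above; evaporating all trails (rather than only those of the reinforced features) is a standard ACO choice assumed here, and the e/(E + e) increment should be read with the same caution as the formula itself.

```python
def update_pheromone(tau, best_subset, best_error, e=0.5):
    # Elitist global update after an iteration: trails evaporate, and
    # only features in the iteration-best subset X0(t) are reinforced,
    # more strongly the lower the classification error.
    for u in tau:
        tau[u] *= (1.0 - e)            # evaporation (assumed for all trails)
    for u in best_subset:              # reinforcement by the best ant only
        tau[u] += e / (best_error + e)
```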
3.1.2. Algorithms

The two ACO-based feature selection methods are named ACO-S and ACO-R, with S and R denoting sequential forward search and random search, respectively.

3.1.2.1. ACO-S. The ACO-S algorithm has the following steps:
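In outline, and reusing choose_feature_acos and update_pheromone from the earlier sketches, the loop can be pictured as below. This is a plausible sketch rather than the authors' exact procedure: the uniform initial trails, the uniform eta, and the random per-ant subset size are assumptions, while n_ants and n_iters play the role of max A and max T in the tables (e.g., ACO-S (10, 10)).

```python
import numpy as np

rng = np.random.default_rng(0)

def aco_s(features, evaluate_error, n_ants=10, n_iters=10, q0=0.5, e=0.5):
    # Plausible ACO-S outline: each ant grows a subset sequentially with
    # the state transition rule, subsets are scored by the wrapper, and
    # only the iteration-best ant updates the pheromone trails.
    tau = {u: 1.0 for u in features}   # assumed uniform initial trails
    eta = {u: 1.0 for u in features}   # assumed uniform a priori desirability
    best_err, best_subset = float("inf"), None
    for _ in range(n_iters):
        scored = []
        for _ in range(n_ants):
            subset, U = [], set(tau)
            size = rng.integers(1, len(tau) + 1)   # assumed subset-size rule
            while len(subset) < size:
                u = choose_feature_acos(U, tau, eta, q0=q0)
                subset.append(u)
                U.remove(u)
            scored.append((evaluate_error(subset), subset))
        it_err, it_subset = min(scored, key=lambda s: s[0])
        update_pheromone(tau, it_subset, it_err, e=e)   # elitist update
        if it_err < best_err:
            best_err, best_subset = it_err, list(it_subset)
    return best_subset, best_err
```

ACO-R would differ mainly in how each ant assembles its subset, drawing candidate features at random and filtering them with accept_feature_acor instead of growing the subset sequentially.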
4. Test results

All three feature selection methods and five classification algorithms were coded in Matlab and executed on a Dell Latitude C640 laptop computer with a Mobile Intel 2 GHz Pentium 4-M CPU. A total of 320 records each of both the wavelet energy levels data and the AR model coefficients data were used in the tests.
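The wrapper evaluation that scores candidate subsets can be sketched as follows, assuming scikit-learn's 1NN and the stratified 10-fold cross-validation protocol mentioned in Section 5; the original code was Matlab, and the other four classifiers would slot into the same interface.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def evaluate_error(subset, X, y):
    # Wrapper evaluation: stratified 10-fold cross-validation error of a
    # 1NN classifier restricted to the candidate feature subset.
    clf = KNeighborsClassifier(n_neighbors=1)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X[:, list(subset)], y, cv=cv)
    return 1.0 - scores.mean()
```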
Table 2. Summary of classification error rates for the wavelet energy levels data.
Classifier | None (%) | SFFS | ACO-S (10, 10) | ACO-R (50, 50)
NM | 34.69 | 23.44 | 23.44 ± 0.0 | 24.69 ± 0
1NN | 13.13 | 10 | 10 ± 0 | 10.13 ± 0.17 (best = 10)
F1NN | 13.13 | 10 | 10.25 ± 0.34 (best = 10) | 10.19 ± 0.28 (best = 10)
CBNN | 11.25 | 7.81 | 8.13 ± 0.22 (best = 7.81) | 8.13 ± 0.22 (best = 7.81)
KMNP | 13.13 | 10 | 10.25 ± 0.34 (best = 10) | 10.25 ± 0.26 (best = 10)
Table 3. Summary of CPU times (in seconds) for the wavelet energy levels data.
Classifier | SFFS | ACO-S (10, 10) | ACO-R (50, 50)
NM | 14 | 26 | 161
1NN | 491 | 1453 | 5993
F1NN | 381 | 380 | 3931
CBNN | 3200 | 5710 | 24,704
KMNP | 671 | 1011 | 6032
Table 4. Summary of best feature subsets found for the wavelet energy levels data.
Classifier | SFFS | ACO-S (10, 10)
NM | {10} | {10} 5/5
1NN | {1–4, 6, 7, 9–13} | {2, 3, 10–13} 2/5
F1NN | {1–4, 6, 7, 9–13} | {2, 3, 10–13} 1/5; {1–4, 7, 9–13} 1/5; {1–4, 7, 8, 10–13} 1/5
CBNN | {1–4, 6, 7, 9–13} | {2, 3, 10–13} 2/5; {1–4, 7, 8, 10–13} 1/5
KMNP | - | -
Table 6. Summary of CPU times (in seconds) for the AR model coefficients data.
Classifier | SFFS | ACO-S (10, 10) | ACO-R (50, 50)
NM | 29.5 | 40.8 | 147.0 (a)
1NN | 1517.7 | 542.3 | 6242.1
F1NN | 1146.9 | 708.0 | 4180.9
CBNN | 4968.3 | 2298.8 | 23,189.3
KMNP | 1908.7 | 843.7 | 5982.4
(a) When max A and max T were increased to 100, the CPU time becomes 770.7.
Fig. 1. Pheromone level change profile of an ACO-S run with CBNN as the classifier for the dataset of wavelet energy features. (Plot: pheromone level vs. update sequence; one curve per feature, x1–x13.)
Fig. 2. Pheromone level change profile of an ACO-R run with CBNN as the classifier for the dataset of wavelet energy features. (Plot: pheromone level vs. update sequence; one curve per feature, x1–x13.)
Table 5. Summary of classification error rates for the AR model coefficients data.
Classifier | No selection (%) | SFFS | ACO-S (10, 10) | ACO-R (50, 50)
NM | 39.375 | 21.25 | 21.5 ± 0.51 (best = 20.625) | 25.38 ± 1.6 (a) (best = 22.81)
1NN | 15.625 | 6.875 | 9 ± 1.69 (best = 6.875) | 8.38 ± 0.87 (best = 7.19)
F1NN | 19.375 | 10.625 | 11.25 ± 0.44 (best = 10.625) | 12.06 ± 0.93 (best = 10.625)
CBNN | 14.0625 | 6.875 | 9.38 ± 2.01 (best = 7.19) | 8.44 ± 0.31 (best = 8.125)
KMNP | 15.625 | 6.875 | 7.56 ± 0.81 (best = 6.875) | 8.5 ± 0.78 (best = 7.19)
(a) When max A and max T were increased to 100, the error rates become 23.88 ± 0.47 with the best = 23.13.
Table 7. Summary of best feature subsets found for the AR model coefficients data.
SFFS | {1, 3, 4, 8, 9, 10}
ACO-S (10, 10) | {1, 3, 4, 8, 9, 10} 1/5 (a)
(a) This feature subset has slightly lower error than that found by SFFS, as indicated in Table 5.
Fig. 3. Pheromone level change profile of an ACO-S run with 1NN as the classifier for the dataset of AR coefficients. (Plot: pheromone level vs. update sequence; one curve per feature, x1–x20.)
Fig. 4. Pheromone level change profile of an ACO-S run with KMNP as the classifier for the dataset of AR coefficients. (Plot: pheromone level vs. update sequence; one curve per feature, x1–x20.)
Table 8. Summary of classification error rates for the iris dataset.
Classifier | No selection (%) | Exhaustive (%) | SFFS (%) | ACO-S (2, 2) (%) | ACO-R (5, 5) (%)
NM | 7.33 | 4 | 4 | 4 ± 0 | 4 ± 0 (best = 4)
1NN | 4.67 | 3.33 | 3.33 | 3.87 ± 0.73 (best = 3.33) | 3.87 ± 0.73 (best = 3.33)
F1NN | 6 | 6 | 6 | 6 ± 0 (best = 6) | 6 ± 0 (best = 6)
CBNN | 8 | 7.33 | 7.33 | 7.33 ± 0 (best = 7.33) | 7.33 ± 0 (best = 7.33)
KMNP | 4.67 | 3.33 | 3.33 | 3.87 ± 0.73 (best = 3.33) | 3.33 ± 0
5. Discussion

To reaffirm the validity of the test results presented in Section 4, the iris and wine datasets taken from the UCI Repository were also tested with the same test procedure and methods. The reason for choosing the iris dataset is its small size, which allows the exhaustive method to also be applied to identify the globally optimal feature subset.

Tables 8-10 summarize the stratified 10-fold cross-validation classification error rates, the CPU times, and the best feature subsets found for each combination of feature selection method and classifier, respectively. The above results were obtained with two ants and two iterations for the ACO-S method, regardless of the classifier used, and with five ants and five iterations for ACO-R. As in the previous tests, five runs were made for each ACO-based method, with both q0 and e arbitrarily set to 0.5.
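Wired to the sketches above, the iris setup reads roughly as follows; this is hypothetical glue code, and the features are 0-indexed here whereas Tables 8 and 10 number them 1-4.

```python
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
subset, err = aco_s(range(X.shape[1]),
                    lambda s: evaluate_error(s, X, y),
                    n_ants=2, n_iters=2, q0=0.5, e=0.5)
print(sorted(subset), err)   # e.g. a small subset such as [3] or [2, 3]
```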
The results indicate that:

(1) Feature selection improves the classification accuracy, regardless of the combination of feature selection method and classifier used.
(2) The globally optimal feature subset is {4}, {3, 4}, {1, 2, 3, 4}, {1, 2, 3, 4}, and {3, 4}, with the lowest classification error of 4%, 3.33%, 6%, 7.33%, and 3.33% for the NM, 1NN, F1NN, CBNN, and KMNP classifiers, respectively.
(3) All three feature selection methods can achieve the best performance, regardless of the classifier used. The globally optimal feature subset was found by all feature selection and classifier combinations except one, i.e., ACO-R with NM, which found a best feature subset larger than the globally optimal feature subset.
Table 9. Summary of CPU times (in seconds) for the iris dataset.
Classifier | Exhaustive | SFFS | ACO-S (2, 2) | ACO-R (5, 5)
NM | 0.75 | 0.85 | 0.86 | 1.21
1NN | 5.33 | 6.17 | 5.79 | 8.91
F1NN | 3.52 | 4.26 | 3.96 | 6.51
CBNN | 19.87 | 38.06 | 20.74 | 34.35
KMNP | 5.32 | 6.16 | 5.84 | 8.96
(4) For a dataset with a small number of features, like the iris dataset, it makes little sense to use any feature selection method at all: for such a dataset, the feature selection methods took even longer CPU time than the exhaustive method.

Table 11 summarizes the classification error rates for the wine dataset. Again, the results indicate that feature selection improves the classification accuracy, regardless of the combination of feature selection method and classifier used. The above test results have clearly shown that both the ACO-based feature selection methods and the SFFS method are capable of finding the best feature subsets.
Next, the effects of two more parameters, namely the exploitation probability factor q0 and the pheromone evaporation parameter e, are investigated. With all other parameters fixed, these two parameters are varied at three levels each: 0.1, 0.5, and 0.9. Hence, a total of nine combinations are tested, including the one used in the previous runs, i.e., the 0.5-0.5 combination. The 1NN classifier is chosen over the other classifiers for this investigation for the following reasons: (1) the CPU time required is not too long; and (2) it is the best classifier for the weld flaw identification dataset.
Table 12 summarizes the effects of the ACO-S parameters on the classification error, CPU time, and best feature subsets found for the dataset of wavelet energy features, with max A and max T set at 10. The results indicate that all nine combinations of q0 and e can produce the lowest classification error of 10%. Using large values of q0 and e takes less CPU time; however, the number of runs producing the best result is also reduced. On the other hand, using small values of q0 and e takes longer CPU time, but the number of runs producing the best result increases. Table 13 summarizes the results for the dataset of AR coefficients, with max A and max T set at 10 as well. Three of the nine combinations did not produce the lowest classification error of 6.875%: the (0.1, 0.9), (0.5, 0.9), and (0.9, 0.1) combinations. Generally, to obtain a lower classification error, a lower value of q0 is better provided the e value is not too high, at the cost of slightly longer CPU time.
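The nine-combination sweep is straightforward to reproduce with the earlier sketches; X and y below are hypothetical stand-ins for the wavelet-energy (or AR-coefficient) feature matrix and the wheel-condition labels.

```python
# sweep the exploitation factor q0 and evaporation parameter e
for q0 in (0.1, 0.5, 0.9):
    for e in (0.1, 0.5, 0.9):
        subset, err = aco_s(range(X.shape[1]),
                            lambda s: evaluate_error(s, X, y),
                            n_ants=10, n_iters=10, q0=q0, e=e)
        print(f"q0={q0} e={e}: error={err:.4f} subset={sorted(subset)}")
```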
Lastly, the effectiveness of using db3, instead of db1, for the wavelet-based feature extraction method was also tested. The results are summarized in Table 14. Compared with Table 2, it can be seen that:

(1) Without feature selection, the classification accuracies of db3 are all worse than those of db1, regardless of the classifier used.
(2) With feature selection, the best classification accuracy of db3 is 2.19% lower than that of db1 for the CBNN classifier, and higher than that of db1 for the other four classifiers.
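For reference, wavelet-energy feature vectors of the kind compared here (db1 vs. db3) can be computed with PyWavelets; the decomposition depth below is inferred from the 13 wavelet features (x1–x13) in Figs. 1 and 2 and is an assumption.

```python
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db3", level=12):
    # Energy of the final approximation band plus each detail band of a
    # discrete wavelet decomposition; level=12 yields 13 features.
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet,
                          level=level)
    return np.array([np.sum(c * c) for c in coeffs])
```

Swapping the wavelet argument between "db1" and "db3" reproduces the comparison discussed above.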
Table 10. Summary of best feature subsets found for the iris dataset.
Classifier | Exhaustive | SFFS | ACO-S (2, 2) | ACO-R (5, 5)
NM | {4} | {4} | {3, 4} 4/5; {4} 1/5 | {3, 4} 4/5; {2, 3, 4} 1/5
1NN | {3, 4} | {3, 4} | {3, 4} 3/5 | {3, 4} 3/5
F1NN | {1, 2, 3, 4} | {1, 2, 3, 4} | {1, 2, 3, 4} | {1, 2, 3, 4}
CBNN | {1, 2, 3, 4} | {1, 2, 3, 4} | {1, 2, 3, 4} | {1, 2, 3, 4}
KMNP | {3, 4} | {3, 4} | {3, 4} 3/5 | {3, 4}
Table 11. Summary of classification error rates for the wine dataset.
Classifier | None (%) | SFFS | ACO-S (10, 10) | ACO-R (30, 30)
NM | 2.78 | 1.39 | 1.94 ± 0.31 (best = 1.39) | 1.67 ± 0.21 (best = 1.39)
1NN | 4.17 | 0.69 | 1.25 ± 0.58 (best = 0.69) | 1.11 ± 0.38 (best = 0.69)
F1NN | 4.17 | 1.39 | 1.11 ± 0.38 (best = 0.69) | 1.11 ± 0.38 (best = 0.69)
CBNN | 2.78 | 0.69 | 1.53 ± 0.58 (best = 0.69) | 1.94 ± 0.31 (best = 1.39)
Table 12. Effects of the ACO-S parameters on the wavelet energy levels data with 1NN.
q0 | e | Error (%) | CPU time (s)
0.1 | 0.1 | 10.0 ± 0 | 3044
0.1 | 0.5 | 10.0 ± 0 | 3030
0.1 | 0.9 | 10.0 ± 0 | 2462
0.5 | 0.1 | 10.0 ± 0 | 2874
0.5 | 0.5 | 10.0 ± 0 | 1453
0.5 | 0.9 | - | 409
0.9 | 0.1 | - | 3384
0.9 | 0.5 | - | 589
0.9 | 0.9 | - | 390
Table 13. Effects of the ACO-S parameters on the AR model coefficients data with 1NN.
q0 | e | Error (%) | CPU time (s)
0.1 | 0.1 | - | 5427
0.1 | 0.5 | 6.875 ± 0 | 3812
0.1 | 0.9 | - | 927
0.5 | 0.1 | - | 5142
0.5 | 0.5 | - | 542
0.5 | 0.9 | - | 557
0.9 | 0.1 | - | 5197
0.9 | 0.5 | - | 1589
0.9 | 0.9 | - | 491
Table 14. Summary of classification error rates for the db3-wavelet energy levels data.
Classifier | None (%) | SFFS | ACO-S (10, 10) | ACO-R (50, 50)
NM | 35 | 23.13 | 23.19 ± 0.14 (best = 23.13) | 23.5 ± 0.26 (best = 23.13)
1NN | 24.06 | 9.38 | 9.56 ± 0.17 (best = 9.38) | 10.31 ± 1.17 (best = 9.38)
F1NN | 24.06 | 9.38 | 9.38 ± 0 | 10.12 ± 1.05 (best = 9.38)
CBNN | 21.56 | 10.00 | 10.25 ± 0.34 (best = 10) | 10.69 ± 0.41 (best = 10)
KMNP | 24.06 | 9.38 | 9.5 ± 0.17 (best = 9.38) | 10.56 ± 1.22 (best = 9.38)
6. Conclusions

This paper has presented the results obtained in our investigation of two important issues related to sensor-based tool health condition monitoring, i.e., feature extraction and feature selection. Two feature extraction methods, three feature selection methods, and five classifiers were employed in the study. Specifically, grinding wheel condition was monitored with acoustic emission signals. Features were extracted from the AE signals by both discrete wavelet decomposition and autoregressive modeling. The three feature selection methods include the sequential forward floating selection method and the two proposed ACO-based methods.
References

Keller, J.M., Gray, M.R., Givens Jr., J.A., 1985. A fuzzy k-nearest neighbor algorithm. IEEE Transactions on Systems, Man, and Cybernetics 15 (4), 580–585.
Kwak, J.-S., Ha, M.-K., 2004. Detection of dressing time using the grinding force signal based on the discrete wavelet decomposition. International Journal of Advanced Manufacturing Technology 23, 87–92.
Lezanski, P., 2001. An intelligent system for grinding wheel condition monitoring. Journal of Materials Processing Technology 109, 258–263.
Liao, T.W., Hua, G., Qu, J., Blau, P.J., 2006. Grinding wheel condition monitoring with hidden Markov model-based clustering methods. Machining Science and Technology 10, 511–538.
Liao, T.W., Ting, C.F., Qu, J., Blau, P.J., 2007. A wavelet based methodology for grinding wheel condition monitoring. International Journal of Machine Tools & Manufacture 47, 580–592.
Mohamed, S.S., Youssef, A.M., El-Saadany, E.F., Salama, M.M.A., 2005. Artificial life feature selection techniques for prostrate cancer diagnosis using TRUS images. In: Kamel, M., Campilho, A. (Eds.), ICIAR 2005, Lecture Notes in Computer Science, vol. 3656, pp. 903–913.
Mokbel, A.A., Maksoud, T.M.A., 2000. Monitoring of the condition of diamond grinding wheels using acoustic emission technique. Journal of Materials Processing Technology 101, 292–297.
Pudil, P., Novovicova, J., Kittler, J., 1994. Floating search methods in feature selection. Pattern Recognition Letters 15, 1119–1125.
Shen, Q., Jiang, J.-H., Tao, J.-C., Shen, G.-L., Yu, R.-Q., 2005. Modified ant colony optimization algorithm for variable selection in QSAR modeling: QSAR studies of cyclooxygenase inhibitors. Journal of Chemical Information and Modeling 45, 1024–1029.
Sivagaminathan, R.K., Ramakrishnan, S., 2007. A hybrid approach for feature subset selection using neural networks and ant colony optimization. Expert Systems with Applications 33, 49–60.
Tonshoff, H.K., Friemuth, T., Becker, J.C., 2002. Process monitoring in grinding. Annals of the CIRP 51 (2), 551–571.
Wang, L., Yu, J., 2006. A modified discrete binary ant colony optimization and its application in chemical process fault diagnosis. In: Wang, T.-D., et al. (Eds.), SEAL 2006, Lecture Notes in Computer Science, vol. 4247, pp. 889–896.
Warkentin, A., Bauer, R., 2003. Analysis of wheel wear using force data in surface grinding. Transactions of the CSME/de la SCGM 27 (3), 193–204.
Yan, Z., Yuan, C., 2004. Ant colony optimization for feature selection in face recognition. In: Zhang, D., Jain, A.K. (Eds.), ICBA 2004, Lecture Notes in Computer Science, vol. 3072, pp. 221–226.
Zhang, C., Hu, H., 2005. Ant colony optimization combining with mutual information for feature selection in support vector machines. In: Zhang, S., Jarvis, R. (Eds.), AI 2005, Lecture Notes in Artificial Intelligence, vol. 3809, pp. 918–921.