Efficient Implementation of the Fuzzy c-Means Clustering Algorithms

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. PAMI-8, NO. 2, MARCH 1986
Abstract-This paper reports the results of a numerical comparison of two versions of the fuzzy c-means (FCM) clustering algorithms. In particular, we propose and exemplify an approximate fuzzy c-means (AFCM) implementation based upon replacing the necessary "exact" variates in the FCM equations with integer-valued or real-valued estimates. This approximation enables AFCM to exploit a lookup table approach for computing Euclidean distances and for exponentiation. The net effect of the proposed implementation is that CPU time during each iteration is reduced to approximately one sixth of the time required for a literal implementation of the algorithm, while apparently preserving the overall quality of terminal clusters produced. The two implementations are tested numerically on a nine-band digital image, and a pseudocode subroutine is given for the convenience of applications-oriented readers. Our results suggest that AFCM may be used to accelerate FCM processing whenever the feature space is comprised of tuples having a finite number of integer-valued coordinates.

Index Terms-Approximate FCM, cluster analysis, efficient implementation, fuzzy c-means, image processing, pattern recognition.

Manuscript received September 6, 1984; revised August 30, 1985. The work of R. L. Cannon was supported in part by the National Science Foundation under Grant ECS-8217191 and by the IBM Palo Alto Scientific Center. The work of J. C. Bezdek was supported in part by the National Science Foundation under Grant IST-8407860.
R. L. Cannon and J. C. Bezdek are with the Department of Computer Science, University of South Carolina, Columbia, SC 29208.
J. V. Dave is with the IBM Palo Alto Scientific Center, Palo Alto, CA 94304.
IEEE Log Number 8406217.

I. INTRODUCTION

CLUSTERING algorithms can be loosely categorized by principle (objective function, graph-theoretical, hierarchical) or by model type (deterministic, statistical, fuzzy). This paper concerns itself with an infinite family of fuzzy objective function clustering algorithms which are usually called the fuzzy c-means algorithms. For brevity, in the sequel we abbreviate fuzzy c-means as FCM.

A special case of the FCM algorithm was first reported by Dunn [11] in 1972. Dunn's algorithm was subsequently generalized by Bezdek [3], Gustafson and Kessel [14], and Bezdek et al. [6]. Related algorithms and indirect generalizations of various kinds have been reported by, among others, Granath [13], Full et al. [12], and Anderson and Bezdek [1]. Applications of FCM to problems in clustering, feature selection, and classifier design have been reported in geological shape analysis [9], medical diagnosis [2], image analysis [10], irrigation design [8], and automatic target recognition [5]. These references manifest a nice progression of both theory and application of FCM over the last decade. Successes notwithstanding, many unanswered questions concerning these algorithms remain. Among these, one of the most frequent operational complaints about FCM is that it may consume, for large data sets, high amounts of CPU time (this holds, e.g., in spite of the fact that FCM is, for certain problems, itself several orders of magnitude faster than, say, maximum likelihood iteration [7]). With this as background, the scope of the present paper is stated simply: What means of accelerating computation time in the FCM loop can be tried? And how effective are these revised implementations?

Section II contains a brief description of the basic FCM algorithm. Section III presents a table lookup implementation of the basic FCM algorithm, called AFCM, and discusses the two approximations and six lookup tables that comprise AFCM. Experimental numerical results comparing a literal implementation, LFCM, to AFCM are given in Section IV. Section V contains a discussion of our conclusions.

II. THE FCM ALGORITHM

Let R be the set of reals, R^p the set of p-tuples of reals, R+ the set of nonnegative reals, and W_cn the set of real c × n matrices. R^p will be called feature space, and elements x ∈ R^p feature vectors; feature vector x = (x_1, x_2, ..., x_p) is composed of p real numbers.

Definition: Let X be a subset of R^p. Every function u: X → [0, 1] is said to assign to each x ∈ X its grade of membership in the fuzzy set u. The function u is called a fuzzy subset of X.

Note that there are infinitely many fuzzy sets associated with the set X. It is desired to "partition" X by means of fuzzy sets. This is accomplished by defining several fuzzy sets on X such that for each x ∈ X, the sum of the fuzzy memberships of x in the fuzzy subsets is one.

Definition: Given a finite set X ⊂ R^p, X = {x_1, x_2, ..., x_n}, and an integer c, 2 ≤ c < n, a fuzzy c-partition of X can be represented by a matrix U ∈ W_cn whose entries satisfy the following conditions.
1) Row i of U, say U_i = (u_i1, u_i2, ..., u_in), exhibits the ith membership function (or fuzzy subset) of X.
2) Column j of U, say U^j = (u_1j, u_2j, ..., u_cj), exhibits the values of the c membership functions of the jth datum in X.
3) u_ik shall be interpreted as u_i(x_k), the value of the
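The membership and center updates that the text refers to as (2) and (1), in their standard Euclidean-norm form from the FCM literature, can be sketched in pure Python. This is an illustrative sketch only: the function name `fcm_step` and the small `eps` guard against zero distances are our additions, not the authors' Fortran.

```python
import math

def fcm_step(X, V, m=2.0, eps=1e-12):
    """One FCM iteration: update memberships U (c x n), then centers V (c x p)."""
    c, n, p = len(V), len(X), len(X[0])
    # d[i][k]: Euclidean distance from datum x_k to center v_i (the A = I case)
    d = [[math.sqrt(sum((X[k][l] - V[i][l]) ** 2 for l in range(p))) + eps
          for k in range(n)] for i in range(c)]
    e = 2.0 / (m - 1.0)
    # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)); each column of U sums to one
    U = [[1.0 / sum((d[i][k] / d[j][k]) ** e for j in range(c))
          for k in range(n)] for i in range(c)]
    # v_il = sum_k (u_ik)^m x_kl / sum_k (u_ik)^m
    V_new = [[sum(U[i][k] ** m * X[k][l] for k in range(n)) /
              sum(U[i][k] ** m for k in range(n)) for l in range(p)]
             for i in range(c)]
    return U, V_new
```

Iterating `fcm_step` until the changes in U fall below a tolerance reproduces the literal FCM loop that the paper's AFCM approximates.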
the number of gray levels to which the data are resolved. For our application, n_GL = 255; thus, the data array X can be represented as an array of n × p 1-byte integers.

In the discussion which follows, it is necessary to distinguish among "real" algorithmic triples (v, u, {d_ik}), their approximations (v̂, Û, and {d̂_ik}), and their realizations as array elements ([V], [U], and [DK]) in implementations of the approximation.

In order to give AFCM the ability to adjust to cluster centers in a smooth manner and to converge to the same approximate values as a literal implementation of the FCM algorithm (LFCM), we have chosen to multiply the cluster center coordinates by 10 and round them to the nearest integer. Thus, the cluster centers may be represented by an array [V] of c × p 2-byte integers. This approximation of the real v_i by v̂_i rounded to one decimal place is the first implementational deviation from FCM, and it demands the distinction made by calling our modified implementation AFCM. It should be noted that using these estimates for the v_i abrogates the necessity of (1) and (2); as we shall see, however, the approximation to v_il ∈ R by v̂_il does not seriously affect the path of iterates traced through M_fc × R^cp by AFCM.

The fuzzy memberships u_ik are in the interval [0.0, 1.0], so the values of the u_ik are real. We choose to use a three decimal place approximation of u_ik and multiply that value by 1000, so our approximations û_ik of u_ik are in [0, 1000]. Thus, we can represent the approximate fuzzy membership matrix Û by an array [U] consisting of c × n 2-byte integers.

AFCM will be presented as a subroutine which is passed an n × p array [X] of data, a c × p array [V] of initial cluster centers, and a c × n array [U]^(0). The subroutine updates the array [V] with the cluster centers after convergence and also returns the array [U] of fuzzy memberships of the data with respect to the final set of cluster centers.

To summarize, we have replaced a necessary pair (U, v) for J_m via (1) and (2) with an approximately necessary pair (Û, v̂). The loss of true necessity is traded for economies of storage and CPU time. The numerical examples presented below imply that AFCM terminates at roughly the same place as LFCM, so these economies seem to be realized at no sacrifice in the quality of LFCM estimates.

B. The Internal Tables

A literal implementation of the FCM algorithm can be very expensive computationally. As an example, in (1), for each value of i and l, there are n exponentiations in the denominator and n exponentiations and products in the numerator. A straightforward transliteration of the algorithm would require c × p × n uses of the exponentiation operators and c × p divisions. Because the algorithm is iterative, these v_i may be calculated repeatedly.

Three equations can be identified as making significant contributions to the overall time requirements of the algorithm. The first is the calculation of d_ik, the distance of x_k from v_i. We have chosen as the norm the Euclidean norm (A = I, the p × p identity matrix). Thus,

    d_ik = [ Σ_{l=1}^{p} (x_kl - v_il)^2 ]^{1/2}.   (3)

Secondly, the updated value of the fuzzy membership matrix [u_ik] is given in the general case by (2). The third is (1), the expression for the cluster centers.

The goal of our table lookup approach is to eliminate the use of exponentiation operators except in the initialization stage when the tables are constructed. Likewise, the number of divisions is reduced by subtracting logarithms, which are obtained by referencing a table of logarithms, followed by a reference to a table of exponentials for the antilogarithm. Of course, great amounts of space can be required for tables. We have made assumptions about the nature of the data and the accuracy of intermediate results such that we can contain the size of the tables within reasonable limits. Of our six internal tables, two are 2-byte integers, two are 4-byte integers, and the other two are 4-byte reals. The use of lookup tables constitutes another major departure from LFCM.

Because the use of such tables may enhance implementation of the FCM algorithm in other environments (and possibly enhance other algorithms as well), we present here a description of the tables used by AFCM.

1) d_ik: In (3), the expression for d_ik, for each iteration there are c × n computations of the square root of a number obtained by p differences, p squares, and p - 1 additions. Two tables are used here. In defining the tables, we use a pseudolanguage in which we will present the algorithm.

Given that V[i, l] is ten times the lth dimension (band) of the ith cluster center, define TABLEA as

    TABLEA[i, l, y] = 0.01 × (10 × y - V[i, l])^2,

1 ≤ i ≤ c, 1 ≤ l ≤ p, 0 ≤ y ≤ 255. Thus, for each of the possible values of a datum y, TABLEA contains the square of the difference between that value y, which is an integer, and the coordinates of the cluster centers, which are reals. These values may be as large as 255^2, so TABLEA consists of 4-byte reals. Also, as the cluster centers are updated on each iteration of AFCM, TABLEA must be recomputed on each iteration.

The reader has perhaps noted from (2) and (3) that (2) can be slightly modified to avoid repeated computation of the square root of real numbers. This strategy, although meaningful in a literal implementation of FCM, results in a very large increase in the size of the subsequent tables (from 255.0 to p × 255.0^2 in our case). To reduce storage, therefore, it is advisable to obtain square roots and to work with the normalized form d_ik.

The second table is useful for computing the square root of the sum over l of values from TABLEA. One decimal place accuracy will be achieved by multiplying square roots by 10 and storing them as integers. TABLEB is given as
In the approximate implementation, the stored integer arrays realize the algorithmic quantities through the scalings d̂_ik = 0.1 × DK[i] and û_ik = 0.001 × U[i, k]; for example, sums of the form Σ_{k=1}^{n} (0.001 × U[i, k])^m replace Σ_{k=1}^{n} (u_ik)^m, and ratios of the 0.1 × DK[i] replace the distance ratios in (2), 1 ≤ j ≤ c. Thus, U[i, k] = 1000 × û_ik, 1 ≤ i ≤ c, 1 ≤ k ≤ n.
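The exponentiation-free evaluation of terms such as (d̂_ik/d̂_jk)^{2/(m-1)}, by subtracting table logarithms and then looking up the antilog, can be sketched as follows. The names TABLEC and TABLED echo the pseudocode of Fig. 1, but the fixed-point scale S, the range VMAX, and the index offset TMAX are our own assumptions, not the paper's actual table layouts.

```python
import math

VMAX = 2550            # assumed largest distance code DK[i] = 10 * d_ik
S = 1000               # assumed fixed-point scale for the log table
E = 2.0 / (1.5 - 1.0)  # exponent 2/(m-1) for the paper's m = 1.5

# TABLEC: scaled integer logarithms of the distance codes 1..VMAX
# (index 0 is a placeholder; Fig. 1 handles DK[i] = 0 as a special case)
TABLEC = [0] + [round(S * math.log(v)) for v in range(1, VMAX + 1)]

# TABLED: antilogs, indexed by a scaled log difference shifted by TMAX
TMAX = math.ceil(E * TABLEC[-1])
TABLED = [math.exp(t / S) for t in range(-TMAX, TMAX + 1)]

def ratio_power(di, dj):
    """(di/dj)**E using only a table subtract, a round, and a table lookup."""
    t = round(E * (TABLEC[di] - TABLEC[dj]))
    return TABLED[t + TMAX]
```

The runtime cost per term is one integer subtraction and two array references in place of a division and an exponentiation; the approximation error is governed by the fixed-point scale S.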
      if DK[i] = 0 then numik := numik + 1
    end;
    if numik ≠ 0 then for i := 1 to c do begin
        if DK[i] = 0 then U[i, k] := round(1000/numik)
        else U[i, k] := 0;
        update unorm
      end
    else for i := 1 to c do begin
        U[i, k] := round(1000/sum(TABLED[TABLEC[DK[i]] - TABLEC[DK[j]]], 1 ≤ j ≤ c));
        update unorm
      end
    end;
    {compute cluster centers on all but first iteration}
    if iter > 1 then begin
      for i := 1 to c do begin
        sumu := TABLED[sum(TABLEE[U[i, k]], 1 ≤ k ≤ n)];
        for l := 1 to p do begin
          sumux := TABLED[sum(TABLEE[U[i, k]] + TABLEF[X[k, l]], 1 ≤ k ≤ n)];
          V[i, l] := round((10.0*sumux)/sumu)
        end
      end
    end;
    iter := iter + 1
    until (unorm < epsilon) or (iter > maxiter)
    end;

Fig. 1. The AFCM implementation.

On each iteration, the updated cluster centers thus satisfy

    V[i, l] = round( 10 × Σ_{k=1}^{n} (û_ik)^m x_kl / Σ_{k=1}^{n} (û_ik)^m ) = 10 × v̂_il.

The values of TABLEE are integers in [-30 000m, 0]. The values of TABLEF are integers in [0, 24 065]. TABLEE must be represented by 4-byte integers, while TABLEF can be represented by 2-byte integers.

C. The Subroutine

Fig. 1 presents AFCM in pseudocode. Translation to languages such as Fortran and C should not be difficult. It is assumed that the user has determined some initial set of cluster centers and has initialized the cluster center array before invoking the subroutine.

Since the matrix norm for convergence testing must be specified, we have chosen to use the sup norm on W_cn, viz.

reduces the overall calculations and storage needed in (4) from c × n to c × p. This is an effective reduction of roughly n:p = 64 000:16 = 4000:1 for data on the order of one 16-band 256 × 256 image. This economy, however, may lead to termination at different fixed points than does (4). Since the relative accuracy of these termination criteria has not been reported, we leave this point to a future investigation.

IV. EXPERIMENTAL RESULTS

Two versions of the fuzzy c-means algorithm were coded and tested with test data in order to determine speed of execution, number of iterations to convergence, and accuracy of results. One implementation, LFCM, was a literal transliteration of the algorithm. The other, AFCM, was the table lookup approach described in Section III. Both programs were written in Fortran and were identical except in their implementation of the FCM equations as delineated above.

The three-screen experimental image processing environment of DIMAPSAR [16] allowed interactive display of the original image and the results of each iteration (cluster centers, fuzzy-membership-matrix histogram and image for every cluster, segmented image, etc.). The host processor was an IBM 3081 with 16 Mbytes of memory. Virtual machines of 5 and 7 Mbytes were required for execution of AFCM and LFCM, respectively.

A. Test Data

The test data sets were nine-band aircraft flight data showing a region of Oklahoma. A 256 × 256 pixel region was selected as the subject. AFCM was used to segment the region into ten clusters, using the brightness values as feature vectors. Each pixel was assigned to the "class" associated with the cluster in which its fuzzy membership was maximal. Thus, ten spectral classes were defined. The mean μ_il and standard deviation σ_il of the brightness values for each class i, 1 ≤ i ≤ 10, and band l, 1 ≤ l ≤ 9, were determined from observations of the data. A "ground truth" data set was constructed in which each pixel in class i and band l was assigned a brightness value from a Gaussian distribution [15] with mean μ_il and standard deviation 0.25σ_il. The smaller standard deviation was used for two reasons. 1) Based upon actual observations, the noise is not all Gaussian; thus, because of natural variability, σ_il
accuracy of one place after the decimal point, and tables C and D were computed to three places after the decimal point instead of four. The pilot version resulted in convergence in about one fourth as many iterations on okla10. However, the cluster centers were found to diverge by as much as six units from those found by LFCM. Thus, we conclude that more precise approximations of the intermediate values result in cluster centers which are in better agreement with the "correct" cluster centers found by LFCM.

Lastly, we show the degree to which LFCM and AFCM produce results which agree with the ground truth from which all of the data sets were generated.

Definition: For X ⊂ R^p and u a fuzzy set, define the α-level set of u to be

    u_α(X) = {x ∈ X | u(x) ≥ α}.

Viewing the values of u as indicants of membership, we let α be a threshold such that for a cluster membership greater than α, we assign a hard class membership. By this means, fuzzy partitions can be converted to hard ones. In particular, we can, for a value of α, find the α-level sets of the ten clusters we have determined for okla0, okla5, and okla10. Data set okla0 is the benchmark, and so exhibits a perfect classification. Tables IV and V show more interesting cases. For the data sets okla5 and okla10, the ten clusters determined by LFCM and AFCM have been converted into classes by applying a threshold of α = 0.5. These classes were matched with the classes of the original data from which the data sets were constructed. The fractions of data classified correctly (A), misclassified (B), and unclassified (below the α level for any class) were determined. Additionally, an overall figure of merit is calculated by subtracting B from A (the number of data classified correctly less the number classified incorrectly). From the tables, we see that AFCM performed essentially as well as LFCM in both cases.

TABLE IV
RESULTS FOR okla5 AND α = 0.5 FOR COMPARISON OF CLASSIFICATIONS BY EACH OF AFCM AND LFCM WITH GROUND TRUTH FROM WHICH okla5 WAS CONSTRUCTED

        Classified     Classified
        Correctly (A)  Incorrectly (B)  Unclassified  (A-B)
AFCM    0.890          0.093            0.017         0.797
LFCM    0.889          0.092            0.018         0.797

TABLE V
RESULTS FOR okla10 AND α = 0.5 FOR COMPARISON OF CLASSIFICATIONS BY EACH OF AFCM AND LFCM WITH GROUND TRUTH FROM WHICH okla10 WAS CONSTRUCTED

        Classified     Classified
        Correctly (A)  Incorrectly (B)  Unclassified  (A-B)
AFCM    0.605          0.342            0.053         0.262
LFCM    0.595          0.347            0.058         0.248

Although Table V appears to make AFCM the implementation of choice, we do not believe that these experiments serve to validate any claim to that effect. Rather, these results seem to corroborate the intuition that loss of the true necessity of (1) and (2) does not, for the approximation proposed, cause a significant change in the iterate sequences generated by LFCM and AFCM. Due to the truncations used in the approximate representation, we may be, in effect, using a slightly different value of m at each iteration. Since (U, v) are continuous functions of m, it may be that (U(m), v(m)) ≈ (U(m + δ), v(m + δ)) for some small δ > 0. An investigation is currently underway to substantiate this conjecture.

The results presented in this section are for a weighting exponent of m = 1.5. For image applications, 1.1 ≤ m ≤ 2.5 has proved to be adequate for all practical purposes [10]. Problems are eventually to be encountered, however, with higher values of m, at first with AFCM, and for yet higher values of m with LFCM, because of the limitations of computer hardware. As m becomes larger, the û_ik become smaller because the data spread their fuzzy memberships among a greater number of clusters. Thus, in (1), the denominator will be a sum of higher powers of smaller numbers. Moreover, the denominator of (1) is affected more severely than the numerator because the terms in the numerator sum are multiplied by the x_kl. Thus, for m ≥ 3, (1) may generate cluster centers outside [0, 255], especially if the number of clusters is greater than five. If necessary, the current AFCM may be modified easily to represent û_ik with four digits of precision, i.e., [0, 10 000], without any significant increase in the size of the associated tables.

V. SUMMARY

We have compared a literal coding of the fuzzy c-means algorithm (LFCM) with a table-driven approach (AFCM). Results of the performance comparison of LFCM and AFCM using multiband test image data indicate that AFCM requires about one sixth of the computer time, while yielding practically the same accuracy as the literal implementation. Our data were limited to integers in [0, 255], and many of the proposed tables were constructed on that basis. These approximations and the general method of using lookup tables are seen to have far greater utility in image analysis than the present experiments suggest. In view of the approximate nature of the application of fuzzy c-means, it may be that AFCM suffices, at a considerable savings in CPU time, whenever feature space is confined to tuples of finite integers.

While the examples above show that AFCM does indeed run faster than LFCM, a number of theoretical issues were put aside in our exposition of the proposed methodology. We conclude this paper with a short list of important questions concerning theoretical relationships between LFCM and AFCM.

First, as previously noted in Section IV-C, there is no guarantee that J_m descends on iterates generated by AFCM. This probably precludes any type of convergence
theory for AFCM, and there is no reason to assume that AFCM will always converge. On the other hand, although LFCM is known to generate sequences that always contain a subsequence convergent to either a local minimum or saddle point of J_m, there is no assurance that LFCM attains a local minimum in practice. Indeed, iterative convergence of LFCM to a local minimum of J_m is guaranteed only by starting "close enough" to a solution; oscillatory behavior of LFCM is not precluded by its convergence theory. Against the possibility that AFCM may not converge, we can offer only our computational experience: in all of the computations described above, as well as several other examples detailed in [17], AFCM has always converged. Nonetheless, convergence theory for AFCM is at this point an open question.

A second point of great interest is whether AFCM and LFCM, when convergent, always terminate at (roughly) the same fixed point of the FCM operator. Because AFCM is not driven by the FCM penalty function, it is certainly possible that iterate sequences generated by the two algorithms diverge from each other. This has yet to happen in practice, but there is no doubt that it can occur. Whether AFCM always follows LFCM along a "parallel" path through M_fc is unknown.

Finally, we emphasize again that the weighting exponent m varies from iteration to iteration, indeed perhaps from term to term, in AFCM, so it is very difficult to envision a fixed objective function underlying AFCM. In summary, the relationship of AFCM to LFCM is analogous to, e.g., the acceleration of Newton's method by computing the inverse of the Jacobian every kth time instead of at every iteration. From a theoretical viewpoint, both shortcuts are disturbing; from a practical view, both are well justified. We acknowledge the importance of these theoretical issues, and hope to make them the basis of a future study.

REFERENCES

[1] I. Anderson and J. C. Bezdek, "Curvature and tangential deflection of discrete arcs," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, pp. 27-40, 1984.
[2] J. C. Bezdek, "Feature selection for binary data-Medical diagnosis with fuzzy sets," in Proc. Nat. Comput. Conf. AFIPS Press, 1972, pp. 1057-1068.
[3] -, "Fuzzy mathematics in pattern classification," Ph.D. dissertation, Cornell Univ., Ithaca, NY, 1973.
[4] -, Pattern Recognition with Fuzzy Objective Function Algorithms. New York: Plenum, 1981.
[5] -, "Statistical pattern recognition systems," Boeing Aerospace, Kent, WA, Tech. Rep. TR-SC83001-004, 1983.
[6] J. C. Bezdek, C. Coray, R. Gunderson, and J. Watson, "Detection and characterization of cluster substructure," SIAM J. Appl. Math., vol. 40, pp. 339-372, 1981.
[7] J. C. Bezdek, R. Hathaway, and V. Huggins, "Parametric estimation for normal mixtures," Pattern Recognition Lett., vol. 3, pp. 79-84, 1984.
[8] J. C. Bezdek and K. Solomon, "Simulation of implicit numerical characteristics using small samples," in Proc. ICASRC 6. New York: Pergamon, 1981, pp. 2773-2784.
[9] J. C. Bezdek, M. Trivedi, R. Ehrlich, and W. E. Full, "Fuzzy clustering: A new approach for geostatistical analysis," Int. J. Syst., Meas., Decision, vol. 1, pp. 13-23, 1981.
[10] R. L. Cannon and C. Jacobs, "Multispectral pixel classification with fuzzy objective functions," Cen. for Automation Res., Univ. Maryland, College Park, Tech. Rep. CAR-TR-51, 1984.
[11] J. C. Dunn, "A fuzzy relative of the ISODATA process and its use in detecting compact, well-separated clusters," J. Cybern., vol. 3, pp. 32-57, 1973.
[12] W. E. Full, R. Ehrlich, and J. C. Bezdek, "FUZZY QMODEL-A new approach to linear unmixing," Math. Geol., vol. 14, pp. 259-270, 1982.
[13] G. Granath, "Application of fuzzy clustering and fuzzy classification to evaluate provenance of glacial till," Math. Geol., vol. 16, pp. 283-301, 1984.
[14] E. E. Gustafson and W. C. Kessel, "Fuzzy clustering with a fuzzy covariance matrix," in Proc. IEEE CDC, San Diego, CA, 1979, pp. 761-766.
[15] T. H. Naylor, Computer Simulation Experiments with Models of Economic Systems. New York: Wiley, 1971.
[16] M. A. Wiley and J. V. Dave, "DIMAPSAR-An interactive image interpretation system for petroleum and minerals exploration," in Proc. 8th IBM Univ. Study Conf., Raleigh, NC, Oct. 17-19, IBM Acad. Inform. Syst., Stamford, CT, 1983.
[17] R. L. Cannon, J. V. Dave, J. C. Bezdek, and M. M. Trivedi, "Segmentation of thematic mapper image data using fuzzy c-means clustering," in Proc. 1985 IEEE Workshop on Languages for Automation, IEEE Computer Society, 1985, pp. 93-97.

Robert L. Cannon received the B.S. degree in mathematics from the University of North Carolina, Chapel Hill, the M.S. degree in mathematics from the University of Wisconsin, Madison, and the Ph.D. degree in computer science from the University of North Carolina, Chapel Hill.
He has been a member of the faculty of the University of South Carolina, Columbia, since 1973, and has held visiting appointments with the University of Maryland and the IBM Corporation. His interests include image processing applications in remote sensing and geology and fuzzy mathematical applications in artificial intelligence.
Dr. Cannon is a member of the IEEE Computer Society, the Association for Computing Machinery, the Pattern Recognition Society, the International Fuzzy Systems Association, and the North American Fuzzy Information Processing Society.

Jitendra V. Dave received the Ph.D. degree in physics from Gujarat University, India, in 1957.
He is a Senior Technical Staff Member at the IBM Palo Alto Scientific Center, Palo Alto, CA. He has worked on problems of atmospheric physics and radiative transfer, remote sensing of ozone and aerosols, the effect of particulate contamination on climate, photovoltaic harvesting of solar energy, image analysis and display applications in oil and mineral explorations, and large-scale scientific computing through Fortran. He has published about 60 technical papers in various scientific journals.
Dr. Dave is a Fellow of the American Physical Society.

James C. Bezdek received the B.S. degree in civil engineering from the University of Nevada, Reno, in 1969, and the Ph.D. degree in applied mathematics from Cornell University, Ithaca, NY, in 1973.
He is currently Professor and Chairman of the Department of Computer Science, University of South Carolina, Columbia. His interests include research in pattern recognition, computer vision, information retrieval, and numerical optimization.
Dr. Bezdek is currently President of NAFIPS (North American Fuzzy Information Processing Society) and President-Elect of IFSA (International Fuzzy Systems Association). He is a member of the IEEE Computer Society, the Classification Society, and the Pattern Recognition Society.