
A k-Nearest Neighbour Based Algorithm for Multi-label Classification
Muhammad Ghufran Khan
Faculty of Computer Science and Engineering
GIK Institute of Engineering Sciences and Technology
Topi, KPK, Pakistan
Abstract - In multi-label classification, each instance in the training set may be associated with more than one label, and the goal is to predict the label set of a new instance whose labels are unknown. In this paper the ML-KNN algorithm is applied to this task. ML-KNN combines two ideas: a k-nearest neighbour step, which computes the distance from the new instance to every training instance and keeps the k closest ones, and a maximum a posteriori step, which uses those k neighbours to predict the label set. Results are reported on the yeast and scene datasets, where ML-KNN compares favourably with other multi-label classification algorithms. The paper also discusses several experimental approaches intended to further improve ML-KNN, including an average-distance method (ML-AM) and a label-dependent method for predicting the label set of a new instance.
Keywords: multi-label classification, ML-KNN, k-nearest neighbour, maximum a posteriori, ML-AM, label-dependent method, yeast, scene.
1. Introduction
Two types of classification problem are considered here: binary classification and multi-label classification. In binary classification each instance is associated with a single label; for example, a switch is either on or off. In multi-label classification, on the other hand, each instance is associated with a set of labels; a document, for instance, may belong to both the sport and the news categories. This paper addresses multi-label classification: given a training set in which each instance is associated with a set of labels, the goal is to predict the label set of a new instance whose labels are unknown. Several approaches have been proposed for this problem, such as multi-label decision trees and multi-label kernel methods. In this paper the ML-KNN algorithm is used, which combines the k-nearest neighbour algorithm with Bayes' theorem to predict the label set of a new instance. The algorithm first identifies the k nearest neighbours of a test instance by computing its distance to every training instance and keeping the k training instances with the smallest distances. The parameter k controls how many of the closest instances are kept; for example, if k = 5, the five training instances nearest to the new instance are selected.

The paper also discusses several approaches intended to improve the results of ML-KNN. The first approach computes the distance from the test instance to every training instance and, for each label, averages the distances to the training instances whose label is one and, separately, to those whose label is zero, then predicts the label from these averages. The second approach, the ML distance method, is derived from the first: rather than using only the distances from the test instance, it first computes the distance of each training instance to every other training instance, then computes the distances from the test instance to the training instances, and finally applies maximum a posteriori estimation to predict the label. The third approach computes a label-dependent posterior probability: the distance of each training instance to all other training instances is computed, the k nearest are selected, and Bayes' theorem is applied per label to predict the label of the test instance.

The rest of the paper is organized as follows. Section 2 reviews related work, Section 3 describes the ML-KNN method, the contribution section describes the approaches used to further improve ML-KNN, Section 4 describes the evaluation criteria used to measure how well the algorithms predict labels, Section 5 reports experimental results on the multi-label yeast and scene datasets, and Section 6 concludes the paper and outlines future directions.
2. Literature Review
Classification is an important class of problems in machine learning. One such problem is multi-label classification, where each instance is associated with more than one label. The problem arises, for example, in document classification, where a document may belong to more than one category. Several algorithms have been proposed for multi-label classification; this section briefly reviews previous work.

The BoosTexter algorithm proposed by Schapire and Singer [2] was originally designed for text and speech categorization, which are multi-label problems, and is an extension of the AdaBoost algorithm [3]. During training, BoosTexter maintains weights over example-label pairs, keeping the weights of examples and labels that are easy to classify low and increasing the weights of those that are difficult to classify.

A Bayesian approach proposed by McCallum [1] in 1999 is popular for document classification; it uses a probabilistic mixture model to predict the label set of each instance.

A kernel method for multi-label classification based on SVMs was proposed by Elisseeff and Weston [4]; SVMs give good results on binary classification, where the class label is either one or zero.

A naive Bayes approach has also been used for multi-label classification [5]; it solves the problem by treating each label as a separate single-class classification problem.

Boutell et al. [6] applied multi-label learning to scene classification; their technique decomposes the multi-label problem into multiple independent binary classification problems, each associating an example with a single label.
3. ML-KNN
ML-KNN is a k-nearest neighbour based algorithm for predicting the label set in a multi-label classification problem. The algorithm is explained below.

Each instance x_i in the training set is associated with a set of labels drawn from the label space Y. For each label l, L_{x_i}(l) = 1 if x_i carries label l, and L_{x_i}(l) = 0 otherwise. K(x) denotes the index set of the k nearest neighbours of x identified in the training set by the k-nearest neighbour algorithm. Given the labels of these neighbours, a membership counting statistic C_{x_i}(l) is built:

C_{x_i}(l) = \sum_{u \in K(x_i)} L_{x_u}(l)    (Eq-1)

The purpose of the membership count C_{x_i}(l) is to record how many instances among the k nearest neighbours of x_i carry the label l.
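As a rough illustration of Eq-1, the sketch below (not code from the paper; it assumes numeric feature vectors, Euclidean distance, and a 0/1 label matrix Y_train, and the helper names are mine) finds K(x) and counts how many of the k nearest neighbours carry a given label:

```python
import numpy as np

def knn_indices(X_train, x, k):
    """Return the index set K(x): the k training instances nearest to x."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training instance
    return np.argsort(dists)[:k]                  # keep the k smallest distances

def membership_count(Y_train, neighbour_idx, label):
    """C_x(l): how many of the k neighbours carry label l (Eq-1)."""
    return int(Y_train[neighbour_idx, label].sum())
```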
Step 1
The first step computes the prior probability of each label from the training set.

P(M_1^l) = (s + \sum_{i=1}^{n} L_{x_i}(l)) / (s \times 2 + n)    (Eq-2)

Eq-2 estimates the prior probability that a training instance carries label l, i.e. the fraction of training instances whose label is one.

P(M_0^l) = 1 - P(M_1^l)    (Eq-3)

Eq-3 gives the prior probability that the label is zero, obtained by subtracting the label-one prior from 1.

Here s is a Laplace smoothing parameter, set to 1, which prevents zero probabilities from dominating the final result. The following example illustrates what Laplace smoothing does.

Example 3.1.
Suppose the query string is "cat dog" and the document is "cat cat lion sheep sheep". Without smoothing, the probability of the query given the document is

P(cat) = 2/5
P(dog) = 0/5 = 0
P(Q | D) = P(cat) * P(dog) = 2/5 * 0 = 0

The zero probability of "dog" wipes out the contribution of "cat". With Laplace smoothing (s = 1):

P(cat) = (1 + 2)/(1 + 5) = 0.5
P(dog) = (0 + 1)/(1 + 5) = 0.167
P(Q | D) = 0.5 * 0.167 = 0.0835

This gives a more useful result and limits the effect of P(dog) on P(cat).
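The smoothing used in Example 3.1 can be written as a small helper (a hypothetical function mirroring the (count + s)/(total + s) form used above):

```python
def smoothed_prob(count, total, s=1):
    """Laplace-smoothed estimate (count + s) / (total + s), as in Example 3.1."""
    return (count + s) / (total + s)

# Query "cat dog" against the document "cat cat lion sheep sheep":
p_cat = smoothed_prob(2, 5)   # 0.5
p_dog = smoothed_prob(0, 5)   # ~0.167
print(p_cat * p_dog)          # ~0.083, no longer collapses to zero
```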

Step 2
This step computes the posterior probabilities from the training set. First, the k nearest neighbours of each training instance are identified among all other training instances, and K(x_i) stores the index set of those neighbours.

\delta = C_{x_i}(l) = \sum_{u \in K(x_i)} L_{x_u}(l)    (Eq-4)

For a training instance x_i, once its distances to all other training instances have been computed and its k nearest neighbours found, Eq-4 counts the number of ones (for label l) among those neighbours.
Table 1: example training set (instances X1 to X5 with their class label Y).

Example 3.2.
Suppose we want to find the k nearest neighbours of X1 in the training set of Table 1. We compute the distance of X1 to X2, X3, X4 and X5 and, with k = 3, keep the three closest instances. Assume the k nearest neighbours of X1 are X3, X4 and X5.

Table 2: X1, its k nearest neighbours (X3, X4, X5) and their class labels Y.
Table 2 shows the k nearest neighbours of X1. Having identified X3, X4 and X5 as the neighbours of X1, we count the number of ones among their class labels Y using Eq-4:

\delta = C_{x_1}(l) = 2

i.e. two of the k nearest neighbours of X1 carry the label, as read from Table 2.

Countone[\delta] = Countone[\delta] + 1    (Eq-5)

In Eq-5, after computing the neighbour count \delta for X1, the next step is to check whether the label of X1 itself is one or zero. Table 2 indicates that the label of X1 is one, so Countone is the vector that, at position \delta, counts the training instances whose own label is one and whose k nearest neighbours contain exactly \delta ones.

Example 3.3.
Continuing the example, the number of ones among the k nearest neighbours of X1 is \delta = 2 (counted from Table 2), and the label of X1 is one, so Eq-5 gives

Countone[2] = 1

meaning that, so far, one training instance has been seen whose own label is one and whose k nearest neighbours contain exactly 2 ones.

Countzero[\delta] = Countzero[\delta] + 1    (Eq-6)

The Countzero vector counts the training instances whose own class label is zero and whose k nearest neighbours contain exactly \delta ones; it tells us how many instances in the training set have label zero while their neighbourhood contains exactly \delta ones.

P(X_h^l | M_1^l) = (s + Countone[h]) / (s \times (k+1) + \sum_{p=0}^{k} Countone[p])    (Eq-7)

After the Countone and Countzero vectors have been built from the k nearest neighbours of every training instance, the next step is to compute the posterior probabilities given label one. In Eq-7, Countone[h] is the number of training instances whose own label is one and whose k nearest neighbours contain exactly h ones.

P(X_h^l | M_0^l) = (s + Countzero[h]) / (s \times (k+1) + \sum_{p=0}^{k} Countzero[p])    (Eq-8)

In Eq-8, Countzero[h] is the number of training instances whose own label is zero and whose k nearest neighbours contain exactly h ones.
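A hedged sketch of Steps 1 and 2 for a single label is given below (the variable names are mine, not the paper's; features are assumed numeric with Euclidean distance, y is a 0/1 vector for one label, and an instance is assumed not to be its own neighbour). It builds the Countone and Countzero vectors from each training instance's k nearest neighbours and converts them into the smoothed posteriors of Eq-7 and Eq-8:

```python
import numpy as np

def knn_indices(X, x, k, exclude=None):
    """Indices of the k nearest training instances to x (optionally excluding one index)."""
    d = np.linalg.norm(X - x, axis=1)
    if exclude is not None:
        d[exclude] = np.inf            # do not count an instance as its own neighbour
    return np.argsort(d)[:k]

def fit_label(X, y, k, s=1.0):
    """Prior (Eq-2/3) and posteriors (Eq-7/8) for one label; y is a 0/1 vector."""
    n = len(y)
    prior_one = (s + y.sum()) / (s * 2 + n)                          # Eq-2
    count_one = np.zeros(k + 1)
    count_zero = np.zeros(k + 1)
    for i in range(n):
        nbrs = knn_indices(X, X[i], k, exclude=i)
        delta = int(y[nbrs].sum())                                   # Eq-4
        if y[i] == 1:
            count_one[delta] += 1                                    # Eq-5
        else:
            count_zero[delta] += 1                                   # Eq-6
    post_one = (s + count_one) / (s * (k + 1) + count_one.sum())     # Eq-7
    post_zero = (s + count_zero) / (s * (k + 1) + count_zero.sum())  # Eq-8
    return prior_one, post_one, post_zero
```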
Step 3
Steps 1 and 2 compute the prior and posterior probabilities from the training set. Step 3 uses them to estimate the labels of a test instance. First, the k nearest neighbours of the test instance are identified.

C_t(l) = \sum_{u \in K(t)} L_{x_u}(l)    (Eq-9)

In Eq-9, for each label l, C_t(l) counts the number of ones among the k nearest neighbours of the test instance t.

L_t(l) = \arg\max_{b \in \{0,1\}} P(M_b^l) P(X_{C_t(l)}^l | M_b^l)    (Eq-10)

Eq-10 is the maximum a posteriori rule. P(X_{C_t(l)}^l | M_b^l) is the posterior probability of observing C_t(l) ones among the k nearest neighbours, estimated from the number of training instances whose neighbourhoods contain C_t(l) ones and whose own label is b (zero or one). P(M_b^l) is the prior probability of label b estimated from the training set. The label (zero or one) with the larger product is predicted.

Rank_t(l) = P(M_1^l) P(X_{C_t(l)}^l | M_1^l) / \sum_{b \in \{0,1\}} P(M_b^l) P(X_{C_t(l)}^l | M_b^l)    (Eq-11)

In Eq-11, Rank_t(l) is a ranking score giving the estimated probability that label l is one for the test instance, relative to label zero.
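Continuing the same sketch (and reusing its assumed helpers), Step 3 for one label selects the larger of the two products in Eq-10 and computes the ranking score of Eq-11:

```python
def predict_label(X, y, x_test, k, prior_one, post_one, post_zero):
    """MAP prediction (Eq-10) and ranking score (Eq-11) for a test instance."""
    nbrs = knn_indices(X, x_test, k)
    c_t = int(y[nbrs].sum())                       # Eq-9
    score_one = prior_one * post_one[c_t]
    score_zero = (1 - prior_one) * post_zero[c_t]
    label = 1 if score_one > score_zero else 0     # Eq-10
    rank = score_one / (score_one + score_zero)    # Eq-11
    return label, rank
```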

Example 3.4.
Based on Table 2, suppose the k nearest neighbours of the test instance t are X3, X4 and X5, and the number of ones among their labels Y is 2. Suppose further that, among the training instances, X1, X3 and X5 each have 2 ones among their k nearest neighbours and their own label Y is one, i.e. three training instances have neighbourhood count 2 and label one, while X2 and X4 have 2 ones among their k nearest neighbours and their own label Y is zero, i.e. two training instances have neighbourhood count 2 and label zero. These values are assumed purely to illustrate the algorithm:

\delta = C_t(l) = 2
Countone[2] = 3
Countzero[2] = 2
k = 3
s = 1
n = 5

Step 1.
Substituting into Eq-2 to compute the prior probability P(M_1^l):

P(M_1^l) = (s + \sum_{i=1}^{n} L_{x_i}(l)) / (s \times 2 + n)
P(M_1^l) = (1 + 3)/(1 \times 2 + 5) = 0.5714
P(M_0^l) = 1 - P(M_1^l) = 1 - 0.5714 = 0.4286
Step 2.
Substituting into Eq-7 to compute the posterior probability P(X_2^l | M_1^l):

P(X_2^l | M_1^l) = (s + Countone[2]) / (s \times (k+1) + \sum_{p=0}^{k} Countone[p])
P(X_2^l | M_1^l) = (1 + 3) / (1 \times (3+1) + (0 + 0 + 3 + 0)) = 0.5714

Substituting into Eq-8 to compute the posterior probability P(X_2^l | M_0^l):

P(X_2^l | M_0^l) = (s + Countzero[2]) / (s \times (k+1) + \sum_{p=0}^{k} Countzero[p])
P(X_2^l | M_0^l) = (1 + 2) / (1 \times (3+1) + (0 + 0 + 2 + 0)) = 0.5
Step 3.
Computing the k nearest neighbours of the test instance t and applying Eq-9:

C_t(l) = \sum_{u \in K(t)} L_{x_u}(l) = 2

Applying the maximum a posteriori rule of Eq-10:

L_t(l) = \arg\max_{b \in \{0,1\}} P(M_b^l) P(X_2^l | M_b^l)
       = \arg\max \{ P(M_1^l) P(X_2^l | M_1^l), P(M_0^l) P(X_2^l | M_0^l) \}
       = \arg\max \{ 0.57 \times 0.57, 0.42 \times 0.50 \}
       = \arg\max \{ 0.32, 0.21 \}
       = 1, because P(M_1^l) P(X_2^l | M_1^l) > P(M_0^l) P(X_2^l | M_0^l)

so the predicted label of the test instance t is one. Applying Eq-11:

Rank_t(l) = P(M_1^l) P(X_2^l | M_1^l) / (P(M_1^l) P(X_2^l | M_1^l) + P(M_0^l) P(X_2^l | M_0^l))
Rank_t(l) = 0.32 / (0.32 + 0.21)
Rank_t(l) = 0.6037
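The numbers in Example 3.4 can be reproduced with a few lines of Python (a small check script using the counts assumed in the example; it is not part of the original paper):

```python
s, k, n = 1, 3, 5
count_one = [0, 0, 3, 0]          # Countone[2] = 3
count_zero = [0, 0, 2, 0]         # Countzero[2] = 2
n_ones = 3                        # x1, x3, x5 carry label one

p_m1 = (s + n_ones) / (s * 2 + n)                               # 0.5714
p_m0 = 1 - p_m1                                                 # 0.4286
p_x2_m1 = (s + count_one[2]) / (s * (k + 1) + sum(count_one))   # 0.5714
p_x2_m0 = (s + count_zero[2]) / (s * (k + 1) + sum(count_zero)) # 0.5
print(p_m1 * p_x2_m1, p_m0 * p_x2_m0)           # 0.3265 > 0.2143 -> predict 1
print(p_m1 * p_x2_m1 / (p_m1 * p_x2_m1 + p_m0 * p_x2_m0))       # ~0.60
```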
My Contribution
1. ML-AM
In an effort to further improve on ML-KNN, ML-AM (Average Method) is introduced. This algorithm drops the k-nearest neighbour step and instead averages the distances from the test instance to all training instances, separately for each label value. The algorithm is described below.

Step 1.
Compute the distance from the test instance to every training instance whose label is Y = 1 and take the average of these distances.
Step 2.
Compute the distance from the test instance to every training instance whose label is Y = 0 and take the average of these distances.
Step 3.
Predict the label whose average distance is smaller.

L_t(l) = \arg\min_{b \in \{0,1\}} Avg_b(t)    (Eq-12)

In Eq-12, Avg_b(t) is the average distance from the test instance t to the training instances with label b, and the label (zero or one) with the smaller average is predicted: if the label-one average is smaller than the label-zero average, the test instance is predicted as Y = 1, otherwise as Y = 0. The results of ML-AM are discussed in the experiments section. Since ML-AM does not outperform ML-KNN, a further variant, ML-AM with a factor parameter, is added.
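A minimal sketch of the ML-AM rule for one label, under the same assumptions as the earlier sketches (numeric features, Euclidean distance, y a 0/1 vector; the function name is mine):

```python
import numpy as np

def ml_am_predict(X, y, x_test):
    """Predict 1 if the mean distance to label-one instances is smaller (Eq-12)."""
    d = np.linalg.norm(X - x_test, axis=1)
    avg_one = d[y == 1].mean()
    avg_zero = d[y == 0].mean()
    return 1 if avg_one < avg_zero else 0
```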
2. ML-AM with factor parameter
In this approach, steps 1 and 2 are the same as in ML-AM; only step 3 changes.

The factor parameter is first set to 1.5. The average distance from the test instance to the training instances whose label is one is stored in Avg_one, and the average distance to the training instances whose label is zero is stored in Avg_zero. The ratio of the two averages is then taken with respect to Avg_one; if this ratio is greater than or equal to the factor parameter, the label Y = 1 is predicted, otherwise Y = 0 is predicted.

Several values of the factor parameter were tried on the ML-AM method; the value 1.5 improves one evaluation criterion, hamming loss, compared with ML-AM, while the one-error and coverage criteria remain the same as for ML-AM.
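A sketch of the modified decision rule with the factor parameter; the direction of the ratio test is my reading of the description above and should be treated as an assumption:

```python
import numpy as np

def ml_am_factor_predict(X, y, x_test, factor=1.5):
    """Predict 1 only when the label-zero average exceeds the label-one average
    by at least `factor` (assumed interpretation of the ratio test)."""
    d = np.linalg.norm(X - x_test, axis=1)
    avg_one, avg_zero = d[y == 1].mean(), d[y == 0].mean()
    return 1 if avg_zero / avg_one >= factor else 0
```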
3. ML-ATDM (Average Training-instance Dependent Method)
In the ML-AM approach only the test instance is used to predict a label: the distances from the test instance to all training instances are averaged per label. In ML-ATDM not only the test instance but all training instances are taken into account. First, each training instance is treated as a test instance and its distances to all other training instances are computed per label Y.

Step 1
Calculate the prior probabilities of label one and label zero from the training instances.
Step 2
For each class label Y, calculate the distance of each training instance to all other instances in the training set. As in ML-AM, take the average of the distances to the training instances whose label is one and, separately, the average of the distances to those whose label is zero.
Count[1,1] = Count[1,1] + 1    (Eq-13)
Count[1,1] counts the training instances whose own label is one and for which, when their distances to all other training instances are averaged per label, the label-one average is smaller than the label-zero average.

Count[1,0] = Count[1,0] + 1    (Eq-14)
In Eq-14, Count[1,0] counts the training instances whose own label is one and for which the label-zero average is smaller than the label-one average.

Count[0,1] = Count[0,1] + 1    (Eq-15)
Count[0,1] counts the training instances whose own label is zero and for which the label-one average is smaller than the label-zero average.

Count[0,0] = Count[0,0] + 1    (Eq-16)
Count[0,0] counts the training instances whose own label is zero and for which the label-zero average is smaller than the label-one average.

post_prob_one_one(y) = Count[1,1] / (Count[1,1] + Count[1,0])    (Eq-17)
Eq-17 is the posterior probability that a training instance with label Y = 1 has a label-one average smaller than its label-zero average when its distances to all other training instances are averaged per label.

post_prob_one_zero(y) = Count[1,0] / (Count[1,0] + Count[1,1])    (Eq-18)
Eq-18 is the posterior probability that a training instance with label Y = 1 has a label-zero average smaller than its label-one average.

post_prob_zero_one(y) = Count[0,1] / (Count[0,1] + Count[0,0])    (Eq-19)
Eq-19 is the posterior probability that a training instance with label Y = 0 has a label-one average smaller than its label-zero average.

post_prob_zero_zero(y) = Count[0,0] / (Count[0,1] + Count[0,0])    (Eq-20)
Eq-20 is the posterior probability that a training instance with label Y = 0 has a label-zero average smaller than its label-one average.
Step 3:
Step 3.1
For each label Y, compute the test instance's label-one and label-zero averages.

p = ML_DISTANCE(training_set, t, y)    (Eq-21)
In Eq-21 the ML-Distance procedure computes the distances from the test instance to all training instances, averages them per label, and returns the label whose average is smaller (e.g. if the label-one average is smaller than the label-zero average, then p = 1).

ones = P(M_1^l) * post_prob_one_one(y)    (Eq-22)
If p = 1 in Eq-21, Eq-22 multiplies the prior probability of label one by the posterior probability that an instance with label one is predicted one by the average rule.

zero = P(M_0^l) * post_prob_zero_one(y)    (Eq-23)
If p = 1, Eq-23 multiplies the prior probability of label zero by the posterior probability that an instance with label zero is predicted one by the average rule.

ones = P(M_1^l) * post_prob_one_zero(y)    (Eq-24)
If p = 0, Eq-24 multiplies the prior probability of label one by the posterior probability that an instance with label one is predicted zero by the average rule.

zero = P(M_0^l) * post_prob_zero_zero(y)    (Eq-25)
If p = 0, Eq-25 multiplies the prior probability of label zero by the posterior probability that an instance with label zero is predicted zero by the average rule. After computing the ones and zero scores, the label with the larger score is predicted.
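A hedged sketch of ML-ATDM for one label (my own structuring of the steps above; ml_am_predict is the average-distance rule from the ML-AM sketch, and the posteriors are applied without extra smoothing, as written):

```python
import numpy as np

def fit_ml_atdm(X, y, s=1.0):
    """Prior plus the Count table of Eq-13 to Eq-16 and the posteriors of Eq-17 to Eq-20."""
    n = len(y)
    prior_one = (s + y.sum()) / (s * 2 + n)
    count = np.zeros((2, 2))                       # count[true label, average-rule prediction]
    for i in range(n):
        others = np.arange(n) != i                 # treat instance i as a held-out test point
        p = ml_am_predict(X[others], y[others], X[i])   # 1 if avg_one < avg_zero
        count[int(y[i]), p] += 1
    row_sums = np.maximum(count.sum(axis=1, keepdims=True), 1)  # guard against empty rows
    post = count / row_sums                        # Eq-17 .. Eq-20
    return prior_one, post

def predict_ml_atdm(X, y, x_test, prior_one, post):
    p = ml_am_predict(X, y, x_test)                # Eq-21
    ones = prior_one * post[1, p]                  # Eq-22 / Eq-24
    zeros = (1 - prior_one) * post[0, p]           # Eq-23 / Eq-25
    return 1 if ones > zeros else 0
```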
4. ML-ATDM with KNN
To improve the result of ML-ATDM, a k-nearest neighbour step is applied first and the per-label averages are then taken over the neighbours only. For each training instance we first find its k nearest neighbours and compute the label-one and label-zero averages over those neighbours; in the same way, for the test instance we first find its k nearest neighbours and compute the averages over them. All other steps are the same as in ML-ATDM.
5. ML-LDM
The label dependent method (ML-LDM) is introduced to improve on ML-KNN, ML-AM and ML-ATDM.

Step 1:
For each class label, count the number of ones and the number of zeros among the training instances.
Step 2.
As in ML-KNN, for each label first identify the k nearest neighbours of every training instance, then count how many training instances have a given number of ones among their neighbours while their own label is one, and how many have that number of ones among their neighbours while their own label is zero.
Step 3.
Identify the k nearest neighbours of the test instance among all training instances and count the number of ones among their labels.

P(X_{C_t(l)}^l | M_1^l) = (s + Countone[C_t(l)]) / (s * total_label + L(M_1^l))    (Eq-26)
In Eq-26, Countone is the vector that stores how many training instances have exactly C_t(l) ones among their k nearest neighbours (where C_t(l) is the number of ones among the k nearest neighbours of the test instance) while their own label is one. L(M_1^l) is the total number of ones for this label across the training instances.

P(X_{C_t(l)}^l | M_0^l) = (s + Countzero[C_t(l)]) / (s * total_label + L(M_0^l))    (Eq-27)
In Eq-27, Countzero stores how many training instances have exactly C_t(l) ones among their k nearest neighbours while their own label is zero. L(M_0^l) is the total number of zeros for this label across the training instances.

L_t(l) = \arg\max_{b \in \{0,1\}} P(X_{C_t(l)}^l | M_b^l)    (Eq-28)
Eq-28 predicts the label whose probability is larger.
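A sketch of the ML-LDM posteriors for one label, reusing knn_indices (with its exclude argument) from the earlier ML-KNN sketch. The meaning of total_label in Eq-26 and Eq-27 is not spelled out in the text; the code takes it to be k + 1, which is only an assumption:

```python
import numpy as np

def fit_ml_ldm(X, y, k):
    """Countone/Countzero as in ML-KNN Step 2, plus the label totals L(M1), L(M0)."""
    n = len(y)
    count_one, count_zero = np.zeros(k + 1), np.zeros(k + 1)
    for i in range(n):
        nbrs = knn_indices(X, X[i], k, exclude=i)
        delta = int(y[nbrs].sum())
        (count_one if y[i] == 1 else count_zero)[delta] += 1
    return count_one, count_zero, int(y.sum()), int(n - y.sum())

def predict_ml_ldm(X, y, x_test, k, count_one, count_zero, l_one, l_zero, s=1.0):
    c_t = int(y[knn_indices(X, x_test, k)].sum())
    total_label = k + 1   # assumed meaning of "total_label"; it may instead be the number of labels
    p_one = (s + count_one[c_t]) / (s * total_label + l_one)    # Eq-26
    p_zero = (s + count_zero[c_t]) / (s * total_label + l_zero) # Eq-27
    return 1 if p_one > p_zero else 0                           # Eq-28
```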
6. ML-LDM with factor parameter
Here the factor parameter is set to 2.5. After the probabilities are maximized, the ratio of the label-one prediction probability to the label-zero prediction probability is checked; if this ratio is greater than the factor parameter, label one is predicted, otherwise label zero is predicted.
7. ML-KNN on an additional dataset
The original ML-KNN paper uses the yeast data set to evaluate the performance of ML-KNN. In this paper both the yeast and the scene data sets are used to evaluate its performance.
4. Evaluation Criteria
1. Accuracy

Accuracy = (1/N) \sum_{i=1}^{N} |P(x_i) \cap A(x_i)| / C

Here P(x_i) is the set of labels predicted for the i-th test instance, A(x_i) is its actual label set, and C is the total number of classes. For each test instance we count the labels that appear in both P(x_i) and A(x_i), and the scores are averaged over the N test instances. The higher the accuracy, the better the performance.

2. Hamming Loss

Hamming Loss = (1/N) \sum_{i=1}^{N} |P(x_i) \Delta A(x_i)| / C

Here \Delta denotes the symmetric difference, i.e. the labels on which the predicted and actual label sets disagree. For each test instance we count how many labels are predicted incorrectly, and the counts are averaged over the N test instances. The smaller the hamming loss, the better the performance.

3. One error

One Error = (1/N) \sum_{i=1}^{N} [[ \arg\max_y f(x_i, y) \notin Y_i ]]

One error [7] measures how often the top-ranked label returned by the classifier is not in the true label set Y_i of the instance. The smaller the one error, the better the performance.

4. Coverage

Coverage = (1/N) \sum_{i=1}^{N} \max_{y \in Y_i} rank_i(y) - 1

Coverage measures how far down the ordered list of predictions the algorithm must go, on average, to cover all the true labels of an instance. The smaller the coverage, the better the performance.
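The four criteria can be computed as in the sketch below, assuming Y_true and Y_pred are 0/1 matrices of shape (N, C), scores holds the real-valued ranking outputs (for example the Rank_t(l) values of Eq-11), and every instance has at least one true label:

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    # symmetric difference, averaged over all N*C label decisions
    return np.mean(Y_true != Y_pred)

def accuracy(Y_true, Y_pred):
    C = Y_true.shape[1]
    return np.mean(np.sum((Y_true == 1) & (Y_pred == 1), axis=1) / C)

def one_error(Y_true, scores):
    top = np.argmax(scores, axis=1)                 # best-ranked label per instance
    return np.mean(Y_true[np.arange(len(top)), top] == 0)

def coverage(Y_true, scores):
    # rank 1 = highest score; how far down the ranking to cover all true labels
    ranks = (-scores).argsort(axis=1).argsort(axis=1) + 1
    worst = np.max(np.where(Y_true == 1, ranks, 0), axis=1)
    return np.mean(worst) - 1
```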

5. Experiments
The methods are tested on two datasets, yeast and scene. In the yeast dataset each gene is associated with a set of functional classes whose maximum size is 190. Elisseeff and Weston [4] pre-processed the data and, in order to allow better decisions, kept only the classes with a well-known structure.

The yeast dataset used here contains 1500 training instances and 917 test instances, each described by 103 attributes, with 14 well-structured functional classes.

On the yeast dataset, six different approaches were introduced in order to improve on ML-KNN, but ML-KNN remains better than all six. ML-AM on the yeast data achieves a better one-error than ML-KNN. The result of ML-LDM with the factor parameter is the closest to ML-KNN, and this last approach performs better than all the other proposed approaches.

Scene Dataset.
The scene dataset is used for image classification. It has 6 possible classes (Beach, Sunset, FallFoliage, Field, Mountain, Urban), 1211 training instances and 1196 test instances.

On the scene dataset the six approaches were likewise introduced in order to improve on ML-KNN, but ML-KNN remains better than all six. ML-AM on the scene data achieves a better one-error than ML-KNN, and the result of ML-LDM with the factor parameter is the closest to ML-KNN.

6. Conclusion
In this paper six approaches were used in an attempt to improve the results of ML-KNN, but ML-KNN performs better than all six. The last approach, ML-LDM, comes closest to ML-KNN without surpassing it. Using average distances to predict the labels of a multi-label classification problem does not give better results than ML-KNN. A neural network based approach might improve on ML-KNN and is left as future work.
References

[1] A. McCallum, "Multi-label text classification with a mixture model trained by EM," in Working Notes of the AAAI'99 Workshop on Text Learning, Orlando, FL, 1999.

[2] R. E. Schapire and Y. Singer, "BoosTexter: a boosting-based system for text categorization," Machine Learning, vol. 39, no. 2/3, pp. 135-168, 2000.

[3] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," in Lecture Notes in Computer Science 904, P. M. B. Vitanyi, Ed. Berlin: Springer, 1995, pp. 23-37.

[4] A. Elisseeff and J. Weston, "A kernel method for multi-labelled classification," in Advances in Neural Information Processing Systems 14, T. G. Dietterich, S. Becker, and Z. Ghahramani, Eds. Cambridge, MA: MIT Press, 2002, pp. 681-687.

[5] S. Nayak, R. Ramesh, and S. Shah, "A study of multi-label text classification and the effect of label hierarchy."

[6] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown, "Learning multi-label scene classification," Pattern Recognition, vol. 37, no. 9, pp. 1757-1771, 2004.

[7] M. Pushpa and S. Karpagavalli, "Multi-label classification: problem transformation methods in Tamil phoneme classification," Department of Computer Science, PSGR Krishnammal College for Women, Coimbatore-641004, Tamil Nadu, India.
