
2014 4th International Conference on Artificial Intelligence with Applications in Engineering and Technology

Modified Neural Network Activation Function


Adamu I. Abubakar¹, Haruna Chiroma², Sameem Abdulkareem², Abdulsalam Yau Gital³, Sanah Abdullahi Muaz⁴, Jafaar Maitama⁴, Muhammad Lamir Isah⁵, Tutut Herawan⁶

¹Department of Information Systems, International Islamic University Malaysia, Kuala Lumpur, Malaysia
²Department of Artificial Intelligence, University of Malaya, Kuala Lumpur, Malaysia
³Department of Mathematical Science, Abubakar Tafawa Balewa University, Bauchi, Nigeria
⁴Department of Software Engineering, University of Malaya, Kuala Lumpur, Malaysia
⁵Department of Computer Science, Abubakar Tatari Ali Polytechnic, Bauchi, Nigeria
⁶Department of Information Systems, University of Malaya, Kuala Lumpur, Malaysia
adamu@iium.edu.my, hchiroma@acm.org, {sameem, tutut}@um.edu.my, asgital@yahoo.com


Abstract: The Neural Network is said to emulate the brain, though its processing is not quite how the biological brain really works. The Neural Network has witnessed significant improvement from 1943 to date. However, modifications of the Neural Network have mainly focused on the structure itself, not on the activation function, despite the critical role the activation function plays in the performance of the Neural Network. In this paper, we present a modification of the Neural Network activation function to improve the performance of the Neural Network. The theoretical background of the modification, including a mathematical proof, is fully described in the paper. The modified activation function is code-named SigHyper. The performance of SigHyper was evaluated against a state-of-the-art activation function on a crude oil price dataset. Results suggest that the proposed SigHyper improves the accuracy of the Neural Network. Analysis of variance showed that the accuracy gain of SigHyper is significant. It was also established that SigHyper requires further improvement. The activation function proposed in this research adds to the activation functions already discussed in the literature. The study may motivate researchers to further modify activation functions and hence improve the performance of the Neural Network.

Keywords: Artificial Neural Network; Activation Function; Training Algorithm; Logsig; Hyperbolic

I. INTRODUCTION

Artificial Neural Networks (ANNs), proposed by [1], have recorded successful results in the modeling of nonlinear problems across disciplines, offering generalization, adaptability, self-organization, fault tolerance, and real-time operation. The ANNs have been applied across a broad spectrum of disciplines for solving problems [2]. Recently, the ANNs have been applied for modeling in energy [3-6], data mining [7], weather [8-9], medicine [10-11], the stock market [12], industry [13], etc. However, the robustness of the ANNs depends heavily on their structure, training algorithm, and the optimal choice of training parameters, which makes the optimization of ANNs to solve problems difficult and retards their robustness [2].

Since the inception of the ANNs, more attention has been given to modifications of their structure, and significant improvement was achieved. Examples of major modifications of the structure of the ANNs include the addition of a recurrent structure to the ANNs by [14]. These recurrent networks associate static patterns with output patterns that are sequentially ordered; hidden nodes see their own previous output, which serves as a guide for subsequent behavior, and the recurrent connections provide the network with memory [14]. Jordan [15] modified Elman's network to include context units and augmented the model with an input layer; however, these units interact only with the hidden layer, not the external layer. The previous values of the Elman neural network output are fed back into the hidden units, while the hidden neuron output is fed back to itself in the Jordan neural network. Specht [16] proposed the probabilistic neural network (PNN), whose structure can be interpreted in the form of a probability density function; in contrast to other types of ANNs, PNNs are only applicable to classification problems. The functional link neural network (FLNN) was first proposed by [17]. The FLNN is a higher-order ANN without hidden layers (linear in nature); despite this linearity, it is capable of capturing nonlinear relationships when fed with suitable and adequate sets of input polynomials. The support vector machine (SVM) is a relatively new variant of ANN proposed by [18], capable of solving problems in classification, regression analysis, and forecasting. Training an SVM is equivalent to solving a linearly constrained quadratic programming problem, which yields a unique, global optimum. SVMs are thus immune to local minima, unlike other ANNs; the optimum solution to a problem depends on the support vectors, which are a subset of the training exemplars. The radial basis function network (RBFN) is another class of ANN with a form of local learning and is also a competent alternative to the other ANNs. The regular structure of the RBFN comprises input, hidden, and output layers [19]. The major differences between the RBFN and the other ANNs are that the RBFN is composed of a single hidden layer with a radial basis function, and that the input variables of the RBFN are all transmitted to each neuron in the hidden layer without being computed with initial random values of the weights.


The major issue with the above-cited studies is that attention focuses more on the structure of the ANN without giving adequate attention to other critical components of the ANNs, such as the activation function [20], whereas the activation function has been proven to have a significant influence on the performance of the ANNs [20-21]. Therefore, this study proposes to modify the activation function of the ANN by summing logsig and hyperbolic (SigHyper) to extract the common features of both activation functions, since they are nonlinear and easily differentiable [20, 22], in order to improve the computational speed and accuracy of the ANNs. Logsig and hyperbolic were chosen among other activation functions because they are more established in the literature [22] and widely accepted by the research community [5, 21-23].

II. BASIC THEORY OF ANN

The original aim of the development of the NN was to mathematically represent the processing of information in biological systems [1]. The NN is a system that processes information similarly to the human brain and constitutes a general mathematical representation of human reasoning. The ANNs are built on the following assumptions [24]:

i. Information is processed by neurons.
ii. Signals are communicated between neurons through established links.
iii. Every connection between neurons is associated with a weight; signals transmitted between neurons are multiplied by the weight.
iv. Every network neuron applies an activation function.

Let us assume we have an n-dimensional input x to the input layer of the ANN, and set all weights in the network to random values within [-1, 1]. The computation of the network (Net) input to a hidden layer with n input connections can be expressed as [25-26]:

Net^h_j = \sum_{i=1}^{n} w_{ji} x_i    (1)

where w_{ji} represents the connection weight from input i to neuron j, and x_i is the pattern presented to the input layer at neuron i. The computation at the hidden layer neuron produces the output:

y_j = \frac{1}{1 + e^{-\lambda Net^h_j}}    (2)

where y_j is the activation function and \lambda is the step size which controls the steepness of y_j. Substituting Eq. (1) into Eq. (2) gives:

y_j = \frac{1}{1 + e^{-\lambda \sum_{i=1}^{n} w_{ji} x_i}}    (3)

The computation of the Net inputs to the output layer neurons is expressed as:

Net^o_k = \sum_{j=1}^{h} V_{kj} y_j    (4)

where V_{kj} is the weight connection from hidden neuron j to output neuron k, and h is the number of hidden neurons. The computation of the output of the output layer is expressed as:

O_k = \frac{1}{1 + e^{-\lambda Net^o_k}}    (5)

O_k = \frac{1}{1 + e^{-\lambda \sum_{j=1}^{h} V_{kj} y_j}}    (6)

O_k = \frac{1}{1 + e^{-\lambda \sum_{j=1}^{h} V_{kj} \left( \frac{1}{1 + e^{-\lambda \sum_{i=1}^{n} w_{ji} x_i}} \right)}}    (7)

The computation of the training signal for the output neurons can be expressed as:

r^o_k = (d_k - O_k) \lambda O_k (1 - O_k)    (8)

where r^o_k is the training signal for output neurons, d_k is the desired output at neuron k, and O_k is the output produced by neuron k at the output layer. The computation of the training signal for the hidden neurons can be expressed as:

r^h_j = \lambda y_j (1 - y_j) \sum_{k=1}^{o} r^o_k V_{kj}    (9)

The weights in the output layer are updated as follows:

V_{kj}(N+1) = V_{kj}(N) + \eta r^o_k y_j(N)    (10)

where N and \eta are the number of epochs and the learning rate, respectively.

V_{kj}(N+1) = V_{kj}(N) + \eta (d_k - O_k) \lambda O_k (1 - O_k) y_j(N)    (11)

The weights in the hidden layer are updated as:

w_{ji}(N+1) = w_{ji}(N) + \eta r^h_j x_i(N)    (12)

w_{ji}(N+1) = w_{ji}(N) + \eta \lambda y_j (1 - y_j) x_i(N) \sum_{k=1}^{o} r^o_k V_{kj}    (13)

The error is updated for the first epoch as:

E = \sum_{k=1}^{o} (r^o_k)^2    (14)

E = \sum_{k=1}^{o} \left[ (d_k - O_k) \lambda O_k (1 - O_k) \right]^2    (15)

where E is the error produced by the network. The algorithm repeats the operation with another input x after setting E = 0, and the operation is repeated until the best result can no longer be improved, at which point the training is terminated.
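As an illustration of Eqs. (1)-(15), the following NumPy sketch implements one training step of the back-propagation procedure described above. The network dimensions follow Section VI (10 inputs, 5 hidden neurons, 1 output); the variable names lam (\lambda) and eta (\eta), the omission of biases, and the single-pattern driver are our illustrative assumptions, not the paper's MATLAB implementation.

import numpy as np

# A minimal sketch of Eqs. (1)-(15): one hidden layer, lambda-scaled logsig
# activations, and the gradient-descent weight updates described above.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 5, 1           # 10 inputs, 5 hidden neurons, 1 output
lam, eta = 1.0, 0.1                     # step size (lambda) and learning rate (eta)

w = rng.uniform(-1, 1, (n_hid, n_in))   # w_ji, random weights within [-1, 1]
V = rng.uniform(-1, 1, (n_out, n_hid))  # V_kj, hidden-to-output weights

def logsig(net):
    return 1.0 / (1.0 + np.exp(-lam * net))

def train_step(x, d):
    y = logsig(w @ x)                       # Eqs. (1)-(3): hidden outputs y_j
    O = logsig(V @ y)                       # Eqs. (4)-(7): network outputs O_k
    r_o = (d - O) * lam * O * (1 - O)       # Eq. (8): output training signal
    r_h = lam * y * (1 - y) * (V.T @ r_o)   # Eq. (9): hidden training signal
    V[:] = V + eta * np.outer(r_o, y)       # Eqs. (10)-(11): output-layer update
    w[:] = w + eta * np.outer(r_h, x)       # Eqs. (12)-(13): hidden-layer update
    return np.sum(r_o ** 2)                 # Eq. (14): error E

x = rng.uniform(-1, 1, n_in)                # one normalized input pattern
print(train_step(x, d=np.array([0.5])))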

III. THE PROPOSED MODIFICATION

The logsig activation function is given as:

f(x) = \frac{1}{1 + e^{-x}}    (16)

The hyperbolic activation function is expressed as:

f(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}    (17)

The proposed activation function is the sum of the two:

SigHyper = logsig + hyperbolic    (18)

f(x) = \frac{1}{1 + e^{-x}} + \frac{e^x - e^{-x}}{e^x + e^{-x}}    (19)

Expressing Eq. (19) over a common denominator gives:

f(x) = \frac{(e^x + e^{-x}) + (1 + e^{-x})(e^x - e^{-x})}{(1 + e^{-x})(e^x + e^{-x})}    (20)

Expanding the numerator yields:

f(x) = \frac{e^x + e^{-x} + e^x - e^{-x} + e^{-x} e^x - e^{-x} e^{-x}}{(1 + e^{-x})(e^x + e^{-x})}    (21)

Applying the law of indices to Eq. (21), we get Eq. (22):

f(x) = \frac{2e^x + e^0 - e^{-2x}}{(1 + e^{-x})(e^x + e^{-x})}    (22)

f(x) = \frac{2e^x + 1 - e^{-2x}}{(1 + e^{-x})(e^x + e^{-x})}    (23)

Expanding the denominator yields:

f(x) = \frac{2e^x + 1 - e^{-2x}}{e^x + e^{-x} + e^{-x} e^x + e^{-x} e^{-x}}    (24)

Applying the law of indices to the denominator gives:

f(x) = \frac{2e^x + 1 - e^{-2x}}{e^x + e^{-x} + e^0 + e^{-2x}}    (25)

f(x) = \frac{2e^x + 1 - e^{-2x}}{e^x + e^{-x} + 1 + e^{-2x}}    (26)

Rearranging the terms gives the final form of the proposed activation function:

f(x) = \frac{2e^x - e^{-2x} + 1}{e^x + e^{-x} + e^{-2x} + 1}    (27)

SigHyper is the modified activation function proposed in this study, which will be implemented in the next phase of this research.
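The derivation above can be checked numerically: the sum of Eqs. (16) and (17) should agree with the single-fraction form of Eq. (27) at every point. A minimal sketch (the function names are ours, not the paper's):

import numpy as np

# Numerical check that Eq. (19), logsig(x) + hyperbolic(x), matches the
# simplified single-fraction form of Eq. (27).
def sighyper_sum(x):
    return 1.0 / (1.0 + np.exp(-x)) + np.tanh(x)    # Eq. (19)

def sighyper_closed(x):
    e = np.exp
    return (2 * e(x) - e(-2 * x) + 1) / (e(x) + e(-x) + e(-2 * x) + 1)    # Eq. (27)

x = np.linspace(-5.0, 5.0, 101)
print(np.allclose(sighyper_sum(x), sighyper_closed(x)))   # True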
IV. CUCKOO SEARCH ALGORITHM

The cuckoo search algorithm (CS) is used for training the back-propagation ANN to avoid the possibility of being stuck in local minima. The CS optimizes the weights and biases of the back-propagation ANN; the combination is code-named CSBP. The CS is a global search algorithm for finding a globally optimal solution. In CS, the fitness can be made proportional to the objective function value without difficulty. The three major ideas of the CS, proposed by Yang and Deb [27] as the rules of the optimization algorithm, are as follows: (1) each cuckoo lays one egg at a time and puts it in a randomly chosen nest; (2) the nests with the best-quality eggs move on to the next generation; (3) the number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with probability Pa \in [0, 1], in which case the worse nest is abandoned. The fitness function is selected as the objective function itself for maximization or minimization problems. In the generation of a new solution x_i^{(t+1)} for cuckoo i, a Lévy flight is performed as expressed in Eq. (28):

x_i^{(t+1)} = x_i^t + \alpha_1 \oplus Lévy(\lambda)    (28)

where \alpha_1 is the Lévy flight step size and \oplus denotes entry-wise multiplication. The Lévy flight provides a random walk whose random step lengths, including large steps, are drawn from the Lévy distribution. The CS initializes the population (n) of nests and selects the best nest via Lévy flights. The cuckoo birds are always looking for a better place in order to reduce the chance of their eggs being discarded. The CS requires the setting of parameters, such as n, for execution; however, the most critical parameters required to obtain the optimal solution from the CS are Pa and \alpha_1 [28].
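The following sketch outlines how the CS of Eq. (28) can search over an ANN weight vector, assuming a generic fitness(nest) function that trains and evaluates the network and returns its MSE. The Mantegna-style Lévy step and the coupling of the step to the best nest are common implementation choices, not details taken from the paper; dim = 61 assumes a 10-5-1 network with biases.

import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)
n_nests, dim, pa, alpha = 25, 61, 0.25, 1.0   # Pa and alpha as in Section VI

def levy_step(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(fitness, n_iter=3000):
    nests = rng.uniform(-1, 1, (n_nests, dim))
    fit = np.array([fitness(n) for n in nests])
    for _ in range(n_iter):
        best = nests[fit.argmin()]
        # Eq. (28): generate new solutions by Levy flights around the nests
        new = nests + alpha * levy_step((n_nests, dim)) * (nests - best)
        new_fit = np.array([fitness(n) for n in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # A fraction Pa of nests is abandoned and rebuilt at random positions
        drop = rng.random(n_nests) < pa
        nests[drop] = rng.uniform(-1, 1, (drop.sum(), dim))
        fit[drop] = np.array([fitness(n) for n in nests[drop]])
    return nests[fit.argmin()]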

V. DATASET

The dataset for the Brent crude oil price was collected from the Energy Information Administration of the United States Department of Energy [29]. The dataset covers 1987 to 2011, in view of the fact that the period of data collection is determined by the availability of the data [30]. The dataset was collected at a monthly frequency because some of the variables, e.g. oil demand and supply data, are only available at a monthly frequency [31]. The dataset was normalized within [-1, 1] to improve the effectiveness of computation in the ANN neurons and to improve convergence speed and accuracy. The dataset was partitioned into 80% for training and 20% for testing, following the convention in the literature [32].
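A sketch of the preprocessing just described, assuming the monthly series is available as a plain CSV file (the file name and layout are illustrative assumptions):

import numpy as np

def normalize(data):
    # Column-wise min-max scaling to [-1, 1]
    lo, hi = data.min(axis=0), data.max(axis=0)
    return 2 * (data - lo) / (hi - lo) - 1

data = np.loadtxt("brent_monthly.csv", delimiter=",")   # assumed file layout
scaled = normalize(data)
split = int(0.8 * len(scaled))                          # 80% train, 20% test
train, test = scaled[:split], scaled[split:]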


VI. EXPERIMENTAL SETUP

The proposed activation function was coded and implemented in MATLAB R2012b on a machine with an Intel Core(TM) 2 Quad 2.33 GHz CPU, 2 GB of RAM, and a 32-bit operating system. The ANN was trained using the CS, and the activation function in the hidden layer neurons of the ANN was varied. The ANN and CS require initialization to start running. The CS was used to optimize the weights and biases of the ANN. The CS parameters Pa and \alpha_1 were 0.25 and 1, adopted from [27]. The numbers of input and output neurons of the ANN were set to 10 and 1, respectively, because there were 10 independent variables in the dataset and one dependent variable, the Brent crude oil price. The number of hidden layer neurons was set to five, realized through trial and error. The activation function in the hidden layer was the proposed SigHyper, and a linear function was used in the output layer. The experiment was repeated with the logsig activation function to compare its performance with SigHyper; thus, we have CSBP with SigHyper and CSBP with logsig. The maximum number of epochs was 3000, and the fitness function was the mean square error (MSE). The experiment was repeated 10 times to ensure consistent performance, since meta-heuristic algorithms require several executions to allow the user to select the best result. The performance of the two algorithms was observed and recorded.
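The repetition protocol can be summarized as follows, where run_csbp is a hypothetical driver standing in for one complete CSBP training run; only the bookkeeping is shown:

import numpy as np

def run_csbp(activation, max_epoch=3000):
    # Placeholder: train CSBP with the given hidden-layer activation and
    # return the final MSE; a dummy value stands in for the real run here.
    return np.random.default_rng().random() * 0.03

def evaluate(activation, repeats=10):
    mses = np.array([run_csbp(activation) for _ in range(repeats)])
    return mses.mean(), mses.min(), mses.max()   # mean, best, worst MSE

for act in ("sighyper", "logsig"):
    print(act, evaluate(act))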

VII. RESULTS AND DISCUSSION

This section presents and discusses the performance of the proposed SigHyper in comparison to logsig. Table I shows the performance of both activation functions in the training phase: the CSBP with SigHyper performs better than the CSBP with logsig in terms of accuracy. This implies that the proposed algorithm has improved on the accuracy of the state-of-the-art activation function.

TABLE I. ACCURACY (MSE) OF THE PROPOSED ACTIVATION FUNCTION ON TRAINING DATASET

Algorithm            Mean         Best         Worst
CSBP with logsig     0.029627     0.029744     0.028632
CSBP with SigHyper   0.01588198   0.015882     0.015882

However, Table II indicates that the CSBP with SigHyper has not improved the training computation time. Thus, there is still room for improving the proposed activation function.

TABLE II. COMPARING TRAINING TIME (SEC.) OF CSBP WITH SIGHYPER AND CSBP WITH LOGSIG

Algorithm            Mean         Best         Worst
CSBP with logsig     1587.25      1586.87      1591.63
CSBP with SigHyper   1766.40      1757.93      1769.82

We used analysis of variance (ANOVA) to measure the significance of the accuracy achieved by the CSBP with SigHyper. The ANOVA results presented in Table III show that there is a significant difference (P < 0.05) between the accuracy of the CSBP with SigHyper and the CSBP with logsig. This implies that the accuracy of the CSBP with SigHyper is significantly better than the accuracy of the CSBP with logsig on the training dataset.

TABLE III. ANOVA RESULTS OF THE STATISTICAL TEST FOR CSBP WITH SIGHYPER AND CSBP WITH LOGSIG

                 Sum of Squares   df   F                  P.
Between Groups   0.001            1    742488065934.714   0.000
Within Groups    0.000            18

Table IV shows the accuracy of the CSBP with SigHyper on the test dataset. The results indicate that the CSBP with SigHyper is better than the CSBP with logsig, similar to the accuracy performance in the training phase, whereas the computation time of the CSBP with SigHyper, shown in Table V, has not improved.

TABLE IV. ACCURACY OF THE CSBP WITH SIGHYPER AND CSBP WITH LOGSIG ON TEST DATASET

Algorithm            Mean         Best         Worst
CSBP with logsig     0.007178     0.008193     0.007592
CSBP with SigHyper   0.00217633   0.002176     0.002176

TABLE V. COMPUTATION TIME (SEC.) OF THE CSBP WITH SIGHYPER AND CSBP WITH LOGSIG ON TEST DATASET

Algorithm            Mean         Best         Worst
CSBP with logsig     1595.253     1786.87      1771.63
CSBP with SigHyper   2016.524     2013.53      2019.52

TABLE VI. ANOVA RESULTS ON TEST DATASET FOR CSBP WITH SIGHYPER AND CSBP WITH LOGSIG

                 Sum of Squares   df   F               P.
Between Groups   0.000            1    577558058.248   0.000
Within Groups    0.000            18

The ANOVA results in Table VI suggest that there is a significant difference between the accuracy of the CSBP with SigHyper and the CSBP with logsig. This shows that the accuracy of the CSBP with SigHyper is significantly better than the accuracy of the CSBP with logsig.

The likely reason why the proposed SigHyper performs better than logsig in terms of accuracy is the modification made to the activation function. The modification may have hybridized the strengths of the hyperbolic and logsig functions, making the proposed SigHyper more effective than either activation function individually. A possible explanation for the proposed SigHyper not improving computation time is that the modified SigHyper has more terms than the individual activation functions, which could have added to the computation time. This clearly shows that there is room for improving the proposed activation function in terms of computation time. In some applications computation time is not critical; for example, the prediction of crude oil prices mainly focuses on accuracy, not convergence speed, and most of the literature on the prediction of crude oil prices does not report convergence time [33-36]. However, in medical applications, both accuracy and computation time are critical [37-39].
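The significance tests reported in Tables III and VI amount to a one-way ANOVA over the per-run MSE scores of the two variants. A sketch using SciPy follows; the arrays hold illustrative placeholder values, not the recorded results:

import numpy as np
from scipy import stats

# One-way ANOVA over repeated-run MSE scores, as in Tables III and VI.
mse_logsig = np.array([0.0296, 0.0297, 0.0297, 0.0286, 0.0297])        # placeholders
mse_sighyper = np.array([0.015883, 0.015882, 0.015882, 0.015881, 0.015882])

f_stat, p_value = stats.f_oneway(mse_logsig, mse_sighyper)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")   # p < 0.05 -> significant difference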

VIII. CONCLUSIONS AND FURTHER RESEARCH



The objective of this study is to modify the activation function of the ANN to advance its performance. In this paper, we presented the proposed modification of the activation function, integrating logsig and hyperbolic to form the SigHyper activation function. Comparative simulation analysis showed that the CSBP with SigHyper performs significantly better than the CSBP with logsig. The activation function proposed in this paper adds to the activation functions in the literature. This paper is a work in progress; the second phase of the research will involve further improvement of SigHyper to enhance its computation time.


ACKNOWLEDGEMENT

This work is supported by the Kulliyyah of Information and Communication Technology (KICT), International Islamic University Malaysia. The authors wish to thank KICT for its support.


REFERENCES
1. W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, vol. 5, pp. 115-133, 1943. Reprinted in Anderson and Rosenfeld, 1988.
2. D. Karaboga, B. Akay, and C. Ozturk, "Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks," in Modeling Decisions for Artificial Intelligence, Springer Berlin Heidelberg, pp. 318-329, 2007.
3. "Performance analysis of coal fired thermal power plant," International Journal of Exergy, vol. 12, no. 3, pp. 362-379, 2013.
4. H. Chiroma, S. Abdulkareem, A. Abubakar, and J. U. Mohammed, "Computational intelligence techniques with application to crude oil price projection: A literature survey from 2001-2012," Neural Network World, vol. 23, no. 6, pp. 523-551, 2013.
5. H. Chiroma, S. Abdulkareem, A. Abubakar, A. Zeki, and A. Y. Gital, "Intelligent system for predicting the price of natural gas based on non-oil commodities," in Industrial Electronics and Applications (ISIEA), 2013 IEEE Symposium on, pp. 200-205, 2013.
6. L. Hernandez, C. Baladrón, J. M. Aguiar, B. Carro, A. J. Sánchez-Esguevillas, and J. Lloret, "Short-term load forecasting for microgrids based on artificial neural networks," Energies, vol. 6, no. 3, pp. 1385-1408, 2013.
7. H. Chiroma, S. Abdul-Kareem, and A. Abubakar, "A framework for selecting the optimal technique suitable for application in a data mining task," in Future Information Technology, Springer Berlin Heidelberg, pp. 163-169, 2014.
8. E. Pi, N. Mantri, S. M. Ngai, H. Lu, and L. Du, "BP-ANN for fitting the temperature-germination model and its application in predicting sowing time and region for bermudagrass," PLoS ONE, vol. 8, no. 12, e82413, doi:10.1371/journal.pone.0082413, 2013.
9. "Neural network analysis in modelling of tool-chip interface temperature in machining," Expert Systems with Applications, vol. 38, no. 9, pp. 11651-11656, 2011.
10. L. C. Zhu, Y. L. Ye, W. H. Luo, M. Su, H. P. Wei, et al., "A model to discriminate malignant from benign thyroid nodules using artificial neural network," PLoS ONE, vol. 8, no. 12, e82211, doi:10.1371/journal.pone.0082211, 2013.
11. G. Papantonopoulos, K. Takahashi, T. Bountis, and B. G. Loos, "Artificial neural networks for the diagnosis of aggressive periodontitis trained by immunologic parameters," PLoS ONE, vol. 9, no. 3, e89757, doi:10.1371/journal.pone.0089757, 2014.
12. M. Y. Bello and H. Chiroma, "Utilizing artificial neural network for prediction in the Nigerian stock market price index," Computer Science & Telecommunications, vol. 30, no. 1, pp. 68-77, 2011.
13. M. N. Nawi, M. Z. Rehman, M. I. Ghazali, M. N. Yahya, and A. Khan, "Hybrid Bat-BP: A new intelligent tool for diagnosing noise-induced hearing loss (NIHL) in Malaysian industrial workers," Applied Mechanics and Materials, vol. 465, pp. 652-656, 2014.
14. J. L. Elman, "Finding structure in time," Cognitive Science, vol. 14, pp. 179-211, 1990.
15. M. I. Jordan, "Serial order: A parallel distributed processing approach," UC San Diego, Institute for Cognitive Science Report, vol. 8604, 1986.
16. D. F. Specht, "Probabilistic neural networks," Neural Networks, vol. 3, pp. 109-118, 1990.
17. M. S. Klasser and Y. H. Pao, "Characteristics of the functional link net: a higher order delta rule net," in IEEE Proceedings of the 2nd Annual International Conference on Neural Networks, San Diego, CA, 1988.
18. C. Cortes and V. Vapnik, "Support vector networks," Machine Learning, vol. 20, pp. 273-297, 1995.
19. W. Qunli, H. Ge, and C. Xiaodong, "Crude oil forecasting with an improved model based on wavelet transform and RBF neural network," in IEEE International Forum on Information Technology and Applications, pp. 231-234, 2009.
20. R. Jammazi and C. Aloui, "Crude oil price forecasting: Experimental evidence from wavelet decomposition and neural network modeling," Energy Economics, vol. 34, no. 3, pp. 828-841, 2012.
21. A. T. Azar, "Fast neural network learning algorithms for medical applications," Neural Computing and Applications, vol. 23, no. 3-4, pp. 1019-1034, 2013.
22. G. Zhang, B. E. Patuwo, and M. Y. Hu, "Forecasting with artificial neural networks: the state of the art," International Journal of Forecasting, vol. 14, pp. 35-62, 1998.
23. H. Chiroma, A. Y. Gital, and A. M. Usman, "Neural network model for the forecasting of 7up security close price in the Nigerian stock exchange," Journal of Computer Science and its Application, vol. 16, no. 1, pp. 1-10, 2009.
24. L. V. Fausett, Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, Prentice-Hall: Englewood Cliffs, 1994.
25. S. Haykin, Neural Networks, New Jersey: Prentice Hall, 2nd edn., 1999.
26. Y. H. Zweiri, J. F. Whidborne, and L. D. Seneviratne, "A three-term backpropagation algorithm," Neurocomputing, vol. 50, pp. 305-318, 2002.
27. X.-S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Nature & Biologically Inspired Computing, World Congress on, pp. 210-214, 2009.
28. M. K. Marichelvam, T. Prabaharan, and X.-S. Yang, "Improved cuckoo search algorithm for hybrid flow shop scheduling problems to minimize makespan," Applied Soft Computing, vol. 19, pp. 93-101, 2014.
29. Energy Information Administration of the United States Department of Energy, www.eia.org, retrieved 20 December, 2012.
30. M. O. Adetutu, "Energy efficiency and capital-energy substitutability: Evidence from four OPEC countries," Applied Energy, vol. 119, pp. 363-370, 2014.
31. S. Kulkarni and I. Haidar, "Forecasting model for crude oil price using artificial neural networks and commodity future prices," International Journal of Computer Science and Information Security, vol. 2, no. 1, pp. 81-88, 2009.
32. I. H. Witten, E. Frank, and M. A. Hall, Data Mining: Practical Machine Learning Tools and Techniques (3rd edn.), San Mateo: Morgan Kaufmann, 2011.
33. A. Ghaffari and S. Zare, "A novel algorithm for prediction of crude oil price variation based on soft computing," Energy Economics, vol. 31, pp. 531-536, 2009.
34. H. Pan, I. Haidar, and S. Kulkarni, "Daily prediction of short-term trends of crude oil prices using neural networks exploiting multimarket dynamics," Frontiers of Computer Science in China, vol. 3, no. 2, pp. 177-191, 2009.
35. T. Mingming, Z. Jinliang, and T. Mingxin, "Effects simulation of international natural gas prices on crude oil prices based on WBNNK model," in IEEE Computer Society Sixth International Conference on Natural Computation, Sanya, China, pp. 1643-1648, 2009.
36. X. Chen and Y. Qu, "A prediction method of crude oil output based on artificial neural networks," in Proceedings of the IEEE International Conference on Computation and Information Sciences, Chengdu, China, pp. 702-704, 2011.
37. L. Zhang, et al., "Research of neural network classifier based on FCM and PSO for breast cancer classification," in Hybrid Artificial Intelligent Systems, vol. 7208, E. Corchado, et al., Eds., Springer Berlin Heidelberg, pp. 647-654, 2012.
38. D. Çalişir and E. Doğantekin, "An automatic diabetes diagnosis system based on LDA-Wavelet Support Vector Machine Classifier," Expert Systems with Applications, vol. 38, no. 7, pp. 8311-8315, 2011.
39. K. Polat and S. Güneş, "An expert system approach based on principal component analysis and adaptive neuro-fuzzy inference system to diagnosis of diabetes disease," Digital Signal Processing, vol. 17, pp. 702-710, 2007.
