
INTERNATIONAL JOURNAL FOR TRENDS IN ENGINEERING & TECHNOLOGY

VOLUME 4 ISSUE 2 APRIL 2015 - ISSN: 2349 - 9303

Robust Human Emotion Analysis Using LBP, GLCM and PNN Classifier
S. Seedhana Devi
Assistant Professor, Department of Information Technology
Sri Vidya College of Engineering & Technology
Seedhana19@gmail.com

S. Jasmine Rumana
UG Student, Department of Information Technology
Sri Vidya College of Engineering & Technology
jasminerumana@gmail.com

G. Jayalakshmi
UG Student, Department of Information Technology
Sri Vidya College of Engineering & Technology
jayalakshmi.tech@gmail.com

Abstract- This paper presents recognition of facial expressions based on textural analysis and a PNN classifier. Automatic facial expression recognition (FER) plays an important role in HCI systems, measuring people's emotions by linking expressions to a group of basic emotions such as disgust, sadness, anger, surprise and normal. The approach is also a variant intended to protect the network effectively from hackers and strangers. The recognition system involves face detection, feature extraction and classification through the PNN classifier. The face detection module obtains only face images, with face regions of normalized intensity and uniform size and shape. The Distinct LBP (DLBP) and the GLCM are used to extract texture from the face regions in a way that discounts illumination changes. These features distinguish the maximum number of samples accurately. A PNN classifier based on discriminant analysis is used to classify the six different expressions. Simulation results show better accuracy and lower algorithmic complexity than existing facial expression recognition approaches.

Index Terms- Distinct Local Binary Pattern (DLBP), First Order Compressed Image, Gray Level Co-occurrence Matrix (GLCM), Probabilistic Neural Network (PNN) Classifier, Triangular Pattern

INTRODUCTION

Facial expressions convey highly recognizable emotional signals; their specific forms may have originated not for communication but as functional adaptations of more direct benefit to the expresser. Common human facial expressions convey happy or angry thoughts and feelings, the speaker's understanding, expected or unexpected responses from listeners, sympathy, or even what the speaker is discussing with the person opposite them.
Traditional face detection extracts the face area from an original image and then extracts the positions of the eye, mouth and eyebrow outlines from that area. Facial expressions are recognized to obtain accurate detection, and the accuracy can be predicted even if the original face is hidden behind a duplicate face. If the whole face image is recognized, the performance is low, so facial expressions, mostly local features, are used instead. Beyond recognition performance, facial expression plays a vital role in communication through sign language.
Many geometric approaches exist for face analysis, including techniques such as linear discriminant analysis (LDA) [1], independent component analysis (ICA) [3], principal component analysis (PCA) [14] and the Support Vector Machine (SVM) [4]. Object recognition has been performed with Gabor wavelet features, where global features are detected and classified using an SVM classifier [2]. Facial expression illustrates the intention, personality and psychopathology of a person [9]. These methods suffer from the generality problem: the test images might be extremely different from the training face images. To avoid this problem, a non-statistical face analysis method using the local binary pattern (LBP) has been proposed. LBP was first introduced by Ojala et al. [5] and showed high discriminative power for texture classification due to its invariance to monotonic gray-level changes.
In the existing system, facial expressions are recognized using principal component analysis and geometric methods [3]; these methods suffer from low discriminatory power and high computational load, and geometric features do not provide optimal results. The PNN classifier with the local binary pattern is proposed to overcome the problems of the existing system; the PNN classifier has been used widely to identify facial areas with greater discrimination. The literature above has its own limitations for recognizing facial expressions, so a novel method for facial expression recognition is derived in the present paper. The paper is organized as follows: Section I presents an overview of the proposed system, and Sections II-IV describe the methods
of expression recognition. Section V covers classification; the performance evaluation, results and conclusions follow in Sections VI-VIII.
I. OVERVIEW OF THE PROPOSED METHOD

In order to detect the facial expressions accurately and to show variations, the original images are converted from the RGB plane to HSV. The texture regions from which the expressions are obtained are cropped from the HSV image. Both of these processing steps play an important role in

face detection. Facial expressions are derived using the features calculated from the Gray Level Co-occurrence Matrix and the DLBPs of the First Order Compressed Image. The images are trained and tested with a PNN classifier. The proposed method comprises seven steps, as described in Figure 1.

Figure 1. Block diagram of the overall proposed system: training and testing samples each pass through face detection and feature extraction; the PNN classifier then decides among anger, disgust, fear, sadness, happiness and surprise.

II. RGB TO HSV COLOR MODEL CONVERSION

In order to extract gray-level features from color information, the proposed method uses the HSV color space. Color vision can be processed in the RGB color space or in the HSV color space. RGB describes colors in terms of red, green and blue; HSV describes them by hue, saturation and value. For color description, the HSV model is often preferred over the RGB model because it describes colors the way the human eye tends to classify them: RGB defines a color as a mixture of primary colors, whereas HSV uses the more familiar notions of color, vibrancy and brightness.
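As an illustration, this conversion step might look like the following Python sketch using OpenCV; the file path and function name are illustrative assumptions, not part of the paper.

import cv2

def to_hsv(image_path):
    # OpenCV reads images in BGR channel order, not RGB
    bgr = cv2.imread(image_path)
    # Convert to the HSV (hue, saturation, value) color space
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)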
III. CROPPING OF IMAGE

Cropping is the elimination of the outer parts of an image to improve framing, emphasize the subject matter or change the aspect ratio. The grayscale facial image is cropped based on the two eye locations. An original image and the cropped image are shown in Figure 2 a) and b) respectively.

Figure 2. a) An original image, b) Cropped image
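The paper crops from the two eye locations; as a hedged stand-in, the sketch below crops the detected face box with OpenCV's stock Haar cascade rather than the authors' eye-based rule.

import cv2

def detect_and_crop(gray):
    # Stock frontal-face Haar cascade shipped with opencv-python
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    x, y, w, h = faces[0]  # assume at least one face was detected
    return gray[y:y + h, x:x + w]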

IV. FEATURE EXTRACTION

When the input data to an algorithm are too large to be processed and are suspected to be redundant, they can be transformed into a reduced set of features. Features are extracted from each cropped 5x5 pixel neighborhood, and edge information is collected through neighborhood pixel analysis. The representation of a 5x5 pixel locality is shown in Table 1. A 5x5 neighborhood comprises 25 pixel elements {P11...P15; P21...P25; P31...P35; P41...P45; P51...P55}, with P33 as the center pixel.


P11 P12 P13 P14 P15
P21 P22 P23 P24 P25
P31 P32 P33 P34 P35
P41 P42 P43 P44 P45
P51 P52 P53 P54 P55

Table 1: Representation of a 5x5 neighborhood pixel
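A minimal sketch of gathering these 5x5 neighborhoods (the function name is illustrative; a NumPy grayscale array is assumed):

import numpy as np

def neighborhoods_5x5(gray):
    # Slide a 5x5 window over the grayscale face region;
    # the center pixel of each block corresponds to P33
    h, w = gray.shape
    for r in range(h - 4):
        for c in range(w - 4):
            yield gray[r:r + 5, c:c + 5]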


A. Formation of the First Order Compressed Image Matrix (FCIM) of size 3x3 from 5x5

The FCIM is a 3x3 matrix with nine pixel elements (FCP1 to FCP9). Each of the nine overlapped 3x3 sub-matrices is extracted from the 5x5 matrix, so the FCIM maintains the local neighborhood properties, including edge information. FCPi = average of (ni) for i = 1, 2, ..., 9. The group of nine pixel elements of the gray-level FCIM is given in Table 2.

FCP1 FCP2 FCP3
FCP4 FCP5 FCP6
FCP7 FCP8 FCP9

Table 2. Representation of the gray-level FCIM

The nine overlapped 3x3 neighborhoods {n1, n2, ..., n9} formed from the 5x5 neighborhood to build the FCIM are given in Table 3.

Table 3. Formation of the nine overlapped 3x3 neighborhoods {n1, n2, ..., n9}


N1             N2             N3
P11 P12 P13    P12 P13 P14    P13 P14 P15
P21 P22 P23    P22 P23 P24    P23 P24 P25
P31 P32 P33    P32 P33 P34    P33 P34 P35

N4             N5             N6
P21 P22 P23    P22 P23 P24    P23 P24 P25
P31 P32 P33    P32 P33 P34    P33 P34 P35
P41 P42 P43    P42 P43 P44    P43 P44 P45

N7             N8             N9
P31 P32 P33    P32 P33 P34    P33 P34 P35
P41 P42 P43    P42 P43 P44    P43 P44 P45
P51 P52 P53    P52 P53 P54    P53 P54 P55
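Under the definition above (FCPi = average of neighborhood ni, per Tables 2 and 3), the FCIM of one 5x5 block can be sketched as:

import numpy as np

def fcim(block5):
    # block5 is a 5x5 array; each FCP is the mean of one of the
    # nine overlapped 3x3 neighborhoods n1..n9 from Table 3
    out = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            out[i, j] = block5[i:i + 3, j:j + 3].mean()
    return out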

B. Formation of two Distinct LBPs from FCIM

From the binary FCIM of 3x3 neighborhoods, four Triangular LBP (TLBP) unit values are derived, as shown in Figure 3. Each TLBP unit value contains only three pixels. The Upper TLBPs (UTLBP) are formed from the pixel combinations (FCP1, FCP2, FCP4) and (FCP2, FCP3, FCP6), and the Lower TLBPs (LTLBP) from (FCP4, FCP7, FCP8) and (FCP6, FCP8, FCP9). The two DLBPs are formed as the sum of the UTLBP values and the sum of the LTLBP values of the FCIM:

SUTLBP = TLBP1 + TLBP2 (sum of Triangular Local Binary Patterns 1 and 2)
SLTLBP = TLBP3 + TLBP4 (sum of Triangular Local Binary Patterns 3 and 4)
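The paper does not spell out how the FCIM is binarised or how the three pixels of a triangular unit are weighted; the sketch below assumes LBP-style thresholding against the centre element FCP5 and bit weights 1, 2, 4, which keeps each TLBP in 0-7 and each sum in 0-14, consistent with the 15x15 GLCM used later.

import numpy as np

def dlbp(fcim3):
    f = fcim3.ravel()                  # f[0]..f[8] correspond to FCP1..FCP9
    b = (f >= f[4]).astype(int)        # assumed binarisation against FCP5
    def tlbp(i, j, k):                 # assumed 3-bit weighting, value in 0..7
        return b[i] + 2 * b[j] + 4 * b[k]
    sutlbp = tlbp(0, 1, 3) + tlbp(1, 2, 5)  # TLBP1 + TLBP2 (upper triangles)
    sltlbp = tlbp(3, 6, 7) + tlbp(5, 7, 8)  # TLBP3 + TLBP4 (lower triangles)
    return sutlbp, sltlbp              # each value lies in 0..14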

Figure 3. Formation of DLBP on FCIM: a) the FCIM pixel grid (FCP1-FCP9); b) the upper triangular patterns (UTLBP) summed into SUTLBP and the lower triangular patterns (LTLBP) summed into SLTLBP.

C. Formation of GLCM based on DLBP and FCIM

Features are formed from the GLCM of the two DLBPs, i.e. the SUTLBP and SLTLBP values of the FCIM. The GLCM on DLBP is obtained by plotting the SUTLBP values on the X-axis and the SLTLBP values on the Y-axis; its elements are the relative frequencies of co-occurring SUTLBP and SLTLBP values.
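Combining the earlier sketches, a 15x15 GLCM over the (SUTLBP, SLTLBP) pairs could be accumulated as follows; this is a sketch under the assumptions stated above, not the authors' implementation.

import numpy as np

def glcm_on_dlbp(gray):
    M = np.zeros((15, 15))
    for block in neighborhoods_5x5(gray):
        s_u, s_l = dlbp(fcim(block))
        M[s_u, s_l] += 1          # SUTLBP on the X-axis, SLTLBP on the Y-axis
    return M / M.sum()            # relative frequencies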

V. CLASSIFICATION

The PNN, a feed-forward network, is used to classify the test images against the trained images. It executes quickly and approaches an optimal classifier as the size of the representative training set grows. The PNN classifier produces the final output. Based on the expressions used in the proposed method, the PNN classifier consists of four layers: input layer, pattern layer, summation layer and output layer.
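A minimal PNN sketch following this four-layer description; the Gaussian kernel and the smoothing parameter sigma are standard PNN choices assumed here, not taken from the paper.

import numpy as np

def pnn_classify(x, train_feats, train_labels, sigma=1.0):
    # Pattern layer: one Gaussian kernel per training sample;
    # summation layer: average activation per class;
    # output layer: the class with the largest summed activation.
    scores = {}
    for f, label in zip(train_feats, train_labels):
        d2 = float(np.sum((np.asarray(x) - np.asarray(f)) ** 2))
        scores.setdefault(label, []).append(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=lambda c: np.mean(scores[c]))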
VI. PERFORMANCE EVALUATION

The approach exposes the discrepancies of a duplicate image from an original image through the PNN classifier. The performance is evaluated with four standard GLCM measures, where $P_{ij}$ is the normalised GLCM entry, $\mu_i, \mu_j$ and $\sigma_i, \sigma_j$ are the means and standard deviations along the two axes, and $N$ is the number of gray levels:

Contrast $= \sum_{i,j=0}^{N-1} P_{ij}\,(i-j)^2$

Correlation $= \sum_{i,j=0}^{N-1} P_{ij}\,\frac{(i-\mu_i)(j-\mu_j)}{\sigma_i\,\sigma_j}$

Energy $= \sum_{i,j=0}^{N-1} P_{ij}^2$

Homogeneity $= \sum_{i,j=0}^{N-1} \frac{P_{ij}}{1+(i-j)^2}$
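The four measures above, computed from a normalised GLCM P; a sketch using the standard textbook definitions, since the printed formulas are garbled in the source.

import numpy as np

def glcm_measures(P):
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * P).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * P).sum())
    contrast = (((i - j) ** 2) * P).sum()
    correlation = (((i - mu_i) * (j - mu_j)) * P).sum() / (sd_i * sd_j)
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + (i - j) ** 2)).sum()
    return contrast, correlation, energy, homogeneity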

VII. RESULTS AND DISCUSSION


The proposed method is evaluated on a database containing 35 expressions: 18 expressions are used for training and 17 for testing. The set of 7 expressions is collected from each of five distinct face images. A few sample expressions are shown in Figure 4.

The proposed GLCM-on-DLBP method gives complete information about an image. The GLCM depends on the gray-level range of the image; DLBP on the FCI restricts that range to 0-14 and thus reduces the overall complexity.

In the proposed GLCM-based DLBP method, the example images are grouped into seven types of expressions and stored in the database. Features are extracted with the GLCM on DLBP and grouped into the seven types of facial expressions. The images are taken from a Google database and are scanned images. The numerical features extracted from the test images are stored in the test database.

Figure 4: Sample images collected as the database

Finally, the feature database and the test database are compared. Test images are classified using the PNN classifier and the results are predicted. The approach is evaluated with the performance measures shown in Table 4; the measures are compared and represented graphically in Figure 5.
Table 4: Performance Measure Evaluation

Image  Homogeneity  Contrast  Energy   Correlation
1      0.0723       1.7845    2.7156    84.0213
2      0.0670       1.7371    2.6165   112.3886
3      0.0694       1.7138    2.5960    37.4682
4      0.0745       1.7121    2.7458   150.2601
5      0.0729       1.8101    2.7276    42.7056
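Tying the sketches together, an end-to-end run over one image might look like this; the file path is illustrative and train_feats/train_labels are assumed to have been prepared from the training database in the same way.

import cv2

gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)
crop = detect_and_crop(gray)
feats = glcm_measures(glcm_on_dlbp(crop))
# train_feats / train_labels: feature tuples and expression names
# computed beforehand from the 18 training expressions (assumed here)
label = pnn_classify(feats, train_feats, train_labels)
print(label)  # one of: anger, disgust, fear, sadness, happiness, surprise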

Figure 5: Graphical representation of the performance measures

VIII. CONCLUSIONS

The proposed method captures complete information about the facial expressions. The GLCM on DLBP for HCI is a multi-stage model for recognizing facial expressions. In the first stage, it reduces each 5x5 image block to a 3x3 sub-image without losing any important information. GLCM features on the Distinct LBP are derived in the second and third stages; the computational cost and other complexity involved in forming the GLCM are reduced by shrinking the GLCM to 15x15 using DLBP. In the fourth stage, the PNN classifier is used in place of a multilayer perceptron; it draws on the overall features so that the result can be exact. The proposed method also copes with unpredictable distributions of the facial expressions. The performance estimate is shown only for a few sample expressions, as in Table 4. The approach mainly aims at network security; in future, the work will be refined for further biometric applications such as border security systems.
References
[1] Aswathy. R., "A Literature Review on Facial Expression Recognition Techniques", IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 11, Issue 1 (May-Jun. 2013), pp. 61-64.
[2] Arivazhagan, S., Priyadharshini, R.A., Seedhanadevi, S., "Object Recognition Based on Gabor Wavelet Features", 2012 International Conference on Devices, Circuits and Systems (ICDCS), pp. 340-344, 15-16 March 2012.
[3] M. Bartlett, J. Movellan, T. Sejnowski, "Face Recognition by Independent Component Analysis", IEEE Transactions on Neural Networks, 13 (2002), pp. 1450-1464.
[4] B. Heisele, Y. Ho, T. Poggio, "Face Recognition with Support Vector Machines: Global versus Component-Based Approach", in: Proceedings of the International Conference on Computer Vision, 2001, pp. 688-694.
[5] T. Ojala, M. Pietikainen, D. Harwood, "A Comparative Study of Texture Measures with Classification Based on Feature Distributions", Pattern Recognition, 29 (1996), pp. 51-59.
[6] B. Fasel and J. Luettin, "Automatic Facial Expression Analysis: A Survey", Pattern Recognition, 2003.



[7] S. M. Lajevardi and H. R. Wu, "Facial Expression Recognition in Perceptual Color Space", IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3721-3732, 2012.
[8] Marian Stewart Bartlett, Gwen Littlewort, Ian Fasel, and Javier R. Movellan, "Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction", in Proceedings of the 2003 Conference on Computer Vision and Pattern Recognition Workshop, 2003.
[9] Shyna Dutta, V.B. Baru, "Review of Facial Expression Recognition System and Used Datasets", International Journal of Research in Engineering and Technology, e-ISSN: 2319-1163, p-ISSN: 2321-7308, Volume 02, Issue 12, Dec. 2013.

[10] F. De la Torre and J. F. Cohn, "Facial Expression Analysis", in Th. B. Moeslund, A. Hilton, V. Kruger, and L. Sigal (eds.), Guide to Visual Analysis of Humans: Looking at People, pp. 377-410, Springer, 2011.
[11] S. Moore and R. Bowden, "Local Binary Patterns for Multi-View Facial Expression Recognition", Computer Vision and Image Understanding, vol. 115, no. 4, pp. 541-558, 2011.
[12] M. Pantic, L.J.M. Rothkrantz, "Automatic Analysis of Facial Expressions: The State of the Art", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, pp. 1424-1445, 2000.
[13] Anitha C, M K Venkatesha, B Suryanarayana Adiga, "A Survey on Facial Expression Databases", International Journal of Engineering Science and Technology, Vol. 2(10), 2010, pp. 5158-5174.
[14] Timo Ahonen, Abdenour Hadid, and Matti Pietikainen, "Face Recognition with Local Binary Patterns", Machine Vision Group, Infotech Oulu, PO Box 4500, FIN-90014 University of Oulu, Finland.

