PROJECT REPORT
Abstract
The aim of this project was to develop an algorithm that detects human
facial features such as the mouth, nose, and eyes in a full frontal face
image. We adopted a multi-step process to achieve this goal. To detect the
face region, a skin-color segmentation method was used. Morphological
techniques were then applied to fill the holes created by the segmentation
process. A skeleton of the face was obtained through skeletonization, from
which face contour points were extracted. The facial features can then be
located in the interior of the face contour. We tested our method on
several different facial images and achieved a detection rate of over 90%.
Introduction
Facial feature extraction is considered a key requirement in many
applications such as biometrics, facial recognition systems, video
surveillance, and human-computer interfaces. Reliable face detection is
therefore required for the success of these applications.
The task of human facial feature extraction is not easy. The human face
varies from person to person; the race, gender, age, and other physical
characteristics of an individual have to be considered, which makes this a
challenging problem in computer vision.
Facial feature detection aims to detect and extract specific features such as
eyes, mouth and nose.
In order to discriminate face candidates from the neck, ears, and
incorrectly segmented parts, shape analysis of the color segmentation
result is necessary. One method (Saber and Tekalp, 1998; Lee et al., 1996)
involves first fitting an ellipse to the segmented face region for
registration. However, most such methods do not take into consideration the
misalignment caused by the ears and neck.
Jian-Gang Wang and Eric Sung, in their article, have proposed a
morphological procedure to analyze the shape of the segmented face region,
a relatively unexplored approach in face detection research. We have
incorporated some of the methods proposed in this article. Several rules
have been formulated for the task of locating the contour of the face.
These rules are based primarily on the facial skeleton and knowledge of the
face. This reduces the search region for the facial features, so features
such as the mouth, nostrils, and eyes can be located more accurately within
the face contour.
Terrillon et al. (1998) mention the problem of how other body parts, such
as the neck, may lead to face localization errors. A different approach was
presented in (Sobottka and Pitas, 1998), where the features are first
detected and then the contour is tracked using a deformable model. Brunelli
and Poggio (1993) use dynamic programming to follow the outline in a
gradient intensity map of an elliptical projection of the face image.
Haralick and Shapiro (1993) demonstrate how morphological operations can
simplify image data while preserving its essential shape characteristics,
and can eliminate irrelevancies.
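The hole-filling step described above (figure b) can be sketched as a morphological closing followed by a flood fill. The following is a minimal Python/SciPy sketch, not the project's actual implementation; the structuring-element size is an illustrative choice.

```python
import numpy as np
from scipy import ndimage

def fill_holes(mask, structure_size=3):
    # Morphological closing (dilation then erosion) bridges small gaps,
    # then a flood fill removes any remaining enclosed holes.
    structure = np.ones((structure_size, structure_size), dtype=bool)
    closed = ndimage.binary_closing(mask, structure=structure)
    return ndimage.binary_fill_holes(closed)

# Toy example: a 7x7 "face" block inside a 9x9 image, with a
# one-pixel hole at its center.
mask = np.zeros((9, 9), dtype=bool)
mask[1:8, 1:8] = True
mask[4, 4] = False
filled = fill_holes(mask)
```

After the call, the interior hole at (4, 4) is filled while the overall shape of the region is preserved, which is exactly the property that makes morphological operations attractive for cleaning up segmentation masks.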
Figure a) The output after performing skin-color segmentation.
Figure b) The output after filling the interior holes of the face region
using dilation and erosion operations.
Figure c) The skeleton overlaid on the face region.
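A common way to implement the kind of skin-color segmentation shown in figure a) is to threshold the chrominance channels of the YCbCr color space. The sketch below is illustrative: the Cb/Cr ranges are widely used rule-of-thumb values and may differ from the exact thresholds used in this project.

```python
import numpy as np

def skin_mask(rgb):
    # Threshold the Cb/Cr chrominance channels of the YCbCr color
    # space; skin tones cluster in a compact Cb/Cr region regardless
    # of brightness.
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> YCbCr conversion (full-range approximation).
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Rule-of-thumb skin ranges (assumed, not the project's exact values).
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# A skin-toned pixel and a blue pixel.
img = np.array([[[220, 170, 140], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(img)
```

Working in a chrominance space rather than raw RGB makes the classifier largely insensitive to illumination level, which is why the segmented region in figure a) covers the face despite shading variation.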
Contour Tracing
Some vertices of the skeleton lines can fit the contour of the human face
while excluding the points that lie on the ears and the neck. These fitting
points can be found using the following set of rules [1].
Rule 1. Some face contour fitting points can be found from the line tracing
that results from the skeleton. The contour points should satisfy the
following conditions:
Rule 1.1 The contour fitting points should be the vertices of the roughly
horizontal skeleton line segments that are long enough (the threshold is set
proportional to the longest skeleton line segment).
Rule 1.2. The left vertex will be selected as a candidate for contour
fitting if most of the horizontal line segments are positioned at the left
of the symmetry axis.
Rule 1.3. The right vertex will be selected as a candidate for contour
fitting if most of the horizontal line segments are positioned at the right
of the symmetry axis.
Rule 1.4. The contour points should be above a vertical position that is
set at 3/4 of the height from the top of the symmetry axis.
Rule 1.5. The points satisfying the above conditions are separated into two
sets (left and right contour points). Each set is sorted from top to
bottom.
Rule 1.6. If the difference between the horizontal coordinates of a point
in the sorted right point set and either its preceding or succeeding point
is large enough (a threshold is set), then the point is discarded as an
outlier.
Rule 2. The point set satisfying the above rules will be doubled using the
symmetry axis, i.e., for each left fitting point there exists a right point
that can be calculated using the symmetry axis, and vice versa.
All the resulting skeleton lines are checked against Rules 1 and 2 in turn.
The points for fitting the contour of the human face can thus be collected
while excluding the points that lie on the ears and neck.
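The rules above can be sketched roughly as follows. This is a loose Python sketch under stated assumptions: segments are given as pairs of (x, y) vertices, the length ratio and slope tolerance are illustrative values rather than the report's thresholds, and Rules 1.2/1.3 are approximated by taking the outer vertex of each segment relative to the symmetry axis.

```python
import math

def contour_points(segments, axis_x, face_top, face_height,
                   len_ratio=0.3, max_slope_deg=20.0):
    # Rule 1.1: keep only segments long enough relative to the longest.
    longest = max(math.dist(a, b) for a, b in segments)
    y_limit = face_top + 0.75 * face_height  # Rule 1.4 cut-off
    left, right = [], []
    for a, b in segments:
        if math.dist(a, b) < len_ratio * longest:
            continue  # too short (Rule 1.1)
        ang = abs(math.degrees(math.atan2(b[1] - a[1], b[0] - a[0])))
        if min(ang, 180.0 - ang) > max_slope_deg:
            continue  # not roughly horizontal (Rule 1.1)
        mid_x = (a[0] + b[0]) / 2
        # Simplified Rules 1.2/1.3: take the vertex farther from the axis.
        outer = min(a, b) if mid_x < axis_x else max(a, b)
        if outer[1] > y_limit:
            continue  # below the 3/4-height line (Rule 1.4)
        (left if mid_x < axis_x else right).append(outer)
    # Rule 2: mirror every accepted point across the symmetry axis.
    mirrored = [(2 * axis_x - x, y) for x, y in left + right]
    # Rule 1.5: sort the combined candidates from top to bottom.
    return sorted(set(left + right + mirrored), key=lambda p: p[1])

segs = [((10, 20), (30, 22)),   # long, roughly horizontal, left of axis
        ((60, 30), (90, 30)),   # long, horizontal, right of axis
        ((40, 95), (48, 95)),   # too short and too low: rejected
        ((45, 50), (46, 90))]   # near-vertical: rejected
pts = contour_points(segs, axis_x=50, face_top=0, face_height=100)
```

In this toy example only the two long, roughly horizontal segments contribute points, and each accepted point gains a mirror image across the symmetry axis, yielding four contour candidates.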
Table: Detection results (Lips, Nose, Mouth) for the test images c37m.jpg,
c4m.jpg, c5m.jpg, c6m.jpg, c7m.jpg, c8m.jpg, c9m.jpg, c10m.jpg, c11m.jpg,
c12m.jpg, c13m.jpg, c1m.jpg, and c2m.jpg.
Figure f) Region of interest (ROI).
References:
[1] Jian-Gang Wang and Eric Sung, "Frontal-view face detection and facial
feature extraction using color and morphological operations."
[2] Rainer Stiefelhagen, Jie Yang, and Alex Waibel, "A Model-Based Gaze
Tracking System."
[3] Gonzalez, Woods, and Eddins, Digital Image Processing Using MATLAB,
Prentice Hall.
[4] Images taken from: www.faceresearch.org
[5] Prof. Gaborski's lecture slides.
[6] www.wikipedia.com