
Person Identification

Security System
Created by
1. Juhi Achhra
2. Prachi Chauhan
3. Ninad Dolas
4. Devendra Sanflikar
Project Guide
Prof. Vrushali Purandare

Organization of Presentation

Objective
Problem Definition
Introduction
Literature Survey
Proposed Model
Future Scope
References

Objective
To develop a robust and efficient face detection technique using skin-tone detection, edge detection and template matching.
To obtain transformed images and prepare a database using eigenvectors.
To evaluate face recognition using the Eigenface algorithm.

Problem Definition
In most incidents, only the suspect's face gets captured by the CCTV cameras.
However, the authorities are often unable to recognize the suspect from the footage.
Our project provides a solution: identifying the suspect through face recognition.
It can be used in banks, restaurants, corporate areas, etc.

Introduction
A facial recognition system is a computer application for
automatically identifying or verifying a person from a digital image
or a video frame from a video source. One way to do this is by
comparing selected facial features from the image with a facial
database. The human face plays a major role in conveying identity
and emotion. Facial recognition is typically used in security systems
and can be compared to other biometrics such as fingerprint or iris
recognition. A notable advantage of facial recognition over other
biometric methods is that it is less cumbersome for end users. Real-time
face recognition is formulated on still or video images captured either by
a digital camera or by a webcam. The faces considered here for
comparison are still faces. We have developed MATLAB code that
initializes the webcam of a laptop, captures an image and compares it
with the database of images stored on the laptop. A rough sketch of this
capture-and-compare loop is given below.
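As a rough illustration of this capture-and-compare loop, the sketch below grabs one frame from the laptop webcam and hands it to the recognition step described on the later slides. It assumes the MATLAB Support Package for USB Webcams is installed; the file name face_database.mat and the function recognizeFace are placeholders, not part of the original code.

% Minimal capture-and-compare skeleton (assumes the MATLAB Support
% Package for USB Webcams is installed).
cam = webcam;                        % connect to the default laptop webcam
frame = snapshot(cam);               % grab one RGB frame
imshow(frame);                       % preview the captured image

% Load the pre-built face database (file name is illustrative).
db = load('face_database.mat');      % e.g. eigenfaces, training weights, labels

% recognizeFace is a placeholder for the eigenface matching step
% described on the later slides.
label = recognizeFace(frame, db);
title(['Identified as: ' label]);

clear cam;                           % release the webcam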

Literature Survey
Algorithms for Face Recognition

Holistic Approach
Entire face acts as the raw input.
Optimal variance of pixel data.

Feature-Based Approach
Local features are extracted.
Features act as a vector of geometric features.

Literature Survey
A. Feature-based approach: In the feature-based approach, local features such as the nose and eyes are segmented and used as input data for face detection, which simplifies the task of face recognition.
B. Holistic approach: In the holistic approach, the whole face is taken as the input to the face detection system to perform face recognition.
C. Hybrid approach: The hybrid approach is a combination of the feature-based and holistic approaches. In this approach, both local features and the whole face are used as the input to the face detection system.

Literature Survey
Comparison of Techniques

Skin-Tone Matching:
Histogram of skin and non-skin pixels is determined.
Lighting compensation is employed.
Face is interpreted as a sum of chrominance and luminance.
Illumination variant.

Edge Matching:
Uses the physiological face image.
Face image is converted into a gray-level picture.
Image is not normalized.
Face is a combination of binary edge maps.
Illumination invariant.

Eigenface:
Divides the face into feature vectors.
Feature vectors are converted into a covariance matrix.
Image is normalized.
Face is a combination of eigenvectors.
Illumination variant.

Proposed Model

Face Detection
The face detection process involves multi-resolution template
matching, region clustering and colour segmentation; it works
with high accuracy and gives good statistical results on the
training images.
Given the generality of the images and templates used, the
implementation is expected to work well on other images,
regardless of the scene lighting or the size and type of faces
in the pictures.

PROCESS
Template matching is performed first
to find the regions of high correlation
with the face and eyes templates.
Subsequently, using a mask derived
from colour segmentation and cleaned
by texture filtering and various binary
operations, the false and repeated hits
are removed from the template
matching result.
The output of this process is then
passed to a clustering procedure,
where points that lie within a certain
Euclidean distance of one another are
clustered into a single point (a sketch
of this step follows after this list).
The whole process will then be
repeated at a different
scale/resolution.
The outputs from each resolution are
then recombined into a single mask.
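A minimal sketch of the clustering step, assuming the candidate hits are collected as an N-by-2 list of (x, y) pixel coordinates and that maxDist is a hand-chosen pixel threshold (both names are illustrative); the greedy merge shown here is only one possible way to realise the "within a certain Euclidean distance" rule and relies on a recent MATLAB release with implicit expansion.

function centers = clusterHits(points, maxDist)
% Greedily merge candidate detections: a point closer than maxDist
% (in pixels) to an existing cluster centre is folded into that cluster.
    centers = zeros(0, 2);
    counts  = zeros(0, 1);
    for k = 1:size(points, 1)
        p = points(k, :);
        if isempty(centers)
            centers = p;
            counts  = 1;
            continue;
        end
        d = sqrt(sum((centers - p).^2, 2));    % distance to each cluster centre
        [dmin, idx] = min(d);
        if dmin <= maxDist
            counts(idx) = counts(idx) + 1;     % update the running mean of that cluster
            centers(idx, :) = centers(idx, :) + (p - centers(idx, :)) / counts(idx);
        else
            centers(end + 1, :) = p;           % start a new cluster
            counts(end + 1, 1)  = 1;
        end
    end
end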

DETECTION FLOWCHART
Live video preview will be
monitored on the screen.
If the supervisor clicks on the face of
the suspect, only then does the
recognition process start.
The algorithm includes skin-tone
detection, edge detection, template
matching and eigenvector conversion.

[Flowchart: Start → mouse click on the suspect's face? → Yes: run the algorithm; No: continue monitoring]
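One possible way to realise the click-to-start trigger of the flowchart in MATLAB is with ginput, which blocks until the supervisor clicks on the displayed frame; the window half-size of 60 pixels below is purely illustrative.

% Show the current video frame and wait for a single mouse click.
imshow(frame);
title('Click on the suspect''s face to start recognition');
[x, y] = ginput(1);                  % blocks until one click, returns image coordinates

% Crop a window around the clicked point before running the
% detection/recognition algorithm on it.
half = 60;
rows = max(1, round(y) - half) : min(size(frame, 1), round(y) + half);
cols = max(1, round(x) - half) : min(size(frame, 2), round(x) + half);
candidate = frame(rows, cols, :);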

Skin-Tone Matching

Skin color model based approaches build a skin color model using a
Gaussian normal distribution, since color is one of the most widely
used visual features in face detection.
Specifically, these models convert the color image into an appropriate
color space, such as HSI, YCbCr or YIQ, in order to find skin color.
These color spaces are more robust to lighting conditions than the
RGB color space and are therefore suitable for face detection under
different illuminations.
The mean and covariance matrix of the skin color are then computed
from the sample skin pixels.
Finally, the results of this computation are used to find the likelihood
that each pixel in the input image is, indeed, a skin pixel, as in the
sketch below.
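A compact sketch of such a model, assuming a set of hand-labelled skin pixels is available as an N-by-2 matrix of [Cb Cr] samples and that the likelihood threshold is tuned empirically (the function and variable names are illustrative):

function skinMask = skinLikelihoodMask(rgbImage, skinSamples, threshold)
% Gaussian skin-colour model in the CbCr plane of the YCbCr colour space.
% skinSamples : N-by-2 matrix of [Cb Cr] values taken from labelled skin pixels.
% threshold   : likelihood cut-off, chosen empirically.

    mu    = mean(skinSamples, 1);          % 1-by-2 mean skin colour
    sigma = cov(skinSamples);              % 2-by-2 covariance matrix

    ycbcr = rgb2ycbcr(rgbImage);           % Image Processing Toolbox
    cb = double(ycbcr(:, :, 2));
    cr = double(ycbcr(:, :, 3));

    % Evaluate the bivariate Gaussian likelihood at every pixel.
    d  = [cb(:) - mu(1), cr(:) - mu(2)];   % centred samples, (H*W)-by-2
    md = sum((d / sigma) .* d, 2);         % squared Mahalanobis distance
    likelihood = exp(-0.5 * md) / (2 * pi * sqrt(det(sigma)));

    skinMask = reshape(likelihood > threshold, size(cb));
end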

Edge Matching
Edges define the boundaries between
regions in an image, which helps with
segmentation and object recognition.
The approach is based on line edge maps (LEM).
In order to measure the similarity of human faces,
the face images are first converted into gray-level
pictures.
The images are then encoded into binary edge maps
using the Sobel edge detection algorithm to
accomplish face recognition.
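The gray-level conversion and the Sobel edge map can be produced directly with the Image Processing Toolbox, as in the short sketch below (faceImage stands for any RGB face crop):

% Convert the face image to gray level and extract a binary Sobel edge map.
grayFace = rgb2gray(faceImage);          % faceImage: RGB face crop
edgeMap  = edge(grayFace, 'sobel');      % logical matrix: 1 on edges, 0 elsewhere
imshowpair(grayFace, edgeMap, 'montage') % view the gray image and its edge map side by side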

TEMPLATE MATCHING

The discrete convolution of two functions f(x, y) and h(x, y) of size
M x N is denoted by f(x, y) * h(x, y) and is defined as:

f(x, y) * h(x, y) = (1/MN) Σ_(m=0..M-1) Σ_(n=0..N-1) f(m, n) h(x - m, y - n)

For feature detection, simple eye and mouth templates were used.
The templates search for the eye and the mouth locations.
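In practice, template matching of this kind is usually implemented with normalized cross-correlation rather than raw convolution, since it is less sensitive to local brightness changes; the sketch below uses the Image Processing Toolbox function normxcorr2, assuming grayFace and eyeTemplate are grayscale images with the template smaller than the face.

% Locate the best match of an eye template in a grayscale face image.
c = normxcorr2(eyeTemplate, grayFace);        % correlation surface
[peak, idx] = max(c(:));                      % strongest correlation peak
[yPeak, xPeak] = ind2sub(size(c), idx);

% normxcorr2 pads its output, so shift back to the template's top-left corner.
yTop = yPeak - size(eyeTemplate, 1) + 1;
xTop = xPeak - size(eyeTemplate, 2) + 1;
fprintf('Best eye match at (row, col) = (%d, %d), score %.2f\n', yTop, xTop, peak);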

Face Recognition
Most of the face recognition literature dealt with local and
intuitive features, such as the distance between the eyes, the
ears and similar measurements.
Such systems were found to be inefficient, as they took
significantly more time for recognition.
Our system instead uses an information-theory approach, in
which the most relevant face information is encoded in a group
of basis faces that best distinguishes the faces.
It transforms the face images into a set of basis faces, which
essentially are the principal components of the face images.
This is particularly useful for reducing the computational
effort.

EIGENFACE DATABASE

Algorithm for creating Eigenface Database:

1. Obtain M training images I1, I2, ..., IM. It is very important that the images
are centred.
2. Represent each image Ii as a vector Γi by stacking its pixel values into a
single N^2 x 1 column.
3. Find the average face vector Ψ = (1/M)(Γ1 + Γ2 + ... + ΓM).
4. Subtract the mean face from each face vector Γi to get a set of vectors
Φi = Γi - Ψ. The purpose of subtracting the mean image from each image vector
is to be left with only the distinguishing features of each face, removing the
information that is common to all faces.
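Steps 1-4 translate almost directly into MATLAB; the sketch below assumes the M training images are already centred, cropped to the same size and stored in a cell array called trainImages (an illustrative name).

% Steps 1-4: build the mean-subtracted face vectors.
M = numel(trainImages);                        % number of training images
[h, w] = size(trainImages{1});
N2 = h * w;                                    % N^2, the length of each face vector

Gamma = zeros(N2, M);
for i = 1:M
    Gamma(:, i) = double(trainImages{i}(:));   % stack pixels into a column vector
end

Psi = mean(Gamma, 2);                          % average face vector
Phi = Gamma - Psi;                             % mean-subtracted face vectors, one per column
A   = Phi;                                     % A = [Phi_1 Phi_2 ... Phi_M], size N^2-by-M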

EIGENFACE DATABASE
5. Find the covariance matrix C = A A^T, where A = [Φ1 Φ2 ... ΦM] is an
N^2 x M matrix.
6. We now need to calculate the eigenvectors ui of C. But C is an N^2 x N^2
matrix, which is far too large to handle directly.
7. Instead, consider the matrix A^T A, which is only M x M. If we find the
eigenvectors of this matrix, it returns M eigenvectors, each of dimension
M x 1; let us call these eigenvectors vi. From the properties of matrices it
follows that ui = A vi. Since we have already found the vi, this means that
using them we can calculate the M largest eigenvectors of C = A A^T.
Remember that M << N^2, as M is simply the number of training images.
8. Find the best M eigenvectors of C = A A^T by using the relation ui = A vi.
Also keep in mind that ||ui|| = 1.
9. Select the best K of these M eigenvectors; the selection of these
eigenvectors is done heuristically.
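Continuing the sketch from the previous slide, steps 5-9 can be carried out entirely with the small M-by-M matrix A^T A, so the huge N^2-by-N^2 covariance matrix is never formed; the value K = 20 is only an illustrative heuristic choice.

% Steps 5-9: eigenfaces via the small M-by-M matrix A'*A.
AtA = A' * A;                                  % M-by-M surrogate for C = A*A'
[V, D] = eig(AtA);                             % eigenvectors v_i of A'*A
[~, order] = sort(diag(D), 'descend');         % largest eigenvalues first

K = 20;                                        % number of eigenfaces to keep (heuristic, K < M)
V = V(:, order(1:K));                          % the K best eigenvectors v_i
eigenfaces = A * V;                            % u_i = A*v_i are eigenvectors of C = A*A'
eigenfaces = eigenfaces ./ vecnorm(eigenfaces);% enforce ||u_i|| = 1

% Weights of every training face in the eigenface basis.
trainWeights = eigenfaces' * Phi;              % K-by-M projection coefficients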

EIGENFACE DATABASE
[Figures: the database of faces and the corresponding eigenfaces of the database]

Face Recognition
If an unknown probe face Γ is to be recognized, then:
1. We normalize the incoming probe as: Φ = Γ - Ψ.
2. We then project this normalized probe onto the Eigenspace (the
collection of eigenvectors/eigenfaces) and find the weights
wi = ui^T Φ, for i = 1, ..., K.
3. The normalized probe can then simply be represented as:
Φ ≈ w1 u1 + w2 u2 + ... + wK uK.
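A minimal sketch of these three steps, reusing Psi, eigenfaces and trainWeights from the earlier sketches and assuming trainLabels is a hypothetical cell array of person names:

% Recognize an unknown probe face against the eigenface database.
probeVec = double(probeImage(:));              % probe image as an N^2-by-1 vector
phiProbe = probeVec - Psi;                     % 1. subtract the mean face

wProbe = eigenfaces' * phiProbe;               % 2. project onto the eigenspace (K weights)

% 3. compare with the stored training weights and pick the nearest face.
dists = vecnorm(trainWeights - wProbe);        % Euclidean distance to each training face
[minDist, bestMatch] = min(dists);
fprintf('Closest match: %s (distance %.2f)\n', trainLabels{bestMatch}, minDist);

A distance threshold (not shown) would decide whether the probe belongs to the database at all.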

Face Recognition
Face as a Linear Combination of Database
Faces

FUTURE SCOPE
Live as well as previously recorded video can be
used for person identification.
Even if the face is blurred, the person can be
identified with the help of image enhancement
techniques.
With the use of high-end cameras, the speed of
recognition can be further increased.
Advanced identification security.

References
1. International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume 2, Issue 5, November 2012.
2. Scott Tan Yeh Ping (yptan@stanford.edu), Chun Hui Weng (cwengc@stanford.edu) and Boonping Lau (bppower@stanford.edu), "Face Detection through Template Matching and Color Segmentation", EE 368 Final Project, Stanford University.
3. M. Sudarshan, P. Ganga Mohan and Suryakanth V. Gangashetty, Speech and Vision Lab, International Institute of Information Technology, Hyderabad, Andhra Pradesh, India - 50032.
4. H. C. Vijay Lakshmi and S. PatilKulakarni, "Segmentation Algorithm for Multiple Face Detection in Color Images with Skin Tone Regions using Color Spaces and Edge Detection Techniques".
5. Amir Faizi, "Robust Face Detection Using Template Matching Algorithm", University of Toronto, 2008.
6. Jigar Gada, Jaimin Kakadiya, Darshit Morakhia (U.G. Students) and Vishakha Kelkar (Assistant Professor), "Smart Surveillance System", Electronics and Telecommunication Department, DJSCOE, Vile-Parle (W), Mumbai.

Thank You..
