Prepared for:
Submission of Assignment Work (3) on Applied Mathematics
Prepared by:
Darshan Venkatrayappa
darsh.venkat@gmail.com
Sharib Ali
ali.sharib2002@gmail.com
Submitted to:
Desire Sidebe
Dro-desire.sidebe@u-bourgogne.fr
17th November 2010
Contents
Acronyms
Chapter 2. Normalization
4.1 Analysis
4.2 Algorithm
4.3 Result
4.3.1 Accuracy
4.3.2 Limitations
4.3.3 Scope of Improvement
REFERENCES
Acronyms
SVD = Singular Value Decomposition
EV = Eigenvector
Face recognition finds application in person identification, human-computer interaction, and security systems.
Its history goes back to the start of computer vision. Several other approaches, such as iris and fingerprint recognition, have been used in these applications; face recognition, however, has been researched more widely. It has always remained a major focus of research because of its non-invasive nature and because faces are people's primary means of person identification. The most famous early example of a face recognition system is due to Kohonen. Kohonen's system was not a practical success, however, because of the need for precise alignment and normalization.
In principal component analysis (PCA) for face recognition, we train on the faces and create a database of sample images of each person. We then find the covariance matrix of this training set. The eigenvectors obtained correspond to the principal components, and each yields an "eigenface", which is ghost-like in appearance. Each face in the training set is a linear combination of these eigenvectors. When we take a test image for recognition, we follow the same normalization steps and then project the test image into the eigenspace spanned by the eigenfaces. Finally, we calculate the minimum Euclidean distance between the projections of the training images and the projection of the test image. This is explained in the discussion that follows.
The main purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) that is needed to describe the data economically. This applies here because there is a strong correlation between the observed variables. A minimal sketch of this reduction is given below.
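The following MATLAB sketch illustrates this reduction on a generic data matrix; all names here (X, k, and so on) are illustrative assumptions, not part of the report.

    X0 = X - mean(X, 1);                 % center the observed variables (X: n x p)
    C = cov(X0);                         % p x p covariance matrix
    [U, L] = eig(C);                     % eigenvectors = principal directions
    [~, idx] = sort(diag(L), 'descend'); % strongest variance first
    k = 10;                              % assumed intrinsic dimensionality to keep
    W = U(:, idx(1:k));                  % basis of the reduced feature space
    Y = X0 * W;                          % n x k economical description of the data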
1.2 Objective
i. To study and implement the concepts of SVD and PCA.
ii. To decrease the dimension of the feature space.
iii. To make the computation fast.
iv. To exploit the strong correlation between the observed variables.
v. To see how the eigenvectors give the principal components of the image to be recognized from a trained set of images.
vi. To make an effort towards improving the results obtained.
Chapter 2. Normalization
2.1 Why is Normalization Required?
The normalization steps adjust for the scaling, orientation, and location variations of all the images with respect to some predefined features. Basically, all the images are mapped to a 64x64 window (in our case) using some of the important facial features. In our case we have taken: 1. Left eye center, 2. Right eye center, 3. Tip of nose, 4. Left mouth corner, and 5. Right mouth corner.
    FBar = A f_i + b        (1)

*which gives the affine transformation A and b used to map the images to the 64x64 window.
Algorithm:
1. We take the predefined coordinates f_p:

    f_p = [ 14 20     (left eye center)
            50 20     (right eye center)
            34 34     (tip of nose)
            16 50     (left mouth corner)
            48 50 ]   (right mouth corner)
2. We take all the feature files f_i, starting from the first feature file, and compute the transformation of equation (1).
3. We update FBar with the SVD-based least-squares solution:
    FBar = Singular_Value_Decomposition(FBar, fp);
4. We take the average of all the FBAR values calculated and update FBAR with this average.
5. We compare the previous result with the current one; if the change is below the threshold error of 10^-6, we exit the loop, and the final converged FBAR gives the affine transformation matrix A and vector b (see the sketch after this list).
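A rough MATLAB sketch of this loop, assuming the feature files are already loaded into a cell array f, with f{i} a 5x2 matrix of (x, y) points (these names are assumptions):

    fp = [14 20; 50 20; 34 34; 16 50; 48 50];  % predefined coordinates of step 1
    FBar = fp;                                 % initial estimate of the mean shape
    err = Inf;
    while err > 1e-6                           % threshold of step 5
        acc = zeros(size(FBar));
        for i = 1:numel(f)
            M = [f{i}, ones(5, 1)];            % design matrix [x y 1]
            T = M \ FBar;                      % least-squares affine fit (the report
                                               % solves this with an explicit SVD)
            acc = acc + M * T;                 % features mapped towards FBar
        end
        FBarNew = acc / numel(f);              % step 4: average of the mapped sets
        err = norm(FBarNew - FBar);            % change since the last iteration
        FBar = FBarNew;
    end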
Having obtained the matrix A and vector b, we can easily map the pixels of the 384x384 image into the 64x64 window with the following algorithm.
1. We now use FBAR to get the values of A and b. The first four values give the matrix A and the last two give the vector b.

    x = (V * pinv(S) * U') * F_BAR;   % least squares via the SVD pseudo-inverse
    b = x(5:6, 1);                    % last two entries: translation vector b
    a = x(1:4, 1);                    % first four entries: matrix A
    A = zeros(2, 2);
    A(:, 1) = a(1:2, 1);
    A(:, 2) = a(3:4, 1);
2. Since we know the transformation matrix, we map each pixel of the 64x64 window into the 384x384 image using

    F_384x384 = A^-1 (F_64x64 - b)
3. We extract this window image, i.e. the transformed 64x64 image (a warping sketch is given below).
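A minimal warping sketch in MATLAB, assuming img holds the 384x384 source image and A, b come from the previous step (names assumed; the row/column versus x/y convention is glossed over here and may need swapping in practice):

    win = zeros(64, 64);
    for u = 1:64
        for v = 1:64
            q = A \ ([u; v] - b);          % position in the 384x384 image
            x = round(q(1));
            y = round(q(2));
            if x >= 1 && x <= 384 && y >= 1 && y <= 384
                win(u, v) = img(x, y);     % nearest-neighbour sampling
            end
        end
    end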
2.2 Limitations
Non-uniform illumination.
Images that do not show the correct positions of the features used for the affine transform may fail to converge or give a bad result.
PCA computes the basis of the space represented by the training vectors. These basis vectors, the eigenvectors computed by PCA, point in the directions of largest variance of the training vectors. Each eigenface can be viewed as a feature. Because these eigenvectors have a face-like appearance, they are called eigenfaces; they are sometimes also called ghost images because of their weird appearance. When a particular face is projected onto the face space, its vector in the face space describes the importance of each of those features in the face. The face is expressed in the face space by its eigenface coefficients (or weights).
1. We take the mean of all the 57 training faces and subtract it from each face. We concatenate all the mean-subtracted faces into a single matrix, which we call D.
        | I_1(1,1)    . . .    I_1(64,64)  |
        |     .                    .       |
    D = |     .                    .       |
        |     .                    .       |
        | I_57(1,1)   . . .    I_57(64,64) |
5. Each face in the training set, X_i, can be represented as a linear combination of these eigenvectors (see the sketch below):

    X_i = sum_j w_ij u_j

where u_j are the principal components (eigenfaces) and X_i is the i-th training image.
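A compact MATLAB sketch of steps 1-5 using the small 57x57 surrogate covariance matrix; the name faces (a 57x4096 matrix with one vectorized 64x64 face per row) and the other variables are assumptions:

    m = mean(faces, 1);                  % mean of the 57 training faces
    D = faces - m;                       % subtract the mean from each face
    Lmat = D * D';                       % 57x57 instead of 4096x4096
    [V, lam] = eig(Lmat);
    [~, idx] = sort(diag(lam), 'descend');
    V = V(:, idx);
    U = D' * V;                          % columns: the 4096-dim eigenfaces
    U = U ./ sqrt(sum(U.^2, 1));         % normalize each "ghost image"
    W = D * U;                           % weights: face i ~ m + W(i,:) * U'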
[Figure: plotted values (scale x10^7) against the training-image index, 0-60.]
Euclidean Distance: The Euclidean distance is probably the most widely used distance metric. It is a special case of a general class of norms and is given as:

    ||x - y|| = sqrt( sum_i (x_i - y_i)^2 )
4. The position of the minimum distance between them gives the nearly identical face in the eigenspace.
5. We read the corresponding image, which is the match for the test face (a recognition sketch is given below).
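A recognition sketch in MATLAB, reusing the assumed names m, U, W, and faces from the eigenface sketch above; test (a normalized 64x64 test image) is likewise an assumption:

    t = double(test(:))' - m;            % mean-subtracted test face, 1 x 4096
    wt = t * U;                          % project into the eigenspace
    d = sqrt(sum((W - wt).^2, 2));       % Euclidean distance to each training face
    [~, best] = min(d);                  % position of the minimum distance
    match = reshape(faces(best, :), 64, 64);   % the nearly identical face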
4.3 Result
The result with the 30 test images was not 100% accurate, but it gave good matches for almost 28 of them.
4.3.1 Accuracy
With almost 28 of the 30 test images matched correctly, the accuracy is roughly 93%.
4.3.2 Limitations
The images of a face, and in particular the faces in the training set, should lie near the face space.
Each image should be highly correlated with itself.
4.3.3 Scope of Improvement
Further changes to the algorithm may lead to better accuracy. In addition, we can add more distinctive features to the training set of faces, such as the length of the forehead or the position of the chin. Facial recognition is still an ongoing research topic for computer vision scientists.
[Figures: two 64x64 test images shown alongside their corresponding matched training images.]
Chapter 5. Conclusion
1. We must choose some features of the sample faces and create a database of the images. In our case, we have taken 57 face images.
2. The affine transform is used to find the variables that bring all the images to the same orientation, scale, and feature positions.
3. The chosen features should be mapped into the window in which we take the face; it should include most of the face rather than the body.
5. Principal component analysis can be used both to decrease the computational complexity and to measure the covariance between the images.
6. PCA reduces the complexity of the computation when a large number of images is taken.
7. The principal components, i.e. the eigenvectors of this covariance matrix, when concatenated and reshaped, give the eigenfaces.
8. These eigenfaces are the ghostly faces of the training set and together form a face space.
9. For each new (test) face, 30 in our case, we need to calculate its pattern vector.
10. The distance from it to the eigenfaces in the eigenspace must be minimal.
11. This distance gives the location of the image in the eigenspace, which is taken as the recognized face.
REFERENCES
[1] Matthew Turk and Alex Pentland, "Eigenfaces for Recognition," MIT Media Laboratory.
[2] Desire Sidebe, "Face Recognition Using PCA," Assignment 3 sheet, UB.
[3] Wikipedia.