
JOURNAL OF COMPUTER SCIENCE AND ENGINEERING, VOLUME 18, ISSUE 1, APRIL 2013

MACE Correlation Filter Algorithm for Face Verification in Surveillance Scenario


Omidiora E. O., Olabiyisi S. O., Ojo J. A., Abayomi-Alli A., Akingboye A.Y., Izilein F. and Ezomo P. I.
Abstract: Facial recognition is a vividly researched area of computer vision, pattern recognition and, more precisely, biometrics, and has become an important part of everyday life due to the increasing demand for security and law enforcement applications in real-life surveillance scenarios. This work evaluates the performance of the MACE correlation filter algorithm, which has been used successfully in automatic target recognition and has exhibited excellent face recognition performance on many databases. The Scface database mimics a genuinely complex imaging environment, which makes it a difficult performance benchmark for any face recognition algorithm. Face matching experiments were carried out using a MACE correlation filter algorithm implemented in Matlab for the purpose of the study. The results showed an overall accuracy of 0.8678, a Matthews correlation coefficient of 0.2880, and positive and negative predictive values of 0.3592 and 0.9270 respectively. The algorithm performed best on frontal infrared images taken at a distance of 1 m from the camera, with a recognition rate of 60%. The algorithm is therefore robust for identification in low-illumination conditions; future research is expected to improve its robustness to compression artefacts and noisy images, which may be important in forensic and law enforcement applications.

Index Terms: Correlation Algorithms, Face recognition, MACE filter, Scface Database, Surveillance.

1 INTRODUCTION

Facial recognition is a very important aspect of biometric authentication and/or identification [1], [2], [3]. Even though facial recognition is very useful in diverse application domains [4], it isn't without its problems and challenges [3], owing to the variability of the face images of the same subject with expression, pose angle, illumination, occlusion, and low resolution. Combinations of these factors affect the performance of a facial recognition system in various ways [1], [4], [5], [6], [7]; even a big smile can render a face recognition system less effective [7]. When performing recognition under a surveillance scenario, one or more combinations of these factors come into play, making recognition more difficult. It is important to put in place an evaluation framework that enables the performance of a face recognition system implementing a particular algorithm to be evaluated before deployment, so as to have a clue about its eventual performance and to determine whether it will be acceptable for its intended purpose.

E.O. Omidiora is with the Department of Computer Science and Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria. S.O. Olabiyisi is with the Department of Computer Science and Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria. J.A. Ojo is with the Department of Electronic and Electrical Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria. A. Abayomi-Alli is with the Department of Computer Science, Federal University of Agriculture, Abeokuta, Nigeria. A.Y. Akingboye is with the Department of Electrical and Computer Engineering, Igbinedion University Okada, Benin City, Nigeria. F. Izilein is with the Department of Electrical and Computer Engineering, Igbinedion University Okada, Benin City, Nigeria. P.I. Ezomo is with the Department of Electrical and Computer Engineering, Igbinedion University Okada, Benin City, Nigeria.

One advantage that face recognition has over other forms of biometric identification and authentication is that it does not require active subject participation [1]; by the same token, it is difficult, if not impossible, to obtain user cooperation. This departure from the easy scenario causes the face recognition system to experience severe problems [3] such as pose variation, illumination conditions, scale variability, aging, glasses, moustaches, beards, low-quality image acquisition, partially occluded faces, etc. Currently, many security applications use human observers to recognize the face of an individual. In some applications, face recognition systems are used in conjunction with limited human intervention. For autonomous operation, it is highly desirable that face recognition systems provide high reliability and accuracy under multifarious scenarios [8]. This is a difficult feat to achieve, partly because most face recognition systems are not designed and tested under the surveillance conditions in which they are usually deployed [1], [3], [9]. To overcome this problem it is important to use manually generated images that mimic those obtainable from real-life surveillance conditions, so that the evaluation metrics obtained are close to real-life scenarios [2]. This has been achieved by Grgic, Delac and Grgic in [2]. Most approaches to face recognition operate in the image domain, whereas it is believed that there are more advantages to working directly in the spatial frequency domain [10]. By going to the spatial frequency domain, image information gets distributed across frequencies, providing tolerance to reasonable deviations and graceful degradation against distortions to images (e.g., occlusions) in the spatial domain. Correlation filter technology [10] is a basic tool for frequency-domain image processing. In correlation filter methods, normal variations in authentic training images can be accommodated by designing a frequency-domain array (called a correlation filter) that captures the consistent part of the training images while de-emphasizing the inconsistent parts (or frequencies). Object recognition is performed by cross-correlating an input image with a designed correlation filter using fast Fourier transforms (FFTs) [10]. This research therefore set out to evaluate the performance of the correlation filter algorithm, specifically the MACE filter, on the Scface surveillance database. The correlation filter algorithm was chosen mainly because research has shown it to exhibit better tolerance to noise and illumination variations than many space-domain methods [11], [12]. The subsequent sections of the paper present an introduction to face recognition and its challenges, an overview of correlation filter algorithms, and a more detailed discussion of the Minimum Average Correlation Energy (MACE) filter. Finally, the results of the verification experiments with the MACE filter on the Scface database are shown and analyzed.

2013 JCSE www.Journalcse.co.uk
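The frequency-domain cross-correlation operation described above can be sketched as follows. This is an illustrative NumPy example, not the study's Matlab code; all names are hypothetical:

```python
import numpy as np

def cross_correlate_fft(image, template):
    """Circular cross-correlation of two equal-sized 2-D arrays via FFTs."""
    F = np.fft.fft2(image)
    H = np.fft.fft2(template)
    # Multiplying the input spectrum by the conjugate of the filter spectrum
    # is equivalent to cross-correlation in the spatial domain.
    return np.real(np.fft.ifft2(F * np.conj(H)))

# A correlation peak appears at the template's (circular) shift in the image.
rng = np.random.default_rng(0)
template = rng.random((64, 64))
image = np.roll(template, shift=(5, 9), axis=(0, 1))  # shifted by (5, 9)
plane = cross_correlate_fft(image, template)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(plane), plane.shape))
print(peak)  # (5, 9)
```

The peak location recovers the shift of the object, which is why correlation-based recognition is shift-invariant.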

2 FACE RECOGNITION

The subject of face recognition is as old as computer vision [13]; dating as far back as the 1960s, the first semi-automated system for face recognition required the administrator to locate features (such as eyes, ears, nose and mouth) on the photograph before it calculated distances and ratios to a common reference point [14]. Despite the fact that other methods of identification (such as fingerprints or iris scans) can be more accurate, face recognition has always remained a major focus of research because of its non-invasive nature and because it is people's primary method of person identification. Face recognition is one of the most successful applications of image analysis and understanding, and it has received significant attention in recent years. There are at least two reasons for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research [15]. Perhaps the most famous early example of a face recognition system is due to Kohonen et al. [16], who demonstrated that a simple neural net could perform face recognition for aligned and normalized face images. The type of network he employed computed a face description by approximating the eigenvectors of the face image's autocorrelation matrix; these eigenvectors are now known as eigenfaces. Kohonen's system was not a practical success, however, because of the need for precise alignment and normalization. In the following years many researchers tried face recognition schemes based on edges, inter-feature distances, and other neural net approaches. While several were successful on small databases of aligned images, none successfully addressed the more realistic problem of large databases where the location and scale of the face is unknown.

2.1 Face Recognition in 2D

Face recognition has been well studied using 2D still images for over a decade [17], [18], [19]. In 2D still-image-based face recognition systems, a snapshot of a user is acquired and compared with a gallery of snapshots to establish the person's identity. In this procedure, the user is expected to be cooperative and to provide a frontal face image under uniform lighting conditions with a simple background, to enable the capture and segmentation of a high-quality face image. However, it is now well known that small variations in pose and lighting can drastically degrade the performance of single-shot 2D-image-based face recognition systems [20]. 2D face recognition is usually categorized according to the number of images used in matching, as shown in Table 1. Some of the well-known algorithms for 2D face recognition are based on Principal Component Analysis (PCA) [17], [18], Linear Discriminant Analysis (LDA) [21], the Elastic Graph Bunch Model (EGBM) [22], and correlation-based matching [23].

Table 1. Face Recognition in 2D Domain [19].

                                Gallery
  Probe                 Single still image    Many still images
  Single still image    one-to-one            many-to-one
  Many still images     one-to-many           many-to-many

More efforts have been devoted to 2D face recognition because of the availability of commodity 2D cameras and deployment opportunities in many security scenarios. However, 2D face recognition is susceptible to a variety of factors encountered in practice, such as pose and lighting variations, expression variations, age variations, and facial occlusions. Local-feature-based recognition has been proposed to overcome the global variations from pose and lighting changes [24], [25]. The use of multiple frames with temporal coherence in a video [26], [27] and 3D face models [28], [29] have also been proposed to improve the recognition rate.

2.2 Face Recognition in 3D

3D face recognition methods use the surface geometry of the face [29]. Unlike 2D face recognition, 3D face recognition is robust against pose and lighting variations due to the invariance of the 3D shape to these variations [30]. 3D face recognition is a modality of facial recognition in which the three-dimensional geometry of the human face is used. It has been shown that 3D face recognition methods can achieve significantly higher accuracy than their 2D counterparts. 3D face recognition achieves better accuracy by measuring the geometry of rigid features on the face. This avoids such pitfalls of 2D face recognition algorithms as changes in lighting, different facial expressions, occlusion and head orientation. Another approach is to use the 3D model to improve the accuracy of traditional image-based recognition by transforming the head into a known view.

Table 2. Face Recognition in 3D Domain [19].

                        Gallery
  Probe           Single still image    Many still images
  2D image        2D-to-2D              2D-to-3D
  2.5D images     2D-to-3D              3D-to-3D

The main technological limitation of 3D face recognition methods is the acquisition of 3D images, which usually requires a range camera. The drawbacks of 3D face recognition are the large size of the 3D model, which entails a high computational cost in matching, and the expensive price of 3D imaging sensors [30]. 3D face recognition is still an active research field, though several vendors offer commercial solutions.

2.3 Face Recognition Problems

In biometrics, the distortion of biometric template data comes from two main sources: intra-user variability and changes in acquisition conditions. Face recognition therefore faces issues inherent to the problem definition, environmental conditions, acquisition conditions, and hardware constraints.

2.3.1 Intra-user Variability

In face recognition, human face appearance has potentially very large intra-subject variation due to 3-D head pose, facial expression, occlusion by other objects or accessories (e.g. sunglasses, scarf, etc.), facial hair, aging, etc. Although there is also the problem of inter-user variability, those variations are small due to the similarity of individual appearances. Zhao and Chellappa in [31] showed that the difference in face images of the same person due to severe lighting variation can be more significant than the difference in face images of different persons. A study by Li and Jain showed that pose variation is one of the major sources of performance degradation in face recognition [20]. The face is a 3D object that appears different depending on the direction from which it is imaged. Thus, images of the same subject taken from two different viewpoints (intra-user variation) may appear more different than images of two different subjects taken from the same viewpoint (inter-user variation) [31].

2.3.2 Inter-user Variability

The accuracy of facial recognition systems drops significantly under certain enrolment, verification, and identification conditions. For example, to enrol successfully, users must face the acquisition camera and cannot be acquired from sharp horizontal or vertical angles. The user's face must be lit evenly, preferably from the front. These are not problems for applications such as ID cards, in which lighting, angle, and distance from the camera can be controlled, but they are for verification and identification applications where environmental factors vary significantly for the facial recognition system (FRS), as when users are enrolled in one location and verified in another. An FRS is especially ineffective when the environmental conditions of the database image differ from those of the test image. Factors such as direct and ambient lighting, shadows and glare, camera quality, distance from the camera, angle of acquisition (camera angle), and background composition can dramatically reduce performance accuracy. Reduced accuracy is most strongly reflected when terrorists or fraudsters beat identification systems because of different acquisition conditions of the images, or when authorized users are incorrectly rejected by the authentication systems [30].

However, results reported in several studies [32], [33], [34] showed that the minimum average correlation energy (MACE) filter, in conjunction with the peak-to-sidelobe ratio (PSR) metric, provides a recognition algorithm that is tolerant even to unseen illumination variations [12].

3 CORRELATION FILTERS FOR FACE VERIFICATION

Correlation filters have been utilized successfully in automatic target recognition, where they were first used for object detection [35]. Recently they have been applied to face verification and recognition [36], [37], [38]. The MACE filter proposed by Mahalanobis [35] was derived from the Synthetic Discriminant Function (SDF) proposed by Hester and Casasent [39], which in turn was built to resolve the shortcomings of the Matched Spatial Filter (MSF). Research has shown that, among the correlation filter family, the MACE filter yields the best results in testing. Thus, this study implements the MACE filter and tests it on the Scface surveillance database [2]. Essentially, the MACE filter is the solution of a constrained optimization problem that minimizes the average correlation energy while satisfying the correlation peak constraints. As a result, the output correlation plane is close to zero everywhere except at the locations of the trained objects, where a peak is produced [40].

3.1 Minimum Average Correlation Energy (MACE) Filter

Elias Rentzeperis in [40] gives an exhaustive survey of the correlation filter family. The following subsections discuss the MACE correlation filter problem and its algorithm as presented in [40]. Using Parseval's relationship, the optimization problem in the frequency domain is

  \min_H H^+ D H   subject to   X^+ H = d                    (1)

where X is a matrix whose columns are the 2D-FFT coefficients of the training images, D is a square diagonal matrix that contains along its diagonal the average power spectrum of the training images, H is the vector of 2D-FFT coefficients of the resulting filter, and d is the vector of prescribed correlation peak values.

3.2 Solution of the MACE Constrained Optimization Problem

From [40], using Lagrange multipliers, the function to be minimized becomes

  E(H) = H^+ D H - 2\lambda_1 (H^+ x_1 - d_1) - \dots - 2\lambda_N (H^+ x_N - d_N)      (2)

In order to minimize the function we set the gradient of E with respect to H equal to the zero vector:

  \nabla_H E = 0  \Rightarrow  H = \lambda_1 D^{-1} x_1 + \dots + \lambda_N D^{-1} x_N      (3)

Since the matrix D is diagonal by definition (with positive diagonal entries), it is invertible:

  D^{-1} = \mathrm{diag}(1/D_{11}, \dots, 1/D_{MM})            (4)

Defining \lambda as [\lambda_1, \dots, \lambda_N]^T we get

  H = D^{-1} X \lambda                                        (5)

Substituting (5) into X^+ H = d gives

  X^+ D^{-1} X \lambda = d                                    (6)

Solving for \lambda,

  \lambda = (X^+ D^{-1} X)^{-1} d                             (7)

Substituting (7) into (5) we get the optimal solution

  H = D^{-1} X (X^+ D^{-1} X)^{-1} d                          (8)

3.3 MACE Algorithm

Equation (8) gives the MACE filter. The correlation peak at the origin is set by the constraint X^+ H = d, where the entries of d corresponding to the authentic class are set to one and the rest to zero. For each class a single MACE filter is synthesized. Once the MACE filter H(u, v) has been determined, the input test image f(x, y) is cross-correlated with it in the following manner:

  c(x, y) = \mathrm{FFT2}^{-1}\{ \mathrm{FFT2}\{ f(x, y) \} \cdot H^*(u, v) \}      (9)

where the test image is first transformed to the frequency domain and multiplied element-wise by the conjugate of the MACE filter; this operation is equivalent to cross-correlation with the MACE filter. The output is then transformed back to the spatial domain.

Fig. 1. Correlation filter block diagram, showing a filter designed on N images of class i. When a test image from class i is input to the system, the correlation output yields a sharp peak [12].
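The synthesis in equation (8) and the correlation in equation (9) can be sketched in a few lines of NumPy. This is an illustrative sketch under simplified assumptions (unit peak constraint d = 1 for every training image, function names hypothetical), not the study's Matlab implementation:

```python
import numpy as np

def mace_filter(train_images, d=None):
    """MACE synthesis, eq. (8): H = D^-1 X (X^+ D^-1 X)^-1 d.
    train_images: (N, H, W) real array; returns the filter in the frequency domain."""
    N, nrows, ncols = train_images.shape
    # Columns of X are the lexicographically ordered 2D-FFTs of the images.
    X = np.stack([np.fft.fft2(im).ravel() for im in train_images], axis=1)
    # D is diagonal: average power spectrum, stored here as a vector.
    D = np.mean(np.abs(X) ** 2, axis=1)
    if d is None:
        d = np.ones(N)                        # unit correlation peaks at origin
    Dinv_X = X / D[:, None]                   # D^-1 X without forming the matrix
    lam = np.linalg.solve(X.conj().T @ Dinv_X, d)   # eq. (7): solve for lambda
    return (Dinv_X @ lam).reshape(nrows, ncols)

def correlate(test_image, h):
    """Eq. (9): correlation plane of a test image with the MACE filter."""
    return np.real(np.fft.ifft2(np.fft.fft2(test_image) * np.conj(h)))

rng = np.random.default_rng(1)
train = rng.random((3, 16, 16))
h = mace_filter(train)
plane = correlate(train[0], h)
peak = plane[0, 0] * train[0].size   # undo NumPy ifft2's 1/(H*W) scaling
print(round(peak, 6))  # 1.0 (the constrained unit peak at the origin)
```

For a training image the constrained peak appears exactly at the origin, while the rest of the plane stays near zero, which is the MACE property described above.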


The peak-to-sidelobe ratio (PSR) is a metric that measures the sharpness of the peak in the correlation plane. To estimate the PSR, the peak is located first; then the mean and standard deviation of the 20x20 sidelobe region, excluding a 5x5 central mask centred at the peak, are computed. The PSR is then calculated as follows:

  \mathrm{PSR} = \frac{\mathrm{peak} - \mathrm{mean}_{\mathrm{sidelobe}}}{\sigma_{\mathrm{sidelobe}}}      (10)

The class whose MACE filter yields the highest PSR is considered the most likely candidate [40]. The sidelobe region used in equation (10) is depicted in Fig. 2.

Fig. 2. Region for estimating the peak-to-sidelobe ratio (PSR) [12].
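A minimal sketch of the PSR computation in equation (10), assuming NumPy and the 20x20 sidelobe / 5x5 mask sizes quoted above (border handling is simplified and the function name is hypothetical):

```python
import numpy as np

def psr(plane):
    """Peak-to-sidelobe ratio of a correlation plane, eq. (10)."""
    r0, c0 = np.unravel_index(np.argmax(plane), plane.shape)
    peak = plane[r0, c0]
    # 20x20 region around the peak (clamped at the borders).
    region = plane[max(r0 - 10, 0):r0 + 10, max(c0 - 10, 0):c0 + 10].copy()
    # Exclude the 5x5 central mask around the peak from the statistics.
    mask = np.ones_like(region, dtype=bool)
    pr, pc = r0 - max(r0 - 10, 0), c0 - max(c0 - 10, 0)
    mask[max(pr - 2, 0):pr + 3, max(pc - 2, 0):pc + 3] = False
    side = region[mask]
    return (peak - side.mean()) / side.std()

rng = np.random.default_rng(2)
plane = rng.normal(0.0, 0.01, size=(64, 64))
plane[32, 32] = 1.0                 # a sharp authentic-class peak
print(psr(plane) > 30)  # True: a sharp peak yields a large PSR
```

A sharp, isolated peak gives a large PSR, while an impostor's diffuse correlation plane gives a small one, which is what makes PSR a useful match score.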

4 EXPERIMENT AND RESULTS OBTAINED


4.1 Scface Database

The Scface database mimics real-world conditions as closely as possible. We believe that using this database to evaluate the performance of the MACE correlation filter algorithm shows the algorithm's true strength for real-world indoor face recognition using standard video surveillance equipment. The database collected images from 130 subjects, a total of 4,160 images, taken in uncontrolled lighting (including infrared night-vision images taken in the dark) with five surveillance video cameras of different quality. The subjects' images were taken at three distinct distances from the cameras, with outdoor (sun) light, coming through a window on one side of the room, as the only source of illumination. The five surveillance cameras were installed in one room at a height of 2.25 m; two of the five could also record in infrared (IR) night-vision mode, so IR images were recorded as well. A sixth camera was installed in a separate, darkened room for capturing IR mugshots. It was installed at a fixed position and focused on a chair on which participants sat during IR mugshot capture. The IR part of the database is important since there are research efforts in this direction. For database details such as image naming conventions, the set-up of the cameras, camera quality details and the experimental protocol, see the Scface database paper [2].

4.2 Experimental Protocol

Face matching experiments were carried out on the Scface surveillance database [2] using the MACE correlation filter algorithm, which was implemented in Matlab for the purpose of the study. The study introduced an evaluation protocol quite different from that proposed by Grgic, Delac and Grgic in [2], but one which we believe provides a true test of the strength of the algorithm on a difficult surveillance database like Scface. A random selection of 10 subjects was used for the evaluation. For each subject, 17 images were collected from the Scface database into our own gallery, for a total of 170 images, and 14 probe images were used per subject. The program performs identification by comparing each probe image with the images in every class (a class represents a subject in the gallery) and gives a score for every class; hence, for every subject, the 14 probe images yield 10 scores each. The program outputs similarity scores between the probe image and the gallery images in the database, and identifies based on the highest score. All the scores obtained are presented in tables, with one table containing the scores obtained from performing identification on a given subject. This produces a 10-by-14 matrix for each subject, with each row representing a subject and each column representing a given probe image. An extra row at the bottom of each table contains 1s and 0s: a one (1) marks an instance where the algorithm identifies correctly (a true positive, TP) and a zero (0) marks a false negative (FN).

4.3 Results Obtained

The results obtained from the face verification experiments, and the metrics for performance measurement, are presented in this subsection. Table 3 shows, as an example, the result of the verification experiment for subject one of the ten subjects.
The images from cam2_1, cam3_2, cam4_3 and cam7_2 returned the highest verification score for subject 1, and are thus true positives, marked with a one (1) below the table; a zero (0) represents a false negative. The aggregate confusion matrix shown in Table 4 presents the performance of the MACE filter algorithm in terms of the correctly recognized faces and the confused faces (faces recognized as a subject different from the actual person), while Table 5 presents the table of confusion for each subject as obtained from the experiments. The aggregate table of confusion, obtained by averaging all the true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), is presented in Table 6.
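The identification bookkeeping described above can be sketched as follows, using hypothetical random scores rather than the study's data: rows are the 10 gallery classes, columns are one subject's 14 probes, and the indicator row marks a true positive (1) when the highest-scoring class is the probe's true subject, else a false negative (0):

```python
import numpy as np

def indicator_row(scores, true_class):
    """scores: (10, 14) similarity matrix for one subject's probes.
    Returns the 1/0 TP-FN row appended below each per-subject table."""
    predicted = np.argmax(scores, axis=0)       # best-matching class per probe
    return (predicted == true_class).astype(int)

rng = np.random.default_rng(3)
scores = rng.random((10, 14))                   # hypothetical similarity scores
scores[0, :5] += 1.0                            # make class 0 win the first 5 probes
row = indicator_row(scores, true_class=0)
print(int(row[:5].sum()))  # 5
```

Summing the indicator row over a subject's 14 probes gives that subject's TP count in Table 5; the remaining probes are the FN count.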


Table 3. Result of the Verification Experiment for Subject One. Images from cam2_1, cam3_2, cam4_3 and cam7_2 returned the highest recognition score for probes of subject one, hence true positives; probes from L2, L4, R1, R3, cam1_2, cam2_3, cam4_1, cam5_2, cam6_1 and cam6_3 returned false negatives, as the images with the highest recognition scores were not of subject one.

Table 4. Overall Confusion Matrix from the Experiment. Results along the diagonal represent the total true positives (TP) for the respective subjects.

4.4 Performance Metrics

A Matlab program was written to compute the various metrics used for the evaluation of the system. The values of all the metrics obtained from the output of the program are presented in Table 7.

4.4.1 False Positive Rate

The false positive rate of 0.0745 implies that, on average, only a small fraction of the recognition outcomes wrongly predicted a particular subject for each of the subjects.

4.4.2 Sensitivity (True Positive Rate)

The sensitivity of 0.3643 means that the proportion of actual subjects predicted correctly by the algorithm is rather small; this, though, is not an absolute scale for judging the algorithm, as the results of the other metrics also play a role.

4.4.3 Accuracy

An accuracy of 0.8678 suggests great performance, but this can be misleading in isolation because the value can be high even if the recognition rate for certain classes is zero.

4.4.4 Specificity

A specificity as high as 0.9255 suggests the converse of the false positive rate of 0.0745: a large proportion of actual negative outcomes were labelled as such with respect to each subject.


Table 5. Table of Confusion for All Subjects

  Subject   1    2    3    4    5    6    7    8    9    10
  TP        4    6    5    1    9    4    6    11   2    3
  FN        10   8    9    13   5    10   8    3    12   11
  FP        15   3    3    5    28   3    5    5    6    5
  TN        111  113  113  121  77   123  121  121  120  121

5 DISCUSSION

The ROC plot in Fig. 3 indicates a point above the random-guess line, so the algorithm's performance can be considered only satisfactory under surveillance conditions. This is simply because the images in the Scface database vary widely in conditions and quality; the algorithm performs far better with high-quality images.

Table 6. Aggregate Table of Confusion

  TP = 5.1    FP = 7.8    FN = 8.9    TN = 114

Table 7. Overall Performance Metrics

  Metric                             Value
  False positive rate                0.0745
  Sensitivity (true positive rate)   0.3643
  Accuracy                           0.8678
  Specificity                        0.9255
  Positive predictive value          0.3592
  Negative predictive value          0.9270
  False discovery rate               0.6408
  Matthews correlation coefficient   0.2880
  F1 score                           0.0748

Fig. 3. ROC curve showing the MACE correlation filter algorithm's performance on the Scface database.

It was observed that the algorithm failed to recognize any of the images taken at an angle of +22.5 degrees to the camera (i.e. image R1). This can be considered an anomaly, as it is inconsistent with the performance on images taken at other angles: there was at least 40% recognition for images taken at the three other angles. It was expected that images taken at extreme angles (such as +90 and -90 degrees) would have lower recognition rates, but the results of this experiment revealed the contrary: images taken at -90 degrees (L4) yielded a 40% recognition rate, the same as for images taken at -45 degrees (L2) and higher than the rate for images taken at +22.5 degrees. This somewhat confirms the MACE filter's invariance to pose variation. The results also revealed that the algorithm performed best for frontal infrared images taken at a distance of 1.00 m from the camera (i.e. cam6_3), with an overall recognition rate of 60%. This is consistent with the result reported in [2], where night-time experiments performed using frontal infrared mugshots yielded better results than daytime experiments.

4.4.5 Positive Predictive Value

A PPV of 0.3592 is somewhat low and could mean that the algorithm has a weak ability to positively predict each subject's identity.

4.4.6 Negative Predictive Value

The NPV is 0.9270, which means there is a high tendency for the algorithm to correctly declare a recognition outcome as not being a particular subject, for all subjects in the test sample. This is consistent with the specificity and FPR values.

4.4.7 False Discovery Rate

A value of 0.6408 implies that a rather large proportion of recognition outcomes are expected to be false positives.

4.4.8 Matthews Correlation Coefficient

A value of 0.2880 indicates a slightly-above-random identity recognition outcome.

4.4.9 F1 Score

An F1 score of 0.0748 is low but cannot be used as the sole criterion for measuring performance.
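The metrics in this section follow from the confusion counts by their standard definitions. As an illustrative sketch (applied here to subject 1's counts from Table 5, not to the aggregate values behind Table 7):

```python
import math

def metrics(tp, fn, fp, tn):
    """Standard binary-classification metrics from confusion counts."""
    return {
        "fpr": fp / (fp + tn),
        "sensitivity": tp / (tp + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "fdr": fp / (fp + tp),
        "mcc": (tp * tn - fp * fn)
               / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

m = metrics(tp=4, fn=10, fp=15, tn=111)   # subject 1's counts from Table 5
print(round(m["sensitivity"], 4))  # 0.2857
```

Note that PPV and FDR are complementary (PPV + FDR = 1), which is visible in Table 7 (0.3592 + 0.6408 = 1).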


Fig. 4. Bar chart of recognition rates. Images taken at an angle of +22.5 degrees to the right had a zero recognition rate, while frontal infrared images from cam6_3 had the highest recognition rate.

A total of 43% of all image classes had a recognition rate of 40%; this is the modal recognition rate, i.e. the majority of image types yielded 40% recognition. The full set of recognition rates is presented in Fig. 4.

6 CONCLUSION

This research evaluated the MACE correlation filter algorithm for face verification under a surveillance scenario using the Scface database. Although the MACE correlation filter has exhibited excellent face recognition performance with many databases, the experimental results in this study show that it performed only slightly above average, because the Scface database mimics a genuinely complex imaging environment, such as is expected when building an airport security system or any other national security system. The database is thus a difficult performance benchmark for any face recognition algorithm. It was discovered that the MACE filter algorithm is not effective for surveillance scenarios, as variations in pose angle and distance from the camera caused serious degradation in performance. This is further confirmed by [40], which reports that the success rate of the MACE filter degrades severely as the face strays from its ideal position. The algorithm will, however, be very effective for one-on-one face verification applications with high-quality probe and gallery pairs, since it performs better with high-quality frontal shots. For future study, more experiments need to be done to improve the MACE filter's value in the face verification process and to strengthen its robustness to pose and to noise such as extreme image compression and low-quality acquisition sensors.

ACKNOWLEDGMENT

The authors wish to thank Professors Mislav Grgic, Kresimir Delac and Sonja Grgic for the release of the SCface surveillance cameras face database.

REFERENCES

[1] J. D. Woodward, Jr., C. Horn, J. Gatune, and A. Thomas, "Biometrics: A Look at Facial Recognition," RAND Public Safety and Justice, Virginia, 2003.
[2] M. Grgic, K. Delac and S. Grgic, "SCface - surveillance cameras face database," Multimedia Tools and Applications, vol. 51, pp. 863-879, 2011, DOI 10.1007/s11042-009-0417-2.
[3] L. Torres, "Is There Any Hope for Face Recognition?" Technical University of Catalonia, Barcelona, Spain, 2004.
[4] I. Marques, "Face Recognition Algorithms," Proyecto Fin de Carrera, Universidad del Pais Vasco / Euskal Herriko Unibertsitatea, 2010.
[5] N. Ramanathan, R. Chellappa (Center for Automation Research and Dept. of Electrical & Computer Engineering, University of Maryland) and A. K. Roy-Chowdhury (Dept. of Electrical Engineering, University of California), "Facial Similarity Across Age, Disguise, Illumination and Pose," 2003.
[6] E. P. Kukula and S. J. Elliott, "Evaluation of a Facial Recognition Algorithm Across Three Illumination Conditions," Purdue University, 2003.
[7] Wikipedia, "Facial Recognition System," available online at http://en.wikipedia.org/wiki/Facial_recognition_system.
[8] G. Givens, J. R. Beveridge, B. A. Draper, P. Grother, and P. J. Phillips, "How features of the human face affect recognition: a statistical comparison of three face recognition algorithms," CVPR, vol. 2, 2004.
[9] R. Singh, M. Vatsa and A. Noore, "Recognizing Face Images with Disguise Variations," in Recent Advances in Face Recognition, K. Delac, M. Grgic and M. S. Bartlett (Eds.), ISBN 978-9537619-34-3, InTech, 2008.
[10] B. V. K. Vijaya Kumar, A. Mahalanobis and R. D. Juday, Correlation Pattern Recognition, University Press, UK, November 2005.
[11] B. V. K. Vijaya Kumar, M. Savvides, K. Venkataramani, and C. Xie, "Spatial Frequency Domain Image Processing for Biometric Recognition," Proceedings IEEE International Conference on Image Processing, pp. 53-56, 2002.
[12] M. Savvides, B. V. K. Vijaya Kumar, and P. K. Khosla, "Corefaces - Robust Shift-Invariant PCA-based Correlation Filter for Illumination-Tolerant Face Recognition," Proceedings IEEE Computer Vision and Pattern Recognition (CVPR), pp. 834-841, June 2004.
[13] A. W. Senior and R. M. Bolle, "Face Recognition and Its Applications," IBM T.J. Watson Research Center, 2002.
[14] "Face Recognition," available online at http://vismod.media.mit.edu/techreports/TR516/node7.html.
[15] W. Zhao, R. Chellappa, A. Rosenfeld and P. J. Phillips, "Face Recognition: A Literature Survey," ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, December 2003.
[16] Available online at http://www.biometricscatalog.org/NSTCSubcommittee.
[17] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, 1990.
[18] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 72-86, 1991.
[19] W. Zhao, R. Chellappa, A. Rosenfeld and P. J. Phillips, "Face Recognition: A Literature Survey," ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, December 2003.
[20] S. Z. Li and A. K. Jain (Eds.), Handbook of Face Recognition, Springer-Verlag, Secaucus, New York, USA, 2005.
[21] R. A. Fisher, "The statistical utilization of multiple measurements," Annals of Eugenics, vol. 8, pp. 376-386, 1938.
[22] L. Wiskott, J.-M. Fellous, N. Kruger, and C. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, 1997.
[23] J. P. Lewis, "Fast normalized cross correlation," Vision Interface, pp. 120-123, 1995.
[24] A. Hadid, T. Ahonen, and M. Pietikainen, "Face recognition with local binary patterns," in Proc. European Conference on Computer Vision, pp. 469-481, 2004.
[25] S. Arca, P. Campadelli and R. Lanzarotti, "A face recognition system based on local feature analysis," in Proc. Audio- and Video-Based Biometric Person Authentication, pp. 182-189, 2003.
[26] S. Zhou, V. Krueger, and R. Chellappa, "Probabilistic recognition of human faces from video," Computer Vision and Image Understanding, vol. 91, pp. 214-245, 2003.
[27] G. Aggarwal, A. K. Roy-Chowdhury, and R. Chellappa, "A system identification approach for video-based face recognition," in Proc. International Conference on Pattern Recognition, vol. 4, pp. 175-178, 2004.
[28] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063-1074, 2003.
[29] X. Lu, A. K. Jain, and D. Colbry, "Matching 2.5D face scans to 3D models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 31-43, 2006.
[30] U. Park, "Face Recognition: Face in Video, Age Invariance, and Facial Marks," unpublished Ph.D. dissertation, Department of Computer Science, Michigan State University, USA, 2009.
[31] W. Zhao and R. Chellappa, "Robust face recognition using symmetric shape-from-shading," Technical Report, Center for Automation Research, University of Maryland, 1999.
[32] M. Savvides and B. V. K. Vijaya Kumar, "Quad-phase minimum average correlation energy filters for reduced-memory illumination-tolerant face authentication," Audio- and Video-based Biometric Person Authentication (AVBPA), 2003.
[33] M. Savvides, B. V. K. Vijaya Kumar and P. K. Khosla, "Robust, Shift-Invariant Biometric Identification from Partial Face Images," Biometric Technologies for Human Identification (OR51), 2004.
[34] M. Savvides, B. V. K. Vijaya Kumar and P. K.
Khosla, "Cancellable Biometric Filters for Face Recognition", accepted for publication in International Conference in Pattern Recognition (ICPR) 2004. [35] AbhijitMahalanobis, B. V. K. Vijaya Kumar and David Casasent, Minimum Average Correlation Energy Filters, Journal of Applied Optics, Vol. 26, No 17 , 1987. [36] MariosSavvides, B. V. K. Vijaya Kumar and PradeepKhosla, Face Verification using Correlation Filters, Proc. Of the Third IEEE Automatic Identification Advanced Technologies, 56-61, Tarrytown, NY, 2002. [37] B. V. K. Vijaya Kumar, MariosSavvides, KrithikaVenkataramani and ChunyanXie, Spatial Frequency Domain Image Processing For Biometric Recognition, Proc. of

15

Intl. Conf. on Image Processing (ICIP), Vol. I, 53-56, 2002. *38+ B. V. K. Vijaya Kumar, MariosSavvides, Efficient Design of Advanced Correlation Filters For Robust Distortion-Tolerant Face Recognition, Proc. of the IEEE Conference on Advanced Video and Signal Based Surveillance, 2003. *39+ C. F. Hester and D. Casassent, Multivariant tec hnique for multiclass pattern recognition, Journal of Applied Optics 19, pp. 1758-1761, 1980. *40+ Elias Rentzeperis (2003), A Comparative Analysis of Face Recognition Algorithms: Hidden Markov Models, Correlation Filters and Laplacianfaces Vs. Linear subspace projection and Elastic Bunch Graph Matching, an MSc. Thesis in Information Networking, Autonomic and Grid Computing Group, Carnegie Mellon University, USA.
Omidiora, E. O. received the B.Eng. degree in Computer Engineering from Obafemi Awolowo University, Ile-Ife in 1992, the M.Sc. in Computer Science from the University of Lagos, Lagos in 1998, and the Ph.D. in Computer Science from Ladoke Akintola University of Technology, Ogbomoso, Nigeria in 2006. He is currently an Associate Professor in the Department of Computer Science and Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria, and has published in reputable journals and learned conferences. Dr. Omidiora is a full member of the Computer Professionals (Registration) Council of Nigeria (CPN) and a registered engineer with COREN. His research interests are biometric algorithms and applications, image processing, and microprocessor-based systems.

Olabiyisi, S. O. received his B.Tech., M.Tech. and Ph.D. degrees in Mathematics from Ladoke Akintola University of Technology, Ogbomoso, Nigeria, in 1999, 2002 and 2006 respectively. He also received an M.Sc. degree in Computer Science from the University of Ibadan, Ibadan, Nigeria in 2003. He is currently an Associate Professor in the Department of Computer Science and Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria, and has published in reputable journals and learned conferences. Dr. Olabiyisi is a full member of the Computer Professionals (Registration) Council of Nigeria (CPN). His research interests are in computational mathematics, theoretical computer science, information systems, and performance modelling and simulation.

Ojo, J. A. received a B.Tech. degree in Electronic and Electrical Engineering from Ladoke Akintola University of Technology (LAUTECH), Ogbomoso, Nigeria in 1998, an M.Sc. in the same field from the University of Lagos, Nigeria in 2003, and a Ph.D. from LAUTECH in 2011. He is presently a Senior Lecturer in the Department of Electronic and Electrical Engineering in the same institution. He is a member of the Nigerian Society of Engineers (NSE) and a registered engineer with COREN. His research interests include biometrics, security and surveillance systems, e-health and mobile-health biomedical engineering, and image pre-processing techniques.

Abayomi-Alli, A. obtained his B.Tech. degree in Computer Engineering from Ladoke Akintola University of Technology (LAUTECH), Ogbomoso in 2005 and an M.Sc. in Computer Science from the University of Ibadan, Nigeria in 2009. He is a registered engineer with COREN and a chartered information technology practitioner with the Computer Professionals (Registration) Council of Nigeria (CPN). His current research interests include biometrics, image quality assessment, and machine learning.

Akingboye, A. Y. graduated from Ladoke Akintola University of Technology, Ogbomoso with Master of Technology (M.Tech.) and Bachelor of Technology (B.Tech.) degrees in Computer Science in 2012 and 2005 respectively. His research interests include microprocessor systems, image processing and human-computer interaction. He is presently with the Department of Electrical and Computer Engineering at Igbinedion University, Okada.

Izilein, F. A. has B.Eng. and M.Eng. degrees in Electrical and Electronics Engineering. He presently lectures in the Department of Electrical and Computer Engineering at Igbinedion University, Okada. He is a registered engineer with COREN and a member of the Nigerian Society of Engineers (NSE).

Ezomo, P. I. has B.Eng. and M.Eng. degrees in Electrical and Electronics Engineering. He is presently a lecturer in the Department of Electrical and Computer Engineering at Igbinedion University, Okada. He gained extensive experience in microprocessor systems and biometrics while working in the oil and gas industry for over twenty years. He is a registered engineer with COREN and a member of the Nigerian Society of Engineers (NSE).
