
ISSN: 2277-9043 International Journal of Advanced Research in Computer Science and Electronics Engineering, Volume 1, Issue 2, April 2012

DATA LEVEL FUSION FOR MULTI BIOMETRIC SYSTEM USING FACE AND FINGER
Shubhangi Sapkal, Govt. College of Engineering, Aurangabad

Abstract: In this work, the most commonly used and accepted biometrics, face and fingerprint, are used for data level fusion. Multi biometric systems are expected to improve population coverage, reduce spoofing, and provide fault tolerance with respect to the failure of individual monomodal biometric systems. The system is designed for access control applications that require high security, such as granting access to important data, where the false acceptance rate is the major concern. In such applications, an impostor must not be granted access, even at the risk of manually examining a large number of potential matches identified by the biometric system.

Index Terms: Multimodal biometrics, Failure-to-enroll, Fusion

I. INTRODUCTION

Multimodal biometric systems are those which utilize, or have the capability of utilizing, more than one physiological or behavioral characteristic for enrollment, verification, or identification. The reason for combining different sensor modalities is to improve recognition accuracy [1]. Unimodal biometric systems have to contend with a variety of problems such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks, and unacceptable error rates. Some of these limitations can be addressed by deploying multimodal biometric systems that integrate the evidence presented by multiple sources of information [2]. For identity document applications, multimodality may be an effective tool to reduce the Failure to Enroll (FTE) rate. The sequential use of multiple modalities guarantees that the non-enrollable population is reduced drastically. Furthermore, sequential use of modalities permits fair treatment of persons who do not possess a certain biometric trait [3]. Here, two inexpensive and widely accepted biometric traits, namely face and fingerprint, are used. Human face recognition has tremendous potential in a wide variety of commercial and law enforcement applications. Considerable research efforts have been devoted to the face recognition problem over the past decade. Although there are a number of face recognition algorithms that work well in constrained environments, face recognition is still an open and very challenging problem in real applications [4]. Biometrics has long been known as a robust approach for person authentication. However, most monomodal biometrics are proven to exhibit one or more weaknesses.
Manuscript received April 07, 2012. Shubhangi Sapkal, Computer Science and Engineering Department, Government College of Engineering, Aurangabad, India.

Multi biometric systems combine the information presented by multiple biometric sensors, algorithms, samples, units, or traits. In addition to improving recognition accuracy, these systems are expected to improve population coverage, reduce spoofing, and provide fault tolerance with respect to the failure of individual monomodal biometric systems [5]. Face recognition is a nonintrusive method, and facial images are probably the most common biometric characteristic used by humans to make personal recognition. It is questionable whether the face itself, without any contextual information, is a sufficient basis for recognizing a person from a large number of identities with an extremely high level of confidence. Humans have used fingerprints for personal identification for many decades. However, the fingerprints of a small fraction of the population may be unsuitable for automatic identification because of genetic factors, aging, or environmental and occupational reasons (e.g., manual workers may have a large number of cuts and bruises on their fingerprints that keep changing) [6]. The initial idea and early work of this research have been published in part as conference papers in [7], [8], [9]. The outline of the work is as follows. Section II discusses approaches presented in the literature. Section III deals with image fusion. Section IV describes the modes of operation. Section V discusses the wavelet transform and decomposition. Experimental results, including the similarity measure used, are given in Section VI. Finally, conclusions are drawn in Section VII.

II. RELATED RESEARCH ON MULTIMODAL BIOMETRICS

In [10], data level fusion is used: the DWT coefficients are selected as features and the image is reconstructed from those features. Miguel Carrasco et al. in [11] proposed a bimodal identification system that combines face and voice information. A probabilistic fusion scheme at the matching score level is used, which linearly weights the classification probabilities of each person-class from both the face and voice classifiers. In [12], histogram equalization of biometric score distributions is successfully applied in a multimodal person verification system composed of prosodic, speech spectrum and face information. Furthermore, a new bi-Gaussian equalization (BGEQ) is introduced. Stephen J. Elliott et al. in [13] outline the perceptions of 391 individuals on issues relating to biometric technology. The results demonstrated overwhelming support for biometric applications involving law enforcement and obtaining passports, while applications involving time and attendance tracking and access to public schools ranked lowest on the list. A bimodal biometric verification system based on k-Nearest Neighbourhood (k-NN) classifiers in the decision fusion module for the face and speech experts is discussed in [14].



In [15], a method of speaker recognition based on multimodal biometrics using kernel Fisher discriminant analysis is introduced. Michał Choraś [16] proposed a system based on ear, palm and lip images for human identification. The combination of iris and fingerprint biometrics is used in [17]. Jian Yang et al. [18] proposed an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multi-manifold-based learning framework which takes into account both local and nonlocal quantities. The method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases. M. K. Shahin et al. proposed in [19] a multimodal biometric system built on three modalities: hand vein, hand geometry, and fingerprint.

III. IMAGE FUSION

The three possible levels of fusion are: fusion at the feature extraction or data level [20], fusion at the matching score level [21], [22], [23], [24], [25], and fusion at the decision level [26], [27], [28].

(a) Fusion at the data or feature level: Either the data itself or the feature sets originating from multiple sensors/sources are fused [2]. The data obtained from each sensor is used to compute a feature vector. As the features extracted from one biometric trait are independent of those extracted from the other, it is reasonable to concatenate the two vectors into a single new vector (a small sketch is given after this list). The new feature vector has a higher dimensionality and represents a person's identity in a different hyperspace. Feature reduction techniques may be employed to extract useful features from the larger set.

(b) Fusion at the matching score level: Each system provides a matching score indicating the proximity of the feature vector to the template vector. These scores can be combined to assert the veracity of the claimed identity.

(c) Fusion at the decision level: Each sensor can capture multiple biometric data, and the resulting feature vectors are individually classified into one of two classes, accept or reject. A majority vote scheme can be used to make the final decision [22].
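As a small illustration of item (a), a minimal sketch of feature-level fusion by concatenation is given below. It assumes the two feature vectors have already been extracted; the min-max normalization step and the function names are illustrative and not prescribed by the cited works.

```python
# Minimal sketch (illustrative, not from the cited works): normalize two
# feature vectors to a common range and concatenate them into a single
# higher-dimensional fused vector.
import numpy as np

def min_max(v, eps=1e-12):
    """Scale a feature vector to the [0, 1] range."""
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min() + eps)

def fuse_features(face_features, finger_features):
    """Concatenate the normalized face and fingerprint feature vectors.
    A feature reduction step (e.g. PCA) may be applied afterwards."""
    return np.concatenate([min_max(face_features), min_max(finger_features)])
```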
IV. MODES OF OPERATION

A multi biometric system can operate in one of three different modes: serial mode, parallel mode, or hierarchical mode [6]. In the serial mode of operation, the output of one biometric trait is typically used to narrow down the number of possible identities before the next trait is used. This serves as an indexing scheme in an identification system. For example, a multi biometric system using face and fingerprints could first employ face information to retrieve the top few matches, and then use fingerprint information to converge onto a single identity. This is in contrast to the parallel mode of operation, where information from multiple traits is used simultaneously to perform recognition. This difference is crucial. In the cascade (serial) operational mode, the various biometric characteristics do not have to be acquired simultaneously. Further, a decision could be arrived at without acquiring all of the traits, which reduces the overall recognition time. In the hierarchical scheme, individual classifiers are combined in a treelike structure. This work uses the parallel mode for the fusion of face and fingerprint; for contrast, a small sketch of the serial mode is given below.
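A minimal sketch of the serial (cascade) identification flow described above, assuming hypothetical matcher functions face_score and finger_score (placeholders for illustration, not part of this paper):

```python
# Illustrative sketch of the serial (cascade) mode: the face matcher first
# narrows the gallery to the top-k candidates, then the fingerprint matcher
# converges onto a single identity.
def identify_serial(probe, gallery, face_score, finger_score, top_k=5):
    # face_score(probe, g) and finger_score(probe, g) are assumed to return
    # a similarity value for gallery identity g; higher means more similar.
    shortlist = sorted(gallery, key=lambda g: face_score(probe, g),
                       reverse=True)[:top_k]
    return max(shortlist, key=lambda g: finger_score(probe, g))
```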

V. WAVELET TRANSFORM

Biometric image fusion extracts information from each source image and obtains an effective representation in the final fused image [29]. The aim of the image fusion technique is to combine the detailed information obtained from both source images. A multi-resolution representation of the images is used, where decomposition is performed to obtain finer detail. Multi-resolution image decomposition gives an approximation image and three detail images, namely the horizontal, vertical and diagonal images of coarse detail. The face and fingerprint images are obtained from different sources. After re-scaling, the images are fused by using wavelet transform and decomposition. Finally, we obtain a completely new fused image in which the attributes of both the face and fingerprint images are focused and reflected. The proposed image fusion rule selects the larger absolute value of the two wavelet coefficients at each point. Therefore, a fused image is produced by performing an inverse wavelet transform on the integrated wavelet coefficients corresponding to the decomposed face and fingerprint images. More formally, the wavelet transform decomposes an image recursively into several frequency levels, and each level contains transform values. Finally, the inverse wavelet transform is performed to restore the fused image. The fused image possesses the relevant information from both the face and fingerprint images. In this work, the Daubechies-2 wavelet family (Fig. 2) is used for decomposition (Fig. 1).

Fig. 1: Wavelet decomposition
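A minimal sketch of the data level fusion rule described in this section, assuming two pre-registered grayscale images of equal size and the PyWavelets library; the function below illustrates the max-absolute-coefficient rule and is not the authors' implementation.

```python
# Sketch of the fusion rule: decompose both images with the Daubechies-2
# wavelet, keep at every position of every subband the coefficient with the
# larger absolute value, and reconstruct the fused image.
import numpy as np
import pywt

def fuse_wavelet(face, fingerprint, wavelet="db2", level=2):
    c_face = pywt.wavedec2(face, wavelet, level=level)
    c_finger = pywt.wavedec2(fingerprint, wavelet, level=level)

    fused = []
    for a, b in zip(c_face, c_finger):
        if isinstance(a, tuple):
            # Detail subbands: (horizontal, vertical, diagonal) per level.
            fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                               for x, y in zip(a, b)))
        else:
            # Approximation subband of the coarsest level.
            fused.append(np.where(np.abs(a) >= np.abs(b), a, b))

    # The inverse wavelet transform restores the fused image.
    return pywt.waverec2(fused, wavelet)
```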



Fig. 2: A Daubechies-2 wavelet


VI. EXPERIMENTAL EVALUATION

Threshold   FRR    FAR
0.6         0.14   0
0.65        0.32   0
0.7         0.44   0
0.75        0.56   0
0.8         0.75   0
0.85        0.87   0
Table 1: Recognition performance for different threshold values using the MEAN fusion technique

Threshold   FRR    FAR
0.6         0.34   0.21
0.65        0.4    0.19
0.7         0.43   0.07
0.75        0.61   0
0.8         0.79   0
0.85        0.88   0
Table 2: Recognition performance for different threshold values using the MAX-MIN fusion technique

A typical biometric recognition system commits two types of errors: false acceptance and false rejection. A distinction has to be made between positive and negative recognition. In positive recognition systems (e.g., an access control system), a false match leads to the false acceptance of an impostor, whereas a false non-match causes the false rejection of a genuine user. On the other hand, in a negative recognition application (e.g., preventing users from obtaining welfare benefits under false identities), a false match results in rejecting a genuine request, whereas a false non-match results in falsely accepting an impostor attempt. The notation false match/false non-match is not application dependent and therefore, in principle, is preferable to false acceptance/false rejection. However, the use of false acceptance rate (FAR) and false rejection rate (FRR) is more popular and largely used in the commercial environment [31]. A positive recognition system is considered in this work, and correlation is used as the similarity measure. FRR is the False Rejection Rate, i.e., the error committed when someone who is registered in the system is rejected by it [33]. FAR is the False Acceptance Rate, i.e., the error committed when someone who is not enrolled is accepted by the system. Tables 1 and 2 present the FRR and FAR values obtained for different threshold values using the MEAN and MAX-MIN fusion techniques, respectively. The FRR and FAR for N participants are calculated as specified in Eq. (1) and Eq. (2):
FRR = \frac{1}{N} \sum_{n=1}^{N} FRR(n)    (1)

FAR = \frac{1}{N} \sum_{n=1}^{N} FAR(n)    (2)
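A minimal sketch of how Eq. (1) and Eq. (2) can be evaluated for a given threshold, using normalized correlation between a fused probe image and the stored fused template as the similarity measure; the data layout (per-person lists of genuine and impostor scores) is assumed for illustration and is not taken from the paper.

```python
# Sketch: normalized correlation as the similarity measure, and FRR/FAR
# averaged over N participants as in Eq. (1) and Eq. (2).
import numpy as np

def correlation(a, b):
    """Normalized correlation between two images of the same size."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def frr_far(scores_per_person, threshold):
    """scores_per_person: list of (genuine_scores, impostor_scores) pairs,
    one pair per participant. Genuine scores below the threshold count as
    false rejections; impostor scores at or above it count as false
    acceptances. The per-person rates are then averaged over N persons."""
    frr = np.mean([(np.asarray(gen) < threshold).mean()
                   for gen, _ in scores_per_person])
    far = np.mean([(np.asarray(imp) >= threshold).mean()
                   for _, imp in scores_per_person])
    return frr, far

# Example over the threshold grid used in Tables 1 and 2:
# for t in [0.6, 0.65, 0.7, 0.75, 0.8, 0.85]:
#     print(t, frr_far(scores_per_person, t))
```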

Graph 1: FAR-FRR diagram for the MEAN method

Graph 2: FAR-FRR diagram for the MAX-MIN method


VII. CONCLUSION

A 2D Discrete Wavelet Transform is proposed to capture the characteristics of faces and fingerprints. Experimental results on an extensive set of face images (FERET database) and fingerprints (FVC-2004 database) demonstrate the effectiveness of the proposed wavelet and correlation based method for identification. It is shown that the proposed method gives satisfying results for a threshold of 0.7.


There is a trade-off between the FAR and FRR values. In access control systems that require high security, such as granting access to important data, the false acceptance rate is the major concern: an impostor must not be granted access, even at the risk of manually examining a large number of potential matches identified by the biometric system. The results show that a FAR of 0 can be achieved, so the method can be applied in such applications. This work can be extended to feature level fusion to improve accuracy and robustness.
References

[1] Satyanadh Gundimada and Vijayan K. Asari, Facial Recognition Using Multisensor Images Based on Localized Kernel Eigen Spaces, IEEE Transactions on Image Processing, Vol. 18, No. 6, pp. 1314-1325, June 2009.
[2] Arun Ross and Anil K. Jain, Multimodal Biometrics: An Overview, Proc. of 12th European Signal Processing Conference (EUSIPCO), pp. 1221-1224, September 2004.
[3] Damien Dessimoz, Jonas Richiardi, Christophe Champod, Andrzej Drygajlo, Multimodal biometrics for identity documents, Forensic Science International 167, pp. 154-159, 2007.
[4] Xiaoguang Lu, Yunhong Wang, Anil K. Jain, Combining Classifiers for Face Recognition.
[5] Mohamed Deriche, Trends and Challenges in Mono and Multi Biometrics, Image Processing Theory, Tools & Applications, IEEE, 2008.
[6] Anil K. Jain, Arun Ross, and Sharath Pankanti, Biometrics: A Tool for Information Security, IEEE Transactions on Information Forensics and Security, Vol. 1, No. 2, pp. 125-143, June 2006.
[7] S.D. Sapkal, S.N. Kakarwal, P.S. Revankar, Image classification using neural network, Proc. of International Conf. ICSCI, pp. 259-263, 2007.
[8] S.D. Sapkal, S.N. Kakarwal, M.D. Malkauthekar, Classification of facial images using FFNN, Proc. of International Conf. ICACT, pp. 435-438, 2008.
[9] S.D. Sapkal, S.N. Kakarwal, Image enhancement and Feature Extraction for Fingerprint Images: A Review, National Conf., MIT Aurangabad, pp. 04.
[10] Satyanadh Gundimada and Vijayan K. Asari, Facial Recognition Using Multisensor Images Based on Localized Kernel Eigen Spaces, IEEE Transactions on Image Processing, Vol. 18, No. 6, pp. 1314-1325, June 2009.
[11] Miguel Carrasco, Luis Pizarro and Domingo Mery, Bimodal Biometric Person Identification System Under Perturbations, Springer, pp. 114-127, 2007.
[12] P. Ejarque, J. Hernando, Score bi-Gaussian equalisation for multimodal person verification, IET Signal Processing, Vol. 3, Iss. 4, pp. 322-332, 2009.
[13] Stephen J. Elliott, Sarah A. Massie, Mathias J. Sutton, The Perception of Biometric Technology: A Survey, pp. 259-264, IEEE, 2007.
[14] Andrew Teoh, S. A. Samad and A. Hussain, Nearest Neighbourhood Classifiers in a Bimodal Biometric Verification System Fusion Decision Scheme, Journal of Research and Practice in Information Technology, Vol. 36, No. 1, pp. 47-62, February 2004.

[15] Masatsugu Ichino, Hitoshi Sakano and Naohisa Komatsu, Multimodal Biometrics of Lip Movements and Voice using Kernel Fisher Discriminant Analysis, ICARCV, IEEE, 2006.
[16] Michał Choraś, Emerging Methods of Biometrics Human Identification, IEEE, 2007.
[17] Stelvio Cimato, Marco Gamassi, Vincenzo Piuri, Roberto Sassi and Fabio Scotti, Privacy-aware Biometrics: Design and Implementation of a Multimodal Verification System, 2008 Annual Computer Security Applications Conference, IEEE, pp. 130-139, 2008.
[18] Jian Yang, David Zhang, Jing-yu Yang, and Ben Niu, Globally Maximizing, Locally Minimizing: Unsupervised Discriminant Projection with Applications to Face and Palm Biometrics, IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 650-664, 2007.
[19] M. K. Shahin, A. M. Badawi, M. E. Rasmy, A Multimodal Hand Vein, Hand Geometry, and Fingerprint Prototype Design for High Security Biometrics, CIBEC'08, IEEE, 2008.
[20] A. Rattani, D. R. Kisku, M. Bicego, and M. Tistarelli, Feature Level Fusion of Face and Fingerprint Biometrics, IEEE, 2007.
[21] Norman Poh, Thirimachos Bourlai and Josef Kittler, BioSecure DS2: A Score-level Quality-dependent and Cost-sensitive Multimodal Biometric Test Bed.
[22] Arun Ross, Anil Jain, Information fusion in biometrics, Pattern Recognition Letters 24, pp. 2115-2125, 2003.
[23] F. Wang and J. Han, Multimodal biometric authentication based on score level fusion using support vector machine, Opto-Electronics Review 17(1), pp. 59-64.
[24] Ajay Kumar, Vivek Kanhangad, David Zhang, Multimodal Biometrics Management Using Adaptive Score-Level Combination, IEEE, 2008.
[25] Robert Snelick, Umut Uludag, Alan Mink, Michael Indovina, and Anil Jain, Large-Scale Evaluation of Multimodal Biometric Authentication Using State-of-the-Art Systems, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, pp. 450-455, March 2005.
[26] Kar-Ann Toh, Xudong Jiang, and Wei-Yun Yau, Exploiting Global and Local Decisions for Multimodal Biometrics Verification, IEEE Transactions on Signal Processing, Vol. 52, No. 10, pp. 3059-3072, October 2004.
[27] Kar-Ann Toh and Wei-Yun Yau, Combination of Hyperbolic Functions for Multimodal Biometrics Data Fusion, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 34, pp. 1196-1209, April 2004.
[28] Kalyan Veeramachaneni, Lisa Ann Osadciw, and Pramod K. Varshney, An Adaptive Multimodal Biometric Management Algorithm, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 35, No. 3, pp. 344-356, August 2005.
[29] Dakshina Ranjan Kisku, Ajita Rattani, Phalguni Gupta, Jamuna Kanta Sing, Biometric Sensor Image Fusion for Identity Verification: A Case Study with Wavelet-based Fusion Rules and Graph Matching, IEEE, pp. 436-439, 2009.



[30] Arjun V. Mane, Ramesh R. Manza, Karbhari V. Kale, The Role of Similarity Measures in Face Recognition, International Journal of Computer Science and Application, Issue I, pp. 62-65, 2010.
[31] Davide Maltoni, Dario Maio, Anil K. Jain, Salil Prabhakar, Handbook of Fingerprint Recognition, Springer, p. 3.
[32] Neil Yager and Ted Dunstone, The Biometric Menagerie, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 2, pp. 220-230, February 2010.
[33] Website: http://www.bromba.com/faq/biofaqe

