
International Conference on Intelligent and Advanced Systems 2007

PALMPRINT IDENTIFICATION USING WAVELET ENERGY
Kie Yih Edward Wong*, G. Sainarayanan**, Ali Chekima*
*School of Engineering and Information Technology, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia
**Department of Electrical and Electronics, New Horizon College of Engineering, Bangalore, India

Abstract— Palmprint identification is the means of recognizing an individual from a database using his or her palmprint features. The palmprint is easy to capture, requires cheaper equipment and is more acceptable to the public. Moreover, the palmprint is rich in features. The wavelet transform is a multi-resolution analysis tool that can extract palm lines at different resolution levels: at low levels, fine palm lines are extracted, and the higher the level, the coarser the extracted palm lines. In this work, a digital camera is used to acquire ten right-hand images from each of 100 different individuals. The hand images are pre-processed to find the key points. By referring to the key points, the palmprint images are rotated and cropped. The palmprint images are then enhanced and resized. The resized images are decomposed using different types of wavelets over six decomposition levels. Two different wavelet energy representations are tested. The feature vectors are compared against the database using the Euclidean distance or classified using a feedforward backpropagation neural network. From the results, an accuracy of 99.07 percent is obtained using Db5 wavelet energy feature type 2 classified with the neural network.

I. INTRODUCTION
Palmprint identification is the means of recognizing an individual from a database using his or her palmprint features. The palmprint is a physiological biometric that was introduced about a decade ago. Physiological biometrics use human body parts to establish the identity of an individual. Some commonly used physiological characteristics are the fingerprint, face, hand geometry, iris and palmprint. Fingerprints can achieve high accuracy, but identification systems without a liveness test can be fooled by a "dummy finger" [1]. Face recognition has a lower identification rate, but it can be used to track the location of a person through surveillance cameras. Hand geometry features are similar for most adults, so they are not suitable for identification purposes. Iris identification provides the highest accuracy among biometric methods, but iris-scanning devices are expensive and may cause visual discomfort for frequent users.

The palmprint has gained popularity among biometric modalities because it is difficult to imitate due to its complexity and size. The palmprint image is easy to capture and more acceptable to the public. Palmprint acquisition requires cheaper equipment than iris-scanning devices, such as a commercial digital camera. Furthermore, the palmprint is rich in features. Various methods have been suggested to extract geometry features [2], line features [3], point features [4] and texture features [5-7] from the palmprint.

Palmprint geometry features are incapable of identifying individuals in a large database, because most adults have similar palm width and palm area. Line features are hard to extract because their thickness is not uniform; some wrinkles are as thick as the principal lines. Point features require high-resolution images to find the minutiae points or delta points. The palmprint can also be analyzed through texture features. Some of the texture analyses conducted use the wavelet transform [6], derivative of Gaussian filters [7] and the Fourier transform [8]. Wavelet transforms extract wavelet coefficients from the palmprint image using different types of wavelets; commonly used wavelets are the Haar, Daubechies, Symlets and Coiflets wavelets. Every wavelet has its own decomposition and scaling coefficients, and wavelets with different scaling properties act as a multi-resolution analysis that captures different types of palm lines (principal lines, wrinkles and ridges) [9].

Fig. 1 shows the proposed palmprint biometric system using wavelet energy.

Fig. 1. Palmprint Biometric System

In this work, ten right-hand images of 100 different individuals are acquired using a digital camera. The hand images are pre-processed to find the key points. By referring to the key points, the angle of rotation and the ROI mask are calculated. The palmprint images are rotated according to the angle of rotation, and the central part of the palm is cropped. The palmprint images are enhanced using image adjustment and/or histogram equalization [10-11] and resized to 256 x 256 pixels. The resized images are decomposed using different types of wavelets over six decomposition levels. The wavelets used in this work are the Haar wavelet, Daubechies wavelets 2 to 6, Symlets wavelets 3 to 6 and Coiflets wavelets 1 and 2. Two different wavelet energy representations are tested: combined decomposition-level normalization, WE1, and individual decomposition-level normalization, WE2. The feature vectors are compared against the database using the Euclidean distance or classified using a scaled conjugate gradient-based feedforward backpropagation neural network.


II. HAND IMAGE ACQUISITION
A Canon PowerShot A430 digital camera is used to capture ten right-hand images of 100 different individuals. The hand images have a uniform dark background, which eases hand image segmentation. During acquisition, users are required to lean their hand against the background and spread their fingers apart. No peg alignment is needed in this work, and no special lighting is used to brighten the hand images. All of the hand images are 1024 x 768 pixels, saved with JPEG compression in Red-Green-Blue (RGB) format. Fig. 2 shows a hand image acquired using the digital camera.

Fig. 2. Hand Image

III. HAND IMAGE PRE-PROCESSING
There are three main steps in image pre-processing: image segmentation, image alignment and region-of-interest (ROI) selection. Image segmentation separates the hand image from its background. Since the background is a uniform low-intensity color, a global threshold is used to perform the task. A fixed global threshold cannot segment different hand images properly, so Otsu's method is used to find a variable global threshold [12] for each image. Fig. 3 shows the segmented hand image using Otsu's method.

Fig. 3. Segmented Hand Image
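As a concrete illustration of this step, the following sketch computes a per-image Otsu threshold with scikit-image; the paper does not name an implementation, so the library choice and the helper name segment_hand are ours:

import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu

def segment_hand(rgb_image: np.ndarray) -> np.ndarray:
    """Separate the hand from the uniform dark background.

    A fixed global threshold fails across images, so Otsu's method
    picks a per-image threshold from the gray-level histogram [12].
    """
    gray = rgb2gray(rgb_image)   # RGB -> grayscale intensities in [0, 1]
    t = threshold_otsu(gray)     # variable global threshold, per image
    return gray > t              # binary hand mask (True = hand)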

Since the hand is free to rotate, image alignment is required to bring the hand images to a predefined orientation. First, the boundary of the hand is located using a boundary tracking algorithm. The middle of the wrist, PW, is defined as the median of the right-most boundary pixels. The distances between the boundary pixels PB and PW, Dist(PB, PW), are then calculated. Fig. 4 shows the graph of Dist(PB, PW) versus the index of PB; PB and PW themselves are shown in Fig. 5. Let K1 be the first local minimum and K2 the third local minimum, circled in Fig. 4.

Fig. 4. Graph of Dist(PB, PW) versus Index of PB

From the indices of K1 and K2 in the graph, the locations of K1 and K2 on the hand boundary are determined, as illustrated in Fig. 5. The angle of rotation, θ°, is the angle through which the line connecting K1 and K2 (LK1,K2) must be turned clockwise about K1 so that the line becomes parallel to the x-axis. The angle of rotation is visualized in Fig. 5.

Fig. 5. Angle of Rotation Calculation
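A hedged sketch of the key-point search and angle computation follows. It assumes the boundary pixels are already available as an ordered (N, 2) array (the boundary tracking itself is not shown), and the valley-separation parameter distance=50 is our assumption, not a value from the paper:

import numpy as np
from scipy.signal import find_peaks

def key_points_and_angle(boundary: np.ndarray, p_w: np.ndarray):
    """Locate K1 (first local minimum) and K2 (third local minimum) of
    Dist(PB, PW), and the clockwise angle that aligns line K1-K2 with
    the x-axis.

    boundary : (N, 2) boundary pixels (x, y) in tracking order
    p_w      : wrist mid-point (x, y), median of the right-most pixels
    """
    dist = np.linalg.norm(boundary - p_w, axis=1)   # Dist(PB, PW) profile
    valleys, _ = find_peaks(-dist, distance=50)     # local minima of Dist
    k1, k2 = boundary[valleys[0]], boundary[valleys[2]]  # 1st and 3rd minima
    dx, dy = k2 - k1
    theta = np.degrees(np.arctan2(dy, dx))  # rotating clockwise by theta
    return k1, k2, theta                    # makes L(K1,K2) horizontal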

Let dK1,K2 be the distance between K1 and K2. From experimentation, the distance between LK1,K2 and the ROI mask, dK,ROI, is set to 0.2 times dK1,K2; this moves the ROI mask toward the center of the palm. The length of each side of the ROI mask, ls, is 1.4 times dK1,K2. A variable ROI mask allows the palmprint to be cropped according to the palm size. Fig. 6 shows the determination of the square ROI mask.

Fig. 6. Square ROI Mask Determination
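The ROI geometry reduces to a few lines. A sketch, assuming the image has already been rotated so that K1 and K2 lie on a horizontal line and that y grows toward the palm; the exact placement of the square under the key-point line is inferred from Fig. 6, not given as a formula in the text:

import numpy as np

def roi_square(k1: np.ndarray, k2: np.ndarray):
    """Return the square ROI (top-left corner, side) from the key points."""
    d = np.linalg.norm(k2 - k1)            # d_K1,K2
    offset = 0.2 * d                       # d_K,ROI, pushes the ROI palm-ward
    side = 1.4 * d                         # side length l_s
    x0 = (k1[0] + k2[0]) / 2 - side / 2    # assumed: centred on the K1-K2 line
    y0 = max(k1[1], k2[1]) + offset
    return x0, y0, side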


IV. PALMPRINT EXTRACTION
The minimum and maximum x and y coordinates of the four corners of the ROI mask are determined. Referring to these values, a smaller hand image is cropped, and the cropped image is rotated by θ degrees clockwise. Using the index m of the first non-zero pixel along the diagonal of the rotated image, the rotated image is cropped by m pixels from each side to obtain the palmprint image. Let the grayscale intensity palmprint image be E0. Fig. 7 shows the palmprint image in RGB format (left) and in grayscale intensity format, E0 (right).

Figure 7. Palmprint Image in (a) RGB (b) Grayscale Intensity
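A sketch of the rotate-and-trim step with SciPy; scipy.ndimage.rotate treats positive angles as counter-clockwise, hence the sign flip, and the helper name is ours:

import numpy as np
from scipy.ndimage import rotate

def extract_palmprint(cropped: np.ndarray, theta: float) -> np.ndarray:
    """Rotate the cropped hand region by theta degrees clockwise and trim
    the empty border introduced by the rotation."""
    rotated = rotate(cropped, -theta, reshape=True, order=1)
    diag = np.diagonal(rotated)           # pixels along the main diagonal
    m = int(np.argmax(diag > 0))          # index of first non-zero pixel
    h, w = rotated.shape
    return rotated[m:h - m, m:w - m]      # crop m pixels from each side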
All of the palmprint images are enhanced using histogram adjustment and/or histogram equalization. The adjusted palmprint image, E1, is obtained by stretching the grayscale palmprint image over the full 256 bins. The histogram equalized palmprint image, E2, is the grayscale palmprint image with its histogram equalized over 256 bins. When each color channel of the palmprint image (Red, Green and Blue) is first adjusted over 256 bins before conversion to grayscale, the individually adjusted palmprint image, E3, is obtained. The histogram equalized individually adjusted palmprint image, E4, is the histogram equalization of E3. Fig. 8 shows the enhanced images produced by the different enhancement methods.

Fig. 8. Enhanced Image. (a) E1, (b) E2, (c) E3 and (d) E4
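A sketch of the four enhancement variants with scikit-image, assuming rescale_intensity stands in for the paper's "image adjustment" (i.e. a full-range contrast stretch):

import numpy as np
from skimage.color import rgb2gray
from skimage.exposure import equalize_hist, rescale_intensity

def enhance(palm_rgb: np.ndarray):
    """Return E1-E4 as described above, given the RGB palmprint crop."""
    e0 = rgb2gray(palm_rgb)                # grayscale intensity image, E0
    e1 = rescale_intensity(e0)             # E1: stretch over the full range
    e2 = equalize_hist(e0, nbins=256)      # E2: equalize over 256 bins
    adjusted = np.stack([rescale_intensity(palm_rgb[..., c])
                         for c in range(3)], axis=-1)  # per-channel stretch
    e3 = rgb2gray(adjusted)                # E3: adjust channels, then gray
    e4 = equalize_hist(e3, nbins=256)      # E4: equalization of E3
    return e1, e2, e3, e4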

From Fig. 8, it is observed that histogram equalization darkens both the palm lines and their background. E1 is less clear than E3 because converting from RGB to grayscale before image adjustment reduces the amount of useful palm-line detail that can be extracted. The histogram equalized palmprint image, E2, and the histogram equalized individually adjusted palmprint image, E4, are similar, but E2 is clearer than E4. Since the enhanced images vary in size, all of them are resized to 256 x 256 pixels.

V. WAVELET TRANSFORM
The wavelet transform is a multi-resolution analysis tool that can extract palm lines at different resolution levels. At low decomposition levels, the fine details of the palmprint image are extracted; as the decomposition level increases, coarser palm lines are extracted. In wavelet decomposition, the original palmprint image (at level one) or the approximation from the previous level (at every other level) is decomposed into the approximation, horizontal, vertical and diagonal details of the next decomposition level, as in Fig. 9. The approximation and details at level L + 1 are half the width and height of those at level L.

Fig. 9. Six Levels of Wavelet Decomposition

In this work, the palmprint image is analysed using the Haar wavelet, Daubechies wavelets (db2 to db6), Symlets wavelets (sym3 to sym6) and Coiflets wavelets (coif1 and coif2) to decompose the enhanced palmprint image into six decomposition levels. Fig. 10 shows the Haar wavelet coefficients for the image E1 over the six decomposition levels; the details at every level are contrast-adjusted for easy viewing.

Fig. 10. Haar Wavelet Coefficients For E1
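The decomposition itself is a one-liner with PyWavelets; a sketch (the wavelet names follow PyWavelets' conventions, which cover all the families listed above):

import numpy as np
import pywt
from skimage.transform import resize

def decompose(enhanced: np.ndarray, wavelet: str = "haar", level: int = 6):
    """Resize to 256 x 256 and run a six-level 2-D wavelet decomposition.

    Returns [cA6, (cH6, cV6, cD6), ..., (cH1, cV1, cD1)]: the level-6
    approximation plus horizontal, vertical and diagonal details per
    level, each level half the size of the one below it.
    """
    img = resize(enhanced, (256, 256))
    return pywt.wavedec2(img, wavelet, level=level)

# wavelet can be "haar", "db2"..."db6", "sym3"..."sym6", "coif1" or "coif2"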


VI. WAVELET ENERGY
The wavelet coefficient images at different decomposition levels have different sizes. Each image is therefore divided into 4 x 4 blocks regardless of its size, so that a given block at every decomposition level represents the wavelet coefficients of the same area of the coefficient image. The wavelet energy of each block is calculated using (1):

WE_{i,j} = \sum_{p=1}^{P} \sum_{q=1}^{Q} (C_{p,q})^2    (1)

where i and j, from 1 to 4, index the block, [P, Q] is the size of a block, and C_{p,q} is the wavelet coefficient at position (p, q) within the block.
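A direct sketch of (1) for one detail image (any remainder pixels when the side is not divisible by four are ignored here):

import numpy as np

def block_energy(detail: np.ndarray) -> np.ndarray:
    """Divide a detail image into 4 x 4 blocks and return the 16 block
    energies WE_{i,j}, i.e. the sums of squared coefficients per block."""
    p, q = detail.shape[0] // 4, detail.shape[1] // 4   # block size [P, Q]
    energy = np.empty((4, 4))
    for i in range(4):
        for j in range(4):
            block = detail[i * p:(i + 1) * p, j * q:(j + 1) * q]
            energy[i, j] = np.sum(block ** 2)
    return energy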
Two types of wavelet energy representation are used in this work. The first, WE1, is normalized by the sum over all decomposition levels, while the second, WE2, is normalized level by level. Fig. 11 shows the feature representation WE1 and Fig. 12 shows the feature representation WE2.

Fig. 11. Feature Representation WE1

Fig. 12. Feature Representation WE2

The wavelet energy features of the horizontal, vertical and diagonal details at each decomposition level are arranged as in (2):

WD_{K,L} = [WE_{1,1}, WE_{1,2}, WE_{1,3}, \ldots, WE_{4,3}, WE_{4,4}]    (2)

where WD_{K,L} is the wavelet energy of detail K at decomposition level L, and K denotes the horizontal, vertical or diagonal detail. The wavelet energy features for decomposition level L are arranged as in (3):

WL_{L} = [WD_{H,L}, WD_{V,L}, WD_{D,L}]    (3)

The feature representation WE1 is calculated using (4) and (5):

WE_{ALL} = [WL_6, WL_5, WL_4, WL_3, WL_2, WL_1]    (4)

WE1 = WE_{ALL} / \mathrm{sum}(WE_{ALL})    (5)

For feature representation WE2, the feature vector is obtained using (6) and (7):

NWL_{L} = WL_{L} / \mathrm{sum}(WL_{L})    (6)

WE2 = [NWL_6, NWL_5, NWL_4, NWL_3, NWL_2, NWL_1]    (7)
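The two feature vectors of (2)-(7) can then be assembled from a pywt.wavedec2 result. A sketch reusing the block_energy helper from the previous section (with six levels and 16 energies per detail, each WL_L has 48 entries and the full vector 288):

import numpy as np

def feature_vectors(coeffs):
    """Build WE1 and WE2 from [cA6, (cH6, cV6, cD6), ..., (cH1, cV1, cD1)]."""
    details = coeffs[1:]                    # level 6 down to level 1
    wls = [np.concatenate([block_energy(d).ravel() for d in level])
           for level in details]            # (2)-(3): WL_6 ... WL_1
    we_all = np.concatenate(wls)            # (4): 6 x 48 = 288 energies
    we1 = we_all / we_all.sum()             # (5): one global normalization
    we2 = np.concatenate([wl / wl.sum() for wl in wls])   # (6)-(7): per level
    return we1, we2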
Fig. 13 shows the wavelet energy for the same individual captured at different times, while Fig. 14 shows the wavelet energy for different individuals.

Fig. 13. Wavelet Energy For Same Individual At Different Time

Fig. 14. Wavelet Energy For Different Individuals

From Fig. 13 and Fig. 14, it can be seen that the wavelet energy differs between individuals, while for the same individual it is reproducible across capture sessions.


VII. FEATURE MATCHING
The feature vector (WE1 or WE2) is matched using the Euclidean distance or a scaled conjugate gradient-based feedforward backpropagation neural network. The Euclidean distance measures the similarity of two wavelet energy feature vectors using (8):

ED = \sum_{y=0}^{Y} (WEX_{k,y} - WEX_{l,y})^2    (8)

where WEX is the wavelet energy representation type 1 or 2 (WE1 or WE2), Y is the length of the feature vector, and WEX_k and WEX_l are the feature vectors of individuals k and l from the database.
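A sketch of the matching rule, using the squared-difference measure exactly as printed in (8); the helper name and threshold convention are ours:

import numpy as np

def match(probe: np.ndarray, gallery: np.ndarray, threshold: float):
    """Score one feature vector (WE1 or WE2) against every stored vector;
    distances below the threshold are taken as genuine matches."""
    ed = np.sum((gallery - probe) ** 2, axis=1)   # (8), one ED per entry
    return ed, ed < threshold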
Fig. 15 shows the graph of the normalized genuine and imposter matching distributions versus the threshold index. If two wavelet energy features belong to the same individual (intra-class matching), the ED result falls below the threshold obtained at the minimum error; for inter-class matching, i.e. matching between different individuals, the ED result falls above that threshold.

Fig. 15. Genuine/ Imposter Matching Distribution Versus Threshold Index

In this work, each feature vector is matched against the remaining 999 feature vectors in the database using the Euclidean distance, and the genuine and imposter ED distribution graphs are drawn. Since every feature vector yields nine genuine matchings and 990 imposter matchings, both distributions are normalized. A threshold is selected from the intersection of the two distribution graphs, and the accuracy for each type of wavelet and each feature representation is then calculated.

In the scaled conjugate gradient-based feedforward backpropagation neural network (NN), three of the ten feature-vector sets per individual are used for training. The number of hidden neurons is set to 288, 432 or 576, i.e. one, one and a half, and two times the length of the feature vector. The NN has 100 output neurons, one per enrolled user, each with a range from zero to one. Both the hidden and output neurons use the tangent sigmoid activation function. The NN is trained until the performance goal of 1e-3 is achieved, the maximum epoch of 20000 is reached, or the gradient falls below the minimum of 1e-6. Another three sets of feature vectors are used to find a suitable threshold for separating genuine users from imposters, and the remaining four sets are used for testing.
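As a rough stand-in for this classifier: scikit-learn offers no scaled conjugate gradient solver, so the sketch below reproduces only the architecture, activation and stopping criteria from the text, not the training rule itself:

from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(288,),  # 1x the feature length;
                    activation="tanh",          #   432 and 576 also tried
                    max_iter=20000,             # maximum epoch from the text
                    tol=1e-6)                   # loose analogue of the
                                                # minimum-gradient criterion

# X_train: three WE1/WE2 vectors per person; y_train: 100 identity labels.
# The 100 output units arise from the 100 distinct labels in y_train.
# clf.fit(X_train, y_train); predictions = clf.predict(X_test)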
VIII. RESULTS AND DISCUSSION
Table 1 and Table 2 show the Euclidean distance results (accuracy, in percent) for WE1 and WE2 respectively.

Table 1: Euclidean Distance Results for WE1

        E0      E1      E2      E3      E4
Coif1   78.640  81.554  82.940  81.551  82.988
Coif2   78.962  82.474  82.429  82.451  82.384
Db2     82.443  86.328  86.140  86.266  86.160
Db3     81.468  85.400  84.969  85.330  84.925
Db4     79.240  82.547  81.730  82.510  81.757
Db5     79.958  84.396  82.939  84.334  82.984
Db6     78.208  81.899  80.856  81.819  80.894
Haar    89.694  90.660  91.761  90.645  91.726
Sym3    81.468  85.400  84.969  85.330  84.925
Sym4    77.954  82.169  82.408  82.083  82.424
Sym5    76.430  79.598  80.525  79.618  80.537
Sym6    78.740  82.462  82.335  82.414  82.291

Table 2: Euclidean Distance Results for WE2

        E0      E1      E2      E3      E4
Coif1   80.725  83.842  87.043  83.816  87.019
Coif2   80.060  82.777  84.675  82.779  84.648
Db2     82.094  86.437  89.208  86.376  89.235
Db3     83.105  87.399  90.026  87.372  89.992
Db4     82.286  86.831  89.084  86.781  89.065
Db5     83.073  86.180  88.856  86.192  88.833
Db6     82.098  85.118  88.006  84.979  87.972
Haar    86.848  90.765  92.945  90.770  92.894
Sym3    83.105  87.399  90.026  87.372  89.992
Sym4    78.453  80.818  83.712  80.744  83.766
Sym5    79.052  82.036  84.920  82.009  84.957
Sym6    77.946  80.007  82.909  79.908  82.912

From Table 1 and Table 2, the Haar wavelet achieves the highest accuracy among the wavelets tested (Daubechies 2 to 6, Symlets 3 to 6 and Coiflets 1 and 2). The enhanced images (E1, E2, E3 and E4) improve the accuracy of the system compared with the original grayscale image E0, and the histogram equalized images, E2 and E4, perform better than the adjusted images, E1 and E3, with E2 yielding the best accuracy overall. Wavelet energy representation type 2 is better than type 1.

Tables 3 to 5 give the neural network results for 288, 432 and 576 hidden neurons respectively. CoifX denotes the Coiflets wavelet of order X (X = 1 or 2), DbX the Daubechies wavelet of order X (X = 2 to 6) and SymX the Symlets wavelet of order X (X = 3 to 6).


Table 3: Neural Network Results for 288 Hidden Neurons

        E0     E1     E2     E3     E4
Coif1   96.58  98.23  97.75  98.10  97.95
Coif2   96.93  98.33  98.00  98.59  98.53
Db2     97.34  98.33  98.26  98.83  98.46
Db3     97.42  97.16  97.84  96.96  97.40
Db4     97.10  97.88  97.48  97.74  97.94
Db5     97.04  98.42  98.61  98.14  97.41
Db6     96.93  98.34  97.84  98.23  97.79
Haar    97.22  97.91  97.95  97.78  98.20
Sym3    97.42  97.16  97.84  96.96  97.40
Sym4    97.19  97.98  98.20  97.81  97.49
Sym5    96.44  97.88  97.47  98.17  98.22
Sym6    97.01  98.15  98.25  97.77  98.29

Table 4: Neural Network Results for 432 Hidden Neurons

        E0     E1     E2     E3     E4
Coif1   97.39  98.12  97.93  98.43  98.12
Coif2   97.31  98.43  98.70  98.13  98.60
Db2     97.99  98.70  98.22  98.70  98.41
Db3     97.58  97.84  98.59  98.67  98.31
Db4     97.65  98.15  97.99  98.22  98.44
Db5     97.59  98.50  98.57  98.45  98.75
Db6     97.46  98.56  98.85  98.54  98.17
Haar    97.94  98.50  98.51  98.44  98.61
Sym3    97.58  97.84  98.59  98.67  98.31
Sym4    97.47  98.36  98.17  98.20  98.39
Sym5    97.00  98.19  98.55  98.24  98.47
Sym6    97.17  98.64  98.60  98.54  98.50

Table 5: Neural Network Results for 576 Hidden Neurons

        E0     E1     E2     E3     E4
Coif1   97.52  98.53  98.44  96.28  97.32
Coif2   97.27  98.16  98.61  98.20  98.68
Db2     98.04  98.49  98.26  98.71  98.63
Db3     97.93  98.85  98.09  98.83  97.71
Db4     97.40  98.47  98.87  98.38  98.79
Db5     97.90  98.84  98.49  99.07  98.73
Db6     97.75  98.94  98.70  98.91  97.59
Haar    98.33  98.17  97.89  98.62  98.40
Sym3    97.93  98.85  98.09  98.83  97.71
Sym4    97.66  98.30  98.74  98.62  98.70
Sym5    97.21  98.45  98.75  98.53  98.21
Sym6    97.40  98.42  98.89  98.45  98.82

From Tables 3 to 5, it can be observed that increasing the number of hidden neurons increases the accuracy rate, although it also increases the storage space required to save the network. An accuracy above 99 percent is achieved using wavelet energy representation WE2 with Daubechies 5 (Db5). The scaled conjugate gradient-based feedforward backpropagation neural network separates genuine users from imposters better than the Euclidean distance.

IX. CONCLUSION
Ten right-hand images from each of 100 different individuals were acquired using a digital camera. The hand images were segmented and their boundaries located. From the boundary pixels, two key points (K1 and K2) were determined, and from these key points the angle of rotation and the location of the square ROI were approximated. The palmprint image was cropped out after rotation and enhanced using image adjustment and/or histogram equalization. The enhanced image was decomposed into six levels using different types of wavelet: the Haar, Daubechies, Symlets and Coiflets wavelets. The wavelet coefficients were represented using two types of wavelet energy features, WE1 or WE2. Each wavelet energy feature was compared with the remaining feature vectors stored in the database using the Euclidean distance or the scaled conjugate gradient-based feedforward backpropagation neural network. From the results, an accuracy of 99.07 percent is achieved using the enhanced image E3 decomposed with Daubechies 5 (Db5) and classified by the neural network with 576 hidden neurons. For future work, more test data will be gathered to test the accuracy of the neural network, and a modified wavelet transform will be investigated to further increase the accuracy of the palmprint biometric system.

REFERENCES
[1] T. Van Der Putte and J. Keuning, "Biometrical Fingerprint Recognition: Don't Get Your Fingers Burned," IFIP TC8/WG8.8 Fourth Working Conference on Smart Card Research and Advanced Applications, pp. 289-303, 2001.
[2] E. Wong, G. Sainarayanan and A. Chekima, "Palmprint Authentication using Relative Geometric Features," 3rd International Conference on Artificial Intelligence in Engineering and Technology (ICAIET 2006), pp. 743-748.
[3] C.C. Han, H.L. Cheng, C.L. Lin and K.C. Fan, "Personal Authentication using Palm-print Features," Pattern Recognition, vol. 36, no. 2, 2003, pp. 371-381.
[4] N. Duta, A.K. Jain and K.V. Mardia, "Matching of Palmprints," Pattern Recognition Letters, vol. 23, 2002, pp. 477-485.
[5] W. Li, J. You and D. Zhang, "Texture-Based Palmprint Retrieval Using a Layered Search Scheme for Personal Identification," IEEE Transactions on Multimedia, vol. 7, no. 5, Oct. 2005, pp. 891-898.
[6] X.-Q. Wu, K.-Q. Wang and D. Zhang, "Wavelet Based Palmprint Recognition," Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing, 4-5 November 2002, pp. 1253-1257.
[7] X. Wu, K. Wang and D. Zhang, "Palmprint Texture Analysis Using Derivative of Gaussian Filters," International Conference on Computational Intelligence and Security 2006, vol. 1, 2006, pp. 751-754.
[8] W. Li, D. Zhang and Z. Xu, "Palmprint Identification by Fourier Transform," International Journal of Pattern Recognition and Artificial Intelligence, vol. 16, no. 4, 2002, pp. 417-432.
[9] D.D. Zhang, Palmprint Authentication, Kluwer Academic Publishers, 2004.
[10] E. Wong, G. Sainarayanan and A. Chekima, "Palmprint Identification using Discrete Cosine Transform," World Engineering Congress 2007 (WEC2007), pp. 85-91.
[11] E. Wong, G. Sainarayanan and A. Chekima, "Palmprint Identification using SobelCode," Malaysia-Japan International Symposium on Advanced Technology (MJISAT), 12-15 Nov. 2007, accepted for oral presentation.
[12] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, 1979, pp. 62-66.
