
2010 3rd International Congress on Image and Signal Processing (CISP2010)

A Principal Palm-line Extraction Method for Palmprint Images Based on Diversity and Contrast
Cong Li 1, Fu Liu 1
1 Communication Engineering College, Jilin University, Changchun, Jilin Province, China, 130025

Yongzhong Zhang 2
2 Changchun Automobile Industry Institute, Changchun, Jilin Province, China, 130011

Abstract: The line feature is one of the most important features of a palmprint image. A novel approach to principal palm-line extraction is proposed in this paper, based on the characteristics of ridge edges. First, gray adjustment and median filtering are used to enhance contrast and suppress noise. Then, palm lines are detected based on diversity and contrast. After this step, an improved Hilditch algorithm is used for thinning, an edge tracking approach is applied to remove twigs and short lines, and the broken lines are connected. Finally, a single-pixel principal palm-line image is obtained. The experimental results show that the principal palm lines of most images can be detected correctly with this algorithm. Moreover, the algorithm can provide an effective and accurate basis for palmprint identification.

Keywords: palmprint image; ridge edge; diversity; contrast; palm-line extraction

I. INTRODUCTION

In the information society, automatic personal identification has become a crucial issue, and biometrics can solve this problem efficiently [1]. At present, physiological characteristics such as fingerprints, facial features, palmprints, the retina, the iris, the voice and dynamic signatures are widely used for automatic personal identification. Palmprint verification has many unique merits [2-5]: (1) rich texture information and stable line characteristics can be captured easily from low-resolution images; (2) the environment in which palmprint images are acquired is easy to control; (3) the palmprint is more acceptable to users than other biometrics; and (4) the palmprint is stable and the accuracy of palmprint recognition is quite high. Because of these properties, palmprint recognition has become a focus of the biometric recognition field in recent years.

In palmprint recognition, feature extraction is especially important, and many researchers have devoted effort to it. Huang et al. [6] proposed the modified finite Radon transform (MFRAT) to extract the line features of the palmprint. Han et al. [7] proposed principal line extraction using the Sobel operator and morphological operations. In Manisha's paper [8], a feature vector based on the Discrete Cosine Transform (DCT) is proposed for feature extraction. Compared with structural features, line features describe the palmprint more clearly, and palmprint verification by line feature matching is more effective [9]. In a palmprint image, a line appears as a dark stripe on a bright background, so line extraction is an edge detection problem under conditions of low contrast and high noise [10]. Based on these characteristics of palmprint images, a novel line feature extraction algorithm built on ridge edge detection is designed in this paper. The steps of the algorithm are shown in Figure 1.

Figure 1. Flowchart of palm-line extraction: ROI image, gray adjustment, median filtering, edge detection, line image, thinning, detection of twigs and short lines, and broken line linking.

II. THE GRAY ADJUSTMENT

The gray distribution of the original image is concentrated, which is not conducive to feature extraction and classification. Therefore, the gray values should be adjusted to reduce the effects of noise and illumination. The gray standardization algorithm proposed by Shi [11] is adopted in this work. Let I(i, j) denote the gray value of the point (i, j), and let u and v denote the mean and the variance of the image. The gray standardization function can be written as:

I'(i, j) = u_t + α, if I(i, j) > u;  I'(i, j) = u_t - α, otherwise   (1)

where I'(i, j) is the adjusted gray value and the parameter α is defined as:

α = sqrt( v_t (I(i, j) - u)² / v )

The parameters u_t and v_t are predetermined; u_t = 180 and v_t = 350 are used in this experiment. The gray-adjusted image is shown in Figure 4(b).
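As a concrete illustration, a minimal NumPy sketch of equation (1) and the α term could look as follows; the function name and the clipping to [0, 255] are assumptions of this note, not part of the paper.

```python
import numpy as np

def gray_adjust(img, ut=180.0, vt=350.0):
    """Gray standardization, eq. (1): push the image mean and variance
    toward the predetermined target values ut and vt."""
    img = img.astype(np.float64)
    u, v = img.mean(), img.var()                 # image mean u and variance v
    alpha = np.sqrt(vt * (img - u) ** 2 / v)
    out = np.where(img > u, ut + alpha, ut - alpha)
    return np.clip(out, 0, 255).astype(np.uint8) # keep an 8-bit gray image
```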
III. MEDIAN FILTERING

It is necessary to median filter the image, because noise and palmprint wrinkles are bound to interfere with the line feature extraction. Median filtering [12] can eliminate impulse noise effectively while keeping good edge features. The median filtering function is defined as:

I(i, j) = Med_A { f_ij }   (2)

where A is the window function and { f_ij } is the sequence of gray values inside the window. Once the window A is fixed, the filtered gray value of any pixel can be computed easily with equation (2). In this paper a 3×3 window is adopted. The result shows that this step not only eliminates the interference of wrinkles but also maintains the principal line features well. The result of this step is shown in Figure 4(c).
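For a 3×3 window, equation (2) is a standard rank-order filter; a one-line sketch using SciPy (the choice of SciPy is ours, not named by the paper):

```python
from scipy.ndimage import median_filter

def smooth_palmprint(img):
    """Eq. (2) with a 3x3 window A: each pixel becomes the median of the
    nine gray values in the window centred on it."""
    return median_filter(img, size=3)
```

Applied after the gray adjustment above, for example: smoothed = smooth_palmprint(gray_adjust(roi)), where roi is a hypothetical ROI image array.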

IV. THE EDGE DETECTION

The palmprint image is a typical stripe image. Lines and smooth curves with a width of 3 to 5 pixels and a length of more than 15 pixels are the principal lines. Line features, including the principal lines and wrinkles, are a kind of ridge edge [13]. The gray values on the two sides of a ridge differ greatly from the gray value on the ridge itself, and the gray value of a palmprint line is small. The edge detection algorithm therefore has two steps: 1) judge whether the gray value is small; 2) judge whether the point has the character of a ridge edge. The points that satisfy both conditions are taken as edge points.

A. The Judgment of the Pixel Gray Value

The first step is to judge whether the gray value of a point satisfies the following condition [14]. Let I(i, j) be the gray value of the point (i, j), and let Avg be the mean gray value over its 5×5 neighborhood:

Avg = (1/(5×5)) Σ_{m=-2..2} Σ_{n=-2..2} I(i+m, j+n)   (3)


A flag variable is then defined as:

flag(i, j) = 1, if I(i, j) < Avg - T;  flag(i, j) = 0, otherwise   (4)

where T = 5 is a threshold. If flag(i, j) = 1, the point (i, j) may lie on a principal line and is examined further in the next step; otherwise it is ignored and the next point is examined.
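A sketch of this first test, equations (3) and (4); the use of a box filter for the 5×5 mean is an implementation choice of this note:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def candidate_flags(img, T=5.0):
    """flag(i, j) = 1 where I(i, j) < Avg(i, j) - T, eqs. (3)-(4);
    Avg is the 5x5 local mean around each pixel."""
    img = img.astype(np.float64)
    avg = uniform_filter(img, size=5)       # 5x5 box mean, eq. (3)
    return (img < avg - T).astype(np.uint8)
```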

B. The Judgments of Diversity and Contrast

When flag(i, j) = 1, the diversity and the contrast of the point (i, j) are calculated with the equations below, so that the true edge points can be found. A ridge edge is defined over a range of observation and has a direction. According to this, a diversity D and a contrast G [15] are defined to analyse the ridge edge features of the original palmprint image.

1) The judgment of diversity

The neighbor region of the current pixel (m, n) is written as:

R = {(i, j) : |i - m| ≤ L, |j - n| ≤ L}   (5)

where L is the half-width of the region, so that the size of R is (2L+1)×(2L+1). Within this range the ridge edge can be represented entirely. The diversity D indicates whether the dark pixels of R are concentrated along one direction. If the pixels with small gray values are concentrated along some direction, they most probably lie on a concave ridge edge; otherwise they lie in a smooth region or on a step edge. All pixels in R are sorted by gray value as {I_l : I_l ≤ I_{l+1}, 0 ≤ l < (2L+1)(2L+1)}, where I_l is the gray value of the l-th pixel in the sorted order. After sorting, R is divided into two sets:

R1 = {(i, j) : I_0 ≤ I(i, j) < I_{(2L+1)L}}   (6)

R2 = {(i, j) : I_{(2L+1)L} ≤ I(i, j) ≤ I_{(2L+1)(2L+1)-1}}   (7)

R1 is the set of pixels with small gray values and R2 the set of pixels with large gray values. The diversity D is defined as:

D = min{ d_θ : θ = 0°, 45°, 90°, 135° }   (8)

d_θ = Σ_{(i,j) ∈ R1} |(i - m) tan θ - (j - n)|,  θ = 0°, 45°, 90°, 135°   (9)

where (m, n) is the coordinate of the center point of the neighbor region. The line through the center point in the direction θ is y - x tan θ = 0, so d_θ measures how far the pixels with small gray values lie from this line, that is, the spread of the dark pixels with respect to the direction θ. If d_θ is large in every direction, the dark pixels are scattered and it is unlikely that they lie on a ridge edge. If d_θ is small in some direction, the dark pixels are concentrated along that direction and most probably lie on a ridge edge.

2) The judgment of contrast

The contrast G describes the variation of the gray values in the neighbor region R. According to spatial position, R is divided into two regions R1_θ and R2_θ, as shown in Figure 2:

R1_θ = {(i, j) : |(i - m) tan θ - (j - n)| ≤ L/2}   (10)

R2_θ = {(i, j) : |(i - m) tan θ - (j - n)| > L/2}   (11)

where θ = 0°, 45°, 90°, 135°; R1_θ is the part of R near the direction θ and R2_θ the part far from it.

Figure 2. Region partition graph.

The contrast G is defined as:

G = max{ |Ī1_θ - Ī2_θ| : θ = 0°, 45°, 90°, 135° }   (12)

where Ī1_θ = (1/N_{R1_θ}) Σ_{(i,j) ∈ R1_θ} I(i, j), Ī2_θ = (1/N_{R2_θ}) Σ_{(i,j) ∈ R2_θ} I(i, j), and N_{R1_θ} and N_{R2_θ} are the numbers of pixels in R1_θ and R2_θ. In the neighbor region R, a large contrast value means a high possibility that a ridge edge exists; otherwise the possibility is low.

3) The combined judgment of diversity and contrast

In the neighbor region R, the possibility P of the existence of a ridge edge is obtained from the diversity D and the contrast G together:

P = 1, if D < T1 and G > T2;  P = 0, otherwise   (13)

where T1 = 9 and T2 = 4 are the thresholds used in this experiment and the half-width of the neighbor region is L = 2. When P = 1, a ridge edge exists at the point. Using this algorithm, the palmprint lines can be extracted accurately. The resulting image is shown in Figure 4(d).
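A sketch of the whole test of equations (5)-(13) is given below for L = 2, T1 = 9 and T2 = 4. Treating θ = 90° as the vertical line i = m (tan 90° being undefined) and taking the (2L+1)L darkest pixels as R1 are our reading of the definitions, so this is an illustrative reconstruction rather than the authors' code.

```python
import numpy as np

THETAS = (0.0, 45.0, 90.0, 135.0)

def _offset_from_line(di, dj, theta):
    """|(i - m) tan(theta) - (j - n)| as used in eqs. (9)-(11); theta = 90
    is treated as the vertical line i = m (an assumption of this sketch)."""
    if theta == 90.0:
        return np.abs(di)
    return np.abs(di * np.tan(np.radians(theta)) - dj)

def ridge_edge_map(img, flags, L=2, T1=9.0, T2=4.0):
    """Apply the diversity/contrast test, eqs. (5)-(13), at every flagged pixel."""
    img = img.astype(np.float64)
    h, w = img.shape
    edge = np.zeros((h, w), dtype=np.uint8)
    # relative coordinates of the (2L+1)x(2L+1) neighbourhood R, eq. (5)
    di, dj = np.meshgrid(np.arange(-L, L + 1), np.arange(-L, L + 1), indexing="ij")
    offs = {t: _offset_from_line(di, dj, t) for t in THETAS}
    near = {t: offs[t] <= L / 2.0 for t in THETAS}        # R1_theta, eq. (10)
    k = (2 * L + 1) * L                                   # size of R1, eq. (6)
    for m in range(L, h - L):
        for n in range(L, w - L):
            if not flags[m, n]:
                continue
            patch = img[m - L:m + L + 1, n - L:n + L + 1]
            dark = np.argsort(patch, axis=None)[:k]       # R1: the k darkest pixels
            # diversity D, eqs. (8)-(9)
            D = min(offs[t].ravel()[dark].sum() for t in THETAS)
            # contrast G, eqs. (10)-(12)
            G = max(abs(patch[near[t]].mean() - patch[~near[t]].mean())
                    for t in THETAS)
            edge[m, n] = 1 if (D < T1 and G > T2) else 0  # eq. (13)
    return edge
```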

V. IMAGE THINNING

The palmprint lines have a certain width, which is redundant for the information extraction. Because feature extraction is only concerned with the direction of a line rather than its thickness, the palm-line image needs a thinning treatment. The Hilditch algorithm, a classical thinning algorithm, is quite effective. Following the principle of image erosion, not only the eight neighbor points of a pixel are considered but also the situation around those eight points, and the restrictive conditions of the Hilditch algorithm are re-set according to the relationships between the points [16]. With the new judging conditions, unnecessary pixels are removed during thinning without losing the key feature points [17].

In a 3×3 region of the image, the points are marked P1, P2, ..., P9, where P1 is the center point. The background points are assumed to be white (value 0) and the target points black (value 1). The conditions are as follows:

a) P1 = 1, that is, the center is a black point;

b) 2 ≤ NZ(P1) ≤ 6, where NZ(P1) = Σ_{i=2..9} Pi is the number of black points among the eight neighbors of P1;

c) Starting from P2, neighboring points are examined in pairs in clockwise order: if Pi = 0 and P_{i+1} = 1, then Ti = 1, otherwise Ti = 0 (i = 2, 3, ..., 9, with i + 1 taken as 2 when i = 9). The condition is satisfied if Z0(P1) = Σ_{i=2..9} Ti = 1;


d) P2 · P3 · P4 = 0 or Z0(P2) ≠ 1;

e) P2 · P4 · P6 = 0 or Z0(P4) ≠ 1.

If the point P1 meets the five conditions above, it is deleted. Every point in the image is processed in this way until no more points can be removed. The palm-line image after thinning is shown in Figure 4(e).
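The two neighbourhood measures used in conditions b) and c) can be sketched as follows; the clockwise ordering of P2...P9 starting from the pixel directly above the centre is an assumption, since the paper does not fix it.

```python
def neighbours(img, i, j):
    """The eight neighbours P2..P9 of the centre pixel P1 = img[i, j],
    listed clockwise starting from the pixel directly above (assumed order)."""
    return [img[i - 1, j], img[i - 1, j + 1], img[i, j + 1], img[i + 1, j + 1],
            img[i + 1, j], img[i + 1, j - 1], img[i, j - 1], img[i - 1, j - 1]]

def NZ(p):
    """Condition b): number of black (value 1) points among P2..P9."""
    return sum(p)

def Z0(p):
    """Condition c): number of 0 -> 1 transitions in the clockwise cycle
    P2, P3, ..., P9, P2."""
    return sum(1 for a, b in zip(p, p[1:] + p[:1]) if a == 0 and b == 1)
```

A pixel is removable when P1 = 1, 2 ≤ NZ ≤ 6, Z0 = 1 and conditions d) and e) also hold; the outer loop that repeats the scan until nothing changes is omitted from this sketch.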

VI. GETTING RID OF TWIGS AND SHORT LINES

After palm-line thinning, some small spur lines remain on the thinned skeleton because of noise and fold lines, so twigs have to be removed after thinning. Twigs and short lines whose length is less than a threshold T should be removed. First, endpoints and bifurcation points are defined as follows:

1) Endpoint: a point where a line starts or ends;

2) Bifurcation point: a point on a line where the line separates into two or more lines.

I(i, j) denotes the gray value of the binary palm-line image, in which the gray value of the palm lines is 0 and the gray value of the background is 255, that is:

I(i, j) = 0, if the point (i, j) is on a palm line;  I(i, j) = 255, otherwise   (14)

A parameter N(i, j) assesses the degree of the point (i, j). When I(i, j) = 255, N(i, j) = 0. When I(i, j) = 0, N(i, j) = N, where N is the number of palm-line points (gray value zero) in the 3×3 neighborhood of (i, j), including the point itself. That is:

N(i, j) = 0, if I(i, j) = 255;  N(i, j) = (1/255) Σ_{u=-1..1} Σ_{v=-1..1} (255 - I(i+u, j+v)), otherwise   (15)

If N(i, j) = 2, the point is an endpoint; if N(i, j) = 3, it is an ordinary point on a palm line; if N(i, j) ≥ 4, it is a bifurcation point.

A. Edge Tracking Algorithm

Edges have to be tracked while twigs and short lines are being removed. The algorithm is described as follows [17]: starting from the first feature point, the search direction is initialised toward the upper left. If the pixel in the current search direction is a palm-line point, it is taken as the next edge point; otherwise the search direction is rotated 45 degrees clockwise. This step is repeated until the next feature point is found. Figure 3 is a schematic of the edge tracking algorithm, in which the arrows represent the search directions.

Figure 3. Schematic of the edge tracking algorithm: (a) definition of the search directions; (b) edge tracking.
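The direction-spinning rule can be sketched as below; the concrete offset table (clockwise from the upper-left, in (row, column) form) and the boolean skeleton mask are assumptions of this note, and a full tracker would additionally exclude the pixel it just came from, as in the pruning sketch later in this section.

```python
# Eight search directions as (row, col) offsets, clockwise from the upper-left.
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def next_edge_point(skeleton, i, j):
    """Probe the neighbours of (i, j) clockwise, starting toward the upper
    left, and return the first palm-line pixel found (skeleton is assumed to
    be a boolean mask that is True on palm-line pixels), or None."""
    for di, dj in DIRS:
        if skeleton[i + di, j + dj]:
            return i + di, j + dj
    return None
```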

B. Getting Rid of Twigs and Short Lines

One end of a twig is an endpoint and the other is a bifurcation point, and a twig is always quite short. Accordingly, starting from any endpoint and following the track, if a bifurcation point is encountered within a very short distance, the line is considered a twig and should be deleted. The algorithm is described as follows:


a) Calculate the value N(i, j) of the target pixel;

b) If N(i, j) = 2, mark this point and track along the palm line from it in search of a bifurcation point;

c) If a bifurcation point is found within the threshold range, delete the points tracked on this line; otherwise, process the next point;

d) When all endpoints have been processed, the algorithm ends; otherwise continue from step a).

An isolated piece of palm line whose length is within the threshold range is called a short line. The deletion algorithm for short lines is similar to that for twigs and is not detailed here. Figure 4(f) shows the image after removing the twigs and short lines.
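A compact sketch of the degree measure of equation (15) and of the twig pruning walk is given below. The length threshold max_len, the use of SciPy's convolve for the 3×3 count, and the simple one-neighbour walk are assumptions of this note, so it should be read as an illustration of the idea rather than as the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def point_degree(binary):
    """N(i, j) of eq. (15): the number of palm-line pixels (value 0) in the
    3x3 neighbourhood of (i, j), including the pixel itself; 0 on background."""
    line = (binary == 0).astype(np.int32)          # 1 on the palm line
    kernel = np.ones((3, 3), dtype=np.int32)
    deg = convolve(line, kernel, mode="constant", cval=0)
    return np.where(line == 1, deg, 0)

def prune_twigs(binary, max_len=10):
    """Delete twigs: walk from every endpoint (N = 2) along the skeleton; if a
    bifurcation point (N >= 4) is met within max_len steps, erase the walked
    pixels. max_len is an assumed threshold, not a value given in the paper."""
    out = binary.copy()
    deg = point_degree(out)
    h, w = out.shape
    for i0, j0 in np.argwhere(deg == 2):
        path, (i, j), prev = [(i0, j0)], (i0, j0), None
        for _ in range(max_len):
            nbrs = [(i + di, j + dj)
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di or dj) and 0 <= i + di < h and 0 <= j + dj < w
                    and out[i + di, j + dj] == 0 and (i + di, j + dj) != prev]
            if not nbrs:
                break                              # isolated stub: stop walking
            prev, (i, j) = (i, j), nbrs[0]
            if deg[i, j] >= 4:                     # reached a bifurcation point
                for pi, pj in path:
                    out[pi, pj] = 255              # erase the twig pixels
                break
            path.append((i, j))
    return out
```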

VII. CONNECTING THE BROKEN LINES

During the extraction process, illumination affects the clarity of some sections of the palm lines, and median filtering also blurs the palmprint image to some extent. Therefore, the palm-line image obtained with the algorithms above contains some broken lines. In order to express the palm lines more accurately and to avoid affecting the subsequent matching process, the broken lines need to be repaired. In this paper, break points are identified by an algorithm based on tilt angles. To illustrate the algorithm, the breakpoint region of Figure 5(a) is magnified in Figure 5(b).

Let a point a be one endpoint of a palm line, and let b be the point reached by moving 10 pixels from a along the line. Similarly, let c be an endpoint of another palm line and d the point reached by moving 10 pixels from c along that line. Their coordinates are (x1, y1), (x2, y2), (x3, y3) and (x4, y4), respectively. The tilt angles of the lines ab and cd and of the segment ac are:

θ_ab = tan⁻¹((y2 - y1) / (x2 - x1))
θ_cd = tan⁻¹((y4 - y3) / (x4 - x3))
θ_ac = tan⁻¹((y3 - y1) / (x3 - x1))

The Euclidean distance between the two endpoints is:

d = sqrt((x3 - x1)² + (y3 - y1)²)

If the two endpoints meet all three conditions

|θ_ab - θ_cd| ≤ Arg1,  |θ_ac - (θ_ab + θ_cd)/2| ≤ Arg2,  and d ≤ D,

where Arg1 = Arg2 = 0.1 and D = 15 in this experiment, then a and c are considered break points and should be connected.
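The criterion can be written directly in code; treating a vertical segment (equal x coordinates) as a tilt of π/2 is an assumption of this sketch, since the arctangent formula above is undefined there.

```python
import math

def tilt(p, q):
    """Tilt angle of the segment pq, with points given as (x, y); a vertical
    segment is mapped to pi/2 (assumed, the arctan formula is undefined there)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.pi / 2.0 if dx == 0 else math.atan(dy / dx)

def is_break_pair(a, b, c, d, arg1=0.1, arg2=0.1, dist_max=15.0):
    """Decide whether the endpoints a and c should be joined; b and d are the
    points 10 pixels along the respective lines from a and c."""
    t_ab, t_cd, t_ac = tilt(a, b), tilt(c, d), tilt(a, c)
    dist = math.hypot(c[0] - a[0], c[1] - a[1])
    return (abs(t_ab - t_cd) <= arg1
            and abs(t_ac - (t_ab + t_cd) / 2.0) <= arg2
            and dist <= dist_max)
```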

According to this criterion, a breakpoint joining algorithm is designed. Its steps are as follows:

a) Detect the palm-line endpoints according to the definition of an endpoint and calculate the tilt angle of the line on which each endpoint lies;

b) Calculate the distance between every pair of different endpoints and the tilt angle of the segment connecting them;

c) Judge with the criterion above whether the two endpoints are break points; if they are, connect them.

Figure 5(c) shows the image after connecting the break points.

Figure 5. The effect of breakpoint connection: (a) palm-line image; (b) algorithm example; (c) image after connection.

Figure 4. Steps of palm-line extraction: (a) original ROI image; (b) image after gray normalization; (c) image after median filtering; (d) palm-line image; (e) image after thinning; (f) image after removing fake lines.

VIII. THE RESULTS AND ANALYSIS

In this experiment, the images were obtained with a digital camera and, like online palmprint images, they have low contrast and high noise. The Region of Interest (ROI) image is the largest central area of the palmprint, with 256 gray levels.


To gauge the effectiveness of the method, a palmprint database of 250 palmprint images from 50 different palms, captured by a CCD-based device, was used. The resolution of all of the original palmprint images is 640×480. With the algorithm proposed in this paper, the principal line information can be extracted effectively, which provides an effective basis for palmprint recognition. Figure 6 shows the extraction results for several groups of palmprint images from the database.

Figure 6. Examples of palm-line extraction: (a)(b)(c) original images; (d)(e)(f) palm-line images after extraction.

IX. CONCLUSION

This paper proposes a novel method for principal palm-line extraction based on gray value, diversity and contrast. The problems of illumination variation, high noise and low contrast in palmprint images can be overcome with this algorithm. First, the gray values are adjusted to enhance the contrast of the image, and a median filter is used to eliminate interference. Second, a novel method is used to extract the palm lines. Third, the palm lines are thinned, and the twigs and short lines in the image are deleted. Finally, the broken lines are connected. After these steps, single-pixel principal palm lines can be extracted correctly. The proposed algorithm is simple, practical and fast. It expresses the framework of the principal palm lines effectively and lays a good foundation for feature extraction and matching in a palmprint recognition system.

REFERENCES

[1] Ning Du, Miao Qi, Yinan Zhang, Jun Kong, "Palmprint Verification Based on Specific-user", IEEE Computer Society, 2009, pp. 314-317.
[2] Adams Kong, David Zhang, Mohamed Kamel, "Three measures for secure palmprint identification", Pattern Recognition 41, 2008, pp. 1329-1337.
[3] Jian Yang, D. Zhang, Jing-yu Yang, B. Niu, "Globally Maximizing, Locally Minimizing: Unsupervised Discriminant Projection with Applications to Face and Palm Biometrics", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 4, April 2007.
[4] Jian-Gang Wang, Wei-Yun Yau, Andy Suwandy, Eric Sung, "Person recognition by fusing palmprint and palm vein images based on Laplacianpalm representation", Pattern Recognition 41, 2008, pp. 1514-1527.
[5] A. K. Jain, A. Ross, D. Prabhakar, "An introduction to biometric recognition", IEEE Transactions on Circuits and Systems for Video Technology 14(1), 2004, pp. 4-20.
[6] D. S. Huang, W. Jia, D. Zhang, "Palmprint verification based on principal lines", Pattern Recognition 41, 2008, pp. 1316-1328.
[7] C. Han, H. Cheng, C. Lin, and K. Fan, "Personal authentication using palmprint features", Pattern Recognition 37(10), 2003, pp. 371-381.
[8] Manisha P. Dale, Madhuri A. Joshi, Neena Gilda, "Texture Based Palmprint Identification Using DCT Features", IEEE, 2009, pp. 221-224.
[9] D. Zhang and W. Shu, "Two novel characteristics in palmprint verification: datum point invariance and line feature matching", Pattern Recognition 32, 1999, pp. 691-702.
[10] Dai Qing-yun, Yu Ying-lin, "A Line Feature Extraction Method for Online Palmprint Images Based on Morphological Median Wavelet", Chinese Journal of Computers 26(2), 2003, pp. 234-239.
[11] W. Shi and D. Zhang, "Automatic palmprint verification", International Journal of Image and Graphics 1(1), 2001, pp. 135-151.
[12] Yoshiko Hanada, Mitsuji Muneyasu, Akira Asano, "An Improvement of Unsupervised Design Method for Weighted Median Filters Using GA", Intelligent Signal Processing and Communication Systems, IEEE, 2009, pp. 204-207.
[13] Xiangqian Wu, David Zhang, Kuanquan Wang, "Palmprint classification using principal lines", Pattern Recognition 37, 2004, pp. 1987-1998.
[14] Li Wenxin, Xia Shengxiong, Zhang Dapeng, "A New Palmprint Identification Method Using Bi-Directional Matching Based on Major Line Features", Journal of Computer Research and Development 41(6), 2004, pp. 997-999.
[15] Yang Xuan, Liang De-Qun, "Multiscale Roof Edge Detection in Industry Image", Journal of Infrared and Millimeter Waves 17(6), 1998, pp. 411-414.
[16] Seeking Science and Technology, VC++ Digital Image Processing Algorithm and Implementation, Beijing: Posts & Telecom Press, 2006.
[17] Kuanquan Wang, Jing Liao, Xiangqian Wu, Henggui Zhang, "Recognize a Special Structure in Palmprint for Palm Medicine", Twentieth IEEE International Symposium on Computer-Based Medical Systems, 2007.

