
INTRODUCTION

1.1 INTRODUCTION

Biometrics refers to the identification and verification of human identity based on certain physiological traits of a person. Commonly used biometric features include speech, fingerprint, face, handwriting, gait and hand geometry. Face and speech techniques have been used for over 25 years, while the iris method is a newly emergent technique. With the need for security systems growing, iris recognition is emerging as one of the important methods of biometrics-based identification. This project explains the iris recognition system, with a few modifications. First, image preprocessing is performed, followed by extraction of the iris portion of the eye image. The extracted iris part is then normalized, and the iris code is constructed from the edge-detected image. Finally, iris codes are compared against a threshold value within a specified range to identify the person. Experimental results show that unique codes can be generated for every eye image.

Low-pass filtering is implemented using two different low-pass filter masks, the averaging mask and the bi-cubic mask, for comparison. Decimation is implemented to output a final image of 100x100 pixels. The computational load lies in the algorithms for capturing the image, performing the processing, and displaying the output image on a monitor. Because, in the practical case, the output from the DSP would eventually stimulate a grid of electrodes in the eye, the computations for the display function will not be needed.
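As an illustration of this stage, the following is a minimal C sketch of combined averaging filtering and decimation down to a 100x100 output. The input dimensions, data types and function name are assumptions for illustration, not the project's actual code.

#include <stdint.h>

#define OUT_W 100
#define OUT_H 100

/* Filter and decimate in one pass: each output pixel is the mean of the
 * input block it covers. in_w and in_h are assumed to be exact multiples
 * of OUT_W and OUT_H. */
void filter_decimate(const uint8_t *in, int in_w, int in_h,
                     uint8_t out[OUT_H][OUT_W])
{
    int kx = in_w / OUT_W;   /* horizontal decimation factor */
    int ky = in_h / OUT_H;   /* vertical decimation factor   */

    for (int oy = 0; oy < OUT_H; oy++) {
        for (int ox = 0; ox < OUT_W; ox++) {
            long sum = 0;
            for (int y = 0; y < ky; y++)
                for (int x = 0; x < kx; x++)
                    sum += in[(oy * ky + y) * in_w + (ox * kx + x)];
            out[oy][ox] = (uint8_t)(sum / (kx * ky)); /* averaging mask */
        }
    }
}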

An algorithm for object detection, which fits the captured frame to a 100x100 grid, is also under consideration. The computational complexity of such an algorithm may be unrealistically high for a low-power DSP, so the existing image processing algorithms stated above are implemented first to test the functionality of the processors in real time. This project considers the computational load of decimation and low-pass filtering, and the ability of current DSPs to implement these functions in real time.

1.2 OBJECTIVES

The real-time image processing for iris recognition consists of the implementation of various image processing algorithms such as Gaussian noise filtering, edge detection, edge enhancement, center-of-pupil calculation and iris code generation. These computations may have a high level of complexity in real time, and hence the use of a digital signal processor for the implementation of such algorithms is proposed.

1.3 PROJECT DESCRIPTION

Filtering and decimation are done simultaneously in the convolution stage. The algorithm implements edge enhancement and edge detection. Edge enhancement is done using a Laplacian filter mask: the edge image obtained with the Laplacian mask is scaled and added to the original image to enhance the edges. The images obtained after convolving the original image with each of these masks are added to give the edge-detected image. The mean is calculated for the edge-detected image, and thresholding is then done to establish the person's identity.
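A minimal sketch of the Laplacian-based edge enhancement described above, assuming an 8-bit 100x100 grayscale image; the mask coefficients and scale factor are illustrative choices, not values taken from the project code.

#include <stdint.h>

#define W 100
#define H 100

/* Enhance edges: convolve with a 3x3 Laplacian mask, scale the edge
 * response, and add it back to the original image. */
void laplacian_enhance(const uint8_t in[H][W], uint8_t out[H][W], float scale)
{
    /* A common 4-neighbour Laplacian mask (illustrative choice). */
    static const int mask[3][3] = { {  0, -1,  0 },
                                    { -1,  4, -1 },
                                    {  0, -1,  0 } };

    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
            int edge = 0;
            for (int j = -1; j <= 1; j++)
                for (int i = -1; i <= 1; i++)
                    edge += mask[j + 1][i + 1] * in[y + j][x + i];

            int v = in[y][x] + (int)(scale * edge); /* add scaled edges */
            out[y][x] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
        }
    }
}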

1.4 METHODOLOGY

Images captured by the current camera at any resolution can be converted to an image with 100 x 100 resolution. The image processing on these frames consists of edge detection, edge enhancement and centre-of-pupil calculation. The enhanced output is a 100x100 image. Currently, this algorithm is implemented on a TMS320C6713 DSP for fast person identification.

ANALYSIS

2.1 INTRODUCTION

To create or develop a new system, we first have to study the prior system and analyze the problems faced by its operators. System analysis is therefore about understanding such problems and proposing a new system in which they are rectified.

2.2 FEASIBILITY ANALYSIS

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are:

Economical feasibility
Technical feasibility
Social feasibility

1. Economical Feasibility
This study is carried out to check the economic impact the system will have on the organization. The amount of funds the company can pour into research and development of the system is limited, and the expenditures must be justified. The developed system is well within budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
2. Technical Feasibility
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, so that only minimal or no changes are required to implement it.

3. Social Feasibility
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends on the methods employed to educate users about the system and make them familiar with it. Their level of confidence must be raised so that they can also offer constructive criticism, which is welcomed, as they are the final users of the system.

2.3 ALGORITHMS

PROPOSED LIVENESS DETECTION PROCESS:

A two-stage iris liveness detection technique is used. The technique, depicted in Fig. 1, is motivated by the novel RGB/NIR hybrid sensor.

Fig 1. Workflow of the proposed liveness detection technique.

Fig 2. RGB/NIR hybrid camera for smartphones.
(a) Prototype device.
(b) Smartphone form factor optics.
(c) Hybrid sensor used in the prototype device - R, G, B and IR represent the Red, Green, Blue and Infra-Red sampling filters respectively.

The various steps of the proposed technique are described in the following
subsections.

1. RGB-NIR Image Acquisition

Once the user has pressed the wake-up or unlock key of the smartphone, the device starts acquiring eye images for further processing. State-of-the-art face detection, eye detection and eye tracking techniques are employed on the image stream in the visible wavelength to obtain a sequence of good-quality, in-focus eye regions. Once a good-quality eye region is detected, both RGB and NIR images of the eye region are acquired. The formation of the RGB and NIR images can then be modeled in terms of the sensor's sampling filters.

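As a rough illustration of how an RGB/NIR hybrid sensor frame might be separated into its colour and NIR planes, here is a hedged C sketch. It assumes a 2x2 repeating pattern in which the IR filter replaces one green site of a Bayer mosaic; the actual filter layout of the prototype sensor may differ.

#include <stdint.h>

/* Assumed 2x2 mosaic (illustrative):   R  G
 *                                      IR B
 * Extract a quarter-resolution NIR plane; a full de-mosaic of R, G
 * and B would interpolate the missing samples. */
void split_nir(const uint16_t *raw, int w, int h, uint16_t *nir)
{
    for (int y = 0; y < h; y += 2)
        for (int x = 0; x < w; x += 2)
            nir[(y / 2) * (w / 2) + x / 2] = raw[(y + 1) * w + x]; /* IR site */
}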
2. Multispectral Feature Extraction

This specific feature extraction technique is chosen because it represents the unique distribution of information across the various image planes of the hyperspectral image. The proposed technique is also found to be computationally inexpensive and generates a compact feature vector of 4096 elements.

3. Intermediate Decision Making

The training feature vectors for the live user are calculated during the iris enrollment stage. A one-class classifier is trained on these live feature vectors. Such a classifier is able to predict whether incoming images most likely belong to a live person or to a presentation attack. The deviation (error) d of the feature vector of a query image, Fq, from the distribution of the representative feature vectors in the trained model, Fdb, is computed from a Bayesian point of view. That is, the distance is measured as the square root of the entropy approximation to the logarithm of the evidence ratio when testing whether the query image can be represented by the same underlying distribution as the live images.
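The exact evidence-ratio computation is not spelled out here, so the following C sketch substitutes a simpler stand-in: a per-dimension squared deviation of the query vector from the mean and variance of the enrolled live vectors, thresholded to give a live/attack decision. All names and the threshold are illustrative assumptions.

#include <math.h>

#define FEAT_LEN 4096  /* feature vector length from the text */

/* Deviation of query q from a trained model given by per-dimension
 * mean[] and variance var[] of the live feature vectors (a simplified
 * stand-in for the evidence-ratio distance described above). */
double deviation(const double *q, const double *mean, const double *var)
{
    double d2 = 0.0;
    for (int i = 0; i < FEAT_LEN; i++) {
        double diff = q[i] - mean[i];
        d2 += diff * diff / (var[i] + 1e-9); /* guard against zero variance */
    }
    return sqrt(d2 / FEAT_LEN);
}

/* Returns 1 for "live", 0 for "presentation attack". */
int is_live(const double *q, const double *mean, const double *var)
{
    const double THRESHOLD = 3.0; /* illustrative, must be tuned */
    return deviation(q, mean, var) < THRESHOLD;
}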

IRIS RECOGNITION

3.1. Objectives:
The image processing in iris recognition consists of the implementation of various image processing algorithms such as spatial averaging, Gaussian noise filtering, edge detection, polar-to-rectangular coordinate conversion and generation of the iris code.

3.2. Project Description:

A BMP eye image is taken from the database system. First, the BMP image is converted into raw data; in order to reduce noise in the image, the image data is convolved with a Gaussian function. An edge detector such as Canny, Sobel or Laplace is used to obtain sharp edges; the algorithm implements edge enhancement and edge detection. Enhancement is done using the Laplacian filter mask: the edge image obtained with the Laplacian mask is scaled and added to the original image to enhance the edges. The images obtained after convolving the original image with each of these masks are added to give the edge-detected image. After edge detection, the center of the pupil is located; iterations are made at different angles, and the lower part of the iris is taken as the iris region. The same is done for different images to create a database.
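A small C sketch of the Gaussian smoothing step, assuming a 5x5 kernel built from a standard deviation of 2 (the value used later in this report); the fixed image size and border handling are illustrative assumptions.

#include <math.h>
#include <stdint.h>

#define W 100
#define H 100
#define K 5            /* kernel size */
#define SIGMA 2.0      /* standard deviation used in this report */

/* Convolve the image with a KxK Gaussian kernel built from SIGMA. */
void gaussian_smooth(const uint8_t in[H][W], uint8_t out[H][W])
{
    double kern[K][K], sum = 0.0;
    int r = K / 2;

    for (int j = -r; j <= r; j++)
        for (int i = -r; i <= r; i++) {
            kern[j + r][i + r] = exp(-(i * i + j * j) / (2.0 * SIGMA * SIGMA));
            sum += kern[j + r][i + r];
        }

    for (int y = r; y < H - r; y++)
        for (int x = r; x < W - r; x++) {
            double acc = 0.0;
            for (int j = -r; j <= r; j++)
                for (int i = -r; i <= r; i++)
                    acc += kern[j + r][i + r] * in[y + j][x + i];
            out[y][x] = (uint8_t)(acc / sum + 0.5); /* normalize and round */
        }
}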

Block Diagram:

BMP TO DATA -> GAUSSIAN NOISE -> EDGE DETECTION -> CENTRE OF PUPIL -> POLAR TO RECTANGLE -> THRESHOLD & MATCHING -> USER DISPLAY IN PC

Fig 3. Block Diagram

3.3. Implementation:
1. An eye grayscale image of 100 x 100 pixels is taken as input.
2. Spatial averaging is applied to smooth the image.
3. Edge detection is performed on the spatially averaged image.
4. The center of the pupil is located in the edge-detected image.
5. The polar image is converted to rectangular form, which gives the iris code.
6. The output is given using graphics, which displays the name of the user. (The overall flow is sketched below.)
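To make the flow concrete, here is a hedged C driver that strings the six steps together. Every function name and signature below is a placeholder for the corresponding module of this project, not the project's actual API.

#include <stdint.h>
#include <stdio.h>

#define W 100
#define H 100

/* Placeholder declarations for the project's processing stages. */
void gaussian_smooth(const uint8_t in[H][W], uint8_t out[H][W]);
void edge_detect(const uint8_t in[H][W], uint8_t out[H][W]);
void find_pupil_center(const uint8_t edges[H][W], int *cx, int *cy);
void polar_to_rect(const uint8_t img[H][W], int cx, int cy, uint8_t *iris_code);
int  match_code(const uint8_t *iris_code, char *user_name);

int main(void)
{
    static uint8_t img[H][W], smooth[H][W], edges[H][W];
    uint8_t iris_code[2 * W];   /* size is illustrative */
    char user[64];
    int cx, cy;

    /* 1. load 100x100 grayscale eye image into img[][] (omitted) */
    gaussian_smooth(img, smooth);               /* 2. spatial smoothing */
    edge_detect(smooth, edges);                 /* 3. edge detection    */
    find_pupil_center(edges, &cx, &cy);         /* 4. center of pupil   */
    polar_to_rect(smooth, cx, cy, iris_code);   /* 5. iris code         */
    if (match_code(iris_code, user))            /* 6. display user name */
        printf("Identified user: %s\n", user);
    else
        printf("No match found\n");
    return 0;
}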

IMPLEMENTATION AND RESULTS

This project is implemented in the C language and cross-compiled for the TMS320C6713 kit.

4.1 MODULES

4.1.1. Image acquisition

This step is one of the most important and deciding factors for obtaining a good result. A good, clear image eliminates the need for noise removal and also helps avoid errors in calculation. In this case, computational errors are avoided due to the absence of reflections and because the images have been taken from close proximity. This project uses images provided by CASIA (Institute of Automation, Chinese Academy of Sciences, http://www.sinobiometrics.com/). These images were taken solely for the purpose of iris recognition software research and implementation. Infra-red light was used for illuminating the eye, and hence the images do not contain specular reflections. The part of the computation that involves removal of errors due to reflections was hence not implemented.

HUMAN VISION SYSTEM:

THE HUMAN EYE:

The human vision system (HVS) behaves like a band-pass filter for spatial frequencies, with poor sensitivity to small color variations (a property exploited, for example, in image compression), yet it operates over 10 orders of magnitude of illumination. The retina is composed of two kinds of photoreceptors: about 100 million rods provide scotopic vision (low illumination, high resolution, black and white), while about 6.5 million cones in the center of the retina (the fovea) provide photopic vision (high illumination, low resolution, color). The combination of rods and cones (intermediate illumination) provides color vision with high resolution and high sensitivity to intermediate spatial frequencies (e.g. edges).

Fig 4. Human Eye

4.1.2. Image preprocessing

Due to computational ease, the image was scaled down by 60%. The image was filtered using a Gaussian filter, which blurs the image and reduces effects due to noise. The degree of smoothening is decided by the standard deviation, which is taken to be 2 in this case.

Fig 5. Canny edge image


Edge detection is followed by finding the boundaries of the iris and the pupil. Daugman proposed the use of the integro-differential operator to detect the boundaries and the radii. It is given by

\max_{(r, x_0, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2\pi r} \, ds \right|

This behaves as a circular edge detector by searching the gradient image along the boundary of circles of increasing radii. From the likelihood of all circles, the maximum sum is calculated and used to find the circle centers and radii.
The Hough transform is another way of detecting the parameters of geometric objects, and in this case it has been used to find the circles in the edge image. For every edge pixel, the points on the circles surrounding it at different radii are taken, and their weights are increased if they are edge points too; these weights are added to the accumulator array. Thus, after all radii and edge pixels have been searched, the maximum of the accumulator array is used to find the center of the circle and its radius. The Hough transform is performed for the iris outer boundary using the whole image, and then for the pupil only, instead of the whole eye, because the pupil is always inside the iris.
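A compact C sketch of the circular Hough voting described above, for a single known radius; real use would loop over a range of radii and keep a 3D accumulator. Image size, step size and types are illustrative assumptions.

#include <math.h>
#include <stdint.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define W 100
#define H 100

/* Vote into acc[][] for circle centers of radius r: every edge pixel
 * adds weight to all candidate centers lying at distance r from it. */
void hough_circle(const uint8_t edge[H][W], int r, unsigned acc[H][W])
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            if (!edge[y][x])
                continue;
            for (int t = 0; t < 360; t++) {   /* 1-degree steps */
                int cx = x + (int)(r * cos(t * M_PI / 180.0));
                int cy = y + (int)(r * sin(t * M_PI / 180.0));
                if (cx >= 0 && cx < W && cy >= 0 && cy < H)
                    acc[cy][cx]++;            /* accumulate votes */
            }
        }
    /* The (cy, cx) with the maximum votes is the circle center. */
}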

Fig 6. Normalization process

In the normalization mapping, r1 denotes the iris radius. The radial resolution was set to 100 and the angular resolution to 2400 pixels. For every pixel in the iris, an equivalent position is found on the polar axes. The normalized image was then interpolated to the size of the original image using the interp2 function. The parts of the normalized image which yield a NaN are divided by the sum to get a normalized value.

Fig 7. Unwrapping the iris

Encoding

The final process is the generation of the iris code. For this, the most discriminating feature in the iris pattern is extracted. Since ordinary Gabor filters under-represent high-frequency components (as discussed below), log-Gabor filters become the better choice. Log-Gabor filters are constructed using the transfer function

G(f) = \exp\left( \frac{-\left(\log(f/f_0)\right)^2}{2\left(\log(\sigma/f_0)\right)^2} \right)

where f_0 is the centre frequency and \sigma/f_0 controls the filter bandwidth (the sigmaOnf parameter below).

Since the attempt at implementing this function was unsuccessful, the Gabor-convolve function written by Peter Kovesi was used. It outputs a cell containing the complex-valued convolution results, of the same size as the input image. The parameters used for the function were:

nscale = 1
norient = 1
minwavelength = 3
mult = 2
sigmaOnf = 0.5
dThetaOnSigma = 1.5
Using the output of Gabor convolve, the iris code is formed by assigning 2 elements to each pixel of the image. Each element contains a value of 1 or 0 depending on the sign (+ or -) of the real and imaginary parts respectively. Noise bits are assigned to those elements whose magnitude is very small, and these are combined with the noisy part obtained from normalization. The generated iris code is shown in the appendix.
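A small C sketch of this quantization step, assuming the complex filter response is available as separate real and imaginary arrays; array names, the length parameter and the magnitude threshold are illustrative.

#include <math.h>
#include <stdint.h>

/* Quantize the complex filter response into the iris code: 2 bits per
 * pixel, one from the sign of the real part, one from the imaginary.
 * Responses with very small magnitude are flagged in the noise mask. */
void encode(const float *re, const float *im, int n,
            uint8_t *code, uint8_t *noise_mask, float eps)
{
    for (int i = 0; i < n; i++) {
        code[2 * i]     = re[i] > 0.0f;   /* real-part bit      */
        code[2 * i + 1] = im[i] > 0.0f;   /* imaginary-part bit */
        /* mark unreliable bits whose magnitude is below eps */
        noise_mask[2 * i] = noise_mask[2 * i + 1] =
            (sqrtf(re[i] * re[i] + im[i] * im[i]) < eps);
    }
}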

Code Matching

Comparison of the generated bit patterns is done to check whether two irises belong to the same person. The Hamming distance (HD) is calculated for this comparison. HD is a fractional measure of the number of bits disagreeing between two binary patterns. Since this code comparison uses the iris code data and the noisy mask bits, the modified form of the HD is given by

HD = \frac{1}{N - \sum_{k=1}^{N} Xn_k \lor Yn_k} \sum_{j=1}^{N} (X_j \oplus Y_j) \land \overline{Xn_j} \land \overline{Yn_j}

where Xj and Yj are the two iris codes, Xnj and Ynj are the corresponding noisy mask bits, and N is the number of bits in each template. Due to lack of time, this part of the code could not be implemented.
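A direct C translation of this formula, assuming bit-per-byte arrays for codes and masks (a bit-packed version using XOR and popcount would suit the DSP better); the names are illustrative.

/* Modified Hamming distance between iris codes X and Y of length n,
 * with noise masks Xn and Yn (1 marks an unreliable bit). */
double hamming_distance(const unsigned char *X, const unsigned char *Y,
                        const unsigned char *Xn, const unsigned char *Yn,
                        int n)
{
    int disagree = 0, usable = 0;

    for (int j = 0; j < n; j++) {
        if (Xn[j] || Yn[j])
            continue;               /* skip bits masked as noise */
        usable++;
        disagree += (X[j] != Y[j]); /* X_j XOR Y_j */
    }
    return usable ? (double)disagree / usable : 1.0;
}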
Iris patterns are unique to all individuals, and it has been found that the probability of finding two individuals with identical iris patterns is almost zero. Though there lies a problem in capturing the image, the great pattern variability and the stability over time make this a reliable security recognition system.

4.2. BACKGROUND

Alphonse Bertillon and Frank Burch were among the first to propose that iris patterns can be used for identification. In 1992, John Daugman was the first to develop iris identification software. Another important contribution was by R. Wildes et al., whose method differed in the process of iris code generation and also in the pattern matching technique. The Daugman system has been tested on a billion images, and the failure rate has been found to be very low. His systems are patented by Iriscan Inc. and are also in commercial use at Iridian Technologies, the UK National Physical Laboratory, British Telecom, etc.

4.3. OUTLINE

This thesis consists of six main parts, which are image acquisition, preprocessing, iris localization, normalization, encoding and iris code comparison. Each section describes the theoretical approach and is followed by how it is implemented. The report concludes with the experimental results in the appendix.

4.4. IRIS LOCALIZATION

The only part of the eye carrying distinctive information is the iris, which lies between the sclera and the pupil. Hence the next step is separating the iris from the eye image. The iris inner and outer boundaries are located by finding the edge image using the Canny edge detector.

The Canny detector mainly involves three steps: finding the gradient, non-maximum suppression and hysteresis thresholding. As proposed by Wildes, the thresholding for the eye image is performed in the vertical direction only, so that the influence of the eyelids is reduced. This reduces the number of pixels on the circle boundary, but with the use of the Hough transform, successful localization of the boundary can be obtained even with a few pixels absent. It is also computationally faster, since fewer boundary pixels enter the calculation.

Using the gradient image, the peaks are localized using non-maximum suppression. It works in the following manner: for a pixel imgrad(x,y) in the gradient image, given the orientation theta(x,y), the edge intersects two of its 8-connected neighbors. The point at (x,y) is a maximum if its value is not smaller than the values at the two intersection points.

The next step, hysteresis thresholding, eliminates weak edges below a low threshold, except those connected to an edge above a high threshold through a chain of pixels all above the low threshold. In other words, the pixels above a threshold T1 are separated first; these points are then marked as edge points only if their surrounding pixels are greater than another threshold T2. The threshold values were found by trial and error, and were obtained as 0.2 and 0.19.
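A simplified C sketch of hysteresis thresholding with the two thresholds quoted above; it uses an iterative promotion pass rather than an explicit stack, and gradient values are assumed normalized to [0,1].

#define W 100
#define H 100
#define T_HIGH 0.20f
#define T_LOW  0.19f

/* Mark strong edges, then repeatedly promote weak pixels (above T_LOW)
 * that touch an already-accepted edge, until no pixel changes. */
void hysteresis(const float grad[H][W], unsigned char edge[H][W])
{
    int changed = 1;

    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            edge[y][x] = grad[y][x] >= T_HIGH;   /* strong seeds */

    while (changed) {
        changed = 0;
        for (int y = 1; y < H - 1; y++)
            for (int x = 1; x < W - 1; x++) {
                if (edge[y][x] || grad[y][x] < T_LOW)
                    continue;
                for (int j = -1; j <= 1 && !edge[y][x]; j++)
                    for (int i = -1; i <= 1; i++)
                        if (edge[y + j][x + i]) {   /* connected to edge */
                            edge[y][x] = 1;
                            changed = 1;
                            break;
                        }
            }
    }
}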

There are a few problems with the Hough transform. Firstly, the threshold values have to be found by trial. Secondly, it is computationally intensive; this is improved by taking just the eight-way symmetric points on the circle for every search point and radius. The eyelashes were separated by thresholding, and those pixels were marked as noisy pixels, since they are not included in the iris code.

Fig 8. Image with boundaries

4.5 IMAGE NORMALIZATION

Once the iris region is segmented, the next stage is to normalize this part, to enable generation of the iris codes and their comparison. Since variations in the eye, like the optical size of the iris, the position of the pupil within the iris, and the iris orientation, change from person to person, it is required to normalize the iris image so that the representation is common to all, with similar dimensions.

The normalization process involves unwrapping the iris and converting it into its polar equivalent. It is done using Daugman's rubber sheet model. The center of the pupil is considered as the reference point, and a remapping formula is used to convert the points on the Cartesian scale to the polar scale:

I(x(r, \theta), y(r, \theta)) \rightarrow I(r, \theta)

with

x(r, \theta) = (1 - r)\, x_p(\theta) + r\, x_i(\theta)
y(r, \theta) = (1 - r)\, y_p(\theta) + r\, y_i(\theta)

where (x_p, y_p) and (x_i, y_i) are the coordinates of the pupil and iris boundaries along the direction \theta. A modified form of this model was used in this project.

Fig 9. Normalized iris image
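A hedged C sketch of the rubber-sheet unwrapping, assuming concentric pupil and iris circles (the modified model handles displaced centers); the radial resolution of 100 follows the text, while the angular sample count here is an illustrative choice.

#include <math.h>
#include <stdint.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define W 100
#define H 100
#define RADIAL_RES  100   /* radial resolution from the text   */
#define ANGULAR_RES 240   /* illustrative angular sample count */

/* Unwrap the iris annulus between pupil radius rp and iris radius ri
 * (assumed concentric at cx, cy) into a rectangular polar image. */
void unwrap_iris(const uint8_t img[H][W], int cx, int cy,
                 double rp, double ri,
                 uint8_t polar[RADIAL_RES][ANGULAR_RES])
{
    for (int a = 0; a < ANGULAR_RES; a++) {
        double theta = 2.0 * M_PI * a / ANGULAR_RES;
        for (int r = 0; r < RADIAL_RES; r++) {
            /* radius interpolated between pupil and iris boundary */
            double rad = rp + (ri - rp) * r / (RADIAL_RES - 1);
            int x = (int)(cx + rad * cos(theta) + 0.5);
            int y = (int)(cy + rad * sin(theta) + 0.5);
            polar[r][a] = (x >= 0 && x < W && y >= 0 && y < H)
                              ? img[y][x] : 0;   /* out of bounds -> noise */
        }
    }
}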

From the normalized iris image, the most discriminating feature in the iris pattern is extracted. Only the phase information in the pattern is used, because phase angles are assigned regardless of the image contrast. Amplitude information is not used, since it depends on extraneous factors. Extraction of the phase information, according to Daugman, is done using 2D Gabor wavelets. It determines which quadrant the resulting phasor lies in using the wavelet

h_{Re, Im} = \operatorname{sgn}_{Re, Im} \int_\rho \int_\phi I(\rho, \phi)\, e^{-i\omega(\theta_0 - \phi)}\, e^{-(r_0 - \rho)^2/\alpha^2}\, e^{-(\theta_0 - \phi)^2/\beta^2}\, \rho \, d\rho \, d\phi

where h has a real and an imaginary part, each having the value 1 or 0 depending on which quadrant the phasor lies in.

An easier way of using the Gabor filter is to break up the 2D normalized pattern into a number of 1D signals, which are then convolved with 1D Gabor wavelets.

Gabor filters are used to extract localized frequency information, but due to a few of their limitations, log-Gabor filters are more widely used for coding natural images. It was suggested by Field that the log filters (which use Gaussian transfer functions viewed on a logarithmic scale) can code natural images better than Gabor filters (viewed on a linear scale). Statistics of natural images indicate the presence of high-frequency components; since ordinary Gabor filters under-represent high-frequency components, the log filters become the better choice.

OUTPUT SCREENS

1. Browsing an image from the database
2. Converting it into a grayscale image
3. Identifying the edges
4. Generating the binary code for it
5. Comparing with the trained image

Final Output

Database

TESTING

PROCESS

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

6.1. TESTING TECHNIQUES

UNIT TESTING

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

INTEGRATION TESTING

Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

FUNCTIONAL TESTING

Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation and user manuals.

Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.

SYSTEM TESTING

System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

WHITE BOX TESTING

White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

BLACK BOX TESTING

Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

TEST STRATEGY AND APPROACH

Field testing will be performed manually, and functional tests will be written in detail.

Test objectives

All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.

Features to be tested

Verify that the entries are of the correct format.
No duplicate entries should be allowed.
All links should take the user to the correct page.

Acceptance Testing

User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Test Results

All the test cases mentioned above passed successfully. No defects were encountered.

CONCLUSION

This thesis presented a novel liveness detection technique to be used with iris recognition on smartphones. The technique relies on the capability of iris-biometrics-enabled smartphones to acquire RGB and NIR image pairs simultaneously. It has been demonstrated that harnessing the capabilities of the hybrid RGB/NIR acquisition workflow can provide robust liveness detection without requiring additional hardware or significant change to the acquisition workflow. A detailed system description and a suitable workflow were presented.

Initial proof-of-concept experiments have shown that the proposed technique is effective for detecting a range of presentation attacks, including a sophisticated attack based on a lifelike mannequin that duplicates the characteristics of a human face and eyes. The conclusion is that iris biometrics can be made more robust than other well-known biometrics such as fingerprint and face by taking advantage of the handheld acquisition flow and the latest visible/NIR CMOS image sensor technologies.

FUTURE ENHANCEMENTS

Future work will involve:

1. developing an improved database to verify this technique over an enlarged user group, and
2. introducing additional advanced attack scenarios.

