
Introduction

The authentication of a person's identity by means of his or her signature grew considerably toward the end of the last century. We want to know whether a contract or a cheque was signed by the authentic person or by a criminal. The handwritten signature is one of the most widely accepted personal attributes for identity verification. As a symbol of consent and authorization, especially with the prevalence of credit cards and bank cheques, the handwritten signature has long been a target of fraud. With the growing demand for faster and more accurate individual identification, the design of an automatic signature verification system therefore faces a real challenge.

Signature verification is different from character recognition, because a signature is often unreadable; it is essentially an image with particular curves that represent the writing style of the person. A signature is a special case of handwriting and is treated as a symbol. It is therefore both sensible and necessary to deal with a signature as a complete image, with a particular distribution of pixels representing a writing style, and not as a collection of letters and words [2]. The algorithm applied for off-line signature verification is typically a feature extraction method: features are extracted and adjusted from the original signatures, and these extracted features are used to distinguish between original and forged signatures.

The holographic signature is probably one of the most ancient ways to authenticate documents and express one's will, for example in constitutions and marriages. Even before other modern methods were invented, signatures were already culturally accepted and understood. It is important to make clear the difference between validation and authentication. Validation verifies that the information provided matches previously recorded data. Authentication checks identity against previously declared biographic information; it mainly verifies the identity that the person declares and does not infer identity from current data. There are several approaches to verifying identity, including iris and fingerprint scanning; however, handwriting is still the most prevalent and natural way to do it. Common security and validation approaches include the ones depicted in Table 1.

From the table it is simple to observe that high accuracy often comes with invasive approaches and at a high cost. The signature approach is non-invasive, and its false-positive rate is lower than that of fingerprint techniques; as a consequence, it is a good option for quick, lightweight validation. Furthermore, validation is a faster process than identification, which is why it is more commonly used. Biometric authentication has been widely used as a trusted security solution for protecting sensitive assets. These systems are not based on what a person possesses (a password, PIN or token) but on what the person is. Unlike physiological biometrics, authentication with the holographic signature is non-intrusive. For hundreds of years the signature has been approved by all cultures and civilizations with major social implications, and it has been accepted as legal evidence.

Nature of Handwritten Signatures

It is well known that no two signatures of the same person are exactly equal. Successive signatures by the same person will differ, and may also differ in scale and orientation. For example, signatures written with an unfamiliar pen and in an unaccustomed place are likely to differ from the normal signatures of an individual. Knowing that a signature is being written to be used for comparison can also produce a self-conscious, unnatural signature.

Some researchers suggest that a signature has at least three attributes: form, movement and variation. Since signatures are produced by moving a pen on paper, movement is perhaps the most important part of a signature. The movement of a signature is produced by the muscles of the fingers, hand, wrist and, for some writers, the arm, and these muscles are controlled by nerve impulses. Once a person is used to signing his or her signature, these nerve impulses are controlled by the brain without any particular attention to detail.

Basic HSV Methodology

Most HSV techniques use the following procedure for performance evaluation (a sketch of the procedure is given below):
1. Registration - obtain a number of signatures from each individual at enrolment or registration time (these are called sample signatures).
2. Pre-processing and building reference signature(s) - preprocess the sample signatures if required, compute the required features, and produce one or more reference signatures. Decide on the threshold that will be used in verifying a signature.
3. Test signature - when a user wishes to be authenticated, he or she presents the identification of the individual he or she claims to be and presents a signature (the test signature). Preprocess it as in step 2 and compute its features.
4. Comparison processing - the test signature is compared with the reference signature(s) based on the feature values, and the difference between the two is computed using one of the many existing (or specially developed) distance measures.
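As an illustration, the four steps above can be sketched in MATLAB-style code. The helper extract_features, the file names, the number of features and the threshold value are placeholders for whatever a particular system uses; this is not the implementation of any specific published method.

% Minimal sketch of the basic HSV procedure (illustrative only).
% extract_features() stands for whatever feature extractor the system uses.

% 1. Registration: collect sample signatures for a user at enrolment.
samples = {imread('sig1.png'), imread('sig2.png'), imread('sig3.png')};

% 2. Build the reference: average the feature vectors of the samples
%    and choose an acceptance threshold.
F = zeros(numel(samples), 4);              % assume 4 features per signature
for k = 1:numel(samples)
    F(k,:) = extract_features(samples{k}); % hypothetical helper function
end
reference = mean(F, 1);
threshold = 0.15;                          % illustrative value

% 3. Test signature: preprocess and extract the same features.
test = extract_features(imread('test_sig.png'));

% 4. Comparison: a distance between test and reference decides the outcome.
d = norm(test - reference) / norm(reference);
if d <= threshold
    disp('Signature accepted as genuine');
else
    disp('Signature rejected as forgery');
end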

Overview of signature verification systems

For centuries, handwritten signatures have been an integral part of business transactions, contracts and agreements. The distinctiveness of a handwritten signature helps to prove the identity of the signer, while the act of signing a document represents the signer's acceptance of its terms and also codifies the document's contents as being official and complete at the time it was signed [3]. Handwritten signature verification can be divided into on-line and off-line verification.

On-line verification refers to a process in which the signer uses a special pen called a stylus to create his or her signature, producing pen locations, speeds and pressures, while off-line verification deals only with signature images acquired by a scanner or a digital camera. In an off-line signature verification system, a signature is acquired as an image. This image represents a personal style of human handwriting, extensively described by graphometry. In such a system the objective is to detect different types of forgeries, which are related to intra- and inter-personal variability. The system should be able to overlook intra-personal variability and mark such signatures as original, and it should detect inter-personal variability and mark those signatures as forgeries [4, 5]. Compared to on-line signature verification systems, off-line systems are harder to design because many desirable characteristics such as the order of strokes, the velocity and other dynamic information are not available in the off-line case; the verification process has to rely entirely on the features that can be extracted from the trace of the static signature image. Although difficult to design, off-line signature verification is crucial for writer identification, as most financial transactions are still carried out on paper, so it becomes all the more essential to verify a signature for its authenticity. The design of any signature verification system generally requires the solution of five sub-problems: data acquisition, pre-processing, feature extraction, comparison and performance evaluation [3, 5].

Overview of forgery detection systems

Automatic examination of questioned signatures was introduced in the late 1960s with the advent of computers. As computer systems became more powerful and more affordable, designing an automatic forgery detection system became an active research subject. Most of the work in off-line forgery detection, however, has addressed random or simple forgeries rather than skilled or simulated forgeries. Before looking into the landmark contributions in the area of forgery detection, we first enumerate the types of forgeries [3].

Types of forgeries

There are three different types of forgeries to take into account. The forgeries involved in handwritten signatures have been categorized based on their characteristic features, and each type requires a different verification approach. The different types of forgeries and their variation from the original signature are shown in Fig. 1. The various kinds of forgeries can be classified into the following types:
1. Random forgery - the forger uses the name of the victim in his own style to create what is known as a simple or random forgery. This type accounts for the majority of forgery cases, although such forgeries are very easy to detect, even by the naked eye.
2. Unskilled forgery - the forger imitates the signature in his own style, without any knowledge of the spelling and without prior experience, after observing the signature closely for a while.
3. Skilled forgery - undoubtedly the most difficult of all forgeries, created by professional impostors or persons who have experience in copying signatures, either by tracing or by carefully imitating the signature.

Architecture

[Architecture diagram: capture signature -> validate signature]

3. OVERVIEW OF NEURAL NETWORK

In this work the challenge is to create a system able to recognize a handwritten signature and verify its authenticity. This poses a problem because we are trying to get the computer to solve a task whose solution falls outside the convention of writing a straightforward algorithmic procedure; the computer has to solve the problem using a different kind of approach. The feasible solution adopted here uses the concept of the neurons in the human brain, a concept familiar to medical practitioners.

How the Human Brain Works

Our understanding of how the neurons of the human brain work is still incomplete, but from what we understand so far a system with similar properties can be developed. In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.

Fig 1: Brain Neuron

3.1 Artificial Neuron

An artificial neuron is designed to simulate a brain neuron under limiting conditions, such as far smaller processing power than the human brain, and can be optimized to support two stages of operation:
1. Training, which uses several inputs to create an output.
2. Utilization, which uses the already trained network to solve a problem.

For each of these stages, a general rule governs the operation. This rule can be called the firing rule [5]; it accounts for much of the flexibility of a neural network. It simply defines how a neuron should respond to an input, irrespective of whether it has previously been trained on that input or not. A firing rule based on the Hamming distance principle is stated below. Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set). A pattern not in the collection then causes the node to fire if, on comparison, it has more input elements in common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught set. If there is a tie, the output remains in the undefined state.
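A minimal MATLAB sketch of this Hamming-distance firing rule; the taught pattern sets and the input pattern below are illustrative examples, not data from any real network.

% Firing rule based on Hamming distance (sketch).
% one_taught / zero_taught: rows are binary patterns the node was taught
% to fire (1) or not fire (0) on. x is a new, untaught input pattern.
one_taught  = [1 1 1; 1 0 1];      % illustrative 1-taught set
zero_taught = [0 0 0; 0 0 1];      % illustrative 0-taught set
x = [1 1 0];

% Hamming distance = number of differing elements; a smaller distance
% means more elements in common with that pattern.
d1 = min(sum(one_taught  ~= x, 2));   % distance to nearest 1-taught pattern
d0 = min(sum(zero_taught ~= x, 2));   % distance to nearest 0-taught pattern

if d1 < d0
    out = 1;        % fires: closer to the 1-taught set
elseif d0 < d1
    out = 0;        % does not fire: closer to the 0-taught set
else
    out = NaN;      % tie: the output stays in the undefined state
end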

Back Propagation Artificial Neural Network

Several algorithms can be used to train an artificial neural network, but back propagation was chosen because it is probably the easiest to implement while preserving the efficiency of the network. A back propagation artificial neural network (ANN) uses more than one layer (usually three), and each layer is one of the following:
Input layer: this layer holds the input for the network.
Output layer: this layer holds the output data, usually an identifier for the input.
Hidden layer(s): these layers come between the input layer and the output layer and serve as a propagation point for sending data from the previous layer to the next.

A typical back propagation ANN is depicted below; the black nodes (on the extreme left) are the initial inputs. Training such a network involves two phases. In the first phase, the inputs are propagated forward to compute the output of each output node; each of these outputs is then subtracted from its desired output, yielding an error for each output node. In the second phase, each of these output errors is passed backward and the weights are adjusted. The two phases are repeated until the sum of squares of the output errors reaches an acceptable value. Each neuron is composed of two units: the first adds the products of the weight coefficients and the input signals, while the second applies a nonlinear function, called the neuron activation function, to that sum. A sketch of this two-phase training loop is given below.
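A compact MATLAB sketch of the two training phases described above, for a single hidden layer with sigmoid activation; the network size, learning rate and training data are placeholders, not parameters of the system proposed later in this report.

% Two-phase back propagation training sketch (one hidden layer, sigmoid units).
X = rand(8, 3);  T = rand(8, 1);            % placeholder inputs and targets
nin = size(X,2); nhid = 5; nout = size(T,2);
W1 = randn(nin,  nhid) * 0.1;               % input  -> hidden weights
W2 = randn(nhid, nout) * 0.1;               % hidden -> output weights
lr = 0.5;                                   % learning rate
sig = @(z) 1 ./ (1 + exp(-z));              % neuron activation function

for epoch = 1:5000
    % Phase 1: forward propagation to compute the outputs.
    H = sig(X * W1);
    Y = sig(H * W2);
    E = T - Y;                              % error at each output node

    % Phase 2: propagate errors backward and adjust the weights.
    dY = E .* Y .* (1 - Y);
    dH = (dY * W2') .* H .* (1 - H);
    W2 = W2 + lr * H' * dY;
    W1 = W1 + lr * X' * dH;

    if sum(E(:).^2) < 1e-3, break; end      % stop at an acceptable error
end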

MATLAB

MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran. Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities. An additional package, Simulink, adds graphical multi-domain simulation and Model-Based Design for dynamic and embedded systems. In 2004, MATLAB had around one million users across industry and academia.[2] MATLAB users come from various backgrounds in engineering, science, and economics. MATLAB is widely used in academic and research institutions as well as industrial enterprises.

History

Cleve Moler, the chairman of the computer science department at the University of New Mexico, started developing MATLAB in the late 1970s.[3] He designed it to give his students access to LINPACK and EISPACK without them having to learn Fortran. It soon spread to other universities and found a strong audience within the applied mathematics community. Jack Little, an engineer, was exposed to it during a visit Moler made to Stanford University in 1983. Recognizing its commercial potential, he joined with Moler and Steve Bangert. They rewrote MATLAB in C and founded MathWorks in 1984 to continue its development. These rewritten libraries were known as JACKPAC.[4] In 2000, MATLAB was rewritten to use a newer set of libraries for matrix manipulation, LAPACK.[5] MATLAB was first adopted by researchers and practitioners in control engineering, Little's specialty, but quickly spread to many other domains. It is now also used in education, in particular for teaching linear algebra and numerical analysis, and is popular amongst scientists involved in image processing.[3]

Syntax

The MATLAB application is built around the MATLAB language, and most use of MATLAB involves typing MATLAB code into the Command Window (as an interactive mathematical shell) or executing text files containing MATLAB code and functions.[6]

Variables

Variables are defined using the assignment operator, =. MATLAB is a weakly typed programming language because types are implicitly converted.[7] It is also a dynamically typed language: variables can be assigned without declaring their type, except if they are to be treated as symbolic objects,[8] and their type can change. Values can come from constants, from computation involving values of other variables, or from the output of a function. For example:

>> x = 17
x =
    17
>> x = 'hat'
x =
hat

Vectors/matrices

As suggested by its name (a contraction of "matrix laboratory"), MATLAB can create and manipulate arrays of 1 (vectors), 2 (matrices), or more dimensions. In the MATLAB vernacular, a vector refers to a one-dimensional (1xN or Nx1) matrix, commonly referred to as an array in other programming languages. A matrix generally refers to a two-dimensional array, i.e. an m x n array where m and n are greater than 1. Arrays with more than two dimensions are referred to as multidimensional arrays. Arrays are a fundamental type, and many standard functions natively support array operations, allowing work on arrays without explicit loops.

A simple array is defined using the syntax init:increment:terminator. For instance:

>> array = 1:2:9
array =
     1     3     5     7     9

defines a variable named array (or assigns a new value to an existing variable with the name array) which is an array consisting of the values 1, 3, 5, 7, and 9. That is, the array starts at 1 (the init value), increments from the previous value by 2 (the increment value), and stops once it reaches (or would exceed) 9 (the terminator value).

>> array = 1:3:9
array =
     1     4     7

The increment value can be left out of this syntax (along with one of the colons) to use a default value of 1:

>> ari = 1:5
ari =
     1     2     3     4     5

assigns to the variable named ari an array with the values 1, 2, 3, 4, and 5, since the default value of 1 is used as the increment. Indexing is one-based,[9] which is the usual convention for matrices in mathematics, although not for some programming languages such as C, C++, and Java.

Matrices can be defined by separating the elements of a row with blank space or commas and using a semicolon to terminate each row. The list of elements should be surrounded by square brackets []. Parentheses () are used to access elements and subarrays (they are also used to denote a function argument list).

>> A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
A =
    16     3     2    13
     5    10    11     8
     9     6     7    12
     4    15    14     1
>> A(2,3)
ans =
    11

Sets of indices can be specified by expressions such as 2:4, which evaluates to [2, 3, 4]. For example, a submatrix taken from rows 2 through 4 and columns 3 through 4 can be written as:

>> A(2:4,3:4)
ans =
    11     8
     7    12
    14     1

A square identity matrix of size n can be generated using the function eye, and matrices of any size filled with zeros or ones can be generated with the functions zeros and ones, respectively.

>> zeros(2,3)
ans =
     0     0     0
     0     0     0

Structures

MATLAB has structure data types. Since all variables in MATLAB are arrays, a more adequate name is "structure array", where each element of the array has the same field names. In addition, MATLAB supports dynamic field names (field look-ups by name, field manipulations, etc.). Unfortunately, the MATLAB JIT does not support MATLAB structures, so simply bundling various variables into a structure comes at a cost.

Function handles

MATLAB supports elements of lambda calculus by introducing function handles, or function references, which are implemented either in .m files or as anonymous/nested functions.

Classes

Although MATLAB has classes, the syntax and calling conventions are significantly different from other languages. MATLAB has value classes and reference classes, depending on whether the class has handle as a superclass (for reference classes) or not (for value classes). Method call behaviour differs between value and reference classes. For example, a call to a method

object.method();

can alter any member of object only if object is an instance of a reference class.
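A brief illustration of the structure-array, dynamic-field and function-handle features mentioned above; the field names and data are arbitrary examples.

% Structure array with dynamic field access.
s(1).name = 'alice';  s(1).score = 0.91;
s(2).name = 'bob';    s(2).score = 0.87;
field = 'score';
s(2).(field)                 % dynamic field name look-up, returns 0.8700

% Anonymous function handle (an element of lambda calculus in MATLAB).
sq = @(x) x.^2;
sq(1:4)                      % returns [1 4 9 16]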

Release history

Version[22]   | Release name | Number | Bundled JVM | Release date       | Notes
MATLAB 1.0    |              |        |             | 1984               |
MATLAB 2      |              |        |             | 1986               |
MATLAB 3      |              |        |             | 1987               |
MATLAB 3.5    |              |        |             | 1990               | Ran on MS-DOS but required at least a 386 processor. Version 3.5m required a math coprocessor.
MATLAB 4      |              |        |             | 1992               |
MATLAB 4.2c   |              |        |             | 1994               | Ran on Windows 3.1. Required a math coprocessor.
MATLAB 5.0    | Volume 8     |        |             | December 1996      | Unified releases across all platforms.
MATLAB 5.1    | Volume 9     |        |             | May 1997           |
MATLAB 5.1.1  | R9.1         |        |             |                    |
MATLAB 5.2    | R10          |        |             | March 1998         |
MATLAB 5.2.1  | R10.1        |        |             |                    |
MATLAB 5.3    | R11          |        |             | January 1999       |
MATLAB 5.3.1  | R11.1        |        |             | November 1999      |
MATLAB 6.0    | R12          | 12     | 1.1.8       | November 2000      | First release with bundled Java Virtual Machine (JVM).
MATLAB 6.1    | R12.1        |        | 1.3.0       | June 2001          |
MATLAB 6.5    | R13          |        | 1.3.1       | July 2002          |
MATLAB 6.5.1  | R13SP1       | 13     |             | 2003               |
MATLAB 6.5.2  | R13SP2       |        |             |                    |
MATLAB 7      | R14          | 14     | 1.4.2       | June 2004          |
MATLAB 7.0.1  | R14SP1       |        |             | October 2004       |
MATLAB 7.0.4  | R14SP2       |        | 1.5.0       | March 7, 2005      |
MATLAB 7.1    | R14SP3       |        | 1.5.0       | September 1, 2005  |
MATLAB 7.2    | R2006a       | 15     | 1.5.0       | March 1, 2006      |
MATLAB 7.3    | R2006b       | 16     | 1.5.0       | September 1, 2006  | HDF5-based MAT-file support.
MATLAB 7.4    | R2007a       | 17     | 1.5.0_07    | March 1, 2007      |
MATLAB 7.5    | R2007b       | 18     | 1.6.0       | September 1, 2007  | Last release for Windows 2000 and PowerPC Mac. License Server support for Windows Vista.[23]
MATLAB 7.6    | R2008a       | 19     | 1.6.0       | March 1, 2008      | New class-definition syntax.[24]
MATLAB 7.7    | R2008b       | 20     | 1.6.0_04    | October 9, 2008    |
MATLAB 7.8    | R2009a       | 21     | 1.6.0_04    | March 6, 2009      | First release for 32-bit and 64-bit Microsoft Windows 7.
MATLAB 7.9    | R2009b       | 22     | 1.6.0_12    | September 4, 2009  | First release for Intel 64-bit Mac, and last for Solaris SPARC.
MATLAB 7.9.1  | R2009bSP1    |        | 1.6.0_12    | April 1, 2010      |
MATLAB 7.10   | R2010a       | 23     | 1.6.0_12    | March 5, 2010      | Last release for Intel 32-bit Mac.
MATLAB 7.11   | R2010b       | 24     | 1.6.0_17    | September 3, 2010  |
MATLAB 7.11.1 | R2010bSP1    |        | 1.6.0_17    | March 17, 2011     |
MATLAB 7.12   | R2011a       | 25     | 1.6.0_17    | April 8, 2011      |
MATLAB 7.13   | R2011b       | 26     | 1.6.0_17    | September 1, 2011  |
MATLAB 7.14   | R2012a       | 27     | 1.6.0_17    | March 1, 2012      |
MATLAB 8      | R2012b       | 28     |             | September 11, 2012 | First release with Toolstrip interface.
MATLAB 8.1    | R2013a       | 29     |             | March 7, 2013      |

The Number (or release number) is the version reported by the FlexLM concurrent license manager.

File extensions

Native formats:
.fig - MATLAB figure
.m - MATLAB function, script, or class
.mat - MATLAB binary file for storing variables
.mex... - MATLAB executable (platform specific, e.g. ".mexmac" for the Mac, ".mexglx" for Linux, etc.)
.p - MATLAB content-obscured .m file (the result of the pcode() function)

What is Image Processing?

Image processing is a method of converting an image into digital form and performing operations on it in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or a set of characteristics associated with that image. Image processing systems usually treat images as two-dimensional signals and apply established signal processing methods to them. It is among the rapidly growing technologies of today, with applications in various aspects of business, and it forms a core research area within the engineering and computer science disciplines. Image processing basically includes the following three steps: importing the image with an optical scanner or by digital photography; analyzing and manipulating the image, which includes data compression, image enhancement and spotting patterns that are not visible to the human eye, as in satellite photographs; and output, the last stage, in which the result can be an altered image or a report based on the image analysis.

Purpose of Image Processing

The purpose of image processing can be divided into five groups:
1. Visualization - observe objects that are not visible.

2. Image sharpening and restoration - create a better image.
3. Image retrieval - seek the image of interest.
4. Measurement of pattern - measure various objects in an image.
5. Image recognition - distinguish the objects in an image.

Types

The two types of methods used for image processing are analog and digital image processing. Analog or visual techniques of image processing can be used for hard copies such as printouts and photographs. Image analysts use various fundamentals of interpretation while using these visual techniques; the processing is not confined to the area being studied but also depends on the knowledge of the analyst. Association is another important tool in image processing through visual techniques, so analysts apply a combination of personal knowledge and collateral data. Digital processing techniques help in manipulating digital images by using computers. Because raw data from the imaging sensors on a satellite platform contains deficiencies, it has to undergo various phases of processing to remove such flaws and recover the original information. The three general phases that all types of data undergo when using digital techniques are pre-processing, enhancement and display, and information extraction.

Introduction

This worksheet is an introduction to handling images in Matlab. When working with images in Matlab, there are many things to keep in mind, such as loading an image, using the right format, saving the data as different data types, displaying an image, converting between different image formats, etc. This worksheet presents some of the commands designed for these operations. Most of these commands require you to have the Image Processing Toolbox installed with Matlab. To find out whether it is installed, type ver at the Matlab prompt; this gives you a list of the toolboxes installed on your system.

Fundamentals

A digital image is composed of pixels, which can be thought of as small dots on the screen. A digital image is an instruction of how to color each pixel; we will see in detail later on how this is done in practice. A typical size of an image is 512-by-512 pixels, and later on in the course you will see that it is convenient to let the dimensions of the image be a power of 2, for example 2^9 = 512. In the general case we say that an image is of size m-by-n if it is composed of m pixels in the vertical direction and n pixels in the horizontal direction. Suppose we have an image of size 512-by-1024 pixels. This means that the data for the image must contain information about 524288 pixels, which requires a lot of memory! Hence, compressing images is essential for efficient image processing. You will later on see how Fourier analysis and wavelet analysis can help us compress an image significantly. There are also a few "computer scientific" tricks (for example entropy coding) to reduce the amount of data required to store an image.

Image formats supported by Matlab

The following image formats are supported by Matlab:

BMP, HDF, JPEG, PCX, TIFF, XWD

Most images you find on the Internet are JPEG images, JPEG being the name of one of the most widely used compression standards for images. If you have stored an image you can usually see from the suffix what format it is stored in: for example, an image named myimage.jpg is stored in the JPEG format, and we will see later that we can load an image of this format into Matlab.

Working formats in Matlab

If an image is stored as a JPEG image on your disc, we first read it into Matlab. However, in order to start working with an image, for example to perform a wavelet transform on it, we must convert it into a different format. This section explains four common formats.

Intensity image (gray scale image)

This is the equivalent of a "gray scale image" and this is the image we will mostly work with in this course. It represents an image as a matrix where every element has a value corresponding to how bright/dark the pixel at the corresponding position should be colored. There are two ways to represent the number that holds the brightness of a pixel. The double class (or data type) assigns a floating-point number ("a number with decimals") between 0 and 1 to each pixel; the value 0 corresponds to black and the value 1 corresponds to white. The other class, called uint8, assigns an integer between 0 and 255 to represent the brightness of a pixel; the value 0 corresponds to black and 255 to white. The class uint8 only requires roughly 1/8 of the storage compared to the

class double. On the other hand, many mathematical functions can only be applied to the double class. We will see later how to convert between double and uint8.

Binary image

This image format also stores an image as a matrix, but can only color a pixel black or white (and nothing in between). It assigns a 0 for black and a 1 for white.

Indexed image

This is a practical way of representing color images. (In this course we will mostly work with gray scale images, but once you have learned how to work with a gray scale image you will also know the principle of how to work with color images.) An indexed image stores an image as two matrices. The first matrix has the same size as the image and contains one number for each pixel. The second matrix is called the color map and its size may be different from the image. The numbers in the first matrix are instructions for which entry of the color map matrix to use.

RGB image

This is another format for color images. It represents an image with three matrices of sizes matching the image format. Each matrix corresponds to one of the colors red, green or blue and gives an instruction of how much of each of these colors a certain pixel should use.

Multiframe image

In some applications we want to study a sequence of images. This is very common in biological and medical imaging, where you might study a sequence of slices of a cell. For these cases, the multiframe format is a convenient way of working with a sequence of images. In case you choose to work with biological imaging later on in this course, you may use this format.
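Returning to the intensity-image classes above, a short example of moving between uint8 and double; it assumes the Image Processing Toolbox is installed and that a file named myimage.jpg exists.

I = imread('myimage.jpg');                % typically returned as uint8, values 0..255
if ndims(I) == 3, I = rgb2gray(I); end    % drop color only if the file is RGB
Id = im2double(I);                        % double class, values rescaled to 0..1
whos I Id                                 % compare storage: uint8 needs roughly 1/8 of double
I8 = im2uint8(Id);                        % convert back to uint8 when needed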

How to convert between different formats

The following table shows how to convert between the different formats given above. All these commands require the Image Processing Toolbox. Within the parentheses you type the name of the image you wish to convert.

Operation                                                        Matlab command
Convert intensity/indexed/RGB format to binary format            dither()
Convert intensity format to indexed format                       gray2ind()
Convert indexed format to intensity format                       ind2gray()
Convert indexed format to RGB format                             ind2rgb()
Convert a regular matrix to intensity format by scaling          mat2gray()
Convert RGB format to intensity format                           rgb2gray()
Convert RGB format to indexed format                             rgb2ind()

The command mat2gray is useful if you have a matrix representing an image whose values range between, let's say, 0 and 1000. The command mat2gray automatically rescales all entries so that they fall within 0 and 255 (if you use the uint8 class) or 0 and 1 (if you use the double class).
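For example, a raw matrix whose values range well outside the intensity scale can be rescaled like this; the matrix here is arbitrary test data.

M = magic(4) * 100;        % values between 100 and 1600, not a valid intensity image
I = mat2gray(M);           % rescaled so that min(M) -> 0 and max(M) -> 1 (double class)
imshow(I)                  % display the result as a gray scale image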

How to read files

When you encounter an image you want to work with, it usually comes in the form of a file (for example, if you download an image from the web, it is usually stored as a JPEG file). Once we are done processing an image, we may want to write it back to a JPEG file so that we can, for example, post the processed image on the web. This is done using the imread and imwrite commands. These commands require the Image Processing Toolbox.

Reading and writing image files

imread() - Read an image. (Within the parentheses you type the name of the image file you wish to read. Put the file name within single quotes ' '.)
imwrite( , ) - Write an image to a file. (As the first argument within the parentheses you type the name of the image you have worked with. As the second argument you type the name of the file and the format that you want to write the image to. Put the file name within single quotes ' '.)

Make sure to use a semicolon ; after these commands, otherwise you will get LOTS of numbers scrolling on your screen. The commands imread and imwrite support the formats given in the section "Image formats supported by Matlab" above.
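A typical read-process-write sequence with these commands might look like the following; the file names are examples, and the JPEG is assumed to be a color image.

I = imread('myimage.jpg');            % read the JPEG file into a uint8 array
G = rgb2gray(I);                      % convert to an intensity (gray scale) image
imwrite(G, 'myimage_gray.jpg');       % write the processed image back as a JPEG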

Loading and saving variables in Matlab

This section explains how to load and save variables in Matlab. Once you have read a file, you will probably convert it into an intensity image (a matrix) and work with this matrix. Once you are done you may want to save the matrix representing the image in order to continue working with it at another time. This is easily done using the commands save and load. Note that save and load are general Matlab commands and work independently of which toolboxes are installed.
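For instance, an intensity matrix can be stored in a .mat file and restored later; the file and variable names here are examples.

A = im2double(imread('myimage.jpg'));  % matrix representing the image
save('mywork.mat', 'A');               % store the variable A in mywork.mat
clear A                                % remove it from the workspace
load('mywork.mat');                    % A is back in the workspace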

LITERATURE SURVEY

According to Lei, one of the most reliable features is the shape of the signature; the next most reliable feature is the time of writing. Due to the lack of benchmark databases for on-line signatures, we will not argue the consistency of these features here but propose a novel similarity measure for signature verification. Given two signatures to compare, it is natural to ask "How similar are they?" or "What is their similarity?" It is intuitive to express the similarity as a value between 0% and 100%, and this value should make sense: for example, when we quantify the similarity of two signatures as 90%, they should be very close to each other objectively, even if it is subjective to say how similar they are. No matter what kind of features is extracted, such a similarity measure is unavoidable. Euclidean distance, DTW (Dynamic Time Warping) and other distances have only relative meaning: the distance itself cannot give us any information about similarity without being compared to other distances. We observed that R2 is a good similarity measure with an intuitive meaning: given two sequences, R2 expresses their similarity as a value between 0% and 100%. This kind of similarity measure is very useful for signature verification, especially since only a few genuine signatures are available in practice. However, R2 comes from SLR (Simple Linear Regression), which traditionally measures two 1-D sequences. We extend R2 to ER2 for multidimensional sequence matching, and the optimal alignment by DTW is coupled into ER2 to enhance the robustness of signature verification. We first provide the background of SLR and R2, then extend 1-D R2 to multidimensional ER2, and then combine DTW and ER2 together for signature verification.
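A minimal sketch of the R2 idea for two 1-D sequences, computed from a simple linear regression of one sequence on the other; this only illustrates the concept, not the ER2/DTW method of the cited work, and the sequences are synthetic placeholders.

% R^2 between two 1-D sequences x and y via simple linear regression y ~ p(2) + p(1)*x.
x = cumsum(randn(1, 100));             % placeholder signature trajectories
y = 0.9 * x + 0.1 * randn(1, 100);

p     = polyfit(x, y, 1);              % fit y = p(1)*x + p(2)
yhat  = polyval(p, x);
SSres = sum((y - yhat).^2);            % residual sum of squares
SStot = sum((y - mean(y)).^2);         % total sum of squares
R2    = 1 - SSres / SStot;             % between 0 and 1; report as a percentage
fprintf('similarity = %.1f%%\n', 100 * R2);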

LITERATURE REVIEW

Signature verification systems differ both in their feature selection and in their decision-making methodologies. The features can be categorized into two types: global and local. Global features are those related to the signature as a whole, including the average signing speed, the signature bounding box and the signing duration; the frequency-domain features studied in this work are also examples of global features. Local features, on the other hand, are extracted at each point or segment along the trajectory of the signature. Examples of local features include the distance and curvature change between successive points on the signature trajectory and our piecewise AR model. The decision methodology depends on whether global or local features are used. Even the signatures of the same person may have different signing durations due to the variability in signing speed. The advantage of global features is that there is a fixed number of measurements per signature, regardless of the signature length, making the comparison easier. When local features are used, one needs methods that are suitable for comparing feature vectors of different sizes.

The use of frequency-domain system identification methods for online signature verification has not been extensively considered, as it studies this problem from a quite different perspective, though some related techniques have been proposed. In one approach the signature is normalized to a fixed-length vector of 1024 complex numbers that encodes the x and y coordinates of the points on the signature trajectory; after performing an FFT, the 15 Fourier descriptors with the largest magnitude are chosen as the features. The system is tested using a very small signature dataset (8 genuine signatures of the same user and 152 forgeries provided by 19 forgers), achieving a 2.5% error rate. In [5], the authors also use the Fourier transform and propose alternatives for the preprocessing, normalization and matching stages. The system is tested on a large database (a dataset of around 1500 signatures collected from 94 subjects) and achieves a 10% equal error rate for verification. In our project, while the Fourier transform is also used, we explore this area more deeply with different types of normalization, feature extraction and decision-making methods adapted from system identification. Many research works on signature verification have been reported. Researchers have applied many technologies, such as neural networks and parallel processing, to the problem of signature verification, and they are continually introducing new ideas, concepts and algorithms. Other approaches have been proposed and evaluated in the context of random forgeries, such as 2D transforms, histograms of directional data or curvature, horizontal and vertical projections of the writing trace of the signature, structural approaches, local measurements made on the writing trace of the signature, and the position of feature points located on the skeleton of the signature. Following Plamondon et al., a handwritten signature is the result of a rapid movement. Hence, the shape of the signature remains relatively the same over time when the signature is written down on a pre-established frame (context) such as a bank check. This physical constraint contributes to the relative time-invariance of the signatures, which supports using only static shape information to verify signatures. Some other solutions in the case of random forgeries are mainly based on the use of global shape descriptors such as the shadow code, investigated by Sabourin et al. Other approaches using global shape descriptors such as shape envelope projections on the coordinate axes, geometric moments, or even more general global features such as area, height and width, have been widely investigated.

In offline signature verification, models have also been compared based on HMMs. One approach employs a three-expert system that evaluates the signature in three different ways and judges it as genuine, forgery, or rejection by a majority vote of the three experts. In another, a signature verification system is presented that works with both static and dynamic features, and the authors infer that the shape similarity and the causality of signature generation are more important than matching the dynamics of signing. This result indicates that the dynamics are not stable enough to be used for signature verification, since the subject is trying to reproduce a shape rather than a temporal pattern. This is why we use, in this paper, only static images to verify signatures.

The problem of automatic signature verification has received much attention in recent years because of its potential applications in banking transactions and security systems. Cavalcanti et al. [8] investigate feature selection for signature identification, using structural features, pseudo-dynamic features and five moments. Ozgunduz et al. [9] present an off-line signature verification and recognition method using global, directional and grid features, and show that an SVM classifier performs better than an MLP for their proposed method. Mohamadi [10] presents a Persian offline signature identification system using Principal Component Analysis (PCA) and a Multilayer Perceptron (MLP) neural network. Sigari and Pourshahabi [11], [12] propose a method for signature identification based on the Gabor Wavelet Transform (GWT) as the feature extractor and a Support Vector Machine (SVM) as the classifier. In their study, after size normalization and noise removal, a virtual grid is placed on the signature image and Gabor coefficients are computed at each point of the grid. All Gabor coefficients are then fed into a layer of SVM classifiers as the feature vector. The number of SVM classifiers is equal to the number of classes, and each SVM classifier determines whether the input image belongs to the corresponding class or not (one-against-all method). Two experiments on two signature sets were carried out, achieving an identification rate of 96% on a Persian signature set and more than 93% on a Turkish signature set; the Persian signature set was the same set used in [10]. Coetzer [3] uses the Discrete Radon Transform as a global feature extractor and a Hidden Markov Model in a new signature verification algorithm. In their proposed method, the Discrete Radon Transform is calculated at angles ranging from 0 to 360 degrees, and each observation sequence is then modeled by an HMM whose states are organized in a ring; one HMM is used to model and verify the signatures of each writer. Their system is rotation invariant and robust with respect to moderate levels of noise. Using a dataset of 924 signatures from 22 writers, their system achieves an equal error rate (EER) of 18% when only high-quality (skilled) forgeries are considered and an EER of 4.5% in the case of only casual forgeries; these signatures were originally captured offline. Using another dataset of 4800 signatures from 51 writers, their system achieves an EER of 12.2% when only skilled forgeries are considered; these signatures were originally captured online and then digitally converted into static signature images.

Proposed System

The proposed system consists of three major tasks: image pre-processing, feature extraction, and neural network training for pattern recognition.

1. Image Pre-processing: After an image is acquired it goes through different levels of processing before being passed to the next step, i.e. feature extraction. Image pre-processing is an important task for the following reasons:

It creates a level of similarity in terms of general features of a signature image, such as size and sharpness. This helps to compare two signature images easily.

A signature can vary according to the writing tools used; various other factors, such as the pen or pencil, the ink and the pressure of the hand, can also vary the signature. In off-line signature recognition these variations are not important and have to be eliminated; instead, we extract features for matching two signatures.

Image pre-processing reduces noise and defects and enhances the image. It also improves the quality of the image information.

The techniques used in image pre-processing are:
i. Grey scaling
ii. Thresholding
iii. Blurring
iv. Thinning
v. Cropping
(A MATLAB sketch of these pre-processing steps, together with the global features defined below, is given at the end of this section.)

2. Feature Extraction: Feature extraction is the next step in signature recognition and verification. It is an important step in a signature verification system, as it is the key to identifying and differentiating one user's signature from another's. If we want to compare two signatures, there should be some measurement on which the comparison can be based. In this step we generate features which are used as comparison measurements. As signature verification is a highly sensitive process, we need more than one feature to be extracted; more extracted features contribute to enhancing the accuracy of the results. A feature is a characteristic of an image that can be measured with the help of designed algorithms, and these features can be retrieved later by extraction. Feature extraction involves two types of features:

Global feature.

Texture feature. We focus only on global features here.

Global Feature Extraction:
i. Signature height-to-width ratio: This feature is obtained by simply dividing the signature height by the signature width. The height-to-width ratio of one person's signature is approximately constant.
ii. Signature Area:

The signature area is obtained by counting the number of pixels belonging to the signature; this gives information about the signature density.
iii. Maximum horizontal and maximum vertical histogram: The horizontal histogram is calculated for each row; the row with the highest value gives the maximum horizontal histogram. Similarly, vertical histograms are calculated for each column, and the column with the highest value gives the maximum vertical histogram.
iv. Horizontal and vertical centre of the signature: For each column, the row indices of the black pixels are added to row_index_sum, and a counter is incremented for each black pixel encountered. Cx is then calculated as
Cx = row_index_sum / total black pixels encountered.
Similarly, column_index_sum is accumulated and
Cy = column_index_sum / total black pixels encountered.
Finally, the centre is obtained as
Center = (Cx + 1) * total columns in the signature + Cy.
v. Edge point number of the signature: An edge point is a pixel that has only one neighbour belonging to the signature in its 8-neighbour region.

3. Neural Network Training: These features are given to a trained neural network for pattern recognition. A neural network is a computational model inspired by the biological nervous system. It is an interconnected group of artificial neurons; the neurons are processing elements working in unison to solve a specific problem. Neural networks are known to be a very accurate and efficient technique for pattern recognition. A neural network is an application of artificial intelligence: the computer application is trained to think like a human being, or even better. Like human beings, neural networks adopt the idea of learning in order to achieve a task. The learning involves training on a large amount of data, which enables the network to create a pattern that will be used to verify signatures. Neural networks are very useful in discovering patterns that are difficult for humans to derive, and they can be used in applications where high security is required. Here we use a Multi-Layer Perceptron (MLP) neural network. This has a multi-layer feed-forward structure, where all the nodes of one layer have connections to all the nodes of the next layer, and so on, but nodes do not have connections to previous layers. For this purpose the network is trained as a back propagation neural network using the BP algorithm. The inputs to the trained neural network are the signature and its extracted features. The output layer consists of a single node which calculates the weighted sum of the connections coming into it. The final output of the MLP neural network is a confidence value, which indicates the likelihood that the test signature is original or fraudulent. This confidence value is compared with a user-defined threshold value: if the confidence value is greater than the threshold value the signature is accepted, otherwise it is rejected.

4. Recognition: The trained neural network compares the features of a given signature with the features of the signatures in the database. The differences between these signatures are calculated, and the tag of the signature with the least difference is returned together with a number that shows the percentage of similarity. Based on the percentage of similarity, the decision is taken on whether the signature is original or not:
i. The signature is considered original if the percentage of similarity ranges between 85% and 100%. This allows for the natural differences in the signature of a single person across multiple attempts.
ii. The signature is considered relatively doubtable if the percentage of similarity ranges between 75% and 85%.
iii. The signature is considered highly doubtable if the percentage of similarity is lower than 75%.
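The sketch below illustrates the pre-processing steps and global features described in this section, assuming the Image Processing Toolbox; the file name, the placeholder confidence value and the exact choice of filters are illustrative, not the exact implementation of the proposed system.

% --- Pre-processing (grey scaling, thresholding, blurring, thinning, cropping) ---
I  = imread('test_signature.png');
if ndims(I) == 3, I = rgb2gray(I); end        % grey scaling
I  = imgaussfilt(I, 1);                       % light blurring to suppress noise
bw = imbinarize(imcomplement(I));             % thresholding; signature pixels become 1
bw = bwmorph(bw, 'thin', Inf);                % thinning to a one-pixel skeleton
[r, c] = find(bw);
bw = bw(min(r):max(r), min(c):max(c));        % cropping to the bounding box

% --- Global features (items i-iv above) ---
[h, w]   = size(bw);
ratio    = h / w;                             % i.  height-to-width ratio
area     = sum(bw(:));                        % ii. signature area (signature pixel count)
maxHhist = max(sum(bw, 2));                   % iii. maximum horizontal histogram (per row)
maxVhist = max(sum(bw, 1));                   %      maximum vertical histogram (per column)
Cx = mean(r - min(r) + 1);                    % iv. horizontal centre of signature
Cy = mean(c - min(c) + 1);                    %     vertical centre of signature

features = [ratio, area, maxHhist, maxVhist, Cx, Cy];

% --- Recognition thresholds applied to the network's similarity output ---
confidence = 100 * rand;                      % placeholder for the trained MLP's output
if confidence >= 85
    disp('Signature considered original');
elseif confidence >= 75
    disp('Signature considered relatively doubtable');
else
    disp('Signature considered highly doubtable');
end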

CONCLUSIONS

Signatures in off-line systems are based entirely on the image of the signature, which contains less discriminative information as it is often contaminated with noise, either from the scanning hardware or from the paper background. A novel approach for off-line signature verification has been proposed and implemented using a back propagation neural network algorithm. The proposed system is based on global and texture features. Features exhibiting good performance are considered, and a near-optimal solution using a blend of such features, in terms of high verification accuracy and time efficiency, is derived. The system was tested against genuine and forged samples; the values of FAR and FRR were observed and the results look very promising. Although the operations used in obtaining the features are computationally expensive, they are adopted in order to get good results. The performance of the system is satisfactory; however, the system has to be tested on many more samples (obtained from real-life data from a bank or a similar organization).
