
Abstract

We propose a view-based approach to recognize humans from their gait. Two different image features have been considered: the width of the outer contour of the binarized silhouette of the walking person and the entire binary silhouette itself. To obtain the observation vector from the image features, we employ two different methods. In the first method, referred to as the indirect approach, the high-dimensional image feature is transformed to a lower dimensional space by generating what we call the frame-to-exemplar (FED) distance. The FED vector captures both structural and dynamic traits of each individual. For compact and effective gait representation and recognition, the gait information in the FED vector sequences is captured in a hidden Markov model (HMM). In the second method, referred to as the direct approach, we work with the feature vector directly (as opposed to computing the FED) and train an HMM. We estimate the HMM parameters (specifically the observation probability) based on the distance between the exemplars and the image features. In this way, we avoid learning high-dimensional probability density functions. The statistical nature of the HMM lends overall robustness to representation and recognition.
Introduction
Kinetic biometrics centre on supposedly innate, unique and stable muscle actions such as the way an individual walks, talks, types or even grips a tool. Those so-called behavioural measures have been criticised as simply too woolly for effective one-to-one matching, given concerns that they are not stable (for example, they are affected by age or by externals such as an individual's health or tiredness on a particular day), are not unique, or are simply too hard to measure in a standard way outside the laboratory (with, for example, an unacceptably high rate of false rejections or matches because of background noise or poor installation of equipment).
Proponents have responded that such technologies are non-intrusive, are as effective as other biometrics, or should be used for basic screening (for example, identifying 'suspects' requiring detailed examination) rather than verification.
Signature verification (i.e. comparing a 'new' signature or signing with previously enrolled reference information) takes two forms: dynamic signature verification and analysis of a static signature that provides inferential information about how the paper was signed. It can be conducted online or offline.
Dynamic Signature Verification (DSV) is based on how an individual signs a document - the mechanics of how the person wields the pen - rather than scrutiny of the ink on the paper.
Advocates have claimed that it is the biometric with which people are most comfortable (because signing a letter, contract or cheque is common) and that, although a forger might be able to achieve the appearance of someone's signature, it is impossible to duplicate the unique 'how' of an individual's signing. Critics have argued that it provides a blurry measure, with an inappropriate percentage of false rejects and acceptances.
DSV schemes typically measure speed, pen pressure, stroke direction, stroke length and the points in time when the pen is lifted from the paper or pad. Some schemes require the individual to enrol and thereafter sign on a special digital pad with an inkless pen. Others involve signing with a standard pen on paper that is placed over such a pad. More recently there have been trials involving three-dimensional imaging of the way that the individual grasps the pen and moves it across the paper in signing, a spinoff of some of the facial biometric schemes discussed earlier in this note.
In practice there appears to be substantial variation in how individuals sign their names or write other text (particularly affected by age, stress and health). Systems have encountered difficulties capturing and interpreting the data. In essence, the mechanics of signing are not invariant over time and there is uncertainty in matching.
Some signature proponents have accordingly emphasised static rather than dynamic analysis, examining what an image of a signature tells about how it was written. Typically this uses high-resolution imaging to identify how ink was laid down on the paper, comparing a reference signature with a new signature. In practice the technology does not perform on a real-time basis and arguably should not be regarded as a biometric, with proponents having sought the biometric label on an opportunistic basis for marketing or research funding.
DSV systems have reflected marketing to the financial sector and the research into handwriting recognition that has resulted in devices such as the Newton, Palm and Tablet personal computers. Although there are a large number of patents and systems are commercially available, uptake has disappointed advocates, with lower than expected growth and - more seriously - the abandonment of the technology by major users.
Keystroke dynamics uses the same principles as dynamic signature verification, offering a biometric based on the way an individual types at a keyboard.
In essence, the keystroke or 'typing rhythm' biometric seeks to provide a signature - i.e. a unique value - based on two time-based measures:
dwell time - the time that the individual holds down a specific key
flight time - the time spent between keys
with verification being provided through comparison with information captured during previous enrolment.
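As a rough illustration of these two measures (not part of the gait system built in this project; the class name and timestamps below are hypothetical), they could be computed from key press and release times as follows:

// Hypothetical sketch: dwell and flight times (in milliseconds) computed from
// key press/release timestamps. Names and values are illustrative only.
class KeystrokeTiming
{
    // dwell time - how long a specific key is held down
    static long dwellTime(long pressTime, long releaseTime)
    {
        return releaseTime - pressTime;
    }
    // flight time - the gap between releasing one key and pressing the next
    static long flightTime(long previousReleaseTime, long nextPressTime)
    {
        return nextPressTime - previousReleaseTime;
    }
    public static void main(String args[])
    {
        // e.g. a key pressed at t=1000 ms and released at t=1120 ms,
        // with the next key pressed at t=1250 ms
        System.out.println("dwell:  " + dwellTime(1000, 1120) + " ms");
        System.out.println("flight: " + flightTime(1120, 1250) + " ms");
    }
}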
Typically, development of that reference template involves several sessions in which the individual keys a page or more of text. Claims about its effectiveness differ; most researchers suggest that it is dependent on a substantial sample of text rather than merely keying a single sentence or three words.
It has been criticised as a crude measure that is biased towards those who can touch type and that is affected by variations in keyboards or even lighting and seating. As a behavioural measure it appears to be affected by factors such as stress and health. Proponents have argued that it is non-intrusive (indeed that both enrolment and subsequent identification may be done covertly) and that users have a higher level of comfort with keyboards than with eye scanning.
Recognition on the basis of how an individual walks has attracted interest from defence and other agencies for remote surveillance or infrared recordings of movement in an area under covert surveillance. The technology essentially involves dynamic mapping of the changing relationships of points on a body as that person moves.
Early work from the late 1980s built on earlier biomechanics studies. It centred on the 'stride pattern' of a sideways silhouette, with a few measurement points from the hip to the feet. More recent research appears to be encompassing people in the round and seeking to address the challenge of identification in adverse conditions (e.g. at night, amid smoke, or at such a distance that the image quality is very poor).
The effectiveness of the technology is affected by the availability and quality of reference and source data, computational issues and objectives. Mapping may be inhibited, for example, if images of people are obscured by others in a crowd or by architectural features; the latter is an issue because of the need to see the individual's body in motion. Variation because of tiredness, age and health (e.g. arthritis, a twisted ankle or a prosthetic limb), bad footwear and carrying objects may also degrade confidence in the results.
Proponents have claimed some non-military applications. A notable instance is the suggestion that it would aid in automated identification of female shoplifters who falsely claim to be pregnant, expectant mothers having a different walk to people who have a cache of purloined jumpers stuffed in their bloomers. As yet such suggestions don't appear to have wowed the market, arguably because of concerns about cost effectiveness and reliability.
Identification by voice rather than appearance has a long history in literature (as early as the 1930s), but automated identification was speculative until the 1990s. Development has largely been a spin-off of research into voice recognition systems, for example dictation software used for creating word-processed documents on personal computers and call centre software used for handling payments or queries.
Voice biometric systems essentially take two forms - verification and screening - and are based on variables such as pitch, dynamics and waveform. They are one of the least intrusive schemes and generally lack the negative connotations of eye scanning, DNA sampling or finger/palm print reading.
Voice recognition for verification typically involves speaking a previously enrolled phrase into a microphone, with a computer then analysing and comparing the two sound samples. It has primarily been used for perimeter management (including restrictions on access to corporate LANs) and for the verification of individuals interacting with payment or other systems by telephone.
Enrolment usually involves a reference template constructed by the individual repeatedly speaking a set phrase. Repetition allows the software to model a value that accommodates innate variations in speed, volume and intonation whenever the phrase is spoken by that individual.
Claims about the accuracy of commercial verification systems vary widely, from reported false accept and false reject rates of around 2% to rates of 18% or higher. Assessment of claims is inhibited by the lack of independent large-scale trials; most systems have been implemented by financial or other organisations that are reluctant to disclose details of performance.
Screening systems have featured in Hollywood and science fiction literature - with computers, for example, sampling all telephone traffic to identify a malefactor on the basis of a "voiceprint" that is supposedly as unique as a fingerprint - but have received less attention in the published research literature.
Reasons for caution about vendor and researcher claims include:
variations in hardware (the performance of microphones in telephones, gates and on personal computers differs perceptibly)
the performance of communication links (the sound quality of telephone traffic in parts of the world reflects the state of the wires and other infrastructure)
background noise
the individual's health and age
efforts to disguise a voice
the effectiveness of tests for liveness, with some verification schemes for example subverted by playing a recording of the voiceprint owner
Most perimeter management systems thus require an additional mechanism such as a password/PIN or access to a VPN.
These, then, are the various methods available for identification purposes; in this project we concentrate on the gait identification method.
1.1 Overview Of Project
GAIT refers to the style of walking of an individual. Often, in surveillance applications, it is difficult to get face or iris information at the resolution required for recognition. Studies in psychophysics indicate that humans have the capability of recognizing people from even impoverished displays of gait, indicating the presence of identity information in gait. From early medical studies, it appears that there are 24 different components to human gait and that, if all the measurements are considered, gait is unique. It is interesting, therefore, to study the utility of gait as a biometric. A gait cycle corresponds to one complete cycle from the rest (standing) position, to right foot forward, to rest, to left foot forward, back to the rest position. The movements within a cycle consist of the motion of the different parts of the body such as head, hands, legs, etc. The characteristics of an individual are reflected not only in the dynamics and periodicity of a gait cycle but also in the height and width of that individual. Given the video of an unknown individual, we wish to use gait as a cue to find who among the individuals in the database the person is. For a normal walk, gait sequences are repetitive and exhibit nearly periodic behavior. As gait databases continue to grow in size, it is conceivable that identifying a person only by gait may be difficult. However, gait can still serve as a useful filtering tool that allows us to narrow the search down to a considerably smaller set of potential candidates.
Approaches in computer vision to the gait recognition problem can be broadly classified as being either model-based or model-free. Both methodologies follow the general framework of feature extraction, feature correspondence and high-level processing. The major difference is with regard to feature correspondence between two consecutive frames. Methods which assume a priori models match the two-dimensional (2-D) image sequences to the model data. Feature correspondence is automatically achieved once matching between the images and the model data is established. Examples of this approach include the work of Lee et al., where several ellipses are fitted to different parts of the binarized silhouette of the person and the parameters of these ellipses, such as the location of its centroid, eccentricity, etc., are used as a feature to represent the gait of a person. Recognition is achieved by template matching. Cunado et al. extract a gait signature by fitting the movement of the thighs to an articulated pendulum-like motion model. The idea is somewhat similar to an early work by Murray, who modeled the hip rotation angle as a simple pendulum, the motion of which was approximately described by simple harmonic motion. In other work, activity-specific static parameters are extracted for gait recognition. Model-free methods establish correspondence between successive frames based upon the prediction or estimation of features related to position, velocity, shape, texture and color. Alternatively, they assume some implicit notion of what is being observed. Examples of this approach include the work of Huang et al., who use optical flow to derive a motion image sequence for a walk cycle. Principal components analysis is then applied to the binarized silhouette to derive what are called eigen gaits. Benabdelkader et al. use image self-similarity plots as a gait feature. Little and Boyd extract frequency and phase features from moments of the motion image derived from optical flow and use template matching to recognize different people by their gait.
A careful analysis of gait would reveal that it has two important components. The first is a structural component that captures the physical build of a person, e.g., body dimensions, length of limbs, etc. The second component is the motion dynamics of the body during a gait cycle. Our effort in this paper is directed toward deriving and fusing information from these two components. We propose a systematic approach to gait recognition by building representations for the structural and dynamic components of gait. The assumptions we use are: 1) the camera is static and the only motion within the field of view is that of the moving person, and 2) the subject is monitored by multiple cameras so that the subject presents a side view to at least one of the cameras. This is because the gait of a person is best brought out in the side view. The image sequence from the camera which produces the best side view is used. Our experiments were set up in line with the above assumptions.
We considered two image features, one being the width of the outer contour of the binarized silhouette and the other being the binary silhouette itself. A set of exemplars that occur during a gait cycle is derived for each individual. To obtain the observation vector from the image features we employ two different methods. In the indirect approach, the high-dimensional image feature is transformed to a lower dimensional space by generating the frame-to-exemplar (FED) distance. The FED vector captures both structural and dynamic traits of each individual. For compact and effective gait representation and recognition, the gait information in the FED vector sequences is captured using a hidden Markov model (HMM) for each individual. In the direct approach, we work with the feature vector directly and train an HMM for gait representation. The difference between the direct and indirect methods is that in the former the feature vector is directly used as the observation vector for the HMM, whereas in the latter the FED is used as the observation vector. In the direct method, we estimate the observation probability by an alternative approach based on the distance between the exemplars and the image features. In this way, we avoid learning high-dimensional probability density functions. The performance of the methods is tested on different databases.
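To make the indirect approach concrete, the sketch below shows how an FED observation could be formed for one frame: the distance from the frame's feature vector (width vector or flattened silhouette) to each of the N exemplars. The Euclidean distance and the names used here are assumptions for illustration, not the exact implementation used in this project.

// Illustrative sketch: FED observation vector of one frame, i.e. its distance
// to each of the N stance exemplars (Euclidean distance assumed).
class FedSketch
{
    static double euclidean(double a[], double b[])
    {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++)
        {
            double diff = a[i] - b[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }
    // frameFeature: feature vector of the current frame
    // exemplars[n]: the nth stance exemplar of a given person
    static double[] fedVector(double frameFeature[], double exemplars[][])
    {
        double fed[] = new double[exemplars.length];
        for (int n = 0; n < exemplars.length; n++)
        {
            fed[n] = euclidean(frameFeature, exemplars[n]);
        }
        return fed; // low-dimensional HMM observation, one entry per exemplar
    }
}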
2. Abstract
We propose a view-based approach to recognize humans from their gait. Two different image features have been considered: the width of the outer contour of the binarized silhouette of the walking person and the entire binary silhouette itself. To obtain the observation vector from the image features, we employ two different methods. In the first method, referred to as the indirect approach, the high-dimensional image feature is transformed to a lower dimensional space by generating what we call the frame-to-exemplar (FED) distance. The FED vector captures both structural and dynamic traits of each individual. For compact and effective gait representation and recognition, the gait information in the FED vector sequences is captured in a hidden Markov model (HMM). In the second method, referred to as the direct approach, we work with the feature vector directly (as opposed to computing the FED) and train an HMM. We estimate the HMM parameters (specifically the observation probability) based on the distance between the exemplars and the image features. In this way, we avoid learning high-dimensional probability density functions. The statistical nature of the HMM lends overall robustness to representation and recognition.
3. Description of the Problem
An important issue in gait recognition is the extraction of appropriate salient features that will effectively capture the gait characteristics. The features must be reasonably robust to operating conditions and should yield good discriminability across individuals. As mentioned earlier, we assume that the side view of each individual is available. Intuitively, the silhouette appears to be a good feature to look at as it captures the motion of most of the body parts. It also supports night vision capability, since it can be derived from IR imagery. While extracting this feature we are faced with two options:
1) Use the entire silhouette.
2) Use only the outer contour of the silhouette.
The choice between these two features depends upon the quality of the binarized silhouettes. If the silhouettes are of good quality, the outer contour retains all the information of the silhouette and allows a representation whose dimension is an order of magnitude lower than that of the binarized silhouette. However, for low quality, low resolution data, the extraction of the outer contour from the binarized silhouette may not be reliable. In such situations, direct use of the binarized silhouette may be more appropriate.
We choose the width of the outer contour of the silhouette as one of our feature vectors. In Fig. 1, we show plots of the width profiles of two different individuals for several gait cycles. Since we use only the distance between the left and right extremities of the silhouette, the two halves of the gait cycle are almost indistinguishable. From here on, we refer to half cycles as cycles, for the sake of brevity.
Existing System
Various biometric-based concepts are used in industrial applications for the identification of an individual. They are:
Signature verification (i.e. comparing a 'new' signature or signing with previously enrolled reference information), which takes two forms: dynamic signature verification and analysis of a static signature
Face recognition methods, using Laplace faces and also other methods
Identification by voice rather than appearance, which has a long history in literature (as early as the 1930s)
Iris recognition methods
Recognition using digital signatures
The existing system has some drawbacks. The problems are:
o Not unique
o Easily subject to malpractice
o Easily traceable by intruders
o Low reliability
o No unique identification
Proposed "#stem
GAIT refers to the style of walking of an individual. Often, in surveillance applications, it is difficult to get face or iris information at the resolution required for recognition. Studies in psychophysics indicate that humans have the capability of recognizing people from even impoverished displays of gait, indicating the presence of identity information in gait. Recognition on the basis of how an individual walks has attracted interest from defence and other agencies for remote surveillance or infrared recordings of movement in an area under covert surveillance. The technology essentially involves dynamic mapping of the changing relationships of points on a body as that person moves.
Early work from the late 1980s built on earlier biomechanics studies. It centred on the 'stride pattern' of a sideways silhouette, with a few measurement points from the hip to the feet. More recent research appears to be encompassing people in the round and seeking to address the challenge of identification in adverse conditions (e.g. at night, amid smoke, or at such a distance that the image quality is very poor).
"#stem Environment
The front end is designed and executed with J2SDK 1.4.0, with the core Java part handled through the Swing user-interface components. Java is a robust, object-oriented, multi-threaded, distributed, secure and platform-independent language. It has a wide variety of packages to implement our requirements, and a large number of classes and methods can be utilized for programming. These features make it much easier for the programmer to implement the required concepts and algorithms in Java.
The features of Java are as follows:
Core Java contains concepts like exception handling, multithreading and streams that can be well utilized in the project environment.
Exception handling can be done with predefined exceptions, and there is provision for writing custom exceptions for our application.
Garbage collection is done automatically, so memory management is handled safely.
The user interface can be built with the Abstract Window Toolkit and also the Swing classes. These provide a variety of classes for components and containers. We can make instances of these classes, and each instance denotes a particular object that can be utilized in our program.
Event handling is performed with the delegation event model. Objects are assigned to a listener that observes for events; when an event takes place, the corresponding method to handle that event is called by the listener, which is defined in the form of interfaces.
This application makes use of the ActionListener interface, and the button click event is handled through it. The separate actionPerformed() method contains the details of the response to the event.
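A minimal, self-contained sketch of the delegation event model described above (independent of the project's own classes):

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;

// Minimal sketch of the delegation event model: the listener is registered with
// the button, and actionPerformed() runs when the click event occurs.
class ClickDemo implements ActionListener
{
    public void actionPerformed(ActionEvent evt)
    {
        System.out.println("Button clicked");
    }
    public static void main(String args[])
    {
        JFrame frame = new JFrame("Demo");
        JButton button = new JButton("Click");
        button.addActionListener(new ClickDemo()); // register the listener
        frame.getContentPane().add(button);
        frame.setSize(200, 100);
        frame.setVisible(true);
    }
}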
Java also contains concepts like remote method invocation and networking, which can be useful in a distributed environment.
"#stem $e%uirement
Hardware specifications:
Processor : Intel Processor IV
RAM : 128 MB
Hard disk : 20 GB
CD drive : 40x Samsung
Floppy drive : 1.44 MB
Monitor : 15" Samtron colour
Keyboard : 108-key Mercury keyboard
Mouse : Logitech mouse
"oftware "pecification
1perating -ystem F Windows G,9<888
;anguage used F C<sdk5.@.8
(. "#stem Anal#sis
System analysis can be defined as a method of determining how best to use the available resources and machines and how to perform tasks that meet the information needs of an organization.
(.1 "#stem Description
It is also a management technique that helps us in designing a new system or improving an existing system. The four basic elements in system analysis are:
Output
Input
Files
Process
These four items form the basis of system analysis.
4.2 Proposed System Description
Given the image sequence of a subject, the width vectors are generated as follows.
1) Background subtraction is first applied to the image sequence. The resultant motion image is then binarized into foreground and background pixels.
2) A bounding box is then placed around the part of the motion image that contains the moving person. The size of the box is chosen to accommodate all the individuals in the database. These boxed binarized silhouettes can be used directly as image features or further processed to derive the width vector as in the next item.
3) Given the binarized silhouettes, the left and right boundaries of the body are traced. The width of the silhouette along each row of the image is then stored. The width along a given row is simply the difference in the locations of the right-most and the left-most boundary pixels in that row.
In order to generate the binarized silhouette only, the first two steps above are used. One of the direct applications of the width feature is to parse the video into cycles in order to compute the exemplars. It is easy to see that the norm of the width vector shows a periodic variation. Fig. 2 shows the norm of the width vector as a function of time for a given video sequence. The valleys of the resulting waveform correspond to the rest positions during the walk cycle, while the peaks correspond to the part of the cycle where the hands and legs are maximally displaced.
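A sketch of steps 2) and 3) for a single binarized frame is given below. It assumes the silhouette is available as a row-major array with nonzero foreground pixels; it is illustrative only and does not use the project's PGM class.

// Illustrative sketch: width vector of one binarized silhouette frame and its norm.
// silhouette[r][c] is nonzero for foreground (person) pixels.
class WidthFeature
{
    static double[] widthVector(int silhouette[][])
    {
        int rows = silhouette.length, cols = silhouette[0].length;
        double width[] = new double[rows];
        for (int r = 0; r < rows; r++)
        {
            int left = -1, right = -1;
            for (int c = 0; c < cols; c++)
            {
                if (silhouette[r][c] != 0)
                {
                    if (left < 0) left = c; // left-most boundary pixel
                    right = c;              // right-most boundary pixel
                }
            }
            width[r] = (left < 0) ? 0 : (right - left); // width along this row
        }
        return width;
    }
    // Norm of the width vector; plotted over time it varies almost periodically,
    // so its valleys can be used to parse the video into (half) gait cycles.
    static double norm(double width[])
    {
        double sum = 0.0;
        for (int r = 0; r < width.length; r++) sum += width[r] * width[r];
        return Math.sqrt(sum);
    }
}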
Given a sequence of image features for person j,

X_j = { X_j(1), X_j(2), ......, X_j(T) },

we wish to build a model for the gait of person j and use it to recognize this person from among the different subjects in the database.
"ig :. -tances corresponding to the gait cycle of two individuals.
!a% ,erson 5!b% ,erson <.
Gait Representation: In this approach, we pick N exemplars (or stances)

E = { e_1, e_2, ......, e_N }

from the pool of images that will minimize the error in representation of all the images of that person. If the overall average distortion is used as a criterion for codebook design, the selection of the N exemplars is said to be optimal if the overall average distortion is minimized for that choice. There are two conditions for ensuring optimality. The first condition is that the optimal quantizer is realized by using a nearest neighbor selection rule:

q(X) = e_i  implies that  d(X, e_i) <= d(X, e_j),  for all j not equal to i and 1 <= i, j <= N,

where X represents an image in the training set, d(X, e_i) is the distance between X and e_i, and N is the number of exemplars. The second condition for optimality is that each codeword/exemplar e_i is chosen to minimize the average distortion in the cell C_i, i.e.,

e_i = arg min over e of E[ d(X, e) | X belongs to C_i ],

where the C_i represent the Voronoi partitions across the set of training images. To iteratively minimize the average distortion measure, the most widely used method is the K-means algorithm. However, implementing the K-means algorithm raises a number of issues. It is difficult to maintain a temporal order of the centroids (i.e., exemplars) automatically. Even if the order is maintained, there could be a cyclical shift in the centroids due to phase shifts in the gait cycle (i.e., different starting positions). In order to alleviate these problems, we divide each gait cycle into N equal segments. We pool the image features corresponding to the nth segment over all the cycles. The centroid (essentially the mean) of the features of each part is computed and denoted as the exemplar for that part. Doing this for all the N segments gives the optimal exemplar set

E* = { e_1*, e_2*, ......, e_N* }.

Of course, there is the issue of picking N. This is the classical problem of choosing the appropriate dimensionality of a model that will fit a given set of observations, e.g., the choice of degree for a polynomial regression.
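A sketch of the segment-averaging step described above (illustrative; it assumes the frames of every cycle have already been grouped into the N equal segments):

// Illustrative sketch: the exemplar of one gait-cycle segment is the centroid (mean)
// of all feature vectors pooled from that segment across all cycles.
class ExemplarSketch
{
    static double[] segmentCentroid(double pooledFeatures[][])
    {
        int dim = pooledFeatures[0].length;
        double centroid[] = new double[dim];
        for (int k = 0; k < pooledFeatures.length; k++)
        {
            for (int i = 0; i < dim; i++) centroid[i] += pooledFeatures[k][i];
        }
        for (int i = 0; i < dim; i++) centroid[i] /= pooledFeatures.length;
        return centroid; // exemplar e_n* for this segment
    }
}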
The application is implemented with training and test data sets. The training phase is used to gain knowledge, as it takes the form of an unsupervised learning algorithm.
Unsupervised learning - this is learning from observation and discovery. The data mining system is supplied with objects but no classes are defined, so it has to observe the examples and recognize patterns (i.e., class descriptions) by itself. This results in a set of class descriptions, one for each class discovered in the environment. Again, this is similar to cluster analysis as in statistics.
After the training process the system should have gained the knowledge, and the test process is then called to identify the human from the given input.
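The test code in the Appendix compares FED vectors with a distance routine (KNN.getdistance) and picks the closest enrolled person. The sketch below shows one plausible behaviour of such a routine, assuming a Euclidean distance; it is an assumption for illustration, not the project's actual class.

// Assumed behaviour for illustration: Euclidean distance between two feature
// vectors and selection of the enrolled person with the smallest distance.
class MatchSketch
{
    static double getdistance(double a[], double b[])
    {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++)
        {
            double diff = a[i] - b[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }
    static int bestMatch(double test[], double trainVectors[][])
    {
        int best = -1;
        double bestDistance = Double.MAX_VALUE;
        for (int i = 0; i < trainVectors.length; i++)
        {
            double d = getdistance(test, trainVectors[i]);
            if (d < bestDistance) { bestDistance = d; best = i; }
        }
        return best; // index of the closest enrolled person (nearest neighbour)
    }
}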
5. System Design
Design is concerned with identifying software components and specifying the relationships among them, specifying the software structure, and providing a blueprint for the subsequent phases.
Modularity is one of the desirable properties of large systems. It implies that the system is divided into several parts in such a manner that the interaction between the parts is minimal and clearly specified.
Design explains the software components in detail. This helps the implementation of the system. Moreover, it guides the further changes in the system needed to satisfy future requirements.
5.1 Form Design
"orm is a tool with a message3 it is the physical carrier of data or information.
5.2 Input Design
Inaccurate input data is the most common cause of errors in data processing. Errors entered by data entry operators can be controlled by input design. Input design is the process of converting user-originated inputs to a computer-based format. Input data are collected and organized into groups of similar data.
5.3 Code Design
The entire application is divided into five modules as follows:
Video Capture and Framing:
o captures video of persons walking and performs file operations on the video files to extract the sequence of frames from the video.
Motion Detection:
o applies motion detection algorithms to detect any moving object(s) in the video and to classify the detected objects as humans.
Image File Processing:
o performs image operations such as reading/writing and other pre-processing algorithms such as edge-finding, binarizing, thinning, etc.
Gait Representation:
o obtains representation models for the silhouettes extracted from the results of the image processing algorithms, and applies the feature extraction steps to store the features (FED vectors) in the database. FED is the frame-to-exemplar distance, which is extracted from whole gait cycles in the input video.
Gait Recognition:
o here the input video is passed through the above algorithms and compared with the stored features to find the best match.
6. Output Design
6.1 Forms and Reports
Forms
The user interface form is designed as a Java Swing frame that accepts the input image as a PGM image from the \frames\test\ folder. The images that you want to test are stored in this folder, and the application is designed to retrieve the input image from there. Clicking the Recognize button in turn invokes the training module.
The application implements the K-means algorithm as discussed in Section 4. First the training process is called and knowledge is gained by the system. Then the test process is carried out with the input image, and the results are displayed to the user.
7. Testing and Implementation
7.1 Software Testing
Software testing is the process of confirming the functionality and correctness of software by running it. Software testing is usually performed for one of two reasons:
1) defect detection;
2) reliability estimation.
White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product, both black and white box testing are required.
The problem of applying software testing to defect detection is that testing can only suggest the presence of flaws, not their absence (unless the testing is exhaustive). The problem of applying software testing to reliability estimation is that the input distribution used for selecting test cases may be flawed. In both of these cases, the mechanism used to determine whether program output is correct is often impossible to develop. Obviously, the benefit of the entire software testing process is highly dependent on many different pieces. If any of these parts is faulty, the entire process is compromised.
Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible.
The key to software testing is trying to find the myriad of failure modes - something that would require exhaustively testing the code on all possible inputs. For most programs, this is computationally infeasible. Techniques that attempt to test as many of the syntactic features of the code as possible (within some set of resource constraints) are called white box software testing techniques. Techniques that do not consider the code's structure when test cases are selected are called black box techniques.
"unctional testing is a testing process that is black bo in nature. It is aimed at
eamine the overall functionality of the product. It usually includes testing of all the
interfaces and should therefore involve the clients in the process.
"inal stage of the testing process should be -ystem Testing. This type of test
involves eamination of the whole computer system, all the software components, all
the hard ware components and any interfaces.
The whole computer based system is checked not only for validity but also to
meet the ob+ectives.
7.2 Implementation
Implementation includes all those activities that take place to convert from the old system to the new. The new system may be totally new, replacing an existing system, or it may be a major modification to the system currently in use. This application takes the input image from the user. It is implemented in the form of training and test processes, as discussed under the unsupervised learning algorithm, and the algorithm is implemented with the help of the K-means algorithm. The input images are read as PGM images; a separate class is written to read and write the PGM images.
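As an illustration of the PGM format handled by that class (magic number "P5", width, height, maximum gray value, then raw pixel bytes), a minimal reader sketch is shown below. It is an assumption about how such a class might work, not the project's actual implementation.

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Minimal sketch of reading a binary PGM (P5) image; '#' comment lines in the
// header are skipped. Illustrative only; the project's PGM class may differ.
class PgmSketch
{
    static int[][] readP5(String path) throws IOException
    {
        DataInputStream in = new DataInputStream(new FileInputStream(path));
        if (!nextToken(in).equals("P5")) throw new IOException("not a P5 PGM file");
        int cols = Integer.parseInt(nextToken(in));
        int rows = Integer.parseInt(nextToken(in));
        int maxGray = Integer.parseInt(nextToken(in));
        if (maxGray > 255) throw new IOException("only 8-bit PGM supported");
        int pixels[][] = new int[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                pixels[r][c] = in.read(); // one byte per pixel when maxGray <= 255
        in.close();
        return pixels;
    }
    // Reads the next whitespace-delimited header token, skipping '#' comment lines.
    private static String nextToken(DataInputStream in) throws IOException
    {
        StringBuilder sb = new StringBuilder();
        int ch;
        while ((ch = in.read()) != -1)
        {
            if (ch == '#') { while (ch != -1 && ch != '\n') ch = in.read(); continue; }
            if (Character.isWhitespace(ch)) { if (sb.length() > 0) break; else continue; }
            sb.append((char) ch);
        }
        return sb.toString();
    }
}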
8. Conclusion
We have presented two approaches to represent and recognize people by their gait. The width of the outer contour of the binarized silhouette as well as the silhouette itself were used as features to represent gait. In one approach, a low-dimensional observation sequence is derived from the silhouettes during a gait cycle and an HMM is trained for each person. Gait identification is performed by evaluating the probability that a given observation sequence was generated by a particular HMM. In the second approach, the distance between an image feature and an exemplar was used to estimate the observation probability. The performance of the methods was illustrated using different gait databases.
Bibliography
1. Roger S. Pressman, "Software Engineering: A Practitioner's Approach", Tata McGraw-Hill, 2001 edition.
2. "Gait Identification for Humans" - a web document.
3. Patrick Naughton, "Complete Reference Java 2", Tata McGraw-Hill, 2001 edition.
4. Mathew Thomas, "A Tour of Java Swing - A Guide", PHI, 2000.
Appendix
//GaitRecognition.java
import java.lang.*;
import java.io.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;            // note: Swing classes live in javax.swing, not java.swing
import javax.swing.filechooser.*;

class GaitRecognition extends JFrame implements ActionListener
{
    JFrame frmMain = new JFrame("GaitRecognition");
    JLabel lblTestPath = new JLabel("FrameTestPath:");
    JTextField txtTestPath = new JTextField("_frames\\test\\");
    JButton btRecognize = new JButton("Recognize");
    JLabel lblResult = new JLabel("Result:");
    JTextArea txtResult = new JTextArea("");
    JScrollPane spResult = new JScrollPane(txtResult);
    String tResult = "";

    //constructor: builds the main window and registers the button listener
    public GaitRecognition()
    {
        JFrame.setDefaultLookAndFeelDecorated(true);
        frmMain.setResizable(false);
        frmMain.setBounds(100, 100, 315, 250);
        frmMain.getContentPane().setLayout(null);
        lblTestPath.setBounds(17, 15, 100, 20);
        frmMain.getContentPane().add(lblTestPath);
        txtTestPath.setBounds(15, 35, 170, 20);
        frmMain.getContentPane().add(txtTestPath);
        lblResult.setBounds(17, 65, 100, 20);
        frmMain.getContentPane().add(lblResult);
        spResult.setBounds(15, 85, 280, 120);
        frmMain.getContentPane().add(spResult);
        txtResult.setEditable(false);
        btRecognize.setBounds(193, 35, 100, 20);
        btRecognize.addActionListener(this);
        frmMain.getContentPane().add(btRecognize);
        frmMain.setVisible(true);
    }

    //events
    public void actionPerformed(ActionEvent evt)
    {
        if (evt.getSource() == btRecognize)
        {
            tResult = "";
            txtResult.setText("");
            test();
        }
    }

    //methods
    public void addResultText(String tStr)
    {
        System.out.println(tStr);
        tResult = tResult + tStr + "\n";
        txtResult.setText(tResult);
        txtResult.repaint();
    }

    private String getExtensionFromFileName(String tPath)
    {
        String tExtension = "";
        int tpos = tPath.lastIndexOf(".");
        tExtension = tpos == -1 ? "" : tPath.substring(tpos + 1);
        return (tExtension);
    }

    //builds the FED feature vector for the frame sequence stored under tPath
    public double[] getfedvector(String tPath)
    {
        MotionDetection md = new MotionDetection();
        FileSystemView fv = FileSystemView.getFileSystemView();
        File files[] = fv.getFiles(new File(tPath), true);
        //get pgm image count
        int matchedCount = 0;
        for (int t = 0; t < files.length; t++)
        {
            String tFileName = fv.getSystemDisplayName(files[t]);
            String tExtension = getExtensionFromFileName(tFileName);
            if (tExtension.compareToIgnoreCase("pgm") == 0)
            {
                matchedCount++;
            }
        }
        int tFrameCount = matchedCount;
        //addResultText(String.valueOf(tFrameCount));
        //motion detection between consecutive frames
        int incr = 1;
        for (int t = 0; t < tFrameCount - 1; t += incr)
        {
            System.out.print("Creating Motion Vectors of Frame" + t + "/" +
                (tFrameCount - 2) + "\r");
            String tstr1 = tPath + t + ".pgm";
            String tstr2 = tPath + (t + incr) + ".pgm";
            String tstr3 = "motion\\motion" + t + ".pgm";
            md.set_inFilePath1(tstr1);
            md.set_inFilePath2(tstr2);
            md.set_outFilePath(tstr3);
            md.process();
        }
        System.out.println();
        //addResultText("done.");
        //create fed image: one silhouette band per sampled frame of the gait cycle
        //addResultText("\nCreating fed image...");
        int silhouetteWidth = 50;
        int gaitCycleInterval = 2;
        int gaitCycleCount = tFrameCount / gaitCycleInterval;
        PGM pgm1 = new PGM();
        pgm1.setFilePath(tPath + "0.pgm");
        pgm1.readImage();
        PGM pgmfed = new PGM();
        pgmfed.setFilePath("fed.pgm");
        pgmfed.setType("P5");
        pgmfed.setComment("#fed image");
        pgmfed.setDimension(gaitCycleCount * silhouetteWidth, pgm1.getRows());
        pgmfed.setMaxGray(pgm1.getMaxGray());
        int fed_c = 0;
        for (int t = 0; t < tFrameCount - 1; t += gaitCycleInterval)
        {
            String tstr1 = "motion\\motion" + t + ".pgm";
            pgm1.setFilePath(tstr1);
            pgm1.readImage();
            //addResultText(String.valueOf(t));
            for (int c = 0; c < pgm1.getCols(); c++)
            {
                int tCount = 0;
                for (int r = 0; r < pgm1.getRows(); r++)
                {
                    int inval = pgm1.getPixel(r, c);
                    if (inval != 0) tCount++;
                }
                //copy a fixed-width band starting at the first foreground column
                if (tCount > 0)
                {
                    for (int tc = c; tc < c + silhouetteWidth; tc++)
                    {
                        for (int r = 0; r < pgm1.getRows(); r++)
                        {
                            int inval = pgm1.getPixel(r, tc);
                            pgmfed.setPixel(r, fed_c, inval);
                        }
                        fed_c++;
                    }
                    break;
                }
            }
        }
        //addResultText("done.");
        pgmfed.writeImage();
        PGM_ImageFilter imgFilter = new PGM_ImageFilter();
        imgFilter.set_inFilePath("fed.pgm");
        imgFilter.set_outFilePath("silhouette.pgm");
        imgFilter.thin();
        //create fed vector: normalized row sums of the thinned silhouette image
        PGM pgmsilhouette = new PGM();
        pgmsilhouette.setFilePath("silhouette.pgm");
        pgmsilhouette.readImage();
        double fvector[] = new double[pgm1.getRows()];
        for (int r = 0; r < pgmsilhouette.getRows(); r++)
        {
            fvector[r] = 0.0;
            for (int c = 0; c < pgmsilhouette.getCols(); c++)
            {
                fvector[r] += pgmsilhouette.getPixel(r, c);
            }
            fvector[r] /= pgmsilhouette.getRows();
        }
        return (fvector);
    }

    //compares the test FED vector against each enrolled (training) person
    public void test()
    {
        int trainCount = 6;
        double distances[] = new double[trainCount];
        int persons[] = new int[trainCount];
        //get fedvector of testperson
        addResultText("Creating fedvector of test person...");
        double fvector1[] = getfedvector(txtTestPath.getText());
        addResultText("Training...");
        int tmincount = 0;
        for (int i = 0; i < trainCount; i++)
        {
            addResultText("Person" + (i + 1) + "...");
            String trainPath = "_frames\\train\\" + (i + 1) + "\\";
            double fvector2[] = getfedvector(trainPath);
            double d = KNN.getdistance(fvector1, fvector2);
            if (d > 1) tmincount += 1;
            distances[i] = d;
            persons[i] = i + 1;
        }
        //sort fedvector distances (ascending), keeping person indices aligned
        for (int i = 0; i < trainCount - 1; i++)
        {
            for (int j = i + 1; j < trainCount; j++)
            {
                if (distances[i] > distances[j])
                {
                    double temp = distances[i];
                    distances[i] = distances[j];
                    distances[j] = temp;
                    int temp1 = persons[i];
                    persons[i] = persons[j];
                    persons[j] = temp1;
                }
            }
        }
        //declare a match only if at least one person is within the distance threshold
        if (tmincount != trainCount)
        {
            int matched = persons[0];
            String tpath = "_frames\\train\\" + matched + "\\0.pgm";
            PGM tpgm1 = new PGM();
            tpgm1.setFilePath(tpath);
            tpgm1.readImage();
            tpgm1.setFilePath("matched.pgm");
            tpgm1.writeImage();
            addResultText("\nMatched Person: " + matched);
            addResultText("Finished.");
        }
        else
        {
            addResultText("Not Matched.");
            addResultText("\nFinished.");
        }
    }

    public static void main(String args[])
    {
        new GaitRecognition();
    }
}
