Overview:
What are gestures?
What can gestures be used for?
How to find a hand in an image?
How to recognise its shape?
How to recognise its motion?
What is a Gesture?
A movement of a limb or the body as an expression of thought or feeling.
--Oxford Concise Dictionary 1995
Mood, emotion
Mood and emotion are expressed by body language, facial expressions and tone of voice. Recognising them allows computers to interact with human beings in a more natural way.
Sign Language
There are 5000 gestures in the vocabulary. Each gesture consists of a hand shape, a hand motion and a location in 3D space. Facial expressions are important. There is a full grammar and syntax. Each country has its own sign language: Irish Sign Language is different from British Sign Language or American Sign Language.
Datagloves
Datagloves provide very accurate measurements of hand shape, but they are cumbersome to wear, expensive, and connected by wires, which restricts freedom of movement.
Coloured Gloves
The user must wear coloured gloves. They are very cheap and easy to put on, BUT they get dirty. Eventually we wish to use natural skin.
Colour Segment
Noise Removal
Scale by Area
Demo
Gesture Video
Feature Space
Each point represents a different image. Clusters of points represent different hand shapes. The distance between points depends on how similar the images are.
Multiple sub-spaces
Classifying a new unknown image
Gesture 2
Gesture 1
Global space
We know the image co-ordinates and the area of the hand in the original image
We can calculate depth and xy-position
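The depth calculation above can be sketched under a pinhole-camera model: the hand's projected area falls off as the square of its depth, so a calibrated reference area at a known depth lets us recover z, and then the x,y position follows from the image coordinates. The calibration constants (`f`, `ref_area`, `ref_depth`) below are assumed values for illustration, not from the original system.

```python
import math

def estimate_position(u, v, area, f=500.0, ref_area=10000.0, ref_depth=0.5):
    """Estimate 3D hand position from image coordinates and segmented area.

    Pinhole-camera sketch: projected area scales as 1/depth^2, so
    depth = ref_depth * sqrt(ref_area / area). The focal length f,
    reference area and reference depth are hypothetical calibration values.
    """
    z = ref_depth * math.sqrt(ref_area / area)
    # Back-project the image coordinates (origin at the principal point).
    x = u * z / f
    y = v * z / f
    return x, y, z
```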
Yes/No?
A B C
Hierarchical Search
We need to search thousands of images. How can we do this efficiently? We need to use a coarse-to-fine search strategy.
Multi-scale Hierarchy
Factor = 3.0
Factor = 2.0
Factor = 1.0
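The coarse-to-fine idea can be sketched as a multi-scale nearest-neighbour search: match at a heavily downsampled scale first, keep only the best candidates, then refine at progressively finer scales. The scale factors and pruning fraction below are assumptions for illustration.

```python
import numpy as np

def coarse_to_fine_search(test, database, factors=(4, 2, 1), keep=0.25):
    """Coarse-to-fine nearest-neighbour search (a hypothetical sketch).

    At each scale the images are downsampled by `factor`; only the best
    `keep` fraction of candidates survives to the next, finer, scale.
    """
    candidates = list(range(len(database)))
    for factor in factors:
        small_test = test[::factor, ::factor]
        dists = [(np.linalg.norm(small_test - database[i][::factor, ::factor]), i)
                 for i in candidates]
        dists.sort()
        # Prune to the best fraction, except at the finest scale.
        n = 1 if factor == factors[-1] else max(1, int(len(dists) * keep))
        candidates = [i for _, i in dists[:n]]
    return candidates[0]
```

Because most candidates are rejected at the cheap coarse scale, only a few full-resolution comparisons are ever needed.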
Motion Recognition
Hidden Markov Models (HMMs) --- modelling time sequences of images
[Figure: a feature sequence f is scored against each model, giving P(f | HMM1), P(f | HMM2), ...; the gesture whose HMM assigns the highest likelihood is chosen, e.g. HMM1 for "Hello".]
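The classification step can be sketched with the forward algorithm: compute the likelihood of the observed sequence under each gesture's HMM and pick the largest. The two toy 2-state models below ("hello", "bye") and their parameters are entirely hypothetical, not the trained models from this system.

```python
import numpy as np

def log_likelihood(obs, start, trans, emit):
    """Forward algorithm: log P(obs | HMM) for a discrete-emission HMM."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return np.log(alpha.sum())

# Two toy left-to-right HMMs over a 3-symbol alphabet (made-up parameters).
hello = dict(start=np.array([1.0, 0.0]),
             trans=np.array([[0.7, 0.3], [0.0, 1.0]]),
             emit=np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]))
bye = dict(start=np.array([1.0, 0.0]),
           trans=np.array([[0.7, 0.3], [0.0, 1.0]]),
           emit=np.array([[0.1, 0.8, 0.1], [0.8, 0.1, 0.1]]))

def classify(obs, models):
    """Pick the gesture whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda name: log_likelihood(obs, **models[name]))
```

A real system would train the transition and emission probabilities from example gesture sequences (e.g. with Baum-Welch) and rescale alpha at each step to avoid underflow on long sequences.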
Co-articulation
In fluent dialogue, signs are modified by the preceding and following signs.
[Figure: intermediate forms appear between consecutive signs.]
Future Work:
Occlusions (Atid)
Grammars in Irish Sign Language --- sentence recognition
Body language
Face Recognition
A noisy environment
Errors
Model-based Recognition
Pose-tracking
Facial Expressions
Anger, Fear, Disgust, Happy, Sad, Surprise
Face Recognition
Summary
Single pose
Multiple poses
Principal components analysis
Model-based recognition
Neural networks
Single Pose
Standard head-and-shoulders view with a uniform background, so it is easy to find the face within the image.
Aligning Images
Alignment
Faces in the training set must be aligned with each other to remove the effects of translation, scale, rotation etc. It is easy to find the positions of the eyes and mouth, and then shift and resize the images so that they are aligned with each other.
Nearest Neighbour
Once the images have been aligned you can simply search for the member of the training set which is nearest to the test image. There are a number of distance measures, including Euclidean distance and cross-correlation.
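The nearest-neighbour step above is a one-liner once the images are aligned and flattened; a minimal sketch using Euclidean distance (cross-correlation could be substituted):

```python
import numpy as np

def nearest_neighbour(test, training):
    """Return the index of the training image closest to the test image.

    `test` and each entry of `training` are pre-aligned, flattened arrays;
    Euclidean distance is used as the similarity measure.
    """
    dists = [np.linalg.norm(test - t) for t in training]
    return int(np.argmin(dists))
```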
Principal Components
PCA reduces the number of dimensions, so the memory requirement is much reduced. The search time is also reduced.
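The reduction can be sketched with an SVD-based PCA: project each flattened face onto the first k principal components and search in the k-dimensional coefficient space instead of pixel space. This is a generic sketch, not the specific eigenface pipeline of the original system.

```python
import numpy as np

def pca_project(images, k):
    """Project flattened images onto their first k principal components.

    images: (n_samples, n_pixels) array. Returns the mean image, the
    components (k, n_pixels) and the k-dimensional coefficients per image.
    """
    mean = images.mean(axis=0)
    centred = images - mean
    # SVD of the centred data avoids forming the full covariance matrix.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:k]
    coeffs = centred @ components.T
    return mean, components, coeffs
```

Matching then compares k coefficients per face rather than thousands of pixels, which is where the memory and search-time savings come from.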
Such variations mean that the class is no longer ellipsoidal and must be represented by a manifold.
Facial Expressions
Anger, Fear, Disgust, Happy, Sad, Surprise
There are six types of facial expression. We could use PCA on the eyes and mouth, giving us eigeneyes and eigenmouths.
Multiple Poses
Heads must now be aligned in 3D world space. Classes now form trajectories in feature space. It becomes difficult to recognise faces because the variation due to pose is greater than the variation between people.
Model-based Recognition
We can fit a model directly to the face image
The model consists of a mesh which is matched to facial features such as the eyes, nose, mouth and edges of the face. We use PCA to describe the parameters of the model rather than the pixels.
Model-based Recognition
The model copes better with multiple poses and changes in facial expression.
Coarse Classification
Fingerprints can be divided into 6 basic classes (some systems use other classes)
Arch
Tented arch
Whorl
Right loop
Left loop
Double loop
Orientation Field
The orientation field gives the ridge direction at each point in the image
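One common gradient-based estimate of the orientation field (a standard technique, not necessarily the exact one used here) works block by block: the dominant gradient direction in each block is theta_g = 0.5 * atan2(2*sum(gx*gy), sum(gx^2 - gy^2)), and the ridge direction is perpendicular to it.

```python
import numpy as np

def orientation_field(img, block=16):
    """Estimate the ridge orientation in each block from image gradients.

    Least-squares sketch: within each block the dominant gradient
    direction is 0.5*atan2(2*sum(gx*gy), sum(gx^2 - gy^2)); the ridge
    direction is perpendicular to it.
    """
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    field = np.zeros((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            bx = gx[i:i + block, j:j + block]
            by = gy[i:i + block, j:j + block]
            theta_g = 0.5 * np.arctan2(2 * (bx * by).sum(),
                                       ((bx ** 2) - (by ** 2)).sum())
            field[i // block, j // block] = theta_g + np.pi / 2  # ridge direction
    return field
```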
Accurate Matching
Orientation fields and PCA are not good enough to give an accurate match, but they can reduce the number of possible candidates. We can then apply a more accurate but time-consuming technique to the remaining candidate images.
Minutiae Matching
Minutiae are fine details of the ridges in the fingerprint image, such as ridge terminations, crossovers, bifurcations, etc.
Minutiae Types
Different systems define different types of minutiae. The most common are terminations (ridge endings) and bifurcations (forks).
Thinning
Lines are thinned to a width of 1 pixel
Identifying minutiae
Each black pixel in the image is classified using its crossing number. The crossing number of pixel p is defined as half the sum of the absolute differences between consecutive pixels walked around the 8-neighbourhood of p:
cn(p) = (1/2) * sum over i = 1..8 of |v(i) - v(i+1)|, where v(1)..v(8) are the neighbour values in circular order and v(9) = v(1).
Identifying Minutiae
If the crossing number cn equals 2 then the pixel is a normal intra-ridge pixel, not a minutia. If cn is not equal to 2 then the pixel is some kind of minutia: cn = 1 is a ridge ending and cn = 3 is a bifurcation.
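The crossing-number test can be sketched directly, assuming the thinned skeleton is a binary array with ridge pixels set to 1:

```python
import numpy as np

def crossing_number(skeleton, r, c):
    """Crossing number of pixel (r, c) in a 1-pixel-wide binary skeleton.

    cn = half the sum of |differences| between consecutive pixels walked
    around the 8-neighbourhood. cn == 1: ridge ending, cn == 3: bifurcation,
    cn == 2: ordinary intra-ridge pixel.
    """
    # The 8 neighbours in circular (clockwise) order.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    vals = [int(skeleton[r + dr, c + dc]) for dr, dc in offs]
    return sum(abs(vals[i] - vals[(i + 1) % 8]) for i in range(8)) // 2
```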
Removing false-minutiae
Hough Transform
1. Discretise the range of values for translation, rotation and distortion
2. Set up an accumulator matrix A in which each element represents a different combination of translation, rotation and distortion
3. For each possible pair of minutiae, calculate the values of translation, rotation and distortion which best make them match, and increase the corresponding element of A by 1
4. At the end of the process, the element of A with the largest value represents the best combination
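The accumulator idea can be sketched in a translation-only form (a deliberate simplification: a real matcher also discretises rotation and allows small distortion): every pair of minutiae votes for the shift that would align them, and the most-voted bin wins.

```python
from collections import Counter

def hough_align(minutiae_a, minutiae_b, step=1.0):
    """Vote for the translation that best maps minutiae set A onto set B.

    Translation-only sketch of the accumulator: each (a, b) pair proposes
    the shift aligning them; the bin with the most votes is the best
    global alignment.
    """
    votes = Counter()
    for ax, ay in minutiae_a:
        for bx, by in minutiae_b:
            key = (round((bx - ax) / step), round((by - ay) / step))
            votes[key] += 1
    (dx, dy), _ = votes.most_common(1)[0]
    return dx * step, dy * step
```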
An alternative to minutiae matching
FingerCodes
1. Centre the image on the core
2. Divide the image into circular zones
3. Pass each zone through a set of 8 Gabor filters (more about this next week)
4. Compare the results using Euclidean distance
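A much-simplified FingerCode-style sketch of steps 3-4 (horizontal bands instead of true circular sectors, made-up filter parameters): each zone's feature is the mean absolute response to a Gabor filter at each of 8 orientations, and two codes are compared by Euclidean distance.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=15):
    """Even (cosine) Gabor kernel at orientation theta (assumed parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return gauss * np.cos(2 * np.pi * freq * xr)

def fingercode(img, n_zones=4, n_orients=8):
    """Mean absolute Gabor response per (zone, orientation).

    Simplification: zones are horizontal bands of the filtered response
    rather than the circular sectors of the real FingerCode.
    """
    feats = []
    for k in range(n_orients):
        kern = gabor_kernel(np.pi * k / n_orients)
        # Valid-mode 2D correlation via sliding windows (slow but simple).
        win = sliding_window_view(img, kern.shape)
        resp = np.abs((win * kern).sum(axis=(-1, -2)))
        zh = resp.shape[0] // n_zones
        for z in range(n_zones):
            feats.append(resp[z * zh:(z + 1) * zh].mean())
    return np.array(feats)
```

Two fingerprints are then compared as `np.linalg.norm(fingercode(a) - fingercode(b))`.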
Iris Recognition
John Daugman
There is only one iris recognition algorithm in use. The algorithm was developed mainly by John Daugman, PhD, OBE (www.CL.cam.ac.uk/users/jgd1000/). It is owned by the company Iridian Technologies.
Masking
The boundaries of the eyelids can be found. Eyelashes and specularities (reflections) can also be found. These areas can be masked out.
Gabor Wavelets
Gabor wavelets filter out structures at different scales and orientations. For each scale and orientation there is a pair of odd and even wavelets. A scalar product is carried out between the wavelet and the image (just as in the Discrete Fourier Transform). The result is a complex number.
Phase Demodulation
The complex number is converted to 2 bits. The modulus is thrown away because it is sensitive to illumination intensity. The phase is converted to 2 bits depending on which quadrant it is in.
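The quadrant quantisation is tiny in code: the bit pair is simply the signs of the real and imaginary parts of the complex response, and the modulus never appears.

```python
def phase_bits(z):
    """Quantise the phase of a complex wavelet response to 2 bits.

    The modulus is discarded; the bit pair encodes the quadrant of the
    phase as (Re >= 0, Im >= 0).
    """
    return (int(z.real >= 0), int(z.imag >= 0))
```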
IrisCodes
This process is carried out at a number of points throughout the image. The result is 2048 bits which describe each iris uniquely.
Two codes from different irises can be compared by finding the number of bits that differ between them; this is called the Hamming distance. It is equivalent to computing an XOR between the two codes, which can be done very quickly.
To allow for rotation of the iris images, the codes can be shifted with respect to each other and the minimum Hamming distance found.
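The comparison described above can be sketched directly: XOR the two bit arrays, count the differing bits, and take the minimum over a small range of cyclic shifts to absorb iris rotation (the shift range below is an assumed value).

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of differing bits: XOR the codes, then count the 1s."""
    return np.count_nonzero(code_a ^ code_b) / code_a.size

def best_distance(code_a, code_b, max_shift=8):
    """Minimum Hamming distance over cyclic shifts, to allow for rotation."""
    return min(hamming_distance(np.roll(code_a, s), code_b)
               for s in range(-max_shift, max_shift + 1))
```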
Hamming Distance
Binomial Distribution
If two codes come from different irises, the differing bits will be random. The number of differing bits will obey a binomial distribution with mean 0.5.
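This can be checked with a quick simulation (independent random bits, ignoring the correlations between nearby bits in real iris codes): the fraction of differing bits between random 2048-bit codes clusters tightly around 0.5, as the binomial model predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Compare many pairs of random 2048-bit "iris codes".
n_bits, n_pairs = 2048, 500
a = rng.integers(0, 2, (n_pairs, n_bits))
b = rng.integers(0, 2, (n_pairs, n_bits))
frac = (a != b).mean(axis=1)  # Hamming distance fraction per pair
print(frac.mean())  # close to 0.5
```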
Identification
If two codes come from the same iris the differences will no longer be random. The Hamming distance will be less than would be expected if the differences were random. If the Hamming distance is < 0.33, the chance of the two codes coming from different irises is 1 in 2.9 million. So far it has been tried out on 2.3 million people without a single error.
Small target (1 cm) to acquire from a distance (1 m)
Moving target ...within another... on yet another
Located behind a curved, wet, reflecting surface
Obscured by eyelashes, lenses, reflections
Partially occluded by eyelids, often drooping
Deforms non-elastically as the pupil changes size
Illumination should not be visible or bright
Some negative (Orwellian) connotations