Abstract. In this paper, a new approach is proposed for extracting human gait features from silhouette images of a walking human. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying the morphological skeleton to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottoms of the right and left legs from the body segment skeletons. The joint angles and step-size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results demonstrate that the proposed system is feasible and achieves satisfactory results.
Keywords: Human identification, Gait analysis, Fuzzy k-nearest neighbour.
1 Introduction
Personal identification or verification schemes are widely used in systems that must determine the identity of an individual before granting permission to access or use services. Human identification based on biometrics refers to the automatic recognition of individuals from their physical and/or behavioural characteristics, such as face, fingerprint, gait and voice. Biometrics are becoming important and widely accepted because they are truly personal and unique: one cannot lose or forget them over time.
Gait is unique, as every individual has his or her own walking pattern. Human walking is a complex locomotive action involving synchronized motions of body parts and joints and the interactions among them [1]. Gait is a relatively new motion-based biometric technology that offers the ability to identify people at a distance when other biometrics are obscured. Furthermore, it requires no contact with any feature capturing device and is hence unobtrusive.
H. Badioze Zaman et al. (Eds.): IVIC 2009, LNCS 5857, pp. 596–606, 2009.
© Springer-Verlag Berlin Heidelberg 2009
Basically, gait analysis can be divided into two major categories: model-based methods and model-free methods. A model-based method generally models the human body structure or motion and extracts features to match them to the model components. The extraction process combines information on the human shape with the dynamics of the human gait. This implies that the gait dynamics are extracted directly by determining joint positions from model components, rather than inferring dynamics from other measures, thus reducing the effect of background noise (such as the movement of other objects). For instance, Johnson used activity-specific static body parameters for gait recognition without directly analyzing gait dynamics [2]. Cunado used thigh joint trajectories as the gait features [3]. The advantages of this method are the ability to derive gait signatures directly from model parameters and its robustness to different clothing or viewpoints. However, it is time consuming and its computational cost is high due to the complex matching and searching process.
On the other hand, a model-free method normally characterizes the motion of the entire human body with a concise representation, without considering the underlying structure. The advantages of this method are low computational cost and reduced processing time. For instance, BenAbdelkader proposed an eigengait method using image self-similarity plots [4]. Collins established a method based on template matching of body silhouettes in key frames of a human's walking cycle [5]. Phillips characterized the spatial-temporal distribution generated by gait motion in its continuum [6].
This paper presents a unique concept for extracting the gait features of a walking human from consecutive silhouette images. First, the height and width of the human subject are determined. Next, each human silhouette image is enhanced and divided into six body segments to construct the two-dimensional (2D) skeleton of the body model. Then, the Hough transform is applied to obtain the joint angle for each body segment. The distance between the bottoms of both lower legs can also be obtained from the body segment skeletons. This concept of joint angle calculation is faster and less complicated than model-based methods such as the linear regression approach of Yoo [7] and the temporal accumulation approach of Wagg [8].
H. Ng et al.
Fig. 1. The stages of the proposed system: image enhancement of the original image; measurement of width and height; human silhouette segmentation; skeletonization of body segments; joint angle extraction; measurement of step-size; computation of the similarities; determination of the k-nearest neighbours; and classification of the unlabelled subjects
A ∘ B = (A ⊖ B) ⊕ B ,                                    (1)

where A is the image and B is the structuring element. The opening first performs an erosion operation, followed by a dilation operation. Fig. 2 shows the result of applying morphological opening on a human silhouette image.
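As an illustrative sketch of Eq. (1), morphological opening can be implemented on a set-based binary image; the 3 × 3 structuring element and the test image below are assumptions, not values from the paper:

```python
# Sketch of morphological opening (Eq. 1): A o B = (A erode B) dilate B.
# A binary image is represented as a set of foreground (row, col) pixels;
# the structuring element B is a set of (dy, dx) offsets.

def erode(A, B):
    """A erode B: keep pixels for which every offset of B lands inside A."""
    return {p for p in A if all((p[0] + dy, p[1] + dx) in A for dy, dx in B)}

def dilate(A, B):
    """A dilate B: the union of A translated by every offset of B."""
    return {(p[0] + dy, p[1] + dx) for p in A for dy, dx in B}

def opening(A, B):
    """A o B: erosion followed by dilation, removing speckle smaller than B."""
    return dilate(erode(A, B), B)

# A 3x3 foreground block plus one isolated noise pixel; a 3x3 square element.
B = {(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)}
A = {(y, x) for y in range(3) for x in range(3)} | {(10, 10)}
print(opening(A, B))  # the isolated noise pixel at (10, 10) is removed
```

The opening removes foreground regions smaller than the structuring element (the noise pixel) while restoring the shape of regions that survive the erosion, which is exactly the background-noise clearing used in the first stage.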
599
Fig. 3. The width and height of a human silhouette
Fig. 4 shows the six segments of the body, where a represents the head and neck, b represents the torso, c represents the right hip and thigh, d represents the lower right leg and foot, e represents the left hip and thigh, and f represents the lower left leg and foot.
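A minimal sketch of dividing a silhouette's bounding box into such segments by vertical proportions follows; the ratios below are illustrative assumptions drawn from common anatomical proportion tables, not the exact values used in the paper:

```python
# Hypothetical vertical split of a silhouette's bounding box into body regions.
# The proportion constants (0.13, 0.53, 0.77) are illustrative assumptions.

def segment_rows(top, bottom):
    """Return row ranges for head/neck, torso, thighs and lower legs."""
    h = bottom - top
    neck_row = top + round(0.13 * h)   # a: head and neck end here
    hip_row = top + round(0.53 * h)    # b: torso ends at the hip line
    knee_row = top + round(0.77 * h)   # c, e: thighs end at the knee line
    return {
        "head_neck": (top, neck_row),
        "torso": (neck_row, hip_row),
        "thighs": (hip_row, knee_row),       # split left/right about the body axis
        "lower_legs": (knee_row, bottom),    # split left/right about the body axis
    }

regions = segment_rows(0, 100)
print(regions["torso"])  # (13, 53)
```

Splitting the thigh and lower-leg bands about the vertical body axis then yields the six segments a–f.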
2.4 Skeletonization of Body Segments
To better represent each body segment, morphological skeleton is used to construct
the skeleton for each of the body segments. Skeletonization involves consecutive
erosions and opening operations on the image until the set difference between the two
operations is zero.
Erosion:         A ⊖ kB
Opening:         (A ⊖ kB) ∘ B
Set difference:  S_k(A) = (A ⊖ kB) \ ((A ⊖ kB) ∘ B)            (2)

where A is an image, B is the structuring element and k runs from zero to infinity. Fig. 5 shows the skeleton of the body segments.
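The erode/open/set-difference iteration of Eq. (2) can be sketched on set-based binary images; this is an illustrative implementation with an assumed 3 × 3 structuring element, not the paper's code:

```python
# Morphological skeleton (Eq. 2): for k = 0, 1, 2, ... compute
# S_k = (A erode kB) \ ((A erode kB) o B) and union the S_k until the
# k-fold erosion becomes empty. Images are sets of (row, col) pixels.

def erode(A, B):
    return {p for p in A if all((p[0] + dy, p[1] + dx) in A for dy, dx in B)}

def dilate(A, B):
    return {(p[0] + dy, p[1] + dx) for p in A for dy, dx in B}

def skeleton(A, B):
    S = set()
    eroded = set(A)                            # k = 0: A erode 0B = A
    while eroded:
        opened = dilate(erode(eroded, B), B)   # (A erode kB) o B
        S |= eroded - opened                   # S_k from Eq. (2)
        eroded = erode(eroded, B)              # advance to the next erosion level
    return S

# A 5x5 solid square with a 3x3 square element reduces to its centre pixel.
B = {(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)}
A = {(y, x) for y in range(5) for x in range(5)}
print(skeleton(A, B))  # {(2, 2)}
```

The loop terminates because each erosion strictly shrinks a finite foreground, so "the set difference between the two operations is zero" once the eroded image vanishes.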
The Hough transform detects straight lines by mapping each skeleton point through a parameter space. The skeleton of each body segment, which is the longest line, is indicated by the highest-intensity point in the parameter space. Fig. 6 shows the joint angle formed from the most probable straight line detected via the Hough transform, where θ is the joint angle calculated from the angle α of the detected line using

θ = 90° + α .                                              (3)
(4)
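As an illustrative sketch (not the paper's implementation), the dominant line angle of a skeleton can be recovered by voting in the (θ, ρ) parameter space; the 1° grid resolution and the test pixels below are assumptions:

```python
# Minimal Hough-transform sketch: every foreground pixel votes for all
# (theta, rho) cells of lines passing through it; the cell with the most
# votes gives the angle of the strongest straight line in the skeleton.
import math
from collections import Counter

def dominant_line_angle(points, n_theta=180):
    """Return the normal angle (degrees) of the strongest line through points."""
    votes = Counter()
    for y, x in points:
        for t in range(n_theta):            # theta sampled at 1-degree steps
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(t, rho)] += 1            # one vote per (theta, rho) cell
    (t_best, _), _ = votes.most_common(1)[0]
    return t_best

segment = [(y, 3) for y in range(100)]      # pixels of a vertical line x = 3
print(dominant_line_angle(segment))         # 0: a vertical line has theta = 0
```

The returned angle of the body-segment line is then converted to the joint angle as described above.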
3 Classification Technique
For the classification, the supervised fuzzy k-nearest neighbour (KNN) algorithm is applied, as there is sufficient data for training and testing. Basically, KNN is a classifier that distinguishes different subjects based on the nearest training data in the feature space. In other words, subjects are classified according to the majority of their nearest neighbours.
As an extension to KNN, Keller [11] integrated fuzzy relations with the KNN. According to Keller's concept, the unlabelled subject's membership in class i is given by Equation (5):

u_i(x) = [ Σ_{x′ ∈ KNN} u_i(x′) (1 / ||x − x′||^(2/(m−1))) ] / [ Σ_{x′ ∈ KNN} (1 / ||x − x′||^(2/(m−1))) ] ,    (5)

where x, x′ and u_i(x′) denote the unlabelled subject, a labelled subject and x′'s membership of class i, respectively. Equation (5) computes the membership value of the unlabelled subject from the membership values of the labelled subjects, weighted by the distances between the unlabelled subject and its KNN labelled subjects.
Through this fuzziness, the KNN assigns the appropriate class to the unlabelled subject by summing the similarities to the labelled subjects. The algorithm for identifying a subject is implemented as follows:
Step 1: Compute the distance between the unlabelled subject and all labelled (training) subjects. The distance between an unlabelled subject xi and a labelled subject xj is defined as:

D(xi, xj) = ||xi − xj||² .                                 (6)
Step 2: Sort the subjects by similarity and identify the k nearest neighbours:

KNN = {x1, x2, …, xk} .                                    (7)
Step 3: Compute the membership value for every class using Equation (5).
Step 4: Classify unlabelled subject to the class with the maximum membership value
as shown in Fig. 8.
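Under illustrative assumptions (a 2D feature space, crisp training memberships, and fuzzifier m = 2, none of which are specified by the steps above), steps 1 to 4 can be sketched as:

```python
# Fuzzy KNN sketch (Eqs. 5-7): distance-weighted class memberships from the
# k nearest labelled subjects, with crisp neighbour memberships u_i(x') = 1
# for a subject's own class. Training data below are made-up toy values.
import math
from collections import defaultdict

def fuzzy_knn(x, labeled, k=3, m=2):
    """labeled: list of (feature_vector, class_label). Returns memberships of x."""
    # Steps 1-2: distances to all labelled subjects, then the k nearest.
    neighbors = sorted(labeled, key=lambda s: math.dist(x, s[0]))[:k]
    # Step 3: class memberships via Eq. (5), exponent 2/(m-1).
    num, den = defaultdict(float), 0.0
    for xj, cls in neighbors:
        w = 1.0 / (math.dist(x, xj) ** (2 / (m - 1)) + 1e-12)  # avoid div by zero
        num[cls] += w            # crisp membership: u_i(xj) = 1 for xj's class
        den += w
    return {cls: v / den for cls, v in num.items()}

# Step 4: classify to the class with the maximum membership value.
train = [((0.0, 0.0), "A"), ((0.1, 0.0), "A"), ((1.0, 1.0), "B"), ((1.1, 1.0), "B")]
u = fuzzy_knn((0.05, 0.05), train, k=3)
print(max(u, key=u.get))  # A
```

Because the memberships are normalized by the same denominator, they sum to one over the classes present among the k neighbours, mirroring Eq. (5).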
In Fig. 8, the values on the lines denote the similarities between the unlabelled and labelled subjects. The sum of membership values is m1 = 0.7 for Class 1 and m2 = 0.3 for Class 2. Since m1 is greater than m2, the unlabelled subject is classified as Class 1.
All the features were channelled into the classification process, and the distance of similarity between an unlabelled object xi and a labelled object xj was defined by Equation (8).
(8)
The adopted algorithm was the supervised fuzzy KNN, which requires training and testing. For the training part, a minimum of eight sets of walking data was used for each subject. The residual data were used for testing. The allocation of the training and testing data for each condition is shown in Table 1.
Table 1. Allocation of the data for each condition
Normal speed
Wearing own shoes
Wearing boots
Different values of the number of nearest neighbours k were adopted for the classification, where k = 3, 4, 5, 6, 7 and 8. Since the minimum number of training sets is eight, the maximum value of k was set to eight. The results obtained are depicted in Fig. 10 and Table 2.
Fig. 10. Graph for the percentage of accuracy versus the value of k
Table 2. The percentage of accuracy for fuzzy KNN

k                   3      4      5      6      7      8
Normal speed (%)    78.3   76.4   75.5   75.5   76.4   77.4
Wearing boots (%)   83     82     81     82     81     80
From Table 2, it can be concluded that changes in the value of k do not have a significant impact on the classification accuracy. However, when k = 3, the results were slightly better than for the other values. More satisfactory classification results might be obtained if more features were employed.
In addition to the evaluation for each condition, the classification results for each subject were evaluated as well, to determine which unlabelled subjects were well identified and which were not across all the conditions. Since k = 3 provided the best result for all three conditions, the subject evaluation was carried out using k = 3.
The results obtained for each subject are shown in Table 3. From Table 3, subject 1 produces the most satisfactory classification results for all three conditions. This is attributable to the adopted features for subject 1 being highly distinctive from those of the other subjects. Furthermore, there was little variation between the training and testing data for subject 1; in other words, subject 1 was well recognizable under all three conditions. For the rest of the subjects, the classification accuracy depended strongly on the conditions. For instance, under normal speed, subject 8 achieved an accuracy of only 36.4%, owing to the large number of misclassifications of subject 8 as other subjects.
Table 3. The percentage of the classification results for three conditions when k = 3

Subject     Normal speed (%)   Wearing boots (%)
1           100                100
2           81.8               90.9
3           92.3               81.8
4           80                 81.8
5           100                90.9
6           100                45.5
7           57.1               85.7
8           36.4               91.7
9           50                 90
5 Conclusion
We have described a new approach for extracting gait features from enhanced human silhouette images. The gait features are extracted from the human silhouette by determining the skeleton of each body segment. The joint angles are obtained by applying the Hough transform to the skeletons. In the future, more gait features will be extracted and applied in order to achieve higher classification accuracy.
Acknowledgment
The authors would like to thank Prof. Mark Nixon, School of Electronics and Computer Science, University of Southampton, United Kingdom, for providing the database used in this work.
References
1. BenAbdelkader, C., Cutler, R., Nanda, H., Davis, L.: EigenGait: Motion-based Recognition of People Using Image Self-similarity. In: Proceedings of the International Conference on Audio- and Video-Based Person Authentication, pp. 284–294 (2001)
2. Bobick, A., Johnson, A.: Gait Recognition Using Static, Activity-specific Parameters. In: Proceedings of IEEE Computer Vision and Pattern Recognition, pp. 423–430 (2001)
3. Cunado, D., Nixon, M., Carter, J.: Automatic Extraction and Description of Human Gait Models for Recognition Purposes. Computer Vision and Image Understanding 90, 1–41 (2003)
4. BenAbdelkader, C., Cutler, R., Davis, L.: Motion-based Recognition of People in Eigengait Space. In: Proceedings of the Fifth IEEE International Conference, pp. 267–272 (2002)
5. Collins, R., Gross, R., Shi, J.: Silhouette-based Human Identification from Body Shape and Gait. In: Proceedings of the Fifth IEEE International Conference, pp. 366–371 (2002)
6. Phillips, P.J., Sarkar, S., Robledo, I., Grother, P., Bowyer, K.: The Gait Identification Challenge Problem: Data Set and Baseline Algorithm. In: Proceedings of the 16th International Conference on Pattern Recognition, pp. 385–389 (2002)
7. Yoo, J.H., Nixon, M.S., Harris, C.J.: Extracting Human Gait Signatures by Body Segment Properties. In: Fifth IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 35–39 (2002)
8. Wagg, D.K., Nixon, M.S.: On Automated Model-based Extraction and Analysis of Gait. In: Proceedings of the 6th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 11–16 (2004)
9. Shutler, J.D., Grant, M.G., Nixon, M.S., Carter, J.N.: On a Large Sequence-based Human Gait Database. In: Proceedings of the 4th International Conference on Recent Advances in Soft Computing, pp. 66–71 (2002)
10. Dempster, W.T., Gaughran, G.R.L.: Properties of Body Segments Based on Size and Weight. American Journal of Anatomy 120, 33–54 (1967)
11. Keller, J., Gray, M., Givens, J.: A Fuzzy K-Nearest Neighbour Algorithm. IEEE Trans. Systems, Man, and Cybernetics 15, 580–585 (1985)