
Extraction and Classification of Human Gait Features

Hu Ng¹, Wooi-Haw Tan², Hau-Lee Tong¹, Junaidi Abdullah¹, and Ryoichi Komiya³

¹ Faculty of Information Technology, Multimedia University, Persiaran Multimedia, 63100 Cyberjaya, Selangor, Malaysia
² Faculty of Engineering, Multimedia University, Persiaran Multimedia, 63100 Cyberjaya, Selangor, Malaysia
³ Department of Mechatronic and BioMedical Engineering, Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Jalan Genting Kelang, Setapak, 53300 Kuala Lumpur
{nghu,twhaw,hltong,junaidi}@mmu.edu.my, ryoichi@utar.edu.my

Abstract. In this paper, a new approach is proposed for extracting human gait features from silhouette images of a walking human. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying the morphological skeleton to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottoms of the right and left legs from the body segment skeletons. The joint angles and step-size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results demonstrate that the proposed system is feasible and achieves satisfactory results.
Keywords: Human identification, Gait analysis, Fuzzy k-nearest neighbour.

1 Introduction
Personal identification or verification schemes are widely used in systems that must determine the identity of an individual before granting permission to access or use their services. Human identification based on biometrics refers to the automatic recognition of individuals from their physical and/or behavioural characteristics, such as face, fingerprint, gait and voice. Biometrics are becoming important and widely accepted nowadays because they are truly personal: unlike tokens or passwords, they cannot be lost or forgotten over time.
Gait is unique, as every individual has his or her own walking pattern. Human walking is a complex locomotive action that involves synchronized motions of body parts and joints, and the interaction among them [1]. Gait is a new motion-based biometric technology that offers the ability to identify people at a distance when other biometrics are obscured. Furthermore, there is no point of contact with any feature-capturing device, so the approach is unobtrusive.

H. Badioze Zaman et al. (Eds.): IVIC 2009, LNCS 5857, pp. 596–606, 2009.
© Springer-Verlag Berlin Heidelberg 2009


Basically, gait analysis can be divided into two major categories, namely the model-based method and the model-free method. The model-based method generally models the human body structure or motion and extracts features to match them to the model components. The extraction process involves a combination of information on the human shape and the dynamics of human gait. This implies that the gait dynamics are extracted directly by determining joint positions from model components, rather than inferring dynamics from other measures, thus reducing the effect of background noise (such as the movement of other objects). For instance, Johnson used activity-specific static body parameters for gait recognition without directly analyzing gait dynamics [2]. Cunado used thigh joint trajectories as the gait features [3]. The advantages of this method are the ability to derive gait signatures directly from model parameters and its robustness to different clothing and viewpoints. However, it is time-consuming and the computational cost is high due to the complex matching and searching process.
On the other hand, the model-free method normally distinguishes the entire human body motion using a concise representation, without considering the underlying structure. The advantages of this method are low computational cost and shorter processing time. For instance, BenAbdelkader proposed an eigengait method using image self-similarity plots [4]. Collins established a method based on template matching of body silhouettes in key frames during a human's walking cycle [5]. Phillips characterized the spatial-temporal distribution generated by gait motion in its continuum [6].
This paper presents a unique concept for extracting the gait features of a walking human from consecutive silhouette images. First, the height and width of the human subject are determined. Next, each human silhouette image is enhanced and divided into six body segments to construct the two-dimensional (2D) skeleton of the body model. Then, the Hough transform is applied to obtain the joint angle of each body segment. The distance between the bottoms of both lower legs can also be obtained from the body segment skeletons. This concept of joint angle calculation is faster and less complicated than model-based methods such as the linear regression approach by Yoo [7] and the temporal accumulation approach by Wagg [8].

2 Overview of the System


First, morphological opening is applied to reduce background noise in the raw human silhouette images. The width and height of each human silhouette are then measured. Next, each of the enhanced human silhouettes is divided into six body segments based on anatomical knowledge [10]. The morphological skeleton is later applied to obtain the skeleton of each body segment. The joint angles are obtained after applying the Hough transform to the skeletons. The step-size, which is the distance between the bottoms of both legs, is also measured from the skeletons of the lower legs. The dimensions of the human silhouette, the step-size and the six joint angles from the body segments (head and neck, torso, right hip and thigh, lower right leg, left hip and thigh, and lower left leg) are then used as the gait features for classification. Fig. 1 summarizes the process flow of the proposed system.


[Fig. 1 shows the process flow as a chart: original image enhancement → measurement of width and height → human silhouette segmentation → skeletonization of body segments → joint angles extraction → measurement of step-size → computation of the similarities → determination of the k-nearest neighbours → classification of the unlabelled subjects.]

Fig. 1. Flow chart of the proposed system

2.1 Original Image Enhancement


The raw human silhouette images were obtained from the small-subject gait database of the University of Southampton [9]. Static cameras were used to capture eleven subjects walking along an indoor track from four different angles. The video data was first preprocessed with a Gaussian averaging filter for noise suppression, followed by Sobel edge detection and background subtraction to create the human silhouette images.
Due to the poor lighting conditions during the video shooting, shadows were found, especially near the feet. They appeared as part of the subject's body in the binary human silhouette image, as shown in Fig. 2. The presence of this artefact affects the gait feature extraction and the measurement of joint angles. The problem can be reduced by applying morphological opening with a 7×7 diamond-shaped structuring element, as denoted by

A ∘ B = (A ⊖ B) ⊕ B .                                    (1)

where A is the image and B is the structuring element. The opening first performs an erosion operation, followed by a dilation operation. Fig. 2 shows the result of applying morphological opening on a human silhouette image.

(a) Original image

(b) Enhanced image

Fig. 2. Original and enhanced image after morphological opening
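As an illustration of Eq. (1), binary opening can be sketched in pure Python. This is a minimal sketch, not the authors' implementation: it uses a hypothetical 3×3 diamond (cross) structuring element rather than the paper's 7×7 diamond, and treats pixels outside the image as background.

```python
def diamond(radius):
    """Diamond-shaped structuring element as a list of (dy, dx) offsets."""
    return [(dy, dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if abs(dy) + abs(dx) <= radius]

def erode(img, se):
    """A pixel survives erosion only if every SE offset lands on foreground."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

def dilate(img, se):
    """A pixel is set if any SE offset lands on a foreground pixel."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

def opening(img, se):
    """Eq. (1): erosion followed by dilation removes blobs smaller than B."""
    return dilate(erode(img, se), se)
```

An isolated noise pixel is smaller than the structuring element, so the opening removes it while the bulk of the silhouette survives.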


2.2 Measurement of Width and Height


The width and height of the subject during the walking sequences are measured from the bounding box of the enhanced human silhouette, as shown in Fig. 3. These two features will be used for gait analysis at a later stage.

Fig. 3. The width and height of a human silhouette
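The bounding-box measurement can be sketched as follows; silhouette_size is a hypothetical helper (not from the paper) that scans a binary mask for the occupied rows and columns.

```python
def silhouette_size(img):
    """Width and height of the silhouette's bounding box (Section 2.2).

    img is a binary mask (list of rows); returns (width, height) of the
    tightest box around the foreground pixels.
    """
    rows = [y for y, row in enumerate(img) if any(row)]          # occupied rows
    cols = [x for x in range(len(img[0]))
            if any(row[x] for row in img)]                       # occupied columns
    height = rows[-1] - rows[0] + 1
    width = cols[-1] - cols[0] + 1
    return width, height
```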

2.3 Dividing Human Silhouette


At this stage, the enhanced human silhouette is divided into six body segments based on anatomical knowledge [10]. First, the centroid of the subject is determined by calculating the centre of mass of the human silhouette. The area above the centroid is considered the upper body (head, neck and torso), and the area below the centroid is considered the lower body (hips, legs and feet).
Next, the top third of the upper body is taken as the head and neck, and the remaining two thirds are classified as the torso. The lower body is divided into two portions, (i) hips and thighs and (ii) lower legs and feet, with a ratio of one to two. The centroid's horizontal coordinate is then used to divide these two portions into the final four segments: (i) right hip and thigh, (ii) lower right leg and foot, (iii) left hip and thigh and (iv) lower left leg and foot.

Fig. 4. Six body segments


Fig. 4 shows the six segments of the body, where a represents the head and neck, b represents the torso, c represents the right hip and thigh, d represents the lower right leg and foot, e represents the left hip and thigh and f represents the lower left leg and foot.
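The segment boundaries described in Section 2.3 can be sketched as row and column ranges. segment_boundaries is a hypothetical helper, and the integer division is an assumption about how the one-third ratios are discretized; the paper does not specify this.

```python
def segment_boundaries(top, bottom, cy, cx):
    """Row/column boundaries of the six body segments (Section 2.3).

    top/bottom: first and last rows of the silhouette bounding box,
    (cy, cx): centroid. Returns a dict of (row_start, row_end) ranges;
    the left/right halves of the lower body are split at column cx.
    """
    upper = cy - top                       # height of the upper body
    lower = bottom - cy                    # height of the lower body
    head_end = top + upper // 3            # head + neck = top third of upper body
    thigh_end = cy + lower // 3            # hips+thighs : lower legs = 1 : 2
    return {
        'head_neck': (top, head_end),
        'torso': (head_end, cy),
        'hip_thigh': (cy, thigh_end),      # split into left/right at column cx
        'lower_leg': (thigh_end, bottom),  # split into left/right at column cx
        'left_right_split_col': cx,
    }
```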
2.4 Skeletonization of Body Segments
To better represent each body segment, the morphological skeleton is used to construct the skeleton of each body segment. Skeletonization involves successive erosions and opening operations on the image, accumulating the set difference between the eroded image and its opening at each level, until the eroded image is empty.
Erosion: A ⊖ kB
Opening: (A ⊖ kB) ∘ B
Set difference: (A ⊖ kB) \ ((A ⊖ kB) ∘ B)                    (2)

where A is an image, B is the structuring element and k runs from zero to infinity. Fig. 5 shows the skeleton of the body segments.

Fig. 5. Skeleton on a torso segment
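Eq. (2) is Lantuéjoul's morphological skeleton: the union over k of the set differences between the k-th erosion and its opening. A minimal sketch on a point-set representation, assuming a 3×3 cross structuring element (the paper does not state the exact element used here):

```python
# Cross-shaped (3x3 diamond) structuring element as (dy, dx) offsets.
CROSS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def erode(pts, se):
    """Keep a point only if every SE offset from it is also in the set."""
    return {p for p in pts
            if all((p[0] + dy, p[1] + dx) in pts for dy, dx in se)}

def dilate(pts, se):
    """Union of the SE translated to every point of the set."""
    return {(p[0] + dy, p[1] + dx) for p in pts for dy, dx in se}

def skeleton(pts):
    """Eq. (2): union over k of (A erode kB) minus its opening."""
    pts = set(pts)
    skel = set()
    while pts:                        # stop once the k-th erosion is empty
        opened = dilate(erode(pts, CROSS), CROSS)
        skel |= pts - opened          # set difference at this erosion level
        pts = erode(pts, CROSS)       # next erosion level, k -> k + 1
    return skel
```

A one-pixel-wide line is its own skeleton, since its opening by the cross is empty.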

2.5 Joint Angles Extraction


To extract the joint angle of each body segment, the Hough transform is applied to the skeleton. The Hough transform maps pixels in the image space to straight lines through a parameter space. The skeleton of each body segment, which is the longest line, is indicated by the highest-intensity point in the parameter space. Fig. 6 shows the joint angle formation from the most probable straight line detected via the Hough transform, where the joint angle θ is calculated from the orientation α of the detected line using

θ = 90° + α .                                     (3)

Fig. 6. Joint angle formation
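A tiny Hough transform over the skeleton points can be sketched as below. This is a minimal sketch, not the authors' implementation: it assumes the usual (θ, ρ) line parameterization with ρ = x·cos θ + y·sin θ, 1° angular bins, and the Eq. (3) convention that the joint angle is the dominant line orientation plus 90°.

```python
import math

def dominant_line_angle(points, theta_steps=180):
    """Vote in (theta, rho) space and return the theta (degrees) of the
    strongest straight line through the given (y, x) skeleton points."""
    votes = {}
    for theta_i in range(theta_steps):
        theta = math.pi * theta_i / theta_steps
        for y, x in points:
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(theta_i, rho)] = votes.get((theta_i, rho), 0) + 1
    # The highest-intensity accumulator cell marks the dominant line.
    (theta_i, _), _ = max(votes.items(), key=lambda kv: kv[1])
    return 180.0 * theta_i / theta_steps

def joint_angle(points):
    """Eq. (3): joint angle = 90 degrees + Hough line orientation."""
    return 90.0 + dominant_line_angle(points)
```

A vertical skeleton (constant x) peaks at θ = 0°, giving a joint angle of 90°; a horizontal one peaks at θ = 90°.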

2.6 Measurement of Step-Size


To obtain the step-size of each walking sequence, the Euclidean distance between the bottom ends of the lower right leg and the lower left leg is measured.
Fig. 7 shows all the gait features extracted from a human silhouette, where Angle 7 is the thigh angle, calculated as

Angle 7 = Angle 6 − Angle 4 .                         (4)
Fig. 7. All the extracted gait features
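The step-size measurement of Section 2.6 reduces to a single distance computation; step_size is a hypothetical helper taking the (y, x) bottom endpoints of the two lower-leg skeletons.

```python
import math

def step_size(right_bottom, left_bottom):
    """Euclidean distance between the bottom ends of the lower-leg
    skeletons (Section 2.6)."""
    (y1, x1), (y2, x2) = right_bottom, left_bottom
    return math.hypot(x2 - x1, y2 - y1)
```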

3 Classification Technique
For the classification, the supervised fuzzy k-nearest neighbour (KNN) algorithm is applied, as there is sufficient data for training and testing. Basically, KNN is a classifier that distinguishes different subjects based on the nearest training data in the feature space. In other words, subjects are classified according to the majority of their nearest neighbours.


As an extension to KNN, Keller [11] integrated fuzzy relations with KNN. According to Keller's concept, the unlabelled subject's membership in class i is given by Equation (5):

u_i(x) = [ Σ_{x_j ∈ KNN} u_i(x_j) / ||x − x_j||^(2/(m−1)) ] / [ Σ_{x_j ∈ KNN} 1 / ||x − x_j||^(2/(m−1)) ]     (5)

where x, x_j and u_i(x) represent the unlabelled subject, the labelled subjects and x's membership in class i, respectively. Equation (5) computes the membership value of the unlabelled subject from the membership values of the labelled subjects and the distances between the unlabelled subject and its KNN labelled subjects.
Through this fuzziness, KNN annotates the appropriate class to the unlabelled subject using the sum of similarities to the labelled subjects. The algorithm for identifying a human subject is implemented as follows:
Step 1: Compute the distance between the unlabelled subject and all labelled (training) subjects. The distance between an unlabelled subject x_i and a labelled subject x_j is defined as:

D(x_i, x_j) = ||x_i − x_j||² .                             (6)

Step 2: Sort the subjects based on similarity and identify the k nearest neighbours:

KNN = {x_1, x_2, ..., x_k} .                              (7)

Step 3: Compute the membership value for every class using Equation (5).
Step 4: Classify the unlabelled subject to the class with the maximum membership value, as shown in Fig. 8.

Fig. 8. An example for four nearest neighbours


In Fig. 8, the values on the lines denote the similarities between the unlabelled and labelled subjects. The sum of the membership values for Class 1 is m1 = 0.7 and for Class 2 is m2 = 0.3. Since m1 is greater than m2, the unlabelled subject is classified as Class 1.
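Steps 1 to 4 above can be sketched as one function. This is a minimal sketch of the Keller-style fuzzy KNN, assuming crisp labels (u_i(x_j) is 1 for x_j's own class and 0 otherwise) and the common choice m = 2; a small epsilon guards against division by zero when a training subject coincides with the query.

```python
def fuzzy_knn_classify(unlabelled, labelled, k=3, m=2):
    """Fuzzy KNN membership per Eq. (5) over the k nearest neighbours.

    labelled: list of (feature_vector, class_label) pairs.
    Returns {class: membership}, normalised to sum to one.
    """
    def dist2(a, b):                       # Eq. (6): squared distance
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    # Eq. (7): the k nearest labelled subjects to the unlabelled one.
    knn = sorted(labelled, key=lambda lj: dist2(unlabelled, lj[0]))[:k]

    memberships, total = {}, 0.0
    for xj, cls in knn:
        # dist2 ** (1/(m-1)) equals ||x - x_j|| ** (2/(m-1)) from Eq. (5).
        w = 1.0 / (dist2(unlabelled, xj) ** (1.0 / (m - 1)) + 1e-12)
        memberships[cls] = memberships.get(cls, 0.0) + w
        total += w
    return {cls: w / total for cls, w in memberships.items()}
```

The class with the maximum membership value is then assigned to the unlabelled subject (Step 4).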

4 Experimental Results and Discussion


The experiment was carried out on nine subjects under three different conditions: walking at normal speed, walking in their own shoes and walking in boots. The major objective was to determine the accuracy of the fuzzy KNN technique for different values of k. For each subject, there were approximately twenty sets of walking data on the normal track (walking parallel to the static camera).
In order to obtain optimized results, five features were adopted for the classification. First, the maximum thigh angle, θmax, was determined from all the thigh angles collected during a walking sequence. Once θmax was located, the corresponding values of the step-size S, width w and height h were determined as well. From the plotted graph, it can be observed that the width of each subject changes in a sinusoidal pattern over time, as shown in Fig. 9. Therefore, the last employed feature is the average of the maximum widths, A_P.

Fig. 9. Graph of width versus time

All the features were channelled into the classification process, and the distance of similarity between an unlabelled subject x_i and a labelled subject x_j was defined by Equation (8):

D(x_i, x_j) = (θ_i,max − θ_j,max)² + (w_i − w_j)² + (h_i − h_j)² + (S_i − S_j)² + (A_i,P − A_j,P)² .      (8)
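As written, Eq. (8) is the sum of squared differences over the five adopted features, matching the squared distance of Eq. (6). A one-line sketch, with feature vectors ordered as (θmax, w, h, S, A_P):

```python
def feature_distance(fi, fj):
    """Eq. (8): sum of squared differences over the five gait features
    (theta_max, w, h, S, A_P)."""
    return sum((a - b) ** 2 for a, b in zip(fi, fj))
```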

The adopted algorithm was supervised fuzzy KNN, which requires training and testing. For training, a minimum of eight sets of walking data was used for each subject. The remaining data were used for testing. The allocation of training and testing data for each condition is shown in Table 1.

Table 1. Allocation of the data for each condition

Condition            Testing data set    Training data set
Normal speed         106                 78
Wearing own shoes    101                 79
Wearing boots        100                 74

Different values of k nearest neighbours were adopted for the classification, where k = 3, 4, 5, 6, 7 and 8. Since the minimum number of training sets per subject is eight, the maximum value of k was set to eight. The results obtained are depicted in Fig. 10 and Table 2.

Fig. 10. Graph for the percentage of accuracy versus the value of k
Table 2. The percentage of accuracy for fuzzy KNN

k    Normal speed (%)    Wearing own shoes (%)    Wearing boots (%)
3    78.3                72.3                     83
4    76.4                75.2                     82
5    75.5                72.3                     81
6    75.5                71.3                     82
7    76.4                71.3                     81
8    77.4                72.3                     80

From Table 2, it can be concluded that changing the value of k does not have a significant impact on the accuracy of the classification. However, when k = 3, the results were slightly better than for the other values. More satisfactory classification results might be obtained if more features are employed.
In addition to the evaluation for each condition, classification results for each subject were evaluated as well. This was to determine which unlabelled subjects were well identified, and which were not, across all conditions. Since k = 3 provided the best result for all three conditions, the subject evaluation was carried out using k = 3.


The results obtained for the respective subjects are shown in Table 3. From Table 3, subject 1 produced the most satisfactory classification results for all three conditions. This is attributable to the adopted features of subject 1 being highly distinctive from those of the other subjects. Furthermore, there was little variation between the training and testing data for subject 1. In other words, subject 1 was well recognizable under all three conditions. For the rest of the subjects, the accuracy of the classification depended strongly on the conditions. For instance, at normal speed, subject 8 achieved an accuracy of only 36.4%. This was due to the large number of misclassifications of subject 8 as other subjects.
Table 3. The percentage of the classification results for three conditions when k = 3

Subject      Normal speed (%)    Wearing own shoes (%)    Wearing boots (%)
Subject 1    100                 81.8                     100
Subject 2    81.8                58.3                     90.9
Subject 3    92.3                70                       81.8
Subject 4    80                  37.5                     81.8
Subject 5    100                 53.8                     90.9
Subject 6    100                 90.9                     45.5
Subject 7    57.1                85.7                     85.7
Subject 8    36.4                80                       91.7
Subject 9    50                  83.3                     90

5 Conclusion
We have described a new approach for extracting gait features from enhanced human silhouette images. The gait features are extracted from the human silhouette by determining the skeleton of each body segment. The joint angles are obtained by applying the Hough transform to the skeletons. In the future, more gait features will be extracted and applied in order to achieve a higher classification accuracy.

Acknowledgment
The authors would like to thank Prof. Mark Nixon, School of Electronics and Computer Science, University of Southampton, United Kingdom, for providing the database used in this work.

References
1. BenAbdelkader, C., Cutler, R., Nanda, H., Davis, L.: EigenGait: Motion-based Recognition of People Using Image Self-similarity. In: Proceedings of the International Conference on Audio- and Video-Based Person Authentication, pp. 284–294 (2001)
2. Bobick, A., Johnson, A.: Gait Recognition Using Static, Activity-specific Parameters. In: Proceedings of IEEE Computer Vision and Pattern Recognition, pp. 423–430 (2001)
3. Cunado, D., Nixon, M., Carter, J.: Automatic Extraction and Description of Human Gait Models for Recognition Purposes. Computer Vision and Image Understanding 90, 1–41 (2003)
4. BenAbdelkader, C., Cutler, R., Davis, L.: Motion-based Recognition of People in Eigengait Space. In: Proceedings of the Fifth IEEE International Conference, pp. 267–272 (2002)
5. Collins, R., Gross, R., Shi, J.: Silhouette-based Human Identification from Body Shape and Gait. In: Proceedings of the Fifth IEEE International Conference, pp. 366–371 (2002)
6. Phillips, P.J., Sarkar, S., Robledo, I., Grother, P., Bowyer, K.: The Gait Identification Challenge Problem: Data Set and Baseline Algorithm. In: Proceedings of the 16th International Conference on Pattern Recognition, pp. 385–389 (2002)
7. Yoo, J.H., Nixon, M.S., Harris, C.J.: Extracting Human Gait Signatures by Body Segment Properties. In: Fifth IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 35–39 (2002)
8. Wagg, D.K., Nixon, M.S.: On Automated Model-based Extraction and Analysis of Gait. In: Proceedings of the 6th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 11–16 (2004)
9. Shutler, J.D., Grant, M.G., Nixon, M.S., Carter, J.N.: On a Large Sequence-based Human Gait Database. In: Proceedings of the 4th International Conference on Recent Advances in Soft Computing, pp. 66–71 (2002)
10. Dempster, W.T., Gaughran, G.R.L.: Properties of Body Segments Based on Size and Weight. American Journal of Anatomy 120, 33–54 (1967)
11. Keller, J., Gray, M., Givens, J.: A Fuzzy K-Nearest Neighbor Algorithm. IEEE Transactions on Systems, Man, and Cybernetics 15, 580–585 (1985)
