
Computers in Biology and Medicine 42 (2012) 135–146

Contents lists available at SciVerse ScienceDirect

Computers in Biology and Medicine


journal homepage: www.elsevier.com/locate/cbm

Computational methodology for automatic detection of strabismus in digital images through Hirschberg test

João Dallyson Sousa de Almeida, Aristófanes Corrêa Silva, Anselmo Cardoso de Paiva, Jorge Antonio Meireles Teixeira

Federal University of Maranhão (UFMA), Applied Computing Group (NCA/UFMA), Av. dos Portugueses, SN, Campus do Bacanga, Bacanga 65085-580, São Luís, MA, Brazil

Article info

Abstract

Article history:
Received 9 September 2010
Accepted 6 November 2011

Strabismus is a pathology that affects about 4% of the population, causing aesthetic problems that are reversible at any age, but also problems that can cause irreversible muscular alterations and alter the vision mechanism. The Hirschberg test is one of the exams used to detect this pathology. The application of high-technology resources to help diagnose and treat ophthalmological conditions is, lamentably, not commonly found in the sub-specialty of strabismus. This work presents a methodology for automatic detection of strabismus in digital images through the Hirschberg test. To this end, the work was organized into four stages: (1) finding the region of the eyes; (2) determining the precise location of the eyes; (3) locating the limbus and the brightness; and (4) identifying strabismus. The methodology produced results of 100% sensitivity, 91.3% specificity and 94% accuracy for the identification of strabismus, demonstrating the efficiency of its geostatistical functions for the extraction of eye texture and for the calculation of the alignment between the eyes in digital images obtained from the Hirschberg test.

© 2011 Elsevier Ltd. All rights reserved.

Keywords:
Medical image
Strabismus
Hirschberg test
Geostatistical functions
Image processing
Pattern recognition
Support vector machine

1. Introduction

Strabismus is an abnormal condition that makes the eyes lose their parallelism. While one eye stares at a frontal point, the other turns aside, or even upwards or downwards. Because of this, the brain receives two images with different focuses, instead of two images that converge into a single spot. There are several types of strabismus: the affected eye can be yawed toward the nose (convergent strabismus); it can turn aside (divergent strabismus); or it can turn upwards or downwards (vertical strabismus). There can be a combination of horizontal and vertical yaw in the same patient, as, for example, toward the nose and upwards.

In general, it can be said that the mechanical component of strabismus, in other words, the esthetic aspect of the yaw, can be treated at any age. On the other hand, the sensorial disturbances are more significant, and are only treatable at a certain period in one's life: the stage of plasticity of the visual system, which lingers on till the age of nine. Thus, as the main sensorial complication of a yaw is strabismic amblyopia, its treatment

Corresponding author. Tel.: +55 98 33018243; fax: +55 98 33018841.
E-mail addresses: jdallyson@yahoo.com.br (J. Dallyson Sousa de Almeida), ari@dee.ufma.br (A. Corrêa Silva), paiva@deinf.ufma.br (A. Cardoso de Paiva), jorgemeireles1@bol.com.br (J. Antonio Meireles Teixeira).
0010-4825/$ - see front matter © 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.compbiomed.2011.11.001

must be initiated as soon as a strabismus condition with amblyogenic characteristics is detected [1,2].

To diagnose strabismus, the following exams are performed: visual acuity, eye fundus, external examination of the eyes (cornea, sclera, conjunctiva, iris, lens, etc.), and an eye movement exam, obtained by means of the Cover test and the Hirschberg test. The Hirschberg test consists basically of sending a thin beam of light into the patient's eyes in order to verify if the reflection in each eye is located at the same place on both corneas. Besides these exams, there are devices called electronic synoptophores, which measure strabismus via the projection of two separate and dissimilar images in the same position in space.

Despite the increasing use of cutting-edge resources to help with the diagnosis and treatment of various ophthalmological conditions, the sub-specialty of strabismus has not been given the same importance. Considering that it is not easy to find professionals with enough experience in this sub-area away from large urban centers (a fact that makes early diagnoses more difficult), these technologies have become essential in cities farther away from those more advanced centers.
In 1988, the Brazilian government created the Unified Health System (SUS). This system provides health assistance (from simple ambulatory assistance to organ transplantation), ensuring integral, universal and cost-free health benefits for the entire population. Within this program, and in harmony with the principles and dictums of the SUS, the Health Program at Schools


(PSE) was created in September 2008, aiming at reinforcing health actions in the primary sphere, caring for prevention and the promotion of better health assistance in Brazilian schools. The PSE structure lies upon four platforms. Let us consider the first of these, from which the main objectives of our work stem. It includes, among numerous health issues, visual acuity tests. Consequently, the relevance of our work may be directed mainly toward the early diagnosis of strabismus and also toward the implementation of the ophthalmologic acuity test. This project can be further extended to the activities developed in the Family Health Program (PSF). Besides, this is a test that can be safely applied by non-specialized staff, helping with patient triage, and contributing to the reduction of waiting queues and public expenses.
The field that involves the use of computational tools to support the diagnosis of strabismus has come under consideration only recently. However, some tools have already been or are being developed so that health professionals can make reliable decisions concerning a number of sight pathologies.
In [3], the development of a device called Trophorometer is described. The Trophorometer is used to measure the position and the movement of the eye by employing computerized image processing to help diagnose phorias and tropias. The moving window thinning technique was used to detect the edges of the pupil and limbus, and the Hough Transform was applied to locate the pupil.
In [4], a method is proposed that uses telemedicine to treat strabismus in locations where a specialist is unavailable. To that end, digital photographic cameras were employed to capture patients' images, and computers were used to send the images via e-mail to a strabismologist1 so that the images could be analyzed.
Eye motion research laboratories use ocular trackers or magnetic devices to measure yaws and eye movements, but, despite the precision of these devices, these methods are expensive and hard to apply in a real situation [5].

There are also reports of devices used to measure strabismus that work on the same basis as a synoptophore. These devices basically work like a common optical synoptophore, but the fixation images are generated electronically on video, and the measurements are done by means of a computer [6]. Nevertheless, synoptophores are difficult for non-specialized people to use for eye motion assessment. They are neither compact nor easy to transport, and they can only be used on collaborative patients. Finally, such devices have not been used for the evaluation of the yaw in the past decades, as this would require a very special universe of equipment, something like a laboratory, far from the patient's daily reality.
In order to develop a method capable of helping the specialist in the detection of strabismus, one initially needs to determine the position of the eyes. Many approaches have been developed to automatically detect the position of the eyes in digital images. In [7], a method to detect the eyes in facial images using Zernike's moments with support vector machines (SVM) is presented. Here, the eye and non-eye patterns are represented in terms of the magnitude of Zernike's moments, and are classified by means of the SVM. Zernike's moments are invariant to rotation, that is, they can detect the eyes even if the face has been rotated. The orthogonal property of Zernike's polynomials allows each moment to be unique and independent as to the information provided by an image. This method achieved matching rates of 94.6% for detection of eyes in the face images from the ORL base.

1 Physician specialized in the treatment of strabismus.

With a similar goal in mind, the authors in [8] proposed a method for the automatic detection of eyes in digital images of human faces by using the semivariogram, a geostatistical function, to represent the region of the eyes, and a support vector machine to classify eye candidates. The detection obtained a matching rate of 88.45% for images from the ORL base.
Geostatistical functions have been applied in other works as well. In [9], a method to identify people through the analysis of iris texture by using the semivariogram and correlogram functions was proposed. This method produced a success rate of 98.14% on an iris base called CASIA. In [10], geostatistical functions were used to classify lung nodules as either malignant or benign in computerized tomography images.
Differently from the equipment and methods presented above, which are presently being used by ophthalmologists, this work proposes the development of an easy, fast and cheap way of automatically diagnosing someone with strabismus. For this reason, this is a method most useful for the average ophthalmologist. A digital camera and a computer (either portable or not) will be used with strabismus detection software installed, in compliance with the methodology proposed in the present work.

This work, based on a master's dissertation developed by the author [11], aims to evaluate the efficiency and effectiveness of the use of image processing and pattern recognition techniques to automatically diagnose strabismus based on digital images of human faces. The geostatistical measurements (semivariogram, semimadogram, covariogram and correlogram) have been used together with image processing techniques (Canny's method and the Hough Transform), feature selection (stepwise discriminant analysis) and pattern recognition (support vector machines) to verify and determine whether a person is strabismic, using the Hirschberg test as reference.
The remainder of the present work is organized into four sections. Section 2 provides the theoretical basis, without which it would be difficult to understand our approach. Section 3 describes the four stages (detection of the eye region, location of the eyes, location of the limbus and brightness, and identification of strabismus) which comprise the methodology used to detect strabismus in digital images based on the Hirschberg test. In Section 4, the results obtained by the proposed methodology are shown and discussed. Finally, Section 5 presents the conclusions of the work, analyzing the efficiency of the techniques used.

2. Theoretical basis
This section presents the theoretical basis necessary for the
understanding of the proposed methodology.
2.1. Strabismus
Strabismus, one of the commonest ophthalmologic alterations in childhood, can be defined as an abnormal binocular interaction between the eyes, where the same image does not reach the fovea2 of both eyes at the same time; consequently, the eyes do not fixate on the same image.

Once the position of each eye (center of the pupil) is determined relative to a reference (either the observed point or the observation point), i.e. the directions of each axis (either the visual or the pupillary axis), strabismus may be defined as the difference between the expected alignments, i.e. the angle between the ocular directions, corresponding to a disturbance of

2 The fovea is located on the optical axis of the eye; the image of the focused object is projected onto it, and the image formed on it is very sharp [12].


Fig. 1. Example of strabismus.

the binocular positional relation, relative to a given point (normally, the object toward which the sight is directed) [13]. Fig. 1 illustrates the occurrence of strabismus.

The symptoms and the consequences of strabismus differ according to the age at which it appears and the way it manifests itself. Strabismus that appears before the age of 6 triggers an adaptation mechanism that makes the image created in the yawed eye be suppressed, and, as a result, the patient does not present diplopia.3 However, sight diminishment occurs (amblyopia, or lazy eye) in the yawed eye. On the other hand, if a person becomes strabismic after 6 years of age, then this person will present diplopia: each eye will focus the image onto different positions, relative to the yaw. In a child, diplopia is periodical and leads to suppression. This suppression consists of a cortical mechanism of elimination of the image caught by the yawed eye, something that occurs only in children who still have cerebral plasticity.
Several techniques can be applied to the treatment of strabismus with the objective of restoring muscular balance and solving the problem of amblyopia. The medical treatment commonly used is: prescription of glasses, execution of orthoptical exercises, and obstruction of the fixating eye, alternating with the other eye. If the medical treatment does not suffice, surgery may be recommended to ensure the retrocession of the weakened ocular muscles.
2.2. The Hirschberg method
In order to evaluate the strabismus yaw by using a luminous focus, one should initially describe the Hirschberg test, which estimates the approximate magnitude of the yaw from the displacement of the corneal luminous reflection in the non-fixating eye, taking into account the center of its ocular globe. Depending on where the reflection falls with respect to the limbus-iris-pupil complex, one can infer the magnitude of the yaw. Alternatively, in order to avoid the variations resulting from the size of the pupil, one may correlate the luminous reflection to the center of the cornea and the limbus [14]. The term corneal luminous reflection is unsuitable, for it is not a reflection from outside the cornea. What we first see as a luminous reflection is actually Purkinje's image, which is a virtual image located behind the pupil [15].
When examining an individual by means of the Hirschberg test in order to diagnose strabismus, one must observe that the fixating eye has the first Purkinje image aligned to its optical center; consequently, the other eye, the non-fixating eye, is the eye in which the yaw must be observed. The yaw is inferred by comparing the reflection of the light on the anterior surface of the cornea with its optical center and by detecting whether there is a misalignment. As it is difficult to determine the precise location of the optical center on a non-fixating eye, the yaw is evaluated in relation to the anatomic center of the eye, or, in other words, in relation to the center of the pupil. One can notice from this description the existence of another variable interfering with the observation of the yaw: the Kappa angle.4 This angle must be measured for that eye and must be taken into consideration when examining the reflex. However, other factors interfere with the relative positioning of the luminous reflection on the non-fixating eye in relation to the position the reflex assumes in the fixating eye. These factors are: corneal curvature, the size of both the cornea and the eye, and refraction. If the data obtained from both eyes are too dissimilar, they can disturb the evaluation; so much so that, when attempting to analyze or quantify the yaw by means of the Hirschberg method, one must take all of these factors into consideration [16,17].

3 Diplopia consists in the perception of the same object in two different spatial locations (on the retina).

2.3. Geostatistical functions for extraction of features

This work proposes the analysis of digital image textures by means of geostatistical functions, so as to form a textural pattern. Such functions, well known in the study area of Geostatistics, are employed here to describe and recognize the pattern identified by regions of eyes and non-eyes (the other areas of the face: nose, mouth, ears, etc.).

In this context, we use four geostatistical functions (semivariogram, semimadogram, covariogram and correlogram) and a combination of these measurements in the extraction of features to identify and elicit the region of the eyes [10]. The advantage of these functions is that spatial variability and correlation features are analyzed together. These functions relate a measure of association to the distance and, possibly, the direction between samples.

In statistics, texture can be described in terms of two main components related to pixels (or any other unit): spatial variability and autocorrelation. The advantage of the use of spatial statistical techniques is that both aspects can be measured together, as will be discussed in the following sections. These measurements describe the texture of an image through the degree of spatial association present among the image's geographically referenced elements. The organizational correlation of the pixels, taken as independent points, can be analyzed with several measurements, such as those described in the sequence of this section.

2.3.1. Semivariogram
The curve relating the semivariance to the distance between points is called the semivariogram. The greater the distance between the samples, the greater the semivariance; the smaller the distance between them, the smaller the semivariance. The semivariogram is defined by

\gamma(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} (x_i - y_i)^2    (1)

where h is the distance vector (lag distance) between the values of origins, x_i, and the values of extremity, y_i, and N(h) is the number of pairs at distance h.

The other parameters used to calculate the semivariogram, such as lag spacing, lag tolerance, direction, angular tolerance and maximum bandwidth, are illustrated in Fig. 2.
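As an illustrative sketch (not the authors' implementation), Eq. (1) can be estimated over an image patch; the version below considers only horizontal lags (direction 0°, no angular tolerance), pairing each pixel with the pixel h columns to its right:

```python
import numpy as np

def semivariogram_horizontal(img, max_lag):
    """Estimate gamma(h) of Eq. (1) for horizontal lag distances 1..max_lag.

    For each lag h, the pairs are (origin x_i, extremity y_i) taken h
    columns apart on the same row; N(h) is the number of such pairs.
    """
    img = np.asarray(img, dtype=float)
    gammas = []
    for h in range(1, max_lag + 1):
        x = img[:, :-h]          # origins
        y = img[:, h:]           # extremities, h pixels to the right
        n_pairs = x.size         # N(h)
        gammas.append(np.sum((x - y) ** 2) / (2.0 * n_pairs))
    return gammas

# Example: in a patch whose columns increase by 10 per step,
# every pair at lag h differs by 10*h, so gamma(h) = (10*h)^2 / 2.
patch = np.tile(np.arange(0, 50, 10, dtype=float), (4, 1))
print(semivariogram_horizontal(patch, 3))  # [50.0, 200.0, 450.0]
```

The full method also sweeps the 45°, 90° and 135° directions with an angular tolerance, which this sketch omits for brevity.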

2.3.2. Semimadogram
Semimadogram is the mean of the absolute difference measured in the pairs of the sample, as a function of distance and
direction [10]. The semimadogram function is defined by

m(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} |x_i - y_i|    (2)

where h is the distance vector (lag distance) between the values of origins, x_i, and the values of extremity, y_i, and N(h) is the number of pairs at distance h.

4 Angle formed by the visual line and the axis of the pupil.

Fig. 2. Parameters used in the calculation of geostatistical functions (lag increment, lag tolerance, direction, angular tolerance and maximum bandwidth) [10].

2.3.3. Covariogram

The covariogram measures the correlation between two variables. In Geostatistics, covariance is calculated as the variance of the sample minus the value of the variogram. The covariance function tends to increase as the variables' values are closer to each other, i.e., when h = 0, and tends to decrease as these values are farther away from each other, or nearer to the limit. The covariogram is defined by

C(h) = \frac{1}{N(h)} \sum_{i=1}^{N(h)} x_i y_i - m(h) m'(h)    (3)

where m(h) is the mean value of the vectors' origins, and m'(h) is the mean value of the vectors' extremities:

m(h) = \frac{1}{N(h)} \sum_{i=1}^{N(h)} x_i    (4)

m'(h) = \frac{1}{N(h)} \sum_{i=1}^{N(h)} y_i    (5)

2.3.4. Correlogram

The correlation function (correlogram) is the normalized version of the covariance function. The coefficients of correlation range from -1 to 1. The correlation is expected to be higher for units close to each other (correlation 1 for distance zero), and it tends to zero as the distance between the units increases [10]. Correlation is defined by

r(h) = \frac{C(h)}{s(h) s'(h)}    (6)

where s(h) is the standard deviation of the values of the vectors' origins, and s'(h) is the standard deviation of the values of the vectors' extremities:

s(h) = \left[ \frac{1}{N(h)} \sum_{i=1}^{N(h)} x_i^2 - m^2(h) \right]^{1/2}    (7)

s'(h) = \left[ \frac{1}{N(h)} \sum_{i=1}^{N(h)} y_i^2 - m'^2(h) \right]^{1/2}    (8)

2.4. Validation

The methodology uses sensitivity, specificity and accuracy analysis techniques. These are the metrics commonly used to analyze the performance of systems based on image processing. Sensitivity (SE) is defined by TP/(TP+FN), specificity (SP) is defined by TN/(TN+FP), and accuracy (AC) is defined by (TP+TN)/(TP+TN+FP+FN), where TN is true-negative, FN is false-negative, FP is false-positive, and TP is true-positive.

3. Materials and methods

This section describes the procedures used for the automatic detection of strabismus in digital images of patients' faces. First, we introduce the image base used in the tests. Next, we describe the sequence of stages developed in order to achieve the goals of the proposed methodology.
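The validation metrics of Section 2.4 translate directly into code. A minimal sketch (the counts in the example are hypothetical, not the paper's):

```python
def validation_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    se = tp / (tp + fn)                   # sensitivity: true-positive rate
    sp = tn / (tn + fp)                   # specificity: true-negative rate
    ac = (tp + tn) / (tp + tn + fp + fn)  # overall accuracy
    return se, sp, ac

# Hypothetical counts for illustration only.
se, sp, ac = validation_metrics(tp=8, tn=9, fp=1, fn=2)
print(se, sp, ac)  # 0.8 0.9 0.85
```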
3.1. Patients

The images used in the present study were taken of patients from a private ophthalmologic clinic specialized in strabismus in the city of São Luís, MA, Brazil. The images were taken over a period of 8 months. The patients who volunteered to participate in the study signed a consent form.

3.1.1. Acquisition protocol

The patients were submitted to an acquisition protocol established by the physician. This protocol determines the criteria observed for the patients' inclusion in or exclusion from the database. These criteria are described as follows.

The patients included in the database were examined after the following criteria had been observed: visual acuity with the best visual correction, biomicroscopy,5 fundoscopy6 (posterior pole), applanation tonometry7 (as often as possible) and an eye motion exam8 through the Cover Test.

The Cover Test may reveal two situations:

1. There is no yaw: the strabologic evaluation is completed with the 4Δ test, and the sensorial evaluation is done by means of the Titmus stereoscopic acuity test:
(a) if these last two evaluations are normal, the patient will be included in the control group (without strabismus);
(b) if at least one of these two exams presents alterations, the patient will be included in the test group (with strabismus).
2. In case of a yaw, the method of alternated prism and cover will be applied to quantify the yaw. The patient is included in the test group if his/her vertical and/or horizontal yaw is below or equal to 15Δ. Above this value, the patient falls under one of the exclusion criteria.
5 Biomicroscopy corresponds to the outside view of the eye (cornea, sclera, conjunctiva) as well as of all components of the anterior chamber (iris, aqueous humor, crystalline lens and its capsules) and even part of the posterior segment (anterior vitreous and retina), through proper lenses.
6 Fundoscopy is the exam of the eye fundus.
7 Tonometry is the measurement of the pressure inside the eye.
8 The evaluation of eye motion is performed through the cover test (with the manual occluder) or through the corneal luminous reflection, asking the patient to stare at one point (cover test) or light (luminous reflection), to verify the yaw of the eye near and far.


The following criteria were applied to exclude patients from the tests:

Test group (with strabismus):
- Horizontal and/or vertical yaw above 15Δ.
- Opacity or any other alteration of the cornea and/or eyelid that compromises the observation of luminous reflections in both corneas.
- Irregularity in the limbus contour.
- Alterations in the size of one of the eyes, such as microphthalmia. Iris or pupil alterations (leucocoria, anisocoria, discoria) were not included as exclusion criteria, nor were alterations of the posterior segment, with or without visual failure.
- Nystagmus9 present.

Control group (without strabismus):
- Inability to achieve 1.0/1.0 vision (on the Snellen table10) with the best visual correction, or inability to inform visual acuity.
- Inability to achieve an arc of 40 seconds in the Titmus stereoscopic visual acuity test, or inability to perform the test.
- Nystagmus present.

3.1.2. Image acquisition

The acquisition of the images was done in the same ophthalmologic clinic by using a Sony® Cyber-shot camera with 8.1 megapixels and 3× optical zoom, adjusted to image capturing mode (with accurate details and sharpness) at a resolution of 2048 × 1536 pixels.

The picture is taken with the patient sitting on the examination chair. The patient's face must be centered at about 40-50 cm away from the camera. The lights of the room are on, but without any complementary light focus. The patient is asked to stare at an accommodative picture stuck laterally to the camera's objective lens. The flash is on to provide the necessary brightness, that is, the first Purkinje image.11 The macro function is used to secure the perfect focusing of the acquired image, even when the patient is close to the camera. If the patient wears corrective lenses, the picture is taken with the lenses on. Fig. 3 illustrates the picture of the patient's face, taken in the clinic.

The images acquired for the tests comprised 45 pictures of patients of both genders and of different ages, with and without corrective lenses. Of these 45 patients, 15 were pronounced strabismic by the specialist.

3.2. Proposed methodology

The detection of strabismus in digital images depends on the precise location of the limbus and of the brightness detected on the images. Thus, to meet such needs, the proposed methodology has been organized into four stages, represented in Fig. 4.

In the first stage one seeks to obtain the region of the eyes, to minimize the search space, and to exclude the regions that play no important role in the methodology. Next, the precise location of the eyes is established, reducing the search space still further. In the third stage, the location of the limbus and brightness is determined, leading to a diagnosis of the patient's condition, i.e., whether he/she has strabismus or not. All these stages are discussed in the following sections.

9 Nystagmus consists of repeated and involuntary rhythmic oscillations of one or both eyes in some or all of the fixation positions.
10 The Snellen table, also known as Snellen's optometric scale, is a diagram used to evaluate the visual acuity of a person.
11 A virtual image located behind the pupil [15].
Fig. 3. Acquisition of the patients face picture.

3.2.1. Detecting the eye region

In the preliminary stage of automatic detection of the eye region, adapting what was proposed in [18], one aims to reduce the search space, generating a sub-image containing the eye region and excluding the non-interesting regions (mouth, nose, hair, background), in order to make the next stage (the eye location) easier. The methodology starts with the acquisition of the image, followed by the resizing of the image from 2048 × 1536 pixels, its original resolution, to a resolution 10 times smaller, 205 × 154 pixels, with the objective of minimizing the computational cost of image processing. This reduced image is used until the stage of eye location is reached, since, in the stage of limbus location, a higher-resolution image is needed to avoid data loss that would corrupt the detection of limbus and brightness.

The image is also converted from RGB color to gray levels, so as to promote computational efficiency. After the conversion, a homomorphic filter [19] is applied in order to prevent luminosity divergences. In the next stage, the image is smoothed by means of a 3 × 3 Gaussian filter. Next, the gradient of the image generated in the previous stage is calculated by using the Sobel filter [20].
A horizontal projection of this gradient is computed, obtaining as a result the mean of the three highest peaks of the projection. It is relevant to know that the eyes are found in the superior part of the face and that, together with the eyebrows, they correspond to the two peaks closest to each other. This physiologic information, known a priori, can be used to identify the area of interest, and the peak of the horizontal projection supplies the vertical position of the eyes. At the same time, the vertical projection is applied to the gradient image. There are two peaks, to the left and to the right, that correspond to the limits of the face. From these two limits, the width of the face is estimated. Combining the results of these projections, we obtain the image coordinates representing the region of the eyes.
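The projection step can be sketched as follows. This is an illustrative simplification, not the authors' code: a crude gradient magnitude (standing in for the Sobel filter of the paper) is summed along rows (horizontal projection, locating the edge-rich eye/eyebrow band) and along columns (vertical projection, bounding the face):

```python
import numpy as np

def gradient_projections(gray):
    """Row and column projections of a simple gradient-magnitude image."""
    gray = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(gray)          # crude stand-in for the Sobel filter
    mag = np.abs(gx) + np.abs(gy)
    horizontal = mag.sum(axis=1)        # one value per row
    vertical = mag.sum(axis=0)          # one value per column
    return horizontal, vertical

# A bright horizontal stripe on row 2 produces the strongest row response
# there, the way the eye/eyebrow band does in a real face image.
img = np.zeros((6, 6))
img[2, 1:5] = 255.0
h_proj, v_proj = gradient_projections(img)
print(int(np.argmax(h_proj)))  # 2
```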
3.2.2. Location of the eyes

The stages proposed for locating the eyes, in the region delimited by the previous stage, follow the sequence represented in Fig. 5.

To use the pattern recognition technique, one needs to determine the location of the eyes. This is done along two stages: training and testing. In the training stage, a classification model is created, while in the testing stage the samples are classified via the trained model. The only difference in the implementation of these two stages lies in the samples used in the next stages. For the training stage, 1008 samples from 28 images of patients were selected: 18 of eye regions (9 of each eye) and 18 of other regions of the face, per image. This means that for every image used in the training stage, 36 samples were manually extracted. These were formed by a 30 × 30 pixel window.

Fig. 4. Stages of the proposed methodology.

Fig. 5. Eye location stages.

For the testing stage, the samples were automatically detected by the Hough Transform, which was employed to locate eye candidates by using radius intervals12 from 4 to 10 pixels. The six coordinates with the most votes in the accumulation vector of the first and second halves of the image were extracted, corresponding to the left eye and the right eye, respectively.
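The candidate-selection step can be illustrated as below. This is a hypothetical sketch: the accumulator is represented as a mapping from circle centres to vote counts, and the vote values in the example are invented:

```python
def top_candidates(votes, width, k=6):
    """Pick the k most-voted circle centres in each half of the image.

    `votes` maps (x, y) centre coordinates to Hough accumulator counts;
    centres in the left half are candidates for one eye, centres in the
    right half for the other.
    """
    left = [(c, v) for c, v in votes.items() if c[0] < width // 2]
    right = [(c, v) for c, v in votes.items() if c[0] >= width // 2]

    def best(half):
        return [c for c, _ in sorted(half, key=lambda cv: cv[1], reverse=True)[:k]]

    return best(left), best(right)

# Invented accumulator contents for illustration.
votes = {(30, 40): 90, (35, 42): 70, (160, 41): 85, (150, 45): 60}
left, right = top_candidates(votes, width=200, k=2)
print(left, right)  # [(30, 40), (35, 42)] [(160, 41), (150, 45)]
```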
The samples are then pre-processed via histogram equalization [20]. After this pre-processing, the regions of interest are carried on to the feature extraction stage. Here, geostatistical functions are used to describe the texture of objects representing eyes and other regions of the face, extracted from face images. The functions used were: correlogram, covariogram, semivariogram and semimadogram.
The geostatistical function parameters used to extract features from each sample were the directions 0°, 45°, 90° and 135°, with an angular tolerance of 22.5° and a lag increment (distance) equal to 1, 2 and 3, corresponding to 29, 14 and 9 lags, with a tolerance for each lag distance equivalent to 0.5, 1.0 and 1.5, respectively. The directions adopted are the ones most used in the literature for the analysis of images; as for the lag tolerance, in accordance with [21], the commonest procedure is to adopt half the lag increment.
To build the Features Vector (FV), which represents the sample signature, 208 features per sample were extracted, corresponding to the four directions of 52 lags (29 + 14 + 9) for each geostatistical function. A combination of the four geostatistical functions used in this work results in an FV of 832 features (4 × 208). Before their selection, the features undergo a normalization process to a common range of values, such as -1 to 1. This mechanism helps the classifier to converge more easily during the training stage. Besides, this will standardize the
12 The radius intervals considered in this work were determined through an analysis of the image base used in the tests.

distribution of variable values, which may assume different


domains.
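The normalization step can be written as a standard per-feature min-max rescaling; the exact formula is not given in the text, so the sketch below assumes the usual linear mapping of each feature column onto [-1, 1] (constant columns carry no information and are mapped to 0):

```python
import numpy as np

def scale_features(X):
    """Linearly map each column (feature) of X onto [-1, 1].

    x' = 2 * (x - min) / (max - min) - 1; columns with a single
    repeated value are mapped to 0.
    """
    X = np.asarray(X, dtype=np.float64)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return np.where(hi > lo, 2.0 * (X - lo) / span - 1.0, 0.0)

# first feature spans 0..10 and is mapped to -1..1; second is constant -> 0
print(scale_features([[0.0, 5.0], [5.0, 5.0], [10.0, 5.0]]))
```

The same minimum and maximum found on the training set would be reused to scale the test samples, so that both sets share the same value range.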
Feature extraction with the geostatistical functions generates many variables, so a selection of the features that best distinguish the eye and the non-eye classes (the other areas of the face) is carried out by applying the stepwise discriminant analysis technique [22].
In the final stage, objects such as eye and non-eye are identified by means of pattern recognition techniques. This methodology uses a Support Vector Machine (SVM) [23]. The image base used for training and testing the SVM classifier is composed of manually selected image samples taken from 28 patients, and of eye candidates identified with the application of the Hough method after the region detection in the images submitted to the test. We used the LIBSVM [23] library, with the SVM configured with a radial basis function (RBF) kernel.
The recognition process ends with the training and testing of the SVM, which yields the location of the eyes. After recognition, if necessary, the similarity among the regions classified as eye is computed using the Absolute Mean Error [24]. Similarity is used to locate the two regions corresponding to the right eye (RE) and the left eye (LE) when there is more than one region classified by the SVM. It is also used to determine the corresponding eye among the candidates when only one eye is found.
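A minimal rendering of this similarity test is shown below (our own sketch; the exact pairing rules in [24] may differ): the Absolute Mean Error between two equally sized gray-level patches, used to pick the two most mutually similar regions among several SVM-positive candidates.

```python
import numpy as np

def absolute_mean_error(a, b):
    """Mean absolute gray-level difference between two equal-size patches;
    smaller values mean more similar regions."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.mean(np.abs(a - b)))

def most_similar_pair(patches):
    """Among several regions classified as eye, return the indices of the
    two patches with the smallest absolute mean error between them."""
    best, pair = float("inf"), (0, 1)
    for i in range(len(patches)):
        for j in range(i + 1, len(patches)):
            err = absolute_mean_error(patches[i], patches[j])
            if err < best:
                best, pair = err, (i, j)
    return pair

patches = [np.zeros((8, 8)), np.full((8, 8), 120.0), np.ones((8, 8))]
print(most_similar_pair(patches))  # -> (0, 2): they differ by only 1 gray level
```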

3.2.3. Location of limbus and brightness


The location of the reflection generated by the Hirschberg test is used as a parameter for verifying the alignment of both eyes. For this, we applied the Canny edge detection algorithm and the Hough transform (HT). An optimal result at this stage depends on a reasonable percentage of well-defined limbus edges, so that the HT can detect the limbus more accurately.

J. Dallyson Sousa de Almeida et al. / Computers in Biology and Medicine 42 (2012) 135146

In this step, the acquired images are resized to a resolution of 819 x 614 pixels. This is done to reduce the computational cost of the image processing without losing the details of the limbus edge. The images are also converted from RGB to gray levels. The coordinates of the eyes found in the previous stage are re-scaled to the corresponding values for the 819 x 614 resolution. Next, the Canny method is applied, configured with a derivation factor of 1.2, a 5 x 5 mask for the Gaussian function, a lower threshold of 100 and an upper threshold of 136.
To determine the location of the limbus edges, the HT technique is used, taking as input the edge map generated in the previous stage. We considered the points in the intervals from 0° to 60° and 300° to 360°, corresponding to an arc of 120° on the right side of the circle drawn on the edge points, and from 120° to 240°, corresponding to an arc of 120° on the left side. The points outside these intervals were excluded from the accumulation vector. By performing this procedure, we remove the influence of the eyelids on the location of the limbus. To find the edge of the limbus, we used radius intervals of 15 to 37 pixels (determined through an analysis of the image base used in the tests), from which we selected the 80 most voted coordinates in the accumulation vector (this number was chosen after exhaustive tests). The coordinates were ordered by votes. Next, the most voted coordinate was set aside, and we verified whether, among the remaining coordinates, there were others with the same number of votes. If there are other coordinates with the same number of votes, we select those with smaller radii. After doing so, we proceed with the choice of a circular region. This procedure is applied to both the left and the right limbus.
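The arc restriction and the tie-breaking rule can be sketched as follows (pure NumPy; the names, the angular sampling step and the vote bookkeeping are our simplifications, not the authors' implementation). Votes are cast only from the lateral 120° arcs, and equal vote counts among the most voted candidates are resolved in favor of the smaller radius:

```python
import numpy as np

def hough_circles(edges, radii, step=2):
    """Accumulate circle votes from a binary edge map, casting votes only
    from the lateral arcs (-60..60 and 120..240 degrees), i.e. ignoring
    the top and bottom of each circle, where the eyelids lie."""
    h, w = edges.shape
    angles = np.deg2rad(np.r_[np.arange(-60, 61, step), np.arange(120, 241, step)])
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for ri, r in enumerate(radii):
        cx = np.rint(xs[:, None] - r * np.cos(angles)).astype(int)
        cy = np.rint(ys[:, None] - r * np.sin(angles)).astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    return acc

def best_circle(acc, radii, top=80):
    """Among the `top` most voted candidates, resolve equal vote counts in
    favor of the smaller radius, as in the limbus-selection procedure."""
    flat = acc.ravel()
    idx = np.argsort(flat)[::-1][:top]
    ri, cy, cx = np.unravel_index(idx, acc.shape)
    votes = flat[idx]
    order = np.lexsort((np.asarray(radii)[ri], -votes))  # votes desc, radius asc
    k = order[0]
    return int(cx[k]), int(cy[k]), int(radii[ri[k]]), int(votes[k])
```

Feeding the left and right eye regions through `best_circle` yields one limbus candidate per eye, whose radii can then be compared as described next.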
Having the two main limbus candidates, called right limbus (Rl) and left limbus (Ll) with radii Rr and Lr, respectively, we check whether the difference between Rr and Lr is greater than 2 pixels, since the two limbuses must present approximately equal radii. If such a difference is detected, we take the smaller limbus as a reference. Considering the smaller Rr, we search among the most voted candidates in the accumulation vector of Ll for the first radius within a maximum difference of 2 pixels from Rr. Then, we verify whether there are more peaks with radii equal to Lr, in which case we select the one that presents the closest alignment with the horizontal axis of Rl. In this way, the precise location of both limbuses is ensured.
The HT was initially used for detecting the center and the radius of the limbus of both eyes in the delimited image. Next, the HT is applied again in the previously detected region to determine the center of the reflection. To locate the brightness, we used radius intervals of 2 to 4 pixels and considered all the points of the circles drawn in the edge map, projected onto the Hough space (accumulation vector). After this, we selected the 80 most voted coordinates in the accumulation vector (this number was chosen after some tests). The coordinates were ordered according to votes.
Taking as an example the location of the brightness of the right eye, we considered, among the six greatest peaks in the accumulation vector, the one closest to the center of the right limbus, bearing in mind the (x, y) coordinates. In this way, we guarantee that the located brightness does not fall on the region of the eyelids when the eyes are partially open, i.e., it will not be disturbed by the reflections caused by corrective lenses. This procedure is applied to find the brightness of both the left and the right eye.

3.2.4. Detection of strabismus


In order to detect strabismus, we use the location of the luminous reflection of the cornea, or first Purkinje image, generated by the Hirschberg test (Section 2.2), along with the location of the limbus, as parameters to verify the alignment of both eyes.
As its main goal, the present study attempts to create an easy, fast and low-cost method for the automatic detection of strabismus. In order to make such a method accessible to average ophthalmologists, not necessarily those whose specialty includes strabismus, one needs to presuppose the lack of viability of any of the methods that require the measurement of the Kappa angle (the angle formed by the optical axis and the fixation line) in each eye, keratometry (a computerized exam that measures the curvature of the corneal surface) and/or keratoscopy (the computerized topography of the cornea, through which a qualitative and quantitative analysis of the corneal astigmatism can be done), or even the axial length of each ocular globe, as demonstrated in other studies [16,17]. In this context, one should consider mainly the refraction of each eye when analyzing the photographs, for this represents basic information that can easily be gathered by the ophthalmologist. The evaluation of the position of the first Purkinje image in each eye is then carried out as follows:
1. The distance from the center of the reflection to the center of the limbus in the vertical (VD) and horizontal (HD) directions is measured.
2. The corneal diameter at 180° is also evaluated and compared to the diameter of the other eye.
3. If there is no difference between the diameters, the VD and HD of the two eyes are compared.
4. If there is a difference between the diameters of the corneas, then the proportion of the yaw of the reflection in the non-fixating eye in relation to the position of the reflection in the fixating eye is calculated, based on the diameter difference between both corneas, by using CPC = rl/RL, where CPC is the corneal proportionality constant, rl is the radius of the smaller limbus area and RL is the radius of the greater limbus area.
Thus, it is possible to measure the positioning of the first Purkinje image while taking into consideration the differences in the size of the corneas of both eyes. We start by considering the differences between refractive errors (anisometropias), exempting the use of contact lenses when photographing the anisometropic patient. This is because any artificial diminishment in the size of the cornea, caused by a corrective lens of elevated spherical equivalent, will be discounted when calculating the CPC between the eyes, since the image is captured under the influence of the dioptric power of the corrective lens in use.
The major obstacle to evaluating these patients by the proposed method is anisometropia, but the differences in corneal curvature or axial length are not so significant in most strabismic patients. With this problem solved, we can state that the proposed method is applicable to the patients concerned.
As illustrated in Fig. 6, the distances in pixels for each eye are calculated from the center of the reflection to the center of the limbus along the vertical and horizontal directions, represented respectively by distX and distY. To diagnose the alignment, the CPC is multiplied by the distances distX and distY of the eye with the greater limbus, replacing the original
distances for these results. Next, the absolute difference of the distances between the eyes is calculated for both the vertical (VDIF) and the horizontal (HDIF) directions.
The application of the Hirschberg test presents a difference ranging from 2° to 4° between the visual axis and the anatomic axis. This may cause the false impression of a horizontal yaw. Thus, we define cutoff points, or thresholds, of up to 1.0 pixel for VDIF and up to 2.0 pixels for HDIF for a patient to be considered normal. The VDIF threshold is smaller than the HDIF one because vertical yaws have worse aesthetic effects than horizontal yaws.
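Steps 1 to 4 and the thresholds above can be condensed into a short decision function (a sketch under our own naming; dx and dy denote the horizontal and vertical reflection offsets, and the defaults are the cutoffs just defined):

```python
def is_aligned(re, le, hdif_max=2.0, vdif_max=1.0):
    """Decide whether the eyes are aligned from limbus radii and
    reflection-to-limbus offsets (all values in pixels).

    re, le: dicts with 'r' (limbus radius), 'dx' and 'dy' (horizontal and
    vertical distance from the reflection center to the limbus center).
    When the corneal sizes differ, the offsets of the eye with the greater
    limbus are rescaled by CPC = smaller radius / greater radius.
    """
    cpc = min(re['r'], le['r']) / max(re['r'], le['r'])
    big, small = (re, le) if re['r'] >= le['r'] else (le, re)
    hdif = abs(small['dx'] - big['dx'] * cpc)   # horizontal difference
    vdif = abs(small['dy'] - big['dy'] * cpc)   # vertical difference
    return hdif <= hdif_max and vdif <= vdif_max

# symmetric offsets -> normal; a large vertical offset in one eye -> strabismus
print(is_aligned({'r': 30, 'dx': 2, 'dy': 1}, {'r': 30, 'dx': 2, 'dy': 1}))     # True
print(is_aligned({'r': 26, 'dx': 2, 'dy': 5}, {'r': 28, 'dx': 1, 'dy': 1.85}))  # False
```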

4. Results and discussion


This section presents and discusses the results obtained by the proposed methodology, which is based on the Hirschberg test for the detection of strabismus with the aid of digital pictures. In Sections 4.1-4.4 we discuss, respectively, the results of the stages of detection of the eye region, location of the eyes, location of limbus and brightness, and detection of strabismus.
4.1. Detection of the eye region
Using a base of 45 patient images, we obtained a matching rate of 100% in the detection of the eye region. Fig. 7b and d show examples of automatic detection of the eye region in pictures of patients with and without glasses.
4.2. Location of the eyes
With the reduction of the search space achieved by the automatic detection of the region of the eyes, the stage of eye location begins (Section 3.2.2).
As shown in Section 3.1.2, the image base is formed by 45 photographs. From these photographs, the training base introduced in Section 3.2.2 was formed, followed by the extraction and selection of features and the training of the SVM. The feature extraction approach using just one of the geostatistical functions generates a set of 208 features; combining the four functions generates a total of 832 features.

Fig. 6. Calculation of alignment.
After the feature selection stage using stepwise discriminant analysis, one obtains a statistically significant reduction in the number of features. Of the 208 variables for each geostatistical measure, 37 were selected for the semimadogram, 29 for the semivariogram, 17 for the correlogram and 30 for the covariogram. From the 832 variables using all measures, 59 were selected.
Following the flow of the proposed methodology in Section 3.2.2, the next step is the stage of classification and validation of results. The results obtained by the SVM classifier using the parameters above are listed in Table 1.
The best result is the one with 95.14% sensitivity, 95.38% specificity and a matching of 95.19%, obtained with the configuration combining all geostatistical functions. The rates of 98.78% for PPV (positive predictive value) and 83.07% for NPV (negative predictive value) indicate that this approach classified eye regions far more effectively than the other regions of the face; this justifies the use of combined geostatistical functions in the eye location stage.
Fig. 8a presents examples of images for which the methodology, using the combined measurements, succeeded in locating the eyes, with a TP of 411. Fig. 8b, on the other hand, shows where the methodology failed, with an FP of 5. Analyzing the results, one may notice that the errors occurred mainly in the regions of the eyeglass frames.
Analyzing the classification of non-eye regions, we observe that the amounts of TN and FN were, respectively, 103 and 21. Fig. 8c shows examples of eye regions which were classified as non-eye. We noticed that most of the errors occurred in images of patients presenting reflections in the lenses of their glasses, and of those with eyes partially open.
Table 1
Results of the classification of the SVM for patients' images.

Measurements     TP    TN   FP   FN   SE (%)   SP (%)   PPV (%)   NPV (%)   AC (%)
Semimadogram     408    99    8   25    94.23    92.53     98.08     79.84    93.89
Semivariogram    392    90   16   42    90.32    84.96     96.08     68.19    89.26
Correlogram      383    76   29   52    88.05    72.38     92.96     59.38    85.00
Covariogram      369    70   31   70    84.06    69.30     92.25     50.00    81.30
All              411   103    5   21    95.14    95.38     98.78     83.07    95.19

Fig. 8. Location of the eyes. (a) Correct location of the eyes, (b) failure in the location of the eyes and (c) eye region classified as non-eye.

Fig. 7. Automatic detection of the region of the patient's eyes. (a) Patient without glasses, (b) region of the patient's eyes in (a) detected. (c) Patient wearing glasses, (d) region of the patient's eyes in (c) detected.


With the regions of the eyes duly classified, and following the methodology cited in Section 3.2.2, we obtained a matching of 91.11% in the location of both eyes, with errors occurring in only four images. Fig. 9 presents examples of images where the location of the eyes was performed correctly. Fig. 10a-d, on the other hand, presents images where it was not possible to locate both eyes correctly.
By analyzing the result shown in Fig. 10a, we noticed in the output of the accumulation vector used in the HT, represented in Fig. 11a, that the methodology did not succeed in including the left eye among the candidates found through the HT. It did locate the right eye (Fig. 11b), but failed at the SVM classification stage. At least one eye must be classified by the SVM; otherwise it is not possible to find the other eye by applying the proposed methodology.
The second picture to be analyzed is Fig. 10b, from which we can observe that it was not possible to identify the circular region of the eyes through the HT, but only the left and right central corners of the eyeglass frame, as illustrated in Fig. 12a-c. However, the classification of the eye candidates was done correctly: as Fig. 12b and c do not represent eyes, they were classified as non-eye. Besides, because the candidates did not contain any eye, the methodology did not search for new candidates.
Fig. 10c, on the other hand, shows that the right eye was located correctly by the methodology, which classified the candidate in Fig. 13b correctly. However, there was a failure in the classification of the left eye candidate, as illustrated in Fig. 13c.
Finally, we discuss the result shown in Fig. 10d. Fig. 14a presents the output image of the accumulation vector after the application of the HT to the image of the region of the eyes. In this figure, we can notice that the HT could not find the circular region of the eyes. Fig. 14b illustrates the left eye candidate that had been erroneously classified as eye by the SVM. Fig. 10d shows the failure in the location of the right eye, caused by the application of similarity between the right eye candidate and the sample classified by the SVM.

4.3. Location of limbus and brightness


The correct diagnosis of strabismus is directly connected to the result of the location of the limbus and brightness, since these positions are used in the calculation of the alignment (Section 3.2.4). At this stage, considering the 41 images that passed the previous stage, we obtained a matching rate of 97.56% for the location of the limbus in both eyes, i.e., the methodology failed in just one of the 41 images. Fig. 15a and b present examples of images where the methodology correctly found the region of the limbus.
In Fig. 16, we can see the image where the methodology failed to locate the limbus correctly in both eyes. We noticed that the error occurred mainly because of the presence of luminous reflections in the right lens of the glasses, covering, in this way, the patient's limbus. Analyzing the visible part of the limbus of the right eye (RE), we can see that its radius is smaller than that of the limbus of the left eye (LE), with a difference above 2 pixels. Thus, considering that the methodology takes the smaller limbus as reference, in this situation it would be impossible to locate the limbus in the LE correctly.

Fig. 9. Examples of images where the location of the eyes was performed correctly.

Considering the location of the brightness in the 40 images with a correct location of the limbus, we obtained a matching rate of 100% for the location of the brightness in both eyes. Fig. 17a and b show examples of the correct location of the brightness obtained using the methodology.
4.4. Detection of strabismus
In this section, we present the results obtained in the strabismus detection stage. We consider the 40 patients whose limbus and brightness were located in the previous stage.

Fig. 11. Analysis of Fig. 10a. (a) Output image of the accumulation vector after application of the HT to the image of the eye region. (b) Right eye candidate wrongly classified as non-eye by the SVM.

Fig. 12. Analysis of Fig. 10b. (a) Output image of the accumulation vector after application of the HT to the image of the region of the eyes. (b) and (c) Left and right eye candidates, respectively, correctly classified as non-eye by the SVM.

Fig. 13. Analysis of Fig. 10c. (a) Output image of the accumulation vector after application of the HT to the image of the region of the eyes. (b) Right eye candidate correctly classified as eye. (c) Left eye candidate erroneously classified as eye by the SVM.

Fig. 14. Analysis of Fig. 10d. (a) Output image of the accumulation vector after application of the HT to the image of the region of the eyes. (b) Left eye candidate erroneously classified as eye by the SVM.

Fig. 10. Examples of images where the methodology has failed.


Fig. 15. Examples of images where the methodology correctly found the region of limbus.

Table 2
Results obtained from the processing of the 40 patients' images for verifying the alignment of the eyes, compared to the specialist's analysis. P = patient, S = specialist and M = methodology.
Fig. 16. Image where the methodology failed in locating the limbus.

Fig. 17. Examples of correct location of brightness obtained by applying the methodology.

In Table 2 we present the results achieved by the methodology in the processing of the 40 images, where RR is the right radius, LR is the left radius, distX is the vertical direction, distY is the horizontal direction, CPC is the corneal proportionality constant, VDIF is the vertical alignment difference and HDIF is the horizontal alignment difference. All these values are expressed in pixels.
After analyzing Table 2, we obtained the values TP = 10, FP = 4, TN = 21 and FN = 5. Thus, our methodology achieved 67% sensitivity, 84% specificity and 77.5% matching for the 40 patients' images. Of the nine errors that occurred, seven were caused by the limitations of the Hirschberg test, which can only detect aesthetic strabismus, since this test examines only the anatomical axis of the eye and not its visual axis.
In Fig. 18a and b, examples of the left and right eyes of one patient are presented, in which the Hirschberg test failed in view of an apparent yaw. In this case, according to the specialist, the patient did not present strabismus. However, the methodology revealed a yaw and referred the patient to the strabismic group. Of the seven patients missed, just two did not present strabismus according to the specialist. This mismatch of the Hirschberg test can be explained by the fact that there may be no strabismus in the presence of an apparent yaw (pseudostrabismus), which is caused by pupillary axes angled to each other even when the visual axes are correctly positioned in relation to the viewed object [13].
Fig. 18c and d, on the other hand, are examples of images of the left and right eyes of five strabismic patients on whom the Hirschberg test failed to diagnose that condition. In these cases, according to the specialist, the patients present strabismus. However, the methodology could not detect the misalignment, including the patients in the normal group. Here, the Hirschberg test is limited by the fact that strabismus can be masked by a Kappa angle of

opposite sign, annulling the appearance of the yaw and giving the notion of an adequate binocular position in spite of a yaw [13].

           RE                       LE
P    RR    distY  distX     LR    distY  distX     CPC    HDIF   VDIF   S     M
1    34    4      4         34    0      8         1      4      4      Yes   Yes
2    26    2      5         28    0.92   1.85      0.93   1.07   3.14   Not   Yes
3    33    1.87   1.88      31    1      0         0.93   1.87   0.87   Not   Not
4    25    3      1         25    0      3         1      3      2      Yes   Yes
5    30    3.73   0         28    3      0         0.93   0.73   0      Not   Not
6    34    4      2         34    4      1         1      0      1      Not   Not
7    31    2.80   3.74      29    0      4         0.93   2.80   0.25   Yes   Yes
8    32    6.78   1.93      31    3      1         0.96   3.78   0.93   Not   Yes
9    34    0.97   1.94      33    6      1         0.97   5.02   0.94   Not   Yes
10   27    3.85   0         26    3      0         0.96   0.85   0      Not   Not
11   28    1.92   0.96      27    1      0         0.96   0.92   0.96   Yes   Not
12   34    0      2         34    5      0         1      0      1      Not   Not
13   31    3.87   0         30    2      1         0.96   1.87   1      Not   Not
14   25    1.92   1.92      24    2      1         0.96   0.08   0.92   Not   Not
15   33    5.82   0         32    3      0         0.97   2.82   0      Yes   Yes
16   26    0      4         26    2      1         1      2      3      Yes   Yes
17   20    3.0    1.0       20    1.0    0         1.0    2.0    1.0    Not   Not
18   32    4      1         33    2      0         0.97   2      1      Not   Not
19   27    1      1         27    3      0         1      2      1      Not   Not
20   21    2      1         21    1      0         1      1      1      Not   Not
21   20    2      1         20    3      2         1      1      1      Not   Not
22   24    2      1         25    1.92   0.96      0.96   0.08   0.04   Not   Not
23   21    1.90   0         20    1      1         0.95   0.90   1      Yes   Not
24   18    2      0         18    3      2         1      1      2      Yes   Yes
25   23    2      4         23    1      2         1      1      2      Yes   Yes
26   20    1      2         20    1      1         1      0      1      Not   Not
27   25    2      2         25    1      2         1      1      0      Not   Not
28   19    2      0         19    0      0         1      2      0      Not   Not
29   21    1      0         21    1      0         1      0      0      Not   Not
30   23    1.91   0.95      22    2      3         0.95   0.08   2.04   Not   Yes
31   16    2      0         16    8      0         1      6      0      Yes   Yes
32   26    1      0         26    1      1         1      0      1      Yes   Not
33   22    1      1         23    1.91   0         0.95   0.91   1      Not   Not
34   21    2      0         21    2      1         1      0      1      Not   Not
35   22    3.82   0         21    5      2         0.95   1.18   2      Yes   Yes
36   24    0.95   0         23    0      0         0.96   0.96   0      Yes   Not
37   22    1      0         24    1.83   0         0.91   0.83   0      Not   Not
38   25    2      1         25    0      1         1      2      0      Yes   Not
39   24    2.87   0.95      23    2      1         0.95   0.87   0.04   Not   Not
40   19    1      2         20    0      0.95      0.95   1      1.2    Yes   Yes

Fig. 18. Images where the application of the Hirschberg test failed to identify: the normal patient (a and b) and the strabismic patient (c and d).
Analyzing Table 2, without considering the patients on whom the Hirschberg test failed, we obtained the values TP = 10, FP = 2, TN = 21 and FN = 0. Thus, we can verify that the methodology achieved 100% sensitivity, 91.3% specificity and 94% matching for the 33 remaining images. The two patients considered strabismic, even though they were not, were classified as such because of a precision error that occurred when locating the limbus.

Fig. 19. Images where the methodology failed in determining the precise location of the limbus. (a) and (b) RE and LE of patient 2. (c) and (d) RE and LE of patient 30.
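These figures follow directly from the confusion-matrix counts; the quick check below recomputes them (our own arithmetic helper, with matching taken as overall accuracy):

```python
def rates(tp, fp, tn, fn):
    """Sensitivity, specificity and matching (accuracy), in percent."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ac = (tp + tn) / (tp + fp + tn + fn)
    return round(100 * se, 1), round(100 * sp, 1), round(100 * ac, 1)

print(rates(10, 4, 21, 5))   # all 40 images: (66.7, 84.0, 77.5)
print(rates(10, 2, 21, 0))   # 33 images without Hirschberg failures: (100.0, 91.3, 93.9)
```

The 93.9% accuracy for the 33 images is the value reported as 94% in the text.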
Analyzing Fig. 19a, which presents the right eye of patient 2 from Table 2, we noticed that the region of the limbus was not precisely located, because the center of the limbus should be closer to the center of the pupil. This resulted in a vertical misalignment of the right eye in relation to the left eye (Fig. 19b), making the VDIF of 3.14 indicate the presence of strabismus, which contradicts the specialist's diagnosis.
In Fig. 19d, representing the left eye of patient 30 from Table 2, one can see that, similarly to the previous error, the region of the limbus was not correctly located, leaving a small part of the limbus outside the located region. This increased the vertical misalignment of the left eye in relation to the right eye (Fig. 19c), making the VDIF of 2.04 indicate the presence of strabismus, again contradicting the specialist's diagnosis.


5. Conclusion

This work recommends the use of image processing techniques, geostatistical functions and support vector machines for the automatic detection of strabismus in digital images using the Hirschberg method. Along with this study, some other contributions can be verified. The first occurs in the eye location stage, where an innovative combination of techniques to locate the eyes on human faces is proposed, using homomorphic filtering, the Hough transform, geostatistical functions, stepwise discriminant analysis, the SVM and the Absolute Mean Error similarity measurement. Another contribution concerns the feature extraction stage, where geostatistical functions are used to extract texture information from the eyes, allowing the discrimination of eye regions from the other regions with a fair amount of precision. The third and main contribution is the creation of a methodology that supports the automatic identification of strabismus in digital images.
In the stage of automatic detection of the eye region using projections of the gradient magnitude, a matching rate of 100% was obtained for the patients' images. At this stage, we concluded that the use of homomorphic filtering contributed to the overall matching, since this illumination-adjustment technique made the illumination more uniform across the image.
In the second stage, the methodology obtained a matching rate of 95.19% with the combination of the four geostatistical functions as texture descriptors in the extraction of features. Nevertheless, although such results may be seen as very promising, it is still necessary to increase the diversity of the sample faces, so that a more robust and generic methodology can be developed. Even so, we can conclude that the obtained results give due importance to the new approaches based on geostatistical functions for describing the texture of eye regions in digital pictures of faces. Still in the second stage, our methodology obtained 91.11% matching for the location of both eyes.
In the location of the limbus and brightness, a stage in which we used Canny's method and the Hough transform, the matching rates achieved were, respectively, 97.5% and 100%. We conclude that the errors that occurred in locating the limbus were mainly due to the luminous reflections generated during image capture.
The identification of strabismus achieved a matching rate of 77.5% for the 40 images that passed to the stage of location of limbus and brightness. This result was directly influenced by the seven images for which the Hirschberg test was not effective, due to its limitation of providing only an aesthetic evaluation of strabismus. Disregarding the images for which the Hirschberg test was not effective, it was possible to achieve a matching rate of 94%.
Even though it shows great potential in helping the specialist diagnose strabismus, our methodology still requires the use of other techniques, since the Hirschberg test is less precise when compared to other methods, such as the Krimsky and cover tests.
The number of patients' images (30 healthy patients and 15 strabismic patients) and their disproportion (more healthy than strabismic patients) do not allow us to carry out more precise analyses to ascertain the total efficiency of the proposed method. Thus, it is necessary to deepen the present analysis with a larger and more balanced image base of patients. It is also important to use other images, captured under different acquisition protocols, in order to better evaluate the behavior of the present methodology.

Conflict of interest statement

None declared.

References

[1] G.V. Noorden, E. Campos, Binocular Vision and Ocular Motility: Theory and
Management of Strabismus, Mosby Inc, 2001.
[2] J.P. Diaz, C.S. Dias, Strabismus, Butterworth Heinemann, Woburn, Massachusetts, EUA, 2000.
[3] A.S. Jolson, H.R. Myler, A. Weeks, Apparatus for evaluating eye alignment, US
Patent 5,094,521, 1992.
[4] E.M. Helveston, F.H. Orge, R. Naranjo, L. Hernandez, Telemedicine: strabismus
e-consultation, J. Am. Assoc. Pediatr. Ophthalmol. Strabismus 5 (5) (2001)
291296.
[5] M.W. Quick, R.G. Boothe, A photographic technique for measuring horizontal
and vertical eye alignment throughout the eld of gaze, Invest. Ophthalmol.
Vis. Sci. 33 (1) (1992) 234.
[6] I. Subharngkasen, Successful amblyopia therapy by using synoptophore,
J. Med. Assoc. Thailand Chotmaihet thangphaet 86 (2003) S556.
[7] H.J. Kim, W. Kim, Eye detection in facial images using zernike moments with
SVM, ETRI J. 30 (2) (2008) 335337.
[8] J. de Almeida, A. Silva, A. Paiva, Automatic eye detection using semivariogram
function and support vector machine, in: Seventeenth International Conference
on Systems Signals and Image Processing IWSSIP 2010, 2010, pp. 174177.
[9] O.S.S. Souza Jr., A.C. Silva, Z. Abdelouah, Personal identication based on iris
texture analysis using semivariogram and correlogram functions, Int. J.
Comput. Vis. Biomech. 2 (1) (2009) 121129.
[10] A. Silva, P. Carvalho, M. Gattass, Analysis of spatial variability using
geostatistical functions for diagnosis of lung nodule in computerized tomography images, Pattern Anal. Appl. 7 (3) (2004) 227234.
[11] J.D. Sousa de Almeida, Metodologia Computacional para Deteco Automtica de
Estrabismo em Imagens Digitais atravs do Teste de Hirschberg., http://www.
tedebc.ufma.br//tde_busca/arquivo.php?codArquivo=430, 2009.
[12] L.C. Junqueira, J. Carneiro, Histologia basica. 8 Edic- a~ o, Guanabara Koogan.
[13] H. Bicas, Estrabismos: da teoria a pratica, dos conceitos a s suas operacionalizac- oes, Arq. Bras. Oftalmol. 72 (5) (2009) 585615.
[14] J.L. Mims, R.C. Wood, Proportional (fractional) displacement of the Hirschberg corneal light reection (test): a new easily memorized aid for strabometry; photogrammetric standardization (calibration), Binocul. Vis.
Strabismus Q. 17 (3) (2002) 192.
[15] K. Wright, P. Spiegel, L. Thompson, Handbook of Pediatric Strabismus and
Amblyopia, Springer Verlag, 2006.
[16] P.E. Romano, Individual case photogrammetric calibration of the Hirschberg
Ratio (HR) for corneal light reection test strabometry, Binocul. Vis. Strabismus Q. 21 (1) (2006) 45.

146

J. Dallyson Sousa de Almeida et al. / Computers in Biology and Medicine 42 (2012) 135146

[17] S. Hasebe, H. Ohktsuki, R. Kono, Y. Nakahira, Biometric conrmation of the


Hirschberg ratio in strabismic children, Invest. Ophthalmol. Vis. Sci. 39 (13)
(1998) 2782.
[18] K. Peng, L. Chen, S. Ruan, G. Kukharev, A robust algorithm for eye detection
on gray intensity face without spectacles, J. Comput. Sci. Technol. 5 (3) (2005)
127132.
[19] R.d. Melo, E.d.A. Vieira, A. Conci, A system to enhance details on partially shadowed images, in: A. Karras, S. Voliotos, M. Rangouse, A. Kokkosis (Eds.), 2005, pp. 309–312.
[20] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Prentice-Hall, New
Jersey, 2002.
[21] E.H. Isaaks, R.M. Srivastava, Applied Geostatistics, 1990.
[22] P.A. Lachenbruch, M. Goldstein, Discriminant analysis, Biometrics 35 (1) (1979) 69–85.
[23] C. Chang, C. Lin, LIBSVM: A Library for Support Vector Machines, available at http://www.csie.ntu.edu.tw/~cjlin/libsvm/, 2001.
[24] T. D'Orazio, M. Leo, C. Guaragnella, A. Distante, A visual approach for driver inattention detection, Pattern Recogn. 40 (8) (2007) 2341–2355.

Anselmo Cardoso de Paiva received a BSc in civil engineering from Maranhão State University, Brazil, in 1990, an MSc in civil engineering (Structures) and a PhD in Informatics from the Pontifical Catholic University of Rio de Janeiro, Brazil, in 1993 and 2002, respectively. He is currently a Professor at the Informatics Department, Federal University of Maranhão, Brazil. His current interests include medical image processing, geographical information systems and scientific visualization.

Aristófanes Corrêa Silva received a PhD degree in Informatics from the Pontifical Catholic University of Rio de Janeiro, Brazil, in 2004. Currently he is a Professor at the Federal University of Maranhão (UFMA), Brazil. He teaches image processing, pattern recognition and programming languages. His research interests include image processing, image understanding, medical image processing, machine vision, artificial intelligence, pattern recognition and machine learning.

João Dallyson Sousa de Almeida received a BSc in computer science and an MSc in Electrical Engineering from the Federal University of Maranhão (UFMA), Brazil, in 2010. His major interest nowadays is obtaining a PhD degree. Currently he is a Systems Analyst at UFMA. His research interests include signal and image processing, pattern recognition, machine learning and automation systems.
Jorge Antonio Meireles Teixeira received a BSc in Medicine from the Federal University of Maranhão (UFMA), Brazil, in 2004, and a PhD in Medicine (Ophthalmology) from the Federal University of São Paulo (Paulista Medical School) in 2004. Currently he is a Professor of Medicine and head of the Course of Medicine at UFMA. He has experience in Medicine with emphasis on ophthalmology, acting on the following topics: strabismus and pediatric ophthalmology.

