
A Novel Approach for Automated Detection of Exudates Using Retinal Image Processing

B. Ramasubramanian,
Assistant Professor, Syed Ammal Engineering College, Ramanathapuram,
Tamil Nadu, India. ramatech87@gmail.com
Abstract: Even though a number of anatomical structures contribute to the process of vision, many eye diseases that cause blindness occur mainly in the retina. One of the most important of these is Diabetic Retinopathy (DR), a severe eye disease and the primary cause of blindness among diabetic patients. We describe a novel method for automatically detecting exudates in retinal photographs. The colour retinal images are pre-processed using CLAHE and a second-order Gaussian filter. The pre-processed images are partitioned into segments with a soft clustering algorithm. Since the optic disc (OD) and exudates are similar in colour, the segment containing the OD along with the exudates is chosen. A set of features is extracted from the segmented image by the Scale Invariant Feature Transform (SIFT) algorithm. An active Support Vector Machine classifier is then trained with these extracted feature vectors and evaluated on two publicly available databases. We tested our approach on a set of 1000 images and obtained a sensitivity of 99.96% and a specificity of 96.6%. Our approach has potential to be used as a clinical tool in the automatic detection of Diabetic Retinopathy.

Index Terms: Active Support Vector Machine, CLAHE, Diabetic Retinopathy, FKCM, SIFT.

I. INTRODUCTION

Diabetic Retinopathy is an important chronic disease and the primary cause of blindness in the industrialized world [1-4]. Early diagnosis of the disease through regular screening is very important to prevent visual impairment. Since a large number of people need to be screened, an automatic computer-assisted diagnostic system would be very useful in detecting Diabetic Retinopathy.
Figure 1 shows the anatomical features, such as the optic disc (OD), fovea and blood vessels, and certain pathologies, such as exudates, haemorrhages and microaneurysms, of a typical retinal image. The OD is the bright yellowish region from which the ganglion cell axons exit the eye to form the optic nerve. Exudates are bright yellow lipid deposits that appear similar in colour to the OD; they occur mainly due to leakage in the capillary vessels. Haemorrhages appear as larger dark red blots, and microaneurysms appear as small red dots. The centre of the retina is called the fovea. The presence of exudates and microaneurysms is the primary sign of Diabetic Retinopathy.

Fig. 1. Typical Retinal image showing Exudates and Optic Disc.

The remainder of the paper is organized as follows: Section II reviews related work on exudate detection. Section III describes our proposed methodology for exudate segmentation and classification. The performance of the developed algorithm is verified and the final results are given in Section IV. We end with discussion and conclusion in Section V.
II. STATE OF THE ART
Forrester et al. [5] presented a method to detect exudates by using global and local thresholding in the green channel of Diabetic Retinopathy images. In [6], Ege et al. used an intensity-thresholded image for screening Diabetic Retinopathy, whereas in [9] the authors use a combination of adaptive intensity thresholding and recursive region-growing segmentation. [7]-[9] discuss methods of automatic bright lesion detection with highly accurate classifiers. A method to identify DR using colour space conversion, median filtering and histogram equalization was proposed by Iqbal et al. [10].
In [11], the authors proposed a method of exudate detection using morphological operators. They used a new technique for optic disc detection prior to the identification of exudates. The algorithm was tested on sixty images and obtained a sensitivity of 80% and a specificity of 99.5%. Wang et al. [12] identified the presence of exudates and cotton wool spots using a minimum distance discriminant classifier. The authors reported a sensitivity of 100% and a specificity of 70% on a dataset of 150 images.
Walter et al. [13] detected exudate contours using mathematical morphology techniques; this algorithm did not discriminate between exudates and cotton wool spots. Niemeijer et al. [14] discriminated exudates from cotton wool spots using a clustering algorithm in which pixels with high probability are grouped into lesion pixel clusters. An accuracy of 95.5% and a sensitivity of 77% were obtained for the detection of exudates.
In [15], Gardner et al. proposed automatic detection of DR with the help of a back-propagation neural network. They achieved a sensitivity of 88.4% and a specificity of 83.5% for the detection of Diabetic Retinopathy.
The purpose of our approach is to develop an automatic technique for detecting exudates using a supervised classifier.
III. PROPOSED METHODOLOGY
This section presents the methods employed to detect exudates in DR images using the principles of soft clustering and supervised classification. The method consists of i) acquisition of DR images and preprocessing, ii) segmentation using a soft clustering algorithm, and iii) feature extraction and classification by an active Support Vector Machine.
A. Acquisition of DR images and preprocessing
To test our method, two databases were used: one consisting of 400 images from the MESSIDOR dataset and another of 600 images from UTHSC SA (University of Texas Health Science Center at San Antonio) [16].
The images from the publicly available MESSIDOR dataset [17] were taken using a 3-CCD camera on a Topcon TRC NW6 retinograph with a field of view (FOV) of 45° at the Service d'Ophtalmologie, Paris. The size of these images is 1488x2240 pixels.
The 600 images from UTHSC were acquired using a Canon CF-60 UV retinal camera with a 60° FOV, and the size of these images is 2048x2392 pixels. Sample images from our dataset are shown in figure 1. We selected 100 images from the MESSIDOR dataset to train our algorithm and did not perform any training on the second set. Finally, the same classification method is applied to the images from the UTHSC dataset after resizing them to 1488x2240 pixels, the size of the MESSIDOR images.
In the preprocessing stage, the original fundus images in RGB colour space are converted into HSI colour space, and the intensity band (I-band) alone is separated for further processing. A median filter of size 3x3 is then applied to this intensity band for noise suppression. Next, for contrast enhancement, Contrast Limited Adaptive Histogram Equalization (CLAHE) is applied to the noise-suppressed retinal image. Finally, a second-order Gaussian filter is applied for complete noise removal. The output of the preprocessing stage is shown in figure 2. After preprocessing, the filtered intensity band is combined with the H and S components, converted back to RGB colour space, and given as input to the next stage.

Fig 2. Output of Preprocessing
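As an illustrative sketch only (the authors' implementation is in MATLAB, and CLAHE and Gaussian filtering are omitted here for brevity), the I-band extraction and 3x3 median filtering steps of the preprocessing stage could look as follows in Python/NumPy:

```python
import numpy as np

def intensity_band(rgb):
    """I band of the HSI model: the mean of the R, G and B channels."""
    return rgb.mean(axis=2)

def median_filter_3x3(img):
    """3x3 median filter with edge replication, for noise suppression."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the nine shifted views of the image and take the per-pixel median.
    windows = np.stack([padded[r:r + h, c:c + w]
                        for r in range(3) for c in range(3)])
    return np.median(windows, axis=0)

rgb = np.random.default_rng(0).uniform(0, 255, (32, 32, 3))
i_band = intensity_band(rgb)
smoothed = median_filter_3x3(i_band)
```

A single isolated noise pixel (salt noise) is completely removed by the median step, while flat regions pass through unchanged.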

B. Exudate segmentation using a soft clustering algorithm
In this approach, we segment exudates in the retinal images using a two-step process: i) colour space conversion and ii) soft clustering using the Fuzzy Kernel C-Means (FKCM) clustering algorithm. In the first stage, the preprocessed RGB image is converted to the Lab colour space, and the a and b components of the Lab image are separated and taken for further processing. In the second stage, the soft clustering algorithm is applied to this combined ab component of the image. Even though several soft clustering algorithms are available, we use FKCM to segment the image since it outperforms the conventional FCM algorithm. FKCM is better than FCM because it integrates FCM with a Mercer kernel function, and its properties are suitable not only for clusters with spherical shape but also for non-spherical shapes such as annular rings.
FKCM [18] is a data clustering algorithm in which each data point belongs to a cluster to a degree specified by a membership grade function. FKCM partitions a collection of N vectors x_i into C fuzzy groups and finds a cluster centre in each group, such that an objective function based on distance is minimized. The objective function is given by

J_m = Σ_{i=1}^{N} Σ_{j=1}^{C} U_ij^m ||x_i − c_j||²        (1)

where m is an exponential weighting exponent that determines the fuzziness of the algorithm (in our experiment we choose m = 2), N is the number of features (4 in our case), C is the number of clusters (C = 5 in our case), U_ij is the degree of membership of x_i in cluster j, x_i is the i-th d-dimensional measured data point, c_j is the d-dimensional centre of cluster j, and ||x_i − c_j|| is the Euclidean distance between the i-th data point and the j-th cluster centre.
FKCM is a partitioning method carried out through iterative optimization of the objective function, with the membership function U_ij and the cluster centres c_j updated by the following equations:

U_ij = 1 / Σ_{k=1}^{C} ( ||x_i − c_j|| / ||x_i − c_k|| )^(2/(m−1))        (2)

c_j = Σ_{i=1}^{N} U_ij^m x_i / Σ_{i=1}^{N} U_ij^m        (3)

The iteration stops when the condition in equation (4) is satisfied:

max_ij { |U_ij^(k+1) − U_ij^(k)| } < TC        (4)

where TC is a termination criterion and k is the iteration number.
The algorithm consists of the following steps:
Step 1: Initialize the fuzzy partition matrix U = [U_ij].
Step 2: At step k, calculate the centre vectors C^(k) = [c_j] with U^(k) in accordance with equation (3).
Step 3: Update the fuzzy partition matrix U^(k) to U^(k+1) with the newly calculated U_ij according to equation (2).
Step 4: If ||U^(k+1) − U^(k)|| < TC, stop the iteration; otherwise return to step 2.
The resultant segmented image obtained using the above procedure is shown below.

Fig. 3. Resultant Segmented Image with Optic Disc and Exudates.


Since the OD and exudates are similar in colour, they alone are separated out into one segment.
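The iterative optimization of steps 1-4 can be sketched in plain FCM form, i.e. without the Mercer kernel of FKCM; this is an illustrative NumPy implementation, not the authors' code:

```python
import numpy as np

def fcm(x, c=2, m=2.0, tc=1e-5, max_iter=100, seed=0):
    """Plain fuzzy c-means following steps 1-4 (without the Mercer kernel)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)              # Step 1: random partition matrix
    centres = None
    for _ in range(max_iter):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]    # Step 2: eq. (3)
        d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                   # guard against zero distance
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        u_new = 1.0 / ratio.sum(axis=2)            # Step 3: eq. (2)
        converged = np.max(np.abs(u_new - u)) < tc # Step 4: eq. (4)
        u = u_new
        if converged:
            break
    return u, centres

# Six 1-D points forming two well-separated groups.
x = np.array([[0.0], [0.2], [0.4], [9.6], [9.8], [10.0]])
u, centres = fcm(x, c=2)
```

Each row of the membership matrix sums to one, and the points near 0 and near 10 receive high membership in different clusters.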

C. Feature Extraction and Classification
To classify the segmented image into normal and exudate classes, it is necessary to extract a feature descriptor of the segmented image. To achieve this, we extract two different sets of features based on location and texture information. The location-based points of interest (POI) are extracted using the Scale Invariant Feature Transform (SIFT), and the texture-based features are calculated from the Gray Level Co-occurrence Matrix (GLCM).

1) Feature Extraction:
a) POI extraction using SIFT:
SIFT provides a set of image features that are not affected by many common complications such as image scaling and rotation, and it is also robust to the presence of noise artifacts in the image. The SIFT descriptor is a three-dimensional spatial histogram of the segmented image gradient: the image gradient at each pixel is a sample of a three-dimensional elementary feature vector formed by the pixel location and the gradient orientation. The samples are weighted by the gradient norm and accumulated in a 3D histogram, which forms the SIFT descriptor of the segmented image.
We use a set of 16 histograms aligned in a 4x4 grid, each with eight orientation bins, resulting in a feature vector of 128 elements. The SIFT descriptors are computed using MATLAB R2014a. The output feature vectors are stored in a .mat file and used for further classification after being combined with the texture features.
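A minimal sketch of this 4x4-grid, 8-bin, gradient-norm-weighted construction (illustrative only; it is not MATLAB's SIFT implementation and omits scale-space keypoint detection) could be:

```python
import numpy as np

def grid_orientation_descriptor(patch):
    """128-element descriptor: a 4x4 spatial grid of 8-bin gradient-orientation
    histograms, each sample weighted by its gradient norm (SIFT-like)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                               # gradient norm
    ang = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)        # orientation in [0, 2*pi)
    bins = np.minimum((ang / (2.0 * np.pi) * 8).astype(int), 7)
    h, w = patch.shape
    desc = []
    for r in range(4):                                   # 4x4 grid of cells
        for c in range(4):
            cell = np.s_[r * h // 4:(r + 1) * h // 4,
                         c * w // 4:(c + 1) * w // 4]
            hist = np.bincount(bins[cell].ravel(),
                               weights=mag[cell].ravel(), minlength=8)
            desc.append(hist)
    desc = np.concatenate(desc)                          # 16 x 8 = 128 elements
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

patch = np.random.default_rng(1).random((16, 16))
d = grid_orientation_descriptor(patch)
```

The result is a unit-normalized 128-element vector, matching the descriptor size described above.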
b) Texture feature extraction using GLCM:
Four texture-based features are extracted using the Gray Level Co-occurrence Matrix (GLCM). The GLCM is a tabulation of how often different combinations of pixel brightness values occur in pixel pairs in an image: each element (i, j) of the GLCM specifies the number of times that a pixel with value i occurs horizontally adjacent to a pixel with value j. The resulting matrix is analyzed and, based on this information, the feature vectors are formed [19]. The four features are given in Table I.
Table I. Texture Features

S.No   Feature        Formula
1      Contrast       Σ_{i,j} (i − j)² p(i, j)
2      Correlation    Σ_{i,j} (i − μ_i)(j − μ_j) p(i, j) / (σ_i σ_j)
3      Entropy        −Σ_{i,j} p(i, j) log p(i, j)
4      Energy         Σ_{i,j} p(i, j)²
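As an illustration of the horizontal co-occurrence counting described above (a sketch over a small hypothetical two-level image, not the authors' MATLAB code), a GLCM and two of the Table I features can be computed as:

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Count how often value i occurs immediately to the left of value j."""
    g = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        g[i, j] += 1
    return g

def contrast_energy(g):
    """Contrast and Energy of a co-occurrence matrix (Table I formulas)."""
    p = g / g.sum()                      # normalise counts to probabilities
    i, j = np.indices(p.shape)
    contrast = float(((i - j) ** 2 * p).sum())
    energy = float((p ** 2).sum())
    return contrast, energy

img = np.array([[0, 0, 1],
                [1, 1, 0],
                [0, 1, 1]])
g = glcm_horizontal(img, levels=2)
contrast, energy = contrast_energy(g)
```

For this 3x3 image there are six horizontal pairs, of which three have unequal values, so the contrast is 3/6 = 0.5.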

2) Classification using an active Support Vector Machine (aSVM):
The choice of an efficient classifier depends on the size of the dataset. When the dataset is small, a low-variance classifier like Naive Bayes can be used, but for a large dataset we need a high-variance, low-bias classifier such as KNN or a Support Vector Machine (SVM). We prefer the active SVM classifier for exudate classification, since it provides more accuracy on a larger dataset.
The active SVM classifier is trained using the features extracted in section III-C-1 above. The training set contains both normal and exudate images from the MESSIDOR database. We employed the Radial Basis Function (RBF) kernel for training and testing the SVM [16], since it shows good performance in many applications.
The RBF kernel is defined as

K(x_i, x_j) = exp( −||x_i − x_j||² / (2σ²) )        (5)

where σ is a constant which depends on the kernel width.

IV. RESULTS AND DISCUSSION
In this section, we describe the datasets used and the performance evaluation of Normal versus Exudates classification.
A) Dataset:
To evaluate our algorithm, cross-dataset validation was employed: the classifier was trained with the MESSIDOR dataset and the system was tested with the UTHSC SA dataset.
First, the publicly available MESSIDOR dataset was used for training. It contains 1200 colour fundus images of the posterior pole acquired by the Hopital Lariboisiere, Paris, and the Faculte de Medecine, St. Etienne. 800 of these images were captured after pupil dilation and 400 were acquired without dilation, using a Topcon TRC NW6 non-mydriatic retinograph with a FOV of 45°. The size of these images is 1488x2240 pixels. Out of the 1200 images, we used 400 non-dilated images for training the classifier.
For testing the classifier, we used 600 colour fundus images from the UTHSC SA dataset. These images were acquired using a Canon CF-60 UV retinal camera with a 60° FOV, and the size of these images is 2048x2392 pixels.
B) Performance Evaluation of Normal versus Exudates Classification
The input images obtained from the dataset are preprocessed using the method described in section III-A, with the outputs shown in figure 2. The preprocessed retinal images are then segmented using the soft clustering algorithm discussed in section III-B. Next, from the segmented images, location and texture features are extracted using SIFT and GLCM.
For the performance evaluation of Normal versus Exudates classification, we selected sensitivity and specificity, the two most widely used parameters in the research literature. These parameters are found by calculating the true positives (TP, the number of exudate pixels that are correctly detected), false positives (FP, the number of non-exudate pixels that are wrongly detected as exudate pixels), false negatives (FN, the number of exudate pixels that are not detected) and true negatives (TN, the number of non-exudate pixels that are correctly identified). The formulas for sensitivity, specificity, precision and accuracy are given by

Sensitivity = TP / (TP + FN)        (6)

Specificity = TN / (TN + FP)        (7)

Precision = TP / (TP + FP)        (8)

Accuracy = (TP + TN) / (TP + FP + FN + TN)        (9)
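A direct transcription of equations (5)-(9) (an illustrative sketch with example pixel counts, not the authors' MATLAB code) is:

```python
import numpy as np

def rbf_kernel(xi, xj, sigma=1.0):
    """Eq. (5): K(xi, xj) = exp(-||xi - xj||^2 / (2 sigma^2))."""
    diff = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return float(np.exp(-diff.dot(diff) / (2.0 * sigma ** 2)))

def metrics(tp, fp, fn, tn):
    """Eqs. (6)-(9): sensitivity, specificity, precision and accuracy."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, precision, accuracy
```

For example, with hypothetical counts TP = 90, FP = 5, FN = 10 and TN = 95 pixels, metrics(90, 5, 10, 95) gives a sensitivity of 0.90 and a specificity of 0.95; the kernel of any vector with itself is exactly 1.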
The obtained results for each image of the testing set are shown in Table II. Our proposed algorithm achieves an average sensitivity of 99.96%, specificity of 96.6%, precision of 97.71% and accuracy of 99.84%.

Table II. Performance of our approach

Image No.   Sensitivity (%)   Specificity (%)   Precision (%)   Accuracy (%)
1           99.91             97.51             100             99.99
2           100               98.65             100             100
3           100               96.90             100             100
4           99.96             97.82             100             100
5           99.92             97.29             93.61           100
6           100               97.11             95.78           100
7           100               96.95             96.59           100
8           100               96.32             100             99.25
9           100               97.51             97.13           99.75
10          100               97.23             96.54           99.57
11          99.97             96.12             98.45           99.89
12          99.93             97.42             96.76           100
13          100               96.13             97.18           99.29
14          100               96.45             98.54           99.86
15          99.87             96.89             99.14           100
16          99.79             96.47             93.45           100
17          100               97.19             100             99.47
18          100               97.00             95.21           99.90
19          100               96.4              96.87           100
20          99.95             95.99             99.13           100
Average     99.96             96.96             97.71           99.84

Our experiment took 30 seconds on an AMD E-300 processor with 4 GB RAM to calculate the SIFT and texture features for each image. The training phase took 180 seconds, and active SVM classifier testing took one second per image. Since the training phase is performed only once, prior to using the system, it is not a major drawback.
V. CONCLUSION
In this paper, we proposed an efficient system for the automatic detection of exudates in colour retinal images. Since our algorithm does not require any retraining for classifying the testing dataset, it is more robust than existing methods. We used the I-band of the HSI colour space for preprocessing, and the colour fundus images were segmented by a soft clustering algorithm. The location and texture features were extracted using SIFT and GLCM. Finally, the classifier was trained on the MESSIDOR dataset and tested on images from the UTHSC dataset. Our proposed method outperforms existing methods, obtaining a sensitivity of 99.96% and a specificity of 96.6%. Our approach has potential to be used as a clinical tool in the automatic detection of Diabetic Retinopathy.
REFERENCES
[1] H. R. Taylor and J. E. Keeffe, "World blindness: A 21st century perspective," Br. J. Ophthalmol., vol. 85, pp. 261-266, 2001.
[2] S. Wild, G. Roglic, A. Green, R. Sicree, and H. King, "Global prevalence of diabetes: Estimates for the year 2000 and projections for 2030," Diabetes Care, vol. 27, pp. 1047-1053, 2004.
[3] D. Klonoff and D. Schwartz, "An economic analysis of interventions for diabetes," Diabetes Care, vol. 23, pp. 390-404, 2000.
[4] A. Aquino, M. E. Gegundez-Arias, and D. Marin, "Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques," IEEE Trans. Med. Imag., vol. 29, no. 11, pp. 1860-1869, 2010.
[5] J. Forrester, R. Phillips, and P. Sharp, "Automated detection and quantification of retinal exudates," Graefes Arch. Clin. Exp. Ophthalmol., vol. 231, pp. 90-94, 1993.
[6] B. Ege, O. Hejlesen, O. Larsen, K. Moller, B. Jennings, D. Kerr, and D. Cavan, "Screening for diabetic retinopathy using computer based image analysis and statistical classification," Comput. Meth. Programs Biomed., vol. 62, pp. 165-175, 2000.
[7] D. Usher, M. Dumskyj, M. Himaga, T. H. Williamson, S. Nussey, and J. Boyce, "Automated detection of diabetic retinopathy in digital retinal images: A tool for diabetic retinopathy screening," Diabetic Med., vol. 21, no. 1, pp. 84-90, 2004.
[8] A. Sopharak and B. Uyyanonvara, "Automatic exudates detection from diabetic retinopathy retinal image using fuzzy c-means and morphological methods," in Proc. 3rd IASTED Int. Conf. Adv. Comput. Sci. Technol., Phuket, Thailand, 2007, pp. 359-364.
[9] A. D. Fleming, S. Philip, K. A. Goatman, G. J. Williams, J. A. Olson, and P. F. Sharp, "Automated detection of exudates for diabetic retinopathy screening," Phys. Med. Biol., vol. 52, pp. 7385-7396, 2007.
[10] M. I. Iqbal, N. S. Gubbal, A. M. Aibinu, and A. Khan, "Automatic diagnosis of diabetic retinopathy using fundus images," Master's Thesis, Blekinge Institute of Technology, October 2006.
[11] A. Sopharak, B. Uyyanonvara, S. Barman, and T. H. Williamson, "Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods," Comput. Med. Imaging Graph., vol. 32, no. 8, pp. 720-727, 2008.
[12] H. Wang, H. Hsu, K. Goh, and M. Lee, "An effective approach to detect lesions in retinal images," in Proc. IEEE Conf. Comput. Vis. Pattern Recogn., Hilton Head Island, SC, 2000, vol. 2, pp. 181-187.
[13] T. Walter, J. Klein, P. Massin, and A. Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy: detection of exudates in colour fundus images of the human retina," IEEE Trans. Med. Imag., vol. 21, no. 10, pp. 1236-1243, Oct. 2002.
[14] M. Niemeijer, B. V. Ginneken, S. R. Russell, M. Suttorp, and M. D. Abramoff, "Automated detection and differentiation of drusen, exudates and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis," Invest. Ophthalmol. Vis. Sci., vol. 48, pp. 2260-2267, 2007.
[15] G. G. Gardner, D. Keating, T. H. Williamson, and A. T. Elliott, "Automatic detection of diabetic retinopathy using an artificial neural network: A screening tool," Br. J. Ophthalmol., vol. 80, pp. 940-944, 1996. DOI: 10.1136/bjo.80.11.940.
[16] C. Agurto, V. Murray, H. Yu, J. Wigdahl, M. Pattichis, S. Nemeth, E. Simon, and P. Soliz, "A multiscale optimization approach to detect exudates in the macula," IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 4, 2014.
[17] TECHNO-VISION Project, "Messidor: Methods to evaluate segmentation and indexing techniques in the field of retinal ophthalmology," 2004. [Online]. Available: http://messidor.crihan.fr/
[18] A. Sopharak and B. Uyyanonvara, "Automatic exudates detection from diabetic retinopathy retinal image using fuzzy c-means and morphological methods," in Proc. 3rd IASTED Int. Conf. Adv. Comput. Sci. Technol., Phuket, Thailand, 2007, pp. 359-364.
[19] B. Ramasubramanian, "An efficient integrated approach for the detection of exudates and diabetic maculopathy in colour fundus images," Advanced Computing: An International Journal, vol. 3, no. 5, 2012.
[20] B. Ramasubramanian and K. Saranya, "A novel approach for the detection of new vessels in the retinal images for screening diabetic retinopathy," in Proc. IEEE Int. Conf. on Communication and Signal Processing, pp. 57-61, 2012.
