INTRODUCTION
retina decreases. These particles are referred to as exudates. Various methods have been developed for the detection of exudates, including thresholding and edge-detection-based techniques, FCM-based approaches, gray-level-variation-based approaches, and multilayer-perceptron-based approaches. The optic disk must be detected and segmented early in the detection process, as the optic disk often has roughly the same brightness and contrast as the exudates; if it is not segmented at an early stage, the process may produce wrong results. In this paper we propose a method which dynamically calculates an optimal threshold value to detect hard exudates in fundus images.
Nowadays, medical images have become a major component of diagnostics, treatment planning and procedures, and follow-up studies. Furthermore, medical images are used for education, documentation, and research, describing morphology as well as physical and biological functions in 1D, 2D, 3D, and even 4D image data. Today, a large variety of imaging modalities have been established, such as X-ray, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), fluoroscopy, and ultrasound, which are based on the transmission, reflection or refraction of light, radiation, temperature, sound, or spin. Obviously, an algorithm for delineation of an individual structure that works with one imaging modality will not be directly applicable to another modality.
of image analysis is very specific, and the developed algorithms can rarely be transferred directly into other application domains. High-level image processing includes methods at the texture, region, object, and scene levels. The required abstraction can be achieved by increased modelling of a priori knowledge. Image analysis requires the following techniques:
• Segmentation techniques are used to isolate the desired object from the scene so that measurements can be made on it subsequently. Segmentation partitions the image into its constituent connected regions or objects. The level to which the subdivision is carried depends on the problem being solved. In medical image processing, the definition accentuates the various diagnostically or therapeutically relevant image areas, namely the discrimination between healthy anatomical structures and pathological tissue. By definition, the result of segmentation is always at the regional level of abstraction. Depending on the level of feature extraction required after segmentation, we can methodically classify the procedures into pixel-, edge-, and texture- or region-oriented procedures. In addition, there are hybrid approaches, which result from combinations of single procedures.
• Description, also called feature selection, deals with extracting the attributes that yield some quantitative information of interest or that are basic for differentiating one class of objects from another.
in computer programs to represent an image (low level). In the medical domain, there are three main aspects hindering the bridging of this gap.
retinopathy, blood vessels of the retina get ruptured, and lipoprotein substances leak out of the damaged blood vessels and are deposited in the intraretinal space. The part of the optic nerve beneath these substances is barred from being excited by light rays and fails to produce any nerve impulse to the brain, leading to partial loss of vision. These lipoprotein substances are yellowish in colour and are termed exudates. Exudates are the primary as well as the most important sign of the presence of diabetic retinopathy. If the disease is not detected in its early stages it may lead to complete loss of vision in diabetes patients. However, exudates are extremely difficult to detect by visual inspection due to the small inner diameter of the retina; inadequate illumination sometimes makes the problem worse.
1.2.2 Automated Segmentation of Blood Vessels for Detection of
Proliferative Diabetic Retinopathy
M. Usman Akram, Ibaa Jamal, Anam Tariq and Junaid Imtiaz
1.2.3 New Feature-Based Detection of Blood Vessels and Exudates in
Color Fundus Images
Doaa Youssef, Nahed Solouma, Amr El-dib, Mai Mabrouk, and Abo-Bakr Youssef
Exudates are one of the earliest and most prevalent symptoms of diseases
leading to blindness such as diabetic retinopathy and wet macular degeneration.
Certain areas of the retina with such conditions are to be photocoagulated by
laser to stop the disease progress and prevent blindness. Outlining these areas is
dependent on outlining the exudates, the blood vessels, the optic disc and the
macula and the region between them. The earlier the exudates are detected in fundus images, the better the sight level that can be preserved. So, early detection of exudates in fundus images is of great importance for early diagnosis and proper treatment. In this paper, we provide a feature-based method for early detection
of exudates. The method is based on segmenting all objects that have contrast
with the background including the exudates. The exudates could then be
extracted after eliminating the other objects from the image. We proposed a
new method for extracting the blood vessel tree based on simple morphological
operations. The circular structure of the optic disc is obtained using Hough
transform. The regions representing the blood vessel tree and the optic disc are
set to zero in the segmented image to get an initial estimate of exudates. The
final estimate of the exudates is obtained by morphological reconstruction. This method is shown to be promising, as we can detect even very small areas of exudates.
Exudates are formed by the leakage of proteins and lipids from the
bloodstream into the retina via damaged blood vessels. In retinal images, hard
exudates appear as bright yellow lesions with varying sizes, shapes, and
locations. They also have a considerable contrast with respect to the
background. The optic disk, a bright circular region from which the blood vessels emanate, is the only area in the fundus images having the same brightness and
colour range as the exudates, given the appearance of the exudates and the optic disc in coloured fundus images. So, exudates can be detected accurately by extracting the bright yellow regions after eliminating the optic disc area from the image with hard exudates. Several techniques have been developed for exudate detection in fundus images. Akara et al. use maximum variance to obtain the optic disk centre and a region-growing segmentation method to obtain the exudates. The blood vessel intersection property has been used to obtain the optic disk. Based on its colour characteristics, other authors composed a simple Bayesian classifier to detect the exudates. Extraction of exudates and blood vessels by computing a difference map and k-means clustering has also been introduced. Colour normalization and local contrast enhancement followed by fuzzy C-means clustering and neural networks were used by Osareh; that system works well only in the Luv colour space, and in the case of non-uniform illumination its detection accuracy is low. A naïve Bayes classifier for diagnosing diseases from retinal images has also been applied and can provide good decision support to the ophthalmologist. Walter et al. proposed a method for automated identification of exudates in colour fundus images using mathematical morphology techniques.
1.2.4 Automated Detection of Diabetic Retinopathy Using Fundus
Image Analysis
Jaspreet Kaur, Dr. H.P. Sinha
Hemorrhages are of two types: flame and dot-blot hemorrhages. Flame hemorrhages occur at the nerve fibres and originate from precapillary arterioles, which are located at the inner layer of the retina. Dot and blot hemorrhages are round, smaller than microaneurysms, and occur at various levels of the retina, especially at the venous end of capillaries. Hard exudates are shiny, irregularly shaped, and found near prominent microaneurysms or at the edges of retinal edema. In the early stage, vision is rarely affected and the disease can be identified only by regular dilated eye examinations.
Fundus images are used for diagnosis by trained clinicians to check for any abnormalities or changes in the retina. They are captured using special devices called ophthalmoscopes. Each pixel in the fundus image consists of three values, namely red, green and blue, each value being quantised to 256 levels. Diabetic Maculopathy (DM) is a stage where fluid leaks out of damaged vessels and accumulates at the centre of the retina, called the macula (which allows the fine details of a scene to be seen clearly), causing permanent loss of vision. This waterlogging of the macular area is called clinically significant macular oedema, which can be treated by laser treatment.
specificity of 91.2%, while for the STARE database the proposed method achieved a sensitivity of 92.15% and a specificity of 84.46%. The system could assist ophthalmologists in detecting the signs of diabetic retinopathy at an early stage, for a better treatment plan and to improve vision-related quality of life.
1.2.5 A semi-automated technique for labeling and counting of
apoptosing retinal cells
Retinal ganglion cell (RGC) loss is one of the earliest and most important cellular changes in glaucoma. The DARC (Detection of Apoptosing Retinal Cells) technology enables in vivo real-time non-invasive imaging of single apoptosing retinal cells in animal models of glaucoma and Alzheimer's disease. To date, apoptosing RGCs imaged using DARC have been counted manually. This is time-consuming, labour-intensive, vulnerable to bias, and has considerable inter- and intra-operator variability. Automated analysis included a pre-processing stage involving local-luminance and local-contrast "gain control", a "blob analysis" step to differentiate between cells, vessels and noise, and a method to exclude non-cell structures using specific combined 'size' and 'aspect ratio' criteria. Apoptosing retinal cells were counted by 3 masked operators, generating 'gold-standard' mean manual cell counts, and were also counted using the newly developed automated algorithm. Comparison between automated cell counts and the mean manual cell counts on 66 DARC images showed significant correlation between the two methods (Pearson's correlation coefficient 0.978, p < 0.001; R squared = 0.956).
CHAPTER 2
STRUCTURE AND FUNCTION OF THE EYE
2.1 Basic Structure of the Eye
Of the five human senses, the eye is one of the most important sense organs, both for the quality and precision of the signals it captures and for the reduction in quality of life in its absence. It has been estimated that 70% of the sensorial information interpreted by humans is captured by the eye (Davidovits 2001). The eye captures light and converts it, in the retina, into electric signals that are sent to the human brain for interpretation. Its functioning is similar to conventional image capture systems due to its lenses, which refract and focus the incoming light onto the sensorial region. The optical system (figure 2.1) is composed of the lens, the iris and the cornea. In order to refract and focus the incoming light, the lens reshapes with the help of auxiliary muscles. The amount of light that enters the eye through the pupil is controlled by the iris, a tissue that is able to contract and expand, thereby decreasing or increasing the size of the pupil, respectively (figure 2.1).
The cornea is a transparent and protective layer that covers the iris and the pupil and is the first refracting layer. With approximately 43 dioptres of refractive power, the human cornea is the major focusing element of the eye, although its focus is fixed. Variable focus is provided by the lens, which in a natural environment has approximately 18 dioptres of refractive power. The light that enters the eye is projected onto the retina, which contains the photoreceptors. There are different types of photoreceptors: some are for low-light vision and black-and-white perception (rods) and others are for colour perception and daytime vision (cones). Although these are dispersed along the retina, which occupies 72% of a sphere of 22 mm in
Figure 2.1 Light propagation in the retina. (a) Light behaviour for an undilated and a dilated pupil; (b) light refraction in the eye with a relaxed and a stretched lens.
diameter, they are more concentrated in the central part of the retina, the macula. The latter is about 6000 μm in diameter and contains a high density of photoreceptors. The photoreceptors outside the macula capture the peripheral vision, where damage is less noticeable, or even unnoticed, compared with damage in the macula. In the centre of the macula, with a diameter of 1000 μm, is the fovea. It contains 50% of the photoreceptors of the retina and is responsible for the sharp, high-resolution central vision used by humans for reading, watching television, driving, and any other activity where visual detail is of primary importance. The retina is divided into layers, from the nerve fibre layer to Bruch's membrane. The light that reaches the retina crosses the nerve fibre and ganglion layers to be captured by the photoreceptors (rods and cones).
These return the light information to the ganglions which gather and
compress the information from several neighbouring photoreceptors and send it
through the nerve to the brain. The Retinal Pigmented Epithelium (RPE) is a
layer of cells that protects and nourishes the retina, removes waste products,
prevents new blood vessel growth into the retinal layer and absorbs light not
absorbed by the photoreceptor cells; these actions prevent the scattering of the
light and enhance vision clarity. Finally, Bruch's membrane is a thin layer that acts as a blood-retinal barrier and as a support to both the RPE and the choroid. The information collected by the retina is gathered into a set of nerve fibres and forwarded to the brain through the optic nerve. The latter also includes the veins and arteries which supply blood to the eye. The optic disc is located within the retina and has no photoreceptors, which creates a blind spot in the visual field.
2.2 Introduction to Retinal Diseases
2.3 Diabetic Eye Diseases
There are a number of causes of reduced visual acuity, visual impairment, and blindness. In diabetic eye diseases, the cause of visual disturbances is in most cases related to the vascular changes diabetes causes in the eye. The discussion in this section concentrates on the diabetic eye diseases, which encompass a group of eye problems such as diabetic retinopathy, cataract, neovascular glaucoma and diabetic neuropathies. The section discusses how the symptoms of the diabetic eye diseases emerge and how they affect vision.
Diabetic retinopathy also increases the permeability of the capillary walls, which results in retinal edema and hard exudates (HE). The hard exudates are lipid formations leaking from the weakened blood vessels and appear yellowish with well-defined borders. If the local capillary circulation and oxygen supply fail due to obstructed blood vessels, pale areas with indistinct margins appear in the retina. These areas are small micro-infarcts known as soft exudates (SE). Intraretinal microvascular abnormalities (IRMA) and venopathy are signs of a more severe stage of diabetic retinopathy, where intraretinal microvascular abnormalities appear as dilations in the capillary system and venopathy as shape changes in arteries and veins.
indicates the presence of diabetic retinopathy in the eye and consists of microaneurysms, hemorrhages, exudates, retinal oedema, IRMA and venopathy.
2.4 The History of a Changing Diabetes
There have been many improvements in diabetes care from the discovery of insulin in the 1920s until the present day (1-3). These developments imply that many older patients with diabetes today have lived through some substantial changes. Since most complications of diabetes (including DR) develop with time, their incidence and progression depend on the type of diabetes care available in the relevant decades. Consequently, patients who were diagnosed with diabetes 60 years ago have a different prognosis compared to those who are diagnosed with diabetes today. Insulin was first discovered in 1921 by Banting and Best, who two years later were awarded the Nobel Prize for their discovery of its effect on patients with diabetes.
CHAPTER 3
DIABETIC RETINOPATHY
3.1 Introduction
rate of screening depends on accurate fundus image capturing and especially on accurate and reliable image processing algorithms for detecting the abnormalities.
Patients may feel their vision blurred. At first this sensation can be temporary; however, after some days or weeks, severe bleeding is likely to occur and damage the retina irreversibly. In a funduscopic examination of a retina affected by diabetic retinopathy, the ophthalmologist may find cotton wool spots and
haemorrhages (figure 3.1). The cotton wool spots are regions where the blood supply has been obstructed, exhibiting as a consequence a white reflection in a distorted region. The haemorrhages are the darker spots which exhibit an irregular shape, as illustrated in figure 3.1.
3.2 Drusen in the Eye
The image pixels were classified into background and yellowish objects using minimum-distance discrimination, where the contour pixels of the extracted optic disk were used as the background colour reference and the pixels inside the contour were used as the yellowish-object colour reference. The segmented yellowish areas and their edge information, extracted with Kirsch's mask, were combined into hard exudate areas using a Boolean operator. Another approach located the bright abnormal regions in fundus images by applying fuzzy c-means clustering in the LUV colour space; the resulting areas were classified into hard exudates, soft exudates, and normal findings using a support vector machine. Osareh searched for the coarse hard exudate areas using fuzzy c-means clustering with Gaussian-smoothed histograms of each colour band of the fundus image.
CHAPTER 4
GRAYSCALE
4.1 Introduction
In computing, although grayscale can be computed through rational numbers, image pixels are stored in binary, quantized form. Some early grayscale monitors could only show up to sixteen (4-bit) different shades, but today grayscale images (such as photographs) intended for visual display (both on screen and printed) are commonly stored with 8 bits per sampled pixel, which allows 256 different intensities (i.e., shades of gray) to be recorded, typically on a non-linear scale. The precision provided by this format is barely sufficient to avoid visible banding artifacts, but is very convenient for programming because a single pixel then occupies a single byte.
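As an illustration of the storage formats described above, the following sketch quantizes fractional intensities into 8-bit and 4-bit shades; the sample values are invented for illustration:

```python
import numpy as np

# Hypothetical fractional intensities in [0, 1] (illustrative values).
gray = np.array([0.0, 0.25, 0.5, 1.0])

# 8-bit storage: 256 shades, one byte per pixel.
g8 = np.round(gray * 255).astype(np.uint8)

# 4-bit storage: the sixteen shades of early grayscale monitors.
g4 = np.round(gray * 15).astype(np.uint8)

print(g8.tolist())  # [0, 64, 128, 255]
print(g4.tolist())  # [0, 4, 8, 15]
```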
4.2.1 Syntax
I = rgb2gray(RGB)
newmap = rgb2gray(map)
4.2.2 Description
corresponding linear-intensity value (R, G, and B, also in the range [0,1]). Then, luminance is calculated as a weighted sum of the three linear-intensity values. The sRGB color space is defined in terms of the CIE 1931 linear luminance Y, which is given by

Y = 0.2126 R + 0.7152 G + 0.0722 B.
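The weighted sum can be sketched as follows. The weights are the standard sRGB/Rec. 709 luminance coefficients; note that MATLAB's rgb2gray uses the slightly different Rec. 601 weights (0.2989, 0.5870, 0.1140):

```python
import numpy as np

# Standard sRGB / Rec. 709 luminance weights for linear R, G, B in [0, 1].
W = np.array([0.2126, 0.7152, 0.0722])

def linear_luminance(rgb):
    """Weighted sum of the three linear-intensity values -> CIE 1931 Y."""
    return rgb @ W

# Pure white maps to Y = 1; green contributes the most to luminance.
y_white = linear_luminance(np.array([1.0, 1.0, 1.0]))
y_green = linear_luminance(np.array([0.0, 1.0, 0.0]))
print(y_white, y_green)
```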
CHAPTER 5
MORPHOLOGICAL TRANSFORMATIONS
5.1 What is Morphology?
bwareaopen - Remove small objects from binary image
5.2.1 Dilation
Figure 5.1: Dilation operation
5.2.2 Erosion
Erosion is used to reduce objects in the image; it reduces the peaks and enlarges the widths of minimum regions, so it can remove positive impulsive noise but has little effect on negative impulsive noise. In a binary image, if any of the pixels in the neighbourhood is set to 0, the output pixel is set to 0.
In the above equation, 'fit' means that every ON pixel in the structuring element covers an ON pixel in the image. The following figure illustrates the morphological erosion of a grayscale image. Note how the structuring element defines the neighbourhood of the pixel of interest, which is circled. The erosion function applies the appropriate rule to the pixels in the neighbourhood and assigns a value to the corresponding pixel in the output image. In the figure, the morphological erosion function sets the value of the output pixel to 14 because that is the minimum value of all the pixels in the input pixel's neighbourhood defined by the structuring element.
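The min-of-neighbourhood rule can be sketched with SciPy's grayscale erosion; the 3x3 flat structuring element and the pixel values below are invented for illustration, not taken from the figure:

```python
import numpy as np
from scipy.ndimage import grey_erosion

# Toy grayscale image (arbitrary illustrative values).
img = np.array([[18, 20, 21],
                [17, 14, 22],
                [19, 25, 16]])

# 3x3 flat structuring element: each output pixel becomes the minimum
# of the pixels covered by the element, i.e. grayscale erosion.
out = grey_erosion(img, size=(3, 3))

print(out[1, 1])  # 14 -- the minimum of the whole 3x3 neighbourhood
```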
Figure 5.4: Opening operation
In the case of the square of side 10, and a disc of radius 2 as the structuring
element, the opening is a square of side 10 with rounded corners, where the
corner radius is 2.
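The rounded-corner behaviour described above can be checked with a small sketch using SciPy's binary opening; the square image and the disc are constructed here purely for illustration:

```python
import numpy as np
from scipy.ndimage import binary_opening

# 10x10 white square on a black background (padded so the disc fits).
img = np.zeros((14, 14), dtype=bool)
img[2:12, 2:12] = True

# Disc of radius 2 as the structuring element.
y, x = np.mgrid[-2:3, -2:3]
disc = x**2 + y**2 <= 4

opened = binary_opening(img, structure=disc)

# The square survives, but its sharp corners are rounded off.
print(img[2, 2], opened[2, 2])  # True False  (corner removed)
print(img[7, 7], opened[7, 7])  # True True   (interior kept)
```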
5.2.4 Closing Operation
Closing of an image is the reverse of the opening operation.
Figure 5.5: Closing operation
The first method proposed is block analysis, where the entire image is split into a number of blocks and each block is enhanced individually. The next proposed method is the erosion-dilation method, which is similar to block analysis but uses morphological operations (erosion and dilation) on the entire image rather than splitting it into blocks. All these methods were initially applied to gray-level images and later extended to colour images by splitting the colour image into its respective R, G and B components, enhancing them individually, and concatenating them to yield the enhanced image. All the above-mentioned techniques operate on the image in the spatial domain. The final method is the DCT method, where the frequency domain is used: we scale the DC coefficients of the image after the DCT has been taken. The DC coefficient is adjusted because it contains the maximum information. Here, we move from the RGB domain to the YCbCr domain for processing and, in YCbCr, adjust (scale) the DC coefficient, i.e. Y(0, 0). The image is converted from RGB to the YCbCr domain because, if the image is enhanced without converting, there is a good chance that it may yield an undesired output image. The enhancement of the images is done using the log operator.
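A minimal sketch of the DC-coefficient adjustment, assuming the image has already been converted to YCbCr: only Y(0, 0) is touched. The exact form of the log operator and its gain are assumptions, since they are not specified here:

```python
import numpy as np
from scipy.fft import dctn, idctn

def enhance_dc(channel, k=1.0):
    """Scale only the DC coefficient of a channel's 2-D DCT.
    The log1p form and the gain k are illustrative assumptions."""
    C = dctn(channel, norm='ortho')
    C[0, 0] = k * np.log1p(C[0, 0])   # assumed form of the "log operator"
    return idctn(C, norm='ortho')

# Luma (Y) plane of a hypothetical YCbCr image, values in [0, 255].
rng = np.random.default_rng(0)
Y = rng.uniform(60, 200, size=(8, 8))

Y2 = enhance_dc(Y)
print(Y2.shape)  # (8, 8)
```

Because the transform is orthonormal and only Y(0, 0) is modified, all AC coefficients of the output are identical to those of the input.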
5.3 Image Background Analysis By Blocks
Dilation and erosion are the two most common morphological operations used for background analysis by blocks.
5.4 Block Analysis For Gray Level Images
(left half of the image) and the output image (right half) for block analysis is shown below.
This method is similar to block analysis in many ways, apart from the fact that the manipulation is done on the image as a whole rather than partitioning it into blocks. First, the minimum intensity I_min(x) and maximum intensity I_max(x) contained in a structuring element B of elemental size 3 × 3 are calculated. The values obtained are used to find the background criterion τ(x) as described below, where I_min(x) and I_max(x) correspond to morphological erosion and dilation, respectively.
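The computation of I_min(x) and I_max(x) with a 3x3 structuring element can be sketched as below. The exact formula for the background criterion τ(x) is not given here, so the mid-range combination used in this sketch is only an assumed stand-in:

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def background_criterion(img):
    """I_min/I_max from a 3x3 structuring element; the mid-range
    combination below is an assumed stand-in for tau(x)."""
    i_min = grey_erosion(img, size=(3, 3))   # morphological erosion
    i_max = grey_dilation(img, size=(3, 3))  # morphological dilation
    return (i_min.astype(float) + i_max) / 2.0

# Toy image: dark background (10) with a bright patch (50).
img = np.array([[10, 10, 10, 10],
                [10, 50, 50, 10],
                [10, 50, 50, 10],
                [10, 10, 10, 10]])
tau = background_criterion(img)
print(tau[1, 1])  # 30.0 -- midway between local min 10 and max 50
```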
In this way the contrast operator can be described as in the equations above. By employing the erosion-dilation method we obtain a better local analysis of the image for detecting the background criterion than with the previously used method of blocks. This is because the structuring element μB permits the analysis of the eight neighbouring pixels at each point in the image. By increasing the size of the structuring element, more pixels are taken into account when finding the background criterion. It can easily be seen that several characteristics that are not visible at first sight appear in the enhanced images. The trouble with this method is that when morphological erosion or dilation is used with a large size of μ to reveal the background, undesired values may be generated.
is achieved. The image background is obtained from the erosion of the opening by reconstruction. The background parameter b(x) is calculated by eroding the background criterion τ(x) obtained above, as described below.
Like lowpass filtering, median filtering smoothes the image and is thus useful in reducing noise. Unlike lowpass filtering, however, median filtering can preserve discontinuities in a step function and can smooth a few pixels whose values differ significantly from their surroundings without affecting the other pixels.
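This contrast with lowpass filtering can be demonstrated on a one-dimensional step signal with an impulse: the median filter suppresses the spike and keeps the edge sharp, while the moving average smears both:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

# Step edge (0 -> 100) with one impulsive noise pixel (90).
signal = np.array([0, 0, 0, 0, 90, 0, 0, 100, 100, 100, 100], dtype=float)

med = median_filter(signal, size=3)      # median of each 3-sample window
avg = uniform_filter1d(signal, size=3)   # moving average (lowpass)

print(med[4])  # 0.0   -- spike suppressed by the median
print(med[7])  # 100.0 -- step edge preserved
print(avg[4])  # 30.0  -- spike only smeared by the average
```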
Figure 5.7: Graph for median filter
5.8 Edge Detection
CHAPTER 6
These contain anatomical structures that are on its surface, such as the optic disk, the vessels, and lesions when they exist (drusen, exudates, haemorrhages and others), alongside the high variability among different patients' images due to the differences in each individual's physiology. However, this methodology concentrates only on the drusen, and in particular on those that affect the patients' central vision. The other structures, vessels and optic disk, are also used in this methodology, but as auxiliary indicators for contrast adjustment and to obtain the image resolution. The automatic detection and quantification of drusen is described as a methodology composed of five steps, starting with the image pre-processing and finishing with the quantification of drusen.
6.2 The Methodology
between Clinical Centres and equipment, the image pre-processing normalizes the image for the drusen detection and quantification process. This normalization includes both image intensity and contrast.
the specialists. As the consensus among the specialists is sometimes weak, and there is moderate intra-operator variability, the comparison between the automatic detection algorithm and all the specialists should take a statistical approach to allow some degree of uncertainty in the model. A second phase in pursuing the development of the gold standard is to present the automatic detection of drusen to the specialists in order to assess its results and adjust the parameters accordingly to achieve a consensus among them. With the final parameterization defined, the last phase is to validate the results with a blind study: images are randomly presented both to the ophthalmologists and to the software in order to assess the agreement.
6.3 Region of Interest
The first operation over the image is to locate the region of interest
(ROI) where the image operations will produce effects and where drusen are to
be detected and quantified. This is done by locating the macula and specifying
a circular ROI around it. The fundus images are usually already oriented and
aligned. However, to verify the alignment, most of the retinographs add an
orientation mark on their top-right side (figure 6.1). With this orientation it is
possible for the clinicians to orientate the image and identify which eye it is. A
right eye has its optic disk on the same side as the orientation mark, while the left eye has it on the opposite side.
The ROI will be a circular area centred on the macula with a diameter of
two optic disk diameters (ODD). The ODD is used in several studies as a
standard reference to calculate the image resolution. Although it is a stable measure among humans (Jonas, Gusek et al. 1988), there is no consensus within the scientific community on the reference value, which ranges from 1500 μm to 1850 μm. In this work a reference value of 1500 μm is used, according to the international grading recommendations (Bird, Bressler et al. 1995). Although
there are some works on automatic macula and optic disk location (Hoover and Goldbaum 2003; Tobin, Chaum et al. 2007), they are not within the scope of this work. These works use the vessels and the optic disc geometric configuration in order to estimate the exact location of the macula. The work of Tobin et al. (2007) achieved an accuracy higher than 90% on a set of images containing several retinal pathologies. Despite these good results, the work presented here is focused on drusen detection, leaving the manual detection of the macula to the specialist, who executes a cognitive procedure similar to the one shown in the figure: first, they measure the ODD in pixels and then mark a circle with a diameter of two ODD centred on the macula. After defining the ROI, the image is cropped to a square centred on the ROI with a width of 1.2 x 2 x ODD, in which the factor 1.2 is a margin around the ROI.
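The cropping rule can be sketched as follows; the image size, macula position and ODD value are hypothetical, used only to illustrate the 1.2 x 2 x ODD width:

```python
import numpy as np

def crop_roi(image, macula_rc, odd_px, margin=1.2):
    """Crop a square of width margin * 2 * ODD centred on the macula,
    following the 1.2 x 2 x ODD rule. macula_rc = (row, col)."""
    half = int(round(margin * odd_px))   # half-width = 1.2 * ODD
    r, c = macula_rc
    return image[max(r - half, 0):r + half, max(c - half, 0):c + half]

# Hypothetical fundus image and measurements (illustrative only).
img = np.zeros((600, 600))
roi = crop_roi(img, macula_rc=(300, 300), odd_px=100)
print(roi.shape)  # (240, 240) -- i.e. 1.2 * 2 * ODD pixels wide
```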
6.3.1 Colour Channel Selection
From the analysis of the image colour channels, as presented and despite
the fact that the images are predominantly red, it was noticed that for colour
images the green channel is the one providing the best contrast and which is
less affected by the illumination non-uniformity.
which shows the individual colour and grey channels, it is again noticed
that the blue channel has almost no information, the red channel is the one with
higher intensity and with eventual saturation, the grey channel has a low and
non-uniform contrast, and the green channel has a medium intensity and good
contrast. The image histograms confirms these observations.
The grey channel exhibits a good histogram shape, although the image contrast is not uniform. Of the four histograms presented, the green channel is the most balanced: it has a peak near the centre, representing the background, and clearly defined secondary peaks corresponding to other structures such as the vessels, the optic disc, the drusen and others. The use of the hue channel from the IHS colour space is an alternative that has also been used in place of grey images. However, the inconsistency of the image colour between imaging techniques makes it a less reliable approach.
6.3.2 Non-Uniform Illumination Compensation
detection and quantification process. This normalization includes both image intensity and contrast. Due to eye lens problems and deficient patient collaboration during image acquisition, the illumination over the retina is in many cases non-uniform. The image pre-processing is also responsible for correcting this non-uniform illumination. The image interpretation is composed of the detection of drusen locations, followed by an analytical characterization and finally by the calculation of the affected areas.
CHAPTER 7
AREA CALCULATION AND COUNTING
7.1 Introduction
g(i,j) = 0 for image elements of the background. If objects do not touch each other, and if their gray levels are clearly distinct from the background gray levels, thresholding is a suitable segmentation method. Correct threshold selection is crucial for successful threshold segmentation; this selection can be determined interactively or it can be the result of some threshold detection method. Only under very unusual circumstances can thresholding be successful using a single threshold for the whole image (global thresholding), since even in very
7.2 Drusen Quantification
With the retinal drusen model it is now possible to proceed to the drusen
quantification. In the epidemiological studies which have been done in the past,
the calculated indicators were mainly the number of drusen and the size of the
biggest druse present, but only within a range of four to five possible values.
However, a high variability among the analysis done by different specialists
was observed. In the Wisconsin Grading Center, considered as one of the
reference Retina Image Grading Centres, a variability of 32.9% for drusen size
measurement was reported (Klein, Davis et al. 1991). The quantification
method that is proposed here quantifies the affected area, estimates the drusen
integral and is able to evaluate the number of drusen. The simplest method of
image segmentation is called the thresholding method. This method is based on
a clip-level (or a threshold value) to turn a gray-scale image into a binary
image.
In global thresholding, a single threshold is used for all the image pixels. When the pixel values of the components and of the background are fairly consistent over the entire image, global thresholding can be used. In adaptive thresholding, different threshold values are used for different local areas. The simplest of all thresholding techniques is basic global thresholding: partition the image histogram using a single global threshold T. Segmentation is then accomplished by scanning the image pixel by pixel.
1. Choose an initial estimate for T.
2. Divide the histogram using T. This produces two groups of pixels:
• G1 - the set of all pixels with grey-level values > T.
• G2 - the set of pixels with values <= T.
3. Compute the average grey-level values m1 and m2 for the pixels in regions G1 and G2.
4. Compute a new threshold value: T = (m1 + m2)/2, and repeat steps 2-4 until the change in T between iterations is negligible.
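The steps above can be sketched in Python; the convergence tolerance and the synthetic bimodal data are assumptions for illustration:

```python
import numpy as np

def basic_global_threshold(img, eps=0.5):
    """Iterative global threshold selection: split at T, average each
    group, update T to the midpoint of the two means, and repeat
    until T stabilises."""
    t = img.mean()                       # 1. initial estimate
    while True:
        g1 = img[img > t]                # 2. grey levels > T
        g2 = img[img <= t]               #    grey levels <= T
        m1, m2 = g1.mean(), g2.mean()    # 3. group means
        t_new = 0.5 * (m1 + m2)          # 4. new threshold
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Bimodal toy image: dark background near 30, bright objects near 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(30, 5, 500), rng.normal(200, 5, 500)])
t = basic_global_threshold(img)
print(60 < t < 170)  # True -- T settles between the two modes
```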
7.3 Integral Estimation
Figure 7.2 shows the flow chart of the cell-counting process. In data acquisition, an image is taken as input, as shown in the figure. These images need to be enhanced for further analysis; this is a pre-processing of the image sequence before feeding it into the segmentation process. Cell segmentation and extraction is the process of distinguishing between red blood cells and other cells in the blood smear image. The last process is counting the number of cells using the Hough transform technique.
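The counting idea behind the Hough transform can be illustrated with a minimal single-radius circle Hough accumulator (a simplified sketch, not the full multi-radius technique):

```python
import numpy as np

def hough_circle_center(edges, radius):
    """Minimal circle Hough transform for one known radius: every edge
    pixel votes for all centres at that distance; the accumulator peak
    is the best centre estimate."""
    acc = np.zeros(edges.shape)
    thetas = np.linspace(0, 2 * np.pi, 120, endpoint=False)
    rows, cols = np.nonzero(edges)
    for r, c in zip(rows, cols):
        cr = np.round(r - radius * np.sin(thetas)).astype(int)
        cc = np.round(c - radius * np.cos(thetas)).astype(int)
        ok = (cr >= 0) & (cr < acc.shape[0]) & (cc >= 0) & (cc < acc.shape[1])
        np.add.at(acc, (cr[ok], cc[ok]), 1)   # accumulate votes
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge map: a circle of radius 10 centred at (25, 25).
edges = np.zeros((50, 50), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(25 + 10 * np.sin(t)).astype(int),
      np.round(25 + 10 * np.cos(t)).astype(int)] = True

print(hough_circle_center(edges, radius=10))  # close to (25, 25)
```

Counting cells then amounts to finding all sufficiently strong, well-separated peaks in the accumulator rather than only the single maximum.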
Any point (x, y) for which f(x, y) > T is called an object point; otherwise the point is called a background point. Thresholding normally results in a binary image and, mathematically, the operation can be expressed as

g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 otherwise,

where the pixels labeled 1 correspond to objects and the pixels labeled 0 correspond to the background. On the lower-pixel image we perform morphological area closing to fill the holes and eliminate unwanted small pixels. The other image, which has the higher pixel values, is used as the input for the next processes, which are dilation and area closing.
Figure 7.4 – Examples of drusen detection with more than one GGD per
drusen.
on the lower-pixel image and the higher-pixel image. It can be seen that the holes in the cells have been filled up. From the results of the morphological operations on both the lower- and higher-pixel images, the range of pixel values is determined. A histogram is suitable for describing the position of the desired value range in this work. The histogram of the saturation (S) image is shown in Figure 6.
CHAPTER 8
RESULTS AND DISCUSSION
MORPHOLOGICAL IMAGES: Using morphological operations, dilation can be performed. Dilation increases and enlarges the widths of maximum regions, so it can remove negative impulsive noise but does little to positive noise. Erosion is used to reduce objects in the image; it reduces the peaks and enlarges the widths of minimum regions, so it can remove positive noise but has little effect on negative impulsive noise.
CHAPTER 9
CONCLUSION
REFERENCES:
[8] Nasrul Humaimi Mahmood and Muhammad Asraf Mansor, "Red Blood Cells Estimation Using Hough Transform Technique," Signal & Image Processing: An International Journal (SIPIJ), Vol. 3, No. 2, April 2012.
[10] Sumeet Chourasiya and G. Usha Rani, "Automatic Red Blood Cell Counting using Watershed Segmentation," International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 5 (4), 2014, pp. 4834-4838.
[11] Z. Li, C. Liu, C. Zhao and Y. Cheng, "An Image Thresholding Method Based on Human Visual Perception," IEEE, 2009.
[12] D. Kayal and S. Banerjee, "An Approach to Detect Hard Exudates Using Normalized Cut Image Segmentation Technique in Digital Retinal