CHAPTER 1
DEPARTMENT OF ECE
Page 1
INTRODUCTION
An image is a two-dimensional picture which has a similar appearance to some subject, usually a physical object or a person.
Several factors combine to indicate a lively future for digital image processing. A major factor is the declining cost of computer equipment. Several new technological trends promise to further promote digital image processing. These include parallel processing made practical by low-cost microprocessors, and the use of charge-coupled devices (CCDs) for digitizing, for storage during processing and display, and for large, low-cost image storage arrays.
Fig1.1(c) Scanner
1.1.8 Compression
Compression, as the name implies, deals with techniques for reducing the
storage required to save an image, or the bandwidth required for transmitting it. Although
storage technology has improved significantly over the past decade, the same cannot be
said for transmission capacity. This is true particularly in uses of the Internet, which are
characterized by significant pictorial content. Image compression is familiar to most users
of computers in the form of image file extensions, such as the jpg file extension used in
the JPEG (Joint Photographic Experts Group) image compression standard.
In binary images, the sets in question are members of the 2-D integer space Z^2, where each element of a set is a 2-D vector whose coordinates are the (x, y) coordinates of a black (or white) pixel in the image. Gray-scale digital images can be represented as sets whose components are in Z^3. In this case, two components of each element of the set refer to the coordinates of a pixel, and the third corresponds to its discrete gray-level value.
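This set representation can be sketched directly; a minimal NumPy illustration (the array values are invented for the example):

```python
import numpy as np

# A small binary image: 1 = foreground (white), 0 = background.
img = np.array([[0, 1, 0],
                [1, 1, 0],
                [0, 0, 1]])

# Set in Z^2: the (row, col) coordinates of every foreground pixel.
coords_2d = set(map(tuple, np.argwhere(img == 1)))
print(coords_2d)  # the four foreground coordinates

# Gray-scale image as a set in Z^3: (row, col, gray level) triples.
gray = np.array([[12, 200],
                 [45,  90]])
coords_3d = {(r, c, int(gray[r, c])) for r in range(2) for c in range(2)}
print(sorted(coords_3d))
```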
1.1.10 Segmentation
Segmentation procedures partition an image into its constituent parts or objects. In
general, autonomous segmentation is one of the most difficult tasks in digital image
processing. A rugged segmentation procedure brings the process a long way toward
successful solution of imaging problems that require objects to be identified individually.
1.1.13 Knowledge Base
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. The system must be endowed with the knowledge to recognize the significance of the location of the string with respect to other components of an address field. This knowledge guides not only the operation of each module, but it also aids in feedback operations between modules through the knowledge base. We implemented the preprocessing techniques using MATLAB.
tissues are shifted, pushed against the skull, or are responsible for damage to the nerves of other healthy brain tissues. Scientists have classified brain tumors according to their location.
Primary brain tumors are tumors that originate in the brain and are named for the cell types from which they originate. They can be benign (non-cancerous) or malignant (cancerous). Benign tumors grow slowly and do not spread elsewhere or invade the surrounding tissues. However, even when occupying only a small space, a less aggressive tumor can exert much pressure on the brain and make it dysfunctional. Conversely, more aggressive tumors can grow more quickly and spread to other tissues. Each of these tumors has unique clinical, radiographic, and biological characteristics. Secondary brain tumors originate from another part of the body. These tumors consist of cancer cells from somewhere else in the body that have metastasized, or spread, to the brain. The most common causes of secondary brain tumors are: lung cancer, breast cancer, melanoma, kidney cancer, bladder cancer, certain sarcomas, and testicular and germ cell tumors.
Raymond V. Damadian invented MRI in 1969 and was the first person to use MRI to investigate the human body. Eventually, MRI became the most preferred imaging technique in radiology, because MRI enabled internal structures to be visualized in some detail. With MRI, good contrast between different soft tissues of the body can be observed. This makes MRI suitable for providing better-quality images of the brain, the muscles, the heart, and cancerous tissues compared with other medical imaging techniques, such as computed tomography (CT) or X-rays. In MRI, signal processing considers the signal emissions.
Fig 1.2: Brain MR images from (a) the axial plane, (c) the sagittal plane, and (d) the coronal plane
CHAPTER 2
LITERATURE SURVEY
1. Meiyan Huang et al. discuss the problem of segmentation of brain tumors, for which several methods are available. In this paper, the authors aim to solve the segmentation problem using an LIPC-based method. Compared with other coding methods, the LAE method is more suitable for solving the linear reconstruction weights under the locality constraint. The data distribution in each sub-manifold was important for the classification, and the authors used a softmax model to learn the relationship between the data distribution and the reconstruction error norm. They evaluated the method using both synthetic data and publicly available brain tumor image data.
2. Dongjin Kwon et al. discuss a new method for deformable registration of pre-operative and post-operative brain MR scans of glioma patients. The authors presented a deformable registration approach that matches intensities of healthy tissue and matches glioma to the resection cavity. Their method extracts pathological information on both scans using scan-specific approaches and then registers the scans by combining image-based matching with the pathological information. To achieve unbiased deformation fields on either scan, they used a symmetric formulation of their energy model, comprising image- and shape-based correspondences and smoothness constraints. They determined the optimal registration results by minimizing the energy function using a hybrid optimization strategy which takes advantage of both discrete and continuous optimizations.
3. Andac Hamamci et al. discuss a fast and robust practical tool for segmentation of solid tumors with minimal user interaction. The authors presented a segmentation algorithm for the problem of tumor delineation, where the tumors exhibit varying tissue characteristics. As the change in the necrotic and enhancing parts of the tumor after radiation therapy becomes important, they also applied the Tumor-cut segmentation to partition the tumor tissue further into its necrotic and enhancing parts. They presented validation studies over a synthetic tumor database and two real tumor databases: one from the Harvard tumor repository and another from a clinical database of tumors that underwent radiosurgery planning at the Radiation Oncology Department of ASM.
4. Stefan Bauer et al. discuss a new method which makes use of sophisticated models of bio-physio-mechanical tumor growth to adapt a general brain atlas to an individual tumor patient image. It can be applied to solid tumors and gliomas with distinct boundaries to capture the important mass effect, while the less pronounced infiltration effect is not considered in this case. The method essentially comprises two steps: patient-specific tumor growth modeling in combination with non-rigid registration techniques, where the method for tumor growth modeling integrates discrete and continuous approaches for simulation. The results show that it is possible to adapt a healthy atlas to a tumor-bearing patient image using this model-based approach. Quantitative overlap measures indicate that this sophisticated method achieves reasonable results, in a similar range as other models, without being very sensitive to the initial and stopping conditions of the growth model. The accuracy in terms of Dice coefficients is also comparable to values reported for a different approach, although different data was used in this case. The authors expect better results when using the full image resolution for the tumor growth model. Computation times are significantly longer compared to that approach because several bio-physical and bio-mechanical layers are taken into account.
CHAPTER 3
EXISTING TECHNIQUES
3.1 Threshold Method
Image segmentation using the threshold method is a quite simple but very powerful approach for segmenting images based on image-space regions, i.e., characteristics of the image [7]. This method is usually used for images having a light object on a darker background, or vice versa. A thresholding algorithm chooses a proper threshold value T to divide the image pixels into several classes and separate objects from the background. Any pixel (x, y) for which f(x, y) >= T is considered to be foreground, while any pixel (x, y) which has value f(x, y) < T is considered to be background. Based on the selection of the threshold value, two types of thresholding method are in existence:
a) Global Thresholding
The global (single) thresholding method is used when the intensity distributions of the foreground and background objects are very distinct. When the difference between foreground and background objects is very distinct, a single threshold value can simply be used to tell the two objects apart. Thus, in this type of thresholding, the value of the threshold T depends solely on the property of the pixel and the grey-level value of the image. Some of the most commonly used global thresholding methods are the Otsu method, entropy-based thresholding, etc. [10].
b) Local Thresholding
This method divides an image into several sub-regions and then chooses a separate threshold Ts for each sub-region. Thus, the threshold depends on both f(x, y) and p(x, y). Some commonly used local thresholding techniques are simple statistical thresholding, 2-D entropy-based thresholding, histogram transformation thresholding, etc. [7].
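A minimal sketch of both variants (the image values, block size, and the per-block mean used as each local threshold are illustrative choices, not part of the cited methods):

```python
import numpy as np

def global_threshold(img, T):
    """Pixels with f(x, y) >= T become foreground (1), the rest background (0)."""
    return (img >= T).astype(np.uint8)

def local_threshold(img, block=2):
    """Split the image into block x block sub-regions and threshold each
    one with its own mean value (a simple per-region threshold Ts)."""
    out = np.zeros_like(img, dtype=np.uint8)
    h, w = img.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            sub = img[r:r + block, c:c + block]
            out[r:r + block, c:c + block] = (sub >= sub.mean()).astype(np.uint8)
    return out

img = np.array([[ 10,  20, 200, 210],
                [ 15,  25, 220, 230],
                [100, 110,  30,  40],
                [120, 130,  35,  45]])
print(global_threshold(img, 128))   # one threshold for the whole image
print(local_threshold(img, block=2))  # one threshold per 2x2 sub-region
```

Note how the bright top-right region dominates the global result, while the local variant separates detail inside each sub-region.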
Supervised classification learning methods are widely used in tumor segmentation. Well-trained classifiers can extract discriminative information from the training data and estimate the label of each voxel in a testing volume. However, traditional classification methods classify each voxel into different classes without considering the spatial correlation between the current and nearby voxels. Such methods may not obtain a globally optimized result. To address this problem, a classification method is generally combined with a regularization step. The regularization step can be implemented by modeling the boundary or by applying a variant of a random field spatial prior (MRF/CRF). In previous studies, context-aware spatial features and the probabilities obtained by tissue-specific Gaussian mixture models were used as inputs for classifiers, and satisfactory segmentation results were achieved without using post-hoc regularization.
is low, classification may be performed well. However, the data distribution is complex in brain tumor MRI images. In addition, the data distributions of different classes (i.e., tumor, edema, and brain tissue) may vary widely. Therefore, the data distribution of each class should be considered when segmenting brain tumors. Our evaluations on synthetic data and publicly available brain tumor image data demonstrate that considering the data distributions of different classes can further improve the segmentation performance.
CHAPTER 4
PROPOSED METHOD
The proposed method consists of four major steps: preprocessing, feature extraction, tumor segmentation using the LIPC method, and post-processing. To reduce the computational costs, a compact dictionary representation is embedded in the proposed method.
Block diagram of the proposed system: both the training data and the testing data pass through pre-processing and feature extraction; the extracted features then enter the LIPC stage, and post-processing produces the final result.

Training data -> Pre-processing -> Feature extraction -> LIPC -> Post-processing -> Final result
Testing data  -> Pre-processing -> Feature extraction -> LIPC
A series of experiments were performed on the training and testing brain tumor image data. For the training data, a complete tumor is subdivided into tumor core and edema parts. The proposed method was evaluated in a five-fold cross-validation fashion. All experiments were repeated five times, and the final results were reported as the mean and standard deviation of the results from the individual runs. For each run, a total of 64 images were used in training and 16 images were used in testing. Meanwhile, no overlap existed between the training and testing datasets.
4.1 Pre-processing
Median filtering is similar to using an averaging filter, in that each output pixel is set to an average of the pixel values in the neighborhood of the corresponding input pixel. However, with median filtering, the value of an output pixel is determined by the median of the neighborhood pixels, rather than the mean. The median is much less sensitive than the mean to extreme values. Median filtering is therefore better able to remove outliers without reducing the sharpness of the image.
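The outlier-rejection property can be demonstrated with a small hand-rolled comparison (the project used MATLAB; this NumPy sketch only illustrates the behavior, and the 5x5 test image with a single salt-noise pixel is invented):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter (edge pixels handled by replicate padding)."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            out[r, c] = np.median(padded[r:r + 3, c:c + 3])
    return out

def mean_filter3(img):
    """3x3 averaging filter, for comparison."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            out[r, c] = padded[r:r + 3, c:c + 3].mean()
    return out

img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # one salt-noise outlier

print(mean_filter3(img)[2, 2])    # the outlier is smeared into the mean
print(median_filter3(img)[2, 2])  # the median rejects the outlier: 10.0
```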
FEATURE EXTRACTION
Two methods are used:
1. LBP method
2. SVM method
The LBP feature vector is computed as follows:
1. Divide the examined window into cells (e.g., 16x16 pixels for each cell).
2. For each pixel in a cell, compare the pixel to each of its 8 neighbors (on its left-top, left-middle, left-bottom, right-top, etc.). Follow the pixels along a circle, i.e., clockwise or counter-clockwise.
3. Where the center pixel's value is greater than the neighbor's, write "1"; otherwise, write "0". This gives an 8-digit binary number (which is usually converted to decimal for convenience).
4. Compute the histogram, over the cell, of the frequency of each "number" occurring (i.e., each combination of which pixels are smaller and which are greater than the center).
5. Concatenate the normalized histograms of all cells. This gives the feature vector for the window.
The feature vector can then be processed using a support vector machine or some other machine-learning algorithm to produce a classifier.
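The steps above can be sketched as follows (a 3x3 toy cell stands in for a 16x16 one, and the comparison direction follows the convention stated in the text, which mirrors some LBP implementations):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: each pixel's 8 neighbors are compared with the
    center; per the convention in the text, a neighbor contributes bit 1
    when the center value is greater than it."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbor offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r, c] > img[r + dr, c + dc]:
                    code |= 1 << bit
            out[r - 1, c - 1] = code
    return out

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes over one cell; concatenating
    such histograms over all cells yields the window's feature vector."""
    codes = lbp_image(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

cell = np.array([[10, 10, 10],
                 [10, 50, 10],
                 [10, 10, 10]])
print(lbp_image(cell))  # center greater than all 8 neighbors -> code 255
print(lbp_histogram(cell))
```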
Color generation - the possibility to create dozens of new colors, which won't be close to any of the basic colors, such as red, green, and blue.
Desktop backgrounds - using several filters it is possible to achieve nice patterns, which may be tiled on the desktop screen.
Texture generation - different formulas may provide images suitable for procedural image generation, for example creating carpets or walls.
Color = x or y
Local binary pattern (LBP) is a popular technique used for image/face representation and classification. LBP has been widely applied in various applications, such as texture analysis and object recognition, due to its high discriminative power and tolerance against illumination changes. It was originally introduced by Ojala et al. [10] as a gray-scale and rotation invariant texture classification operator. Basically, LBP is invariant to monotonic gray-scale transformations. The basic idea is that each 3x3 neighborhood in an image is thresholded by the value of its center pixel, and a decimal representation is then obtained by taking the binary sequence (Fig 4.3) as a binary number, such that LBP ∈ [0, 255].
Fig 4.3: The multi-scale LBP operator with (8,1) and (8,2) neighborhoods. Pixel values are bilinearly interpolated for points which are not in the center of a pixel.
comparison of all the pixels, including the center pixel, with the mean of all the pixels in the kernel. The decimal result of the 9 bits can be mathematically expressed as

LBP(x_c, y_c) = Σ_{n=0}^{8} s(i_n − i_m) · 2^n      (Eq 1)

where i_m is the mean grey value in the kernel and s(·) is the unit step function (1 for non-negative arguments, 0 otherwise). Qian Tao and Raymond Veldhuis [13] proposed the simplified local binary pattern (SLBP) for illumination normalization by assigning equal weights to each of the 8 neighbors. It was shown that the processed image becomes more robust to illumination change. There are two advantages of SLBP: the simplified operator is not direction-sensitive, and the coding number is largely reduced from 256 to 9 patterns. SLBP is defined as

SLBP(x_c, y_c) = Σ_{n=0}^{7} s(i_n − i_c) · 1      (Eq 2)
However, the LBP operator-based method still has a small spatial support area (a 3x3 neighborhood); hence the bit-wise comparison made between two single pixel values is much affected by noise. The local 3x3 LBP therefore does not capture the large-scale structure (macrostructure) that may be a dominant facial feature. To overcome these limitations of the LBP operator, Shengcai Liao et al. [9] proposed the multi-scale block local binary pattern (MBLBP), obtained by simply calculating the average sum of the image intensity in each block (e.g., 3x3, 9x9, 15x15; the 3x3 MBLBP operator is equivalent to the original LBP) and comparing it with its surrounding blocks, as shown in Fig 4.4. The average sum is then thresholded by that of the center block:

MBLBP = Σ_{n=0}^{7} s(B_n − B_c) · 2^n      (Eq 3)

where B_c is the average sum obtained at the central block and B_n is the average sum obtained at its neighborhood. Note that the average sum over each block can be computed efficiently by using an integral image.
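Assuming the efficient block sums refer to integral images (summed-area tables), a hedged sketch of the MBLBP computation of Eq 3 might look like this; the test image and block size are invented for the example:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any block sum costs O(1) after one pass."""
    return img.cumsum(axis=0).cumsum(axis=1)

def block_sum(ii, r, c, s):
    """Sum of the s x s block whose top-left corner is (r, c)."""
    total = ii[r + s - 1, c + s - 1]
    if r > 0:
        total -= ii[r - 1, c + s - 1]
    if c > 0:
        total -= ii[r + s - 1, c - 1]
    if r > 0 and c > 0:
        total += ii[r - 1, c - 1]
    return total

def mblbp_code(img, r, c, s):
    """MBLBP at block size s: the average of each of the 8 surrounding
    s x s blocks is thresholded by the center block's average (Eq 3).
    Since all blocks have the same area, comparing sums is equivalent."""
    ii = integral_image(img.astype(np.int64))
    center = block_sum(ii, r, c, s)
    offsets = [(-s, -s), (-s, 0), (-s, s), (0, s),
               (s, s), (s, 0), (s, -s), (0, -s)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if block_sum(ii, r + dr, c + dc, s) >= center:
            code |= 1 << bit
    return code

img = np.arange(81).reshape(9, 9)  # intensity increases to the right and down
# 3x3 MBLBP for the center block (rows 3-5, cols 3-5):
print(mblbp_code(img, 3, 3, 3))
```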
The basic idea of LLBP is similar to the original LBP, but the differences are as follows: 1) its neighborhood shape is a straight line with a length of N pixels, unlike the square shape in LBP; 2) the binary weights are distributed starting from the pixels immediately left and right of the center pixel (2^0) out to the ends of the left and right sides (2^(⌈N/2⌉−2), e.g., for N = 9, 2^(⌈9/2⌉−2), where ⌈·⌉ is the ceiling function), equally on both sides, as illustrated in Fig 4.5. The LLBP algorithm first obtains the line binary code along the horizontal and vertical directions separately; its magnitude, which characterizes the change in image intensity such as edges and corners, is then computed. This can be mathematically expressed as in (6)-(8), where LLBP_H, LLBP_V, and LLBP_M are the LLBP in the horizontal direction, the LLBP in the vertical direction, and its magnitude, respectively. (Examples of the processed image for each direction and its magnitude are shown in Fig 4.5.)
Fig 4.5: LLBP operator with line length 9 pixels, 8 bits considered
4.6 SVM
In machine learning, support vector machines (SVMs, also called support vector networks) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
When data are not labeled, supervised learning is not possible, and an unsupervised learning approach is required, which would find the natural clustering of the data into groups and map new data to these formed groups. The clustering algorithm which provides an improvement to the support
vector machines is called support vector clustering, and it is often used in industrial applications either when data are not labeled or when only some data are labeled, as a preprocessing step for a classification pass.
We are given a training dataset of n points of the form

(x_1, y_1), ..., (x_n, y_n)      (Eq 4)

where the y_i are either 1 or -1, each indicating the class to which the point x_i belongs. We want to find the "maximum-margin hyperplane" that divides the group of points for which y_i = 1 from the group for which y_i = -1, which is defined so that the distance between the hyperplane and the nearest point x_i from either group is maximized. Any hyperplane can be written as the set of points x satisfying

w · x - b = 0,

Maximum-margin hyperplane and margins for an SVM trained with samples from two classes. Samples on the margin are called the support vectors.
DEPARTMENT OF ECE
Page 33
where w is the (not necessarily normalized) normal vector to the hyperplane. The parameter b/||w|| determines the offset of the hyperplane from the origin along the normal vector w.
4.6.1 Hard-margin
If the training data are linearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. These hyperplanes can be described by the equations

w · x - b = 1   and   w · x - b = -1,

so to maximize the distance between them we want to minimize ||w||. To prevent data points from falling into the margin, we add the following constraint: for each i, either

w · x_i - b >= 1, if y_i = 1,   or   w · x_i - b <= -1, if y_i = -1.      (Eq 5)

These constraints state that each data point must lie on the correct side of the margin. This can be rewritten as:

y_i (w · x_i - b) >= 1,  for all 1 <= i <= n.      (Eq 6)
DEPARTMENT OF ECE
Page 34
Putting this together, we obtain the hard-margin optimization problem:

minimize ||w||  subject to  y_i (w · x_i - b) >= 1  for i = 1, ..., n.      (Eq 7)

To extend the SVM to cases in which the data are not linearly separable, we introduce the hinge loss function, max(0, 1 - y_i (w · x_i - b)). This function is zero if the constraint in (Eq 6) is satisfied, in other words, if x_i lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin. We then wish to minimize

(1/n) Σ_{i=1}^{n} max(0, 1 - y_i (w · x_i - b)) + λ ||w||^2,      (Eq 8)

where the parameter λ determines the trade-off between increasing the margin size and ensuring that the x_i lie on the correct side of the margin. Thus, for sufficiently small values of λ, the soft-margin SVM will behave identically to the hard-margin SVM if the input data are linearly classifiable, but will still learn a viable classification rule if not.
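The soft-margin objective of Eq 8 can be minimized directly; the following sub-gradient-descent sketch (the cluster locations, λ, learning rate, and epoch count are all illustrative choices) trains a linear soft-margin SVM on toy data:

```python
import numpy as np

def train_soft_margin_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimize (1/n) * sum(max(0, 1 - y_i (w.x_i - b))) + lam * ||w||^2
    (Eq 8) by sub-gradient descent. lam is the trade-off parameter:
    small lam approaches the hard-margin behavior."""
    n, p = X.shape
    w = np.zeros(p)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w - b)
        active = margins < 1  # points violating the margin contribute gradient
        grad_w = 2 * lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return np.sign(X @ w - b)

# Two linearly separable clusters labeled +1 / -1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[2, 2], scale=0.3, size=(20, 2)),
               rng.normal(loc=[-2, -2], scale=0.3, size=(20, 2))])
y = np.array([1] * 20 + [-1] * 20)

w, b = train_soft_margin_svm(X, y)
print((predict(X, w, b) == y).mean())  # training accuracy
```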
The predicted label l is then the class with the maximum classification score:

l = arg max_i  y_i      (Eq 9)
Before the proposed LIPC was introduced, the following assumption was considered as the
base for LIPC:
Assumption I:
Samples from different classes are located on different non-linear sub manifolds, and a
sample can be approximately represented as a linear combination of several nearest neighbors
from its corresponding sub-manifold.
4.8 LIPC Implementation
4.8.1 Dictionary construction:
The manually labeled original samples in a training set are used to construct D. However,
numerous original training samples possibly produce a large D, which dramatically increases
computational and memory costs. In the present study, more than half a million samples for
each class are available for training. Thus, subsequent processes are impractical when
conducted traditionally. Applying a dictionary learning method is necessary in learning a
compact representation of the original training samples.
The k-means method can obtain the typical structures of the original sample space; and
thus, this method is used in the current study to learn a compact representation of the original
training samples of each class.
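As a hedged sketch of this dictionary-construction step (the toy data, cluster count, and the deterministic initialization are illustrative; the study uses far more samples per class):

```python
import numpy as np

def kmeans_centers(X, k, iters=20):
    """Plain k-means with deterministic initialization (evenly spaced
    samples); the k cluster centers form the class sub-dictionary D_i,
    a compact representation of the original training samples."""
    centers = X[::max(1, len(X) // k)][:k].astype(float).copy()
    for _ in range(iters):
        # Assign each sample to its nearest center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

rng = np.random.default_rng(1)
# Toy "training samples" of one class, drawn around three prototypes.
samples = np.vstack([rng.normal(m, 0.1, size=(100, 3)) for m in (0.0, 1.0, 5.0)])
D_i = kmeans_centers(samples, k=3)
print(np.sort(D_i[:, 0]))  # one dictionary atom near each prototype
```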
The LAE method computes the locally linear representation coefficients a by solving

min_a ||x - D a||^2   s.t.   1^T a = 1,      (Eq 10)

where only the k nearest neighbors N_k(x) of x in the dictionary were used when N_k(x) was constructed. For the atoms d_j that do not belong to N_k(x), the associated coefficients a_j were set to 0.      (Eq 11)
In general, the data distributions on different sub-manifolds are different. Therefore, the reconstruction error norms of the samples cover a wide range and may sometimes violate the negative correlation, as shown in the figure below.
Fig 4.8: Example that shows how a testing sample may be wrongly classified if the data distributions on different sub-manifolds are not considered.
assigned to Class 2 based on (8). However, assigning the testing sample to Class 1 is reasonable after considering the distribution of all training data on the different sub-manifolds.
b) Learning a softmax regression model using the reconstruction error norms to separate the data into different classes.
Thus, the classification score of class i can be defined as

y_i = g(||ε_i||),      (Eq 12)

where ε_i is the reconstruction error vector of the sample for class i, and g is the learned softmax regression model that maps the reconstruction error norms of all classes to normalized class scores.      (Eq 13)
Algorithm
Input: Training set T = {T_i}_{i=1}^{N} = {{x_j^i, l_j^i}_{j=1}^{n_i}}_{i=1}^{N}; a testing sample x.
Output: The classification scores y = {y_i}_{i=1}^{N} and the label l of x.
Stage 1 (construction of a sub-dictionary for each class): partition {x_j^i, l_j^i}_{j=1}^{n_i} into N_i subsets and calculate the cluster centers, denoted D_i = [d_1^i, d_2^i, ..., d_{N_i}^i], using the k-means method.
Stage 2 (calculation of locally linear representation coefficients): reconstruct each x^(j) in T based on the dictionary D using the LAE method and calculate the corresponding coefficient vectors {a_i^(j)}_{i=1}^{N}.
Stage 3 (softmax regression model determination): calculate the reconstruction error vector ε^(j) of all training samples for each class based on the dictionary D = {D_i}_{i=1}^{N} and the coefficient vectors {a^(j)}_{j=1}^{n}.
Stage 4 (core of the proposed method): reconstruct the input sample x based on the dictionary D using the LAE method and calculate the coefficient vectors {a_i}_{i=1}^{N}; calculate the reconstruction error vector ε_i of x for each class according to (2); calculate the classification scores y = {y_i}_{i=1}^{N} of x and obtain the label l of x.
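A toy, hedged sketch of the core idea, classifying by which class's sub-dictionary reconstructs the sample with the smallest error; it replaces the full LAE/softmax machinery with a plain equality-constrained least-squares solve, and the dictionaries, sample, and k are invented:

```python
import numpy as np

def locally_linear_error(x, D, k=2):
    """Reconstruct x from its k nearest dictionary atoms with weights
    that sum to one (a simplified stand-in for the LAE step) and
    return the reconstruction error norm."""
    dists = np.linalg.norm(D - x, axis=1)
    nn = np.argsort(dists)[:k]
    Dk = D[nn]  # k nearest atoms, one per row
    # Solve min_a ||x - Dk^T a||^2  s.t.  sum(a) = 1 via the KKT system
    # (normal equations plus a Lagrange multiplier for the constraint).
    G = Dk @ Dk.T
    A = np.block([[2 * G, np.ones((k, 1))],
                  [np.ones((1, k)), np.zeros((1, 1))]])
    rhs = np.concatenate([2 * Dk @ x, [1.0]])
    a = np.linalg.solve(A, rhs)[:k]
    return np.linalg.norm(x - Dk.T @ a)

# Two class sub-dictionaries (atoms as rows) and a testing sample.
D1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # class 1 lives near the origin
D2 = np.array([[5.0, 5.0], [6.0, 5.0], [5.0, 6.0]])  # class 2 lives near (5, 5)
x = np.array([0.4, 0.1])

errors = [locally_linear_error(x, D) for D in (D1, D2)]
label = int(np.argmin(errors)) + 1
print(errors, label)  # the smaller error norm decides the class
```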
CHAPTER 5
RESULT
1. For the classification of an image, the system is first trained in order to obtain a good classification.
2. In total, 80 images were used; of these, 40 are used for training and 40 are used for testing.
The input image is selected from the dataset, after which the Gaussian filter is applied to obtain the filtered image.
Fig 5.3: Contour segmentation method; pixels with the same intensity values are taken.
Fig 5.5: Database classification; 80 images are used, of which 40 images are used for training and 40 images are used for testing.
Fig 5.6: LIPC classification; the image is divided into four patches, and the LBP method is also applied.
CONCLUSION
An automatic method is proposed for brain tumor segmentation in MRI images. An LIPC-based method was introduced to solve the tumor segmentation problem. The proposed LIPC incorporated local independent projection into the classical classification model, and a novel classification framework was derived. Compared with other coding approaches, the LAE method was more suitable for solving the linear reconstruction weights under the locality constraint. The data distribution in each sub-manifold was important for the classification, and we used a softmax model to learn the relationship between the data distribution and the reconstruction error norm. This work evaluated the proposed method using both synthetic data and publicly available brain tumor image data. In both problems, our method outperformed competing methods.
References
1. Meiyan Huang, Wei Yang, Yao Wu, Jun Jiang, Wufan Chen, and Qianjin Feng, "Brain Tumor Segmentation Based on Local Independent Projection-Based Classification," IEEE Transactions on Biomedical Engineering, DOI 10.1109/TBME.2014.2325410, 2014.
2. Dongjin Kwon, Marc Niethammer, Hamed Akbari, Michel Bilello, Christos Davatzikos, and Kilian M. Pohl, "PORTR: Pre-Operative and Post-Recurrence Brain Tumor Registration," IEEE Transactions on Medical Imaging.
4. T. Wang, I. Cheng, and A. Basu, "Fluid vector flow and applications in brain tumor segmentation," IEEE Transactions on Biomedical Engineering, vol. 56, no. 3, pp. 781-789, Mar. 2009.
BIO-DATA
(1) Name: Shaik Nowshad
    Reg no: 12JE1A0453
    Father's name: Shaik.khayum
    Address: Nehuru nagar, Chilakaluripet, Guntur (Dist)
    Phone no: 9550263914
    Email Id: shaiknowshad453@gmail.com

(2) Name: Pothukuchi Lakshmi Madhuri
    Reg no: 12JE1A0429
    Father's name: P.Ramasasthri
    Address: Vidya nagar, Narasaraopet, Guntur (Dist)
    Phone no: 7893028550
    Email Id: lakshmimadhuri78@gmail.com

(3) Name: Shaik Hajar munni
    Reg no: 12JE1A0419
    Father's name: Shaik Meeravali
    Address: Kammavari palli, Pullalacheruvu, Prakasam (Dist)
    Phone no: 8008514695
    Email Id: hajarmunnishaik@gmail.com

(4) Name: Kavala. Deepthi
    Reg no: 12JE1A0411
    Father's name: K. Nagaiah
    Address: East Christian pet, Chilakaluripet, Guntur (Dist)
    Phone no: 9573985079
    Email Id: deepthikavala@gmail.com