
Chapter 3

PROPOSED METHODOLOGY FOR SEGMENTATION

3.1 INTRODUCTION

In the previous chapter we discussed an overview of different segmentation techniques, of which many algorithms and methods are available. In this chapter the main terminologies used in the proposed approach are discussed. Segmentation of an image into foreground and background has been done in much of the literature by means of intensity thresholding. However, selecting a threshold that works for different kinds of images is a very challenging task. A rigid or fixed threshold may perform very well in some cases but may fail completely in others. Therefore there is a need for adaptive threshold selection. Different approaches to adaptive threshold selection exist, such as selection based on the mean value or on the median value. In the proposed approach the threshold value is selected using a fuzzy methodology. This work exploits the fact that in a typical image most of the pixels belong to the background and very few belong to the objects that constitute the foreground. In this project the foreground is treated as an outlier, and experiments show that this outlier is detected. The generalized block diagram of the approach is shown in Figure 3.1.


[Block diagram: Preprocessing → Finding Threshold → Foreground & Background Separation → Filtering → Segmentation → Morphological Erosion and Dilation]

Figure 3.1 Generalized Block Diagram

3.2 PREPROCESSING

Preprocessing of an image is done by different methods depending upon the type of image or the algorithm that is going to be used in the particular type of segmentation. Preprocessing is mainly classified into the following types.

3.2.1 Data processing

Images are subject to various types of noise and irregularities. This noise may degrade the quality of the image, so that it cannot provide correct information for subsequent image segmentation and edge detection. In order to improve the quality of the image, operations need to be performed to remove or decrease the degradations suffered during its acquisition. Preprocessing is also needed in order to homogenize and separate the intensity distributions of malignant and benign tissues. This can be achieved using several denoising techniques, viz., the Gaussian filter, median filter, Wiener filter, morphological open-close reconstruction filter and morphological top-hat filtering.


3.2.2 Image smoothing

Image smoothing acts as a preprocessing step for image segmentation, since almost all images suffer from noise effects. Preprocessing is therefore an important aid to every existing segmentation method: specialized filters such as those described above smooth the image and simplify it for the segmentation step.

3.2.3 Image contrast enhancement

Poor contrast is one of the most common defects found in an acquired image. This degradation is usually caused by inadequate aperture size and noise, and is sometimes caused by a nonlinear mapping of the image intensity. Such defects have a great impact on the contrast of the acquired image. In this case, the gray level of each pixel is scaled to improve the contrast. Contrast enhancement can be one of the most important preprocessing steps, especially when the image has poor contrast. In the present work, the contrast of the smoothed image is enhanced using image processing toolbox functions. This improves the visualization of the original image and thus makes the object of interest more clearly visible. In this approach, preprocessing is done by histogram normalization on the grayscale image.

3.2.4 Histogram normalization

A normalized histogram gives the relative proportion of pixels at each intensity in the image and hence approximates the probability distribution of pixel intensities. A normalized histogram is one in which the sum of the frequencies is exactly 1; therefore, if each frequency is expressed as a fraction of the total, the result is a normalized histogram.


The histogram of a digital image with intensity levels in the range [0, L-1] is the discrete function

h(r_k) = n_k        (3.1)

where r_k is the kth intensity value and n_k is the number of pixels in the image with intensity r_k.

Histograms are frequently normalized by the total number of pixels in the image. Assuming an M × N image, the normalized histogram is

p(r_k) = n_k / (MN),   k = 0, 1, 2, ..., L-1        (3.2)

where p(r_k) gives an estimate of the probability of occurrence of gray level r_k. The sum of all components of a normalized histogram is equal to 1.

A histogram is created by counting the number of pixels at each intensity level between 0 and 255, so a one-dimensional array of 256 elements is required. The histogram is normalized by dividing the frequency at each intensity level by the total number of pixels in the image. A cumulative histogram is created by summing the frequencies up to and including the required value; a 'for' loop traverses the histogram array, adding each element to the subsequent element, to achieve this. If the intensity range of the image is 50 to 180 and the desired range is 0 to 255, the process entails subtracting 50 from each pixel intensity, making the range 0 to 130, and then multiplying each pixel intensity by 255/130, making the range 0 to 255. Auto-normalization in image processing software typically normalizes to the full dynamic range of the number system specified in the image file format. In iris recognition, for example, the normalization process produces iris regions of the same constant dimensions, so that two photographs of the same iris taken under different conditions have their characteristic features at the same spatial locations. The following figure shows an un-normalized histogram and the normalized histogram obtained after applying the normalization calculation.

Figure 3.2 Histogram Normalization
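A minimal MATLAB sketch of the above calculation is given below; the stock toolbox image pout.tif and the input range [50, 180] are assumed purely for illustration.

    % Minimal sketch of histogram normalization and contrast stretching.
    I      = imread('pout.tif');       % any 8-bit grayscale image
    counts = imhist(I, 256);           % frequency of each intensity 0..255
    p      = counts / numel(I);        % normalized histogram; sum(p) == 1
    c      = cumsum(p);                % cumulative histogram
    % Stretch an assumed input range [50, 180] to the full range [0, 255];
    % uint8 saturates any values that fall outside 0..255.
    J = uint8((double(I) - 50) * (255 / 130));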

3.3 THRESHOLDING

Image thresholding is widely used as a popular tool in image segmentation. It is useful for separating objects from the background, or for discriminating objects from other objects that have distinct grey levels. Thresholding comprises bi-level thresholding and multilevel thresholding. Bi-level thresholding classifies the pixels into two groups, one including those pixels with grey levels above a certain threshold, the other including the rest. Multilevel thresholding divides the pixels into several classes. During the thresholding process, individual pixels in an image are marked as "object" pixels if


their value is greater than some threshold value (assuming the object to be brighter than the background) and as "background" pixels otherwise. This convention is known as threshold above. Variants include threshold below, which is the opposite of threshold above; threshold inside, where a pixel is labeled "object" if its value is between two thresholds; and threshold outside, which is the opposite of threshold inside. Typically, an object pixel is given a value of 1 while a background pixel is given a value of 0. Finally, a binary image is created by coloring each pixel white or black, depending on the pixel's label.

3.3.1 Threshold selection

The key parameter in the thresholding process is the choice of the threshold value (or values, as mentioned earlier). Several different methods for choosing a threshold exist. Users can manually choose a threshold value, or a thresholding algorithm can compute a value automatically, which is known as automatic thresholding. A simple method would be to choose the mean or median value: if the object pixels are brighter than the background, they should also be brighter than the average. In a noiseless image with uniform background and object values, the mean or median will work well as the threshold; however, this will generally not be the case. A more sophisticated approach is to create a histogram of the image pixel intensities and use the valley point as the threshold. The histogram approach assumes that there is some average value for the background and object pixels, and that the actual pixel values vary around these averages. However, this may be computationally expensive, and image histograms may not have clearly defined valley points, often making the selection of an accurate threshold difficult. One method that is relatively simple, does not require much specific knowledge of the image, and is robust against image noise is the following iterative method.

1. An initial threshold T is chosen; this can be done randomly or according to any other method desired.
2. The image is segmented into object and background pixels as described above, creating two sets (here f(m,n) is the value of the pixel located in the mth column, nth row):
   G1 = {f(m,n) : f(m,n) > T}  (object pixels)
   G2 = {f(m,n) : f(m,n) ≤ T}  (background pixels)
3. The average of each set is computed: m1 is the average value of G1 and m2 is the average value of G2.
4. A new threshold is created as the average of m1 and m2: T = (m1 + m2)/2.
5. Go back to step 2, now using the new threshold computed in step 4, and keep repeating until the new threshold matches the one before it.

This iterative algorithm is a special one-dimensional case of the k-means clustering algorithm, which has been proven to converge at a local minimum, meaning that a different initial threshold may give a different final result. In the present approach, however, threshold selection is done by adaptive thresholding, applying a fuzzy methodology to a sorted row vector containing the non-zero data in increasing order.
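For reference, a minimal MATLAB sketch of steps 1 to 5 might read as follows; the test image and the convergence tolerance are illustrative assumptions.

    % Iterative (mean-of-means) threshold search, as in steps 1-5 above.
    f = im2double(imread('cameraman.tif'));  % assumed grayscale test image
    T = mean(f(:));                          % step 1: initial threshold
    Tprev = -Inf;
    while abs(T - Tprev) > 1e-4              % step 5: repeat until stable
        Tprev = T;
        m1 = mean(f(f > T));                 % step 3: mean of object pixels
        m2 = mean(f(f <= T));                % step 3: mean of background pixels
        T  = (m1 + m2) / 2;                  % step 4: new threshold
    end
    bw = f > T;                              % binarize with converged threshold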

3.3.2 Fuzzy methodology for automatic thresholding

Fuzzy set theory assigns a membership degree to all elements of the universe of discourse according to their potential to fit in some class. The membership degree can be expressed by a mathematical function that assigns, to each element in the set, a membership degree between 0 and 1. Let X be the (finite and non-empty) universe of discourse and x an element of X. A fuzzy set A in X is defined as

A = {(x, μ_A(x)) : x ∈ X}

The S-function is used for modeling the membership degrees [18]. This type of function is suitable to represent the set of bright pixels and is defined as

S(x; a, b, c) = 0                           for x ≤ a
              = 2[(x − a)/(c − a)]²         for a < x ≤ b
              = 1 − 2[(x − c)/(c − a)]²     for b < x ≤ c
              = 1                           for x > c

where b = (1/2)(a + c). The S-function can be controlled through the parameters a and c. Parameter b is called the crossover point, where S(b; a, b, c) = 0.5. The higher the gray level of a pixel (the closer to white), the higher its membership value, and vice versa. A typical shape of the S-function is presented in Figure 3.3. The Z-function is used to represent the dark pixels and is defined by an expression obtained from the S-function as follows:

Z(x; a, b, c) = 1 − S(x; a, b, c)

Both membership functions can be seen simultaneously in Figure 3.4.

Figure 3.3 Typical Shape of S-Function



Figure 3.4 Histogram of S-Function

The proposed methodology consists of the following steps. First, a function is defined whose input is a sorted row vector containing the non-zero data in increasing order and whose output is a row vector of segmented data. An S-shaped curve is assumed, with a start value, a peak value and a crossover value. On the basis of these three values, the S-membership function is constructed for the sorted data, and the membership value of each element in the data matrix is then calculated.

The threshold is then calculated with the help of the membership function. Finally, the data matrix is binarized using this threshold, so that foreground and background are separated.
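A hedged MATLAB sketch of these steps is given below. Since the text does not fix how the start, crossover and peak values are chosen, they are taken here, purely for illustration, from the extremes and the midpoint of the sorted data.

    % Illustrative fuzzy (S-function) thresholding; parameter choices assumed.
    img = imread('cameraman.tif');        % assumed grayscale input
    v   = sort(nonzeros(img(:)));         % sorted vector of non-zero data
    a   = double(v(1));                   % assumed start value of the S-curve
    c   = double(v(end));                 % assumed peak value of the S-curve
    b   = (a + c) / 2;                    % crossover point, S(b; a, b, c) = 0.5
    x   = double(img);
    mu  = zeros(size(x));                 % S-membership value for every pixel
    i1  = x > a & x <= b;
    i2  = x > b & x <= c;
    mu(i1) = 2 * ((x(i1) - a) / (c - a)).^2;
    mu(i2) = 1 - 2 * ((x(i2) - c) / (c - a)).^2;
    mu(x > c) = 1;
    bw  = mu >= 0.5;                      % membership 0.5 acts as the threshold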

This fuzzy method of threshold selection is very useful in color image segmentation. Using this methodology, an inverted image is also segmented very cleanly. The main advantage of the method is that no hard-coded threshold is required for binarization: the algorithm itself is capable of calculating the threshold automatically for any image. It is, however, more effective on images with a uniform background.

3.4 FOREGROUND AND BACKGROUND SEPARATION

Reading the foreground text is difficult in documents having a multicolored complex background. Automatic foreground text separation in such document images is essential for smooth reading of the document contents. Separation of text information from a complex background in color document images remains a challenging problem in character recognition applications such as bank cheque processing and postal address sorting. In day-to-day life we come across many documents that are deliberately designed with colorful and complex backgrounds, such as travel tickets, grade sheets and decorative postal envelopes. The presence of uniform or non-uniform background patterns, multiple background colors, and mixing of the foreground text color with the background color in such documents causes difficulties in reading the document contents; automatic OCR of such documents also results in low character recognition accuracy. The simplest method for extraction of text from the background is thresholding. Global thresholding techniques extract objects from images having a uniform background. Local thresholding methods are suggested to withstand the adverse effect of a varying background, but at a price in processing cost. The processing cost can be reduced by capturing the regions containing text and then thresholding only those regions instead of the entire document image. Some binarization or foreground extraction techniques that have been proposed for color document images are broadly based on color clustering or color segmentation principles and, in this sense, are quite different from traditional threshold selection algorithms. However, despite the large number of proposed algorithms for color image segmentation, only a handful have found direct application in document image processing. This is because classical segmentation algorithms exhibit difficulties in tackling several document defects such as stains, humidity marks, and degradation of ink and paper. The large size of color document images is another bottleneck for efficient use of traditional segmentation strategies. Rather, generic algorithms have in a few cases been customized in several ways for efficient background-foreground separation in color document images.


3.4.1 Need of background and foreground separation

As segmentation is the very first step in image processing, background and foreground separation is an essential step in image segmentation. It is also known as binarization of the image, in which each pixel is set to either 0 or 1. Just as segmentation has many applications in image processing, background and foreground separation is very important for video surveillance, traffic monitoring, human detection and video editing.

Background and foreground separation is an essential part of image segmentation in image processing. Different methods are available, and different authors have used different algorithms for binarization, i.e., background and foreground separation. In our approach, as the name suggests, adaptive or automatic thresholding is used to find a threshold, and the image is then binarized with that threshold value; after the threshold value is detected, the foreground and background of the image are separated.

3.4.2 Outlier detection

The outliers of an object are defined as the elements of the object which lie at the boundary in some representation. In our case, the outliers of an image are defined as the pixels which lie at the boundary in the true representation of the image or in some other representation of it. In the case of foreground and background separation (binary segmentation) with a static and uniform background, the number of pixels belonging to the background is much greater than the number of pixels belonging to the foreground; therefore the foreground will lie at the boundaries, or at the extremes, of the normalized histogram representation of the image. Figure 3.5 shows a gray image and Figure 3.6 shows the possible background and foreground in the histogram representation of the image. It can be clearly observed that the number of pixels belonging to the foreground is much lower than the number belonging to the background.

Figure 3.5 Gray Image

Figure 3.6 Normalized histogram indicating background and foreground


Figure 3.7 below shows the binary image obtained after using the foreground and background information from the normalized histogram in Figure 3.6.

Figure 3.7 Binary image

The white pixels in Figure 3.7 show the foreground and the black pixels show the background. It is clear from Figure 3.7 that the foreground includes all the objects of interest.

3.5 MORPHOLOGICAL EROSION AND DILATION

Morphology is a broad set of image processing operations that process images based on shapes. Morphological operations apply a structuring element to an input image, creating an output image of the same size. In a morphological operation, the value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbors. By choosing the size and shape of the neighborhood, one can construct a morphological operation that is sensitive to specific shapes in the input image. Morphological operations affect the form, structure or shape of an object. They are applied on binary images (images with only two colors, black and white) and are used in pre- or post-processing (filtering, thinning and pruning) or for obtaining a representation or description of the shape of objects or regions (boundaries, skeletons, convex hulls). The most basic morphological operations are dilation and erosion. Dilation adds pixels to the boundaries of objects in an image, while erosion removes pixels on object boundaries. The number of pixels added or removed from the objects in an image depends on the size and shape of the structuring element used to process the image. In the morphological dilation and erosion operations, the state of any given pixel in the output image is determined by applying a rule to the corresponding pixel and its neighbors in the input image. The rules that define the operation as dilation or erosion are as follows.
Table 3.1 Rules for Dilation and Erosion

Operation   Rule
Dilation    The value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to the value 1, the output pixel is set to 1.
Erosion     The value of the output pixel is the minimum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to 0, the output pixel is set to 0.

3.5.1 Dilating an image

To dilate an image, use the imdilate function. The imdilate function accepts two primary arguments:

- The input image to be processed (grayscale, binary, or packed binary image)
- A structuring element object, returned by the strel function, or a binary matrix defining the neighborhood of a structuring element

imdilate also accepts two optional arguments: SHAPE and PACKOPT. The SHAPE argument affects the size of the output image. The PACKOPT argument identifies the input image as packed binary.
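For illustration, a minimal dilation call might look as follows; the toolbox image circbw.tif and the 3-by-3 square structuring element are assumed only as examples.

    % Illustrative dilation of a binary image with a 3-by-3 square element.
    BW1 = imread('circbw.tif');
    SE  = strel('square', 3);        % 3-by-3 square structuring element
    BW2 = imdilate(BW1, SE);
    imshowpair(BW1, BW2, 'montage')  % compare original and dilated images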


Dilation of image f by structuring element s is denoted f ⊕ s. The structuring element s is positioned with its origin at (x, y) and the new pixel value is determined using the rule:

g(x, y) = 1 if s hits f, and 0 otherwise
The following figures show the dilation process of an image, along with some examples.

Figure 3.8 Original image 1

Figure 3.9 Processed image 1

Figure 3.10 Texture dilation 1 (original image; dilation by 3×3 and 5×5 square structuring elements)


Figure 3.11 Shape dilation 1

3.5.2 Eroding an image

To erode an image, use the imerode function. The imerode function accepts two primary arguments:

- The input image to be processed (grayscale, binary, or packed binary image)
- A structuring element object, returned by the strel function, or a binary matrix defining the neighborhood of a structuring element

imerode also accepts three optional arguments: SHAPE, PACKOPT, and M. The SHAPE argument affects the size of the output image. The PACKOPT argument identifies the input image as packed binary. If the image is packed binary, M identifies the number of rows in the original image. The following example erodes the binary image circbw.tif:

1. Read the image into the MATLAB workspace:
   BW1 = imread('circbw.tif');
2. Create a structuring element. The following code creates a diagonal structuring element object:
   SE = strel('arbitrary', eye(5));   % diagonal structuring element (assumed form)
3. Call the imerode function, passing the image BW1 and the structuring element SE as arguments:
   BW2 = imerode(BW1, SE);


Erosion of image f by structuring element s is denoted f ⊖ s. The structuring element s is positioned with its origin at (x, y) and the new pixel value is determined using the rule:

g(x, y) = 1 if s fits f, and 0 otherwise
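The "hits" and "fits" tests in the two rules can be written directly as logical reductions. The sketch below assumes N is the binary neighborhood under the window and S is a logical mask of a flat structuring element of the same size; both names are illustrative.

    % "Hits" vs. "fits" for a binary neighborhood N and structuring mask S.
    hits = any(N(S));   % dilation rule: s hits f if there is any overlap
    fits = all(N(S));   % erosion rule: s fits f only if fully contained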

The following figures show the erosion process of an image, along with some examples.

Figure 3.12 Original Image 2

Figure 3.13 Processed Image 2

Figure 3.14 Texture Erosion 2


Figure 3.15 Shape Erosion 2

Dilation and erosion are often used in combination to implement image processing operations. For example, the morphological opening of an image is defined as an erosion followed by a dilation, using the same structuring element for both operations. The related operation, morphological closing of an image, is the reverse: it consists of a dilation followed by an erosion with the same structuring element.
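As a brief illustration of this composition, the following sketch computes an opening and a closing; the image and structuring element are assumed for the example.

    % Opening/closing as compositions of erosion and dilation.
    BW     = imread('circbw.tif');
    SE     = strel('disk', 3);
    opened = imopen(BW, SE);    % equivalent to imdilate(imerode(BW, SE), SE)
    closed = imclose(BW, SE);   % equivalent to imerode(imdilate(BW, SE), SE)

3.5.3 Connected component analysis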

In binary image analysis, objects are usually extracted by means of the connected components labeling operation, which consists in assigning a unique label to each maximal connected region of foreground pixels. Connected-component labeling is used in computer vision to detect connected regions in binary digital images, although color images and data of higher dimensionality can also be processed. When integrated into an image recognition system or a human-computer interaction interface, connected component labeling can operate on a variety of information. Blob extraction is generally performed on the binary image resulting from a thresholding step. Connected components labeling scans an image and groups its pixels into components based on pixel connectivity, i.e., all pixels in a connected component share similar pixel intensity values and are in some way connected with each other. Once all groups have been determined, each pixel is labeled with a gray level or a color (color labeling) according to the component it was assigned to.


How it works. Connected component labeling works by scanning an image pixel-by-pixel (from top to bottom and left to right) in order to identify connected pixel regions, i.e., regions of adjacent pixels which share the same set of intensity values V. (For a binary image V = {1}; in a gray level image, however, V will take on a range of values, for example V = {51, 52, 53, ..., 77, 78, 79, 80}.) Connected component labeling works on binary or gray level images, and different measures of connectivity are possible; in the following we assume binary input images and 8-connectivity. The connected components labeling operator scans the image by moving along a row until it comes to a point p (where p denotes the pixel to be labeled at any stage in the scanning process) for which V = {1}. When this is true, it examines the four neighbors of p which have already been encountered in the scan. Based on this information, the labeling of p occurs as follows:

- If all four neighbors are 0, assign a new label to p;
- else if only one neighbor has V = {1}, assign its label to p;
- else if more than one of the neighbors have V = {1}, assign one of the labels to p and make a note of the equivalences.

After completing the scan, the equivalent label pairs are sorted into equivalence classes and a unique label is assigned to each class. As a final step, a second scan is made through the image, during which each label is replaced by the label assigned to its equivalence class. For display, the labels may be shown as different gray levels or colors.
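In MATLAB this two-pass procedure is available ready-made through bwlabel; a minimal sketch, assuming a toolbox test image, is:

    % Label connected components with 8-connectivity and color each label.
    BW  = imread('circbw.tif');
    L   = bwlabel(BW, 8);            % label matrix; 0 marks the background
    n   = max(L(:));                 % number of connected components found
    RGB = label2rgb(L, 'jet', 'k');  % distinct color per label, black background
    imshow(RGB)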


Guidelines for use. To illustrate connected components labeling, we start with a simple binary image containing some distinct artificial objects. After scanning this image and labeling the distinct pixel classes with different gray values, we obtain the labeled output image. Note that this image is scaled for display, since the initial gray values (1 to 8) would all appear black on the screen; even so, the pixels assigned to the lower classes (1 and 2) remain hard to discern from the background. Assigning a distinct color to each gray level makes all the components visible.

The full utility of connected components labeling is realized in an image analysis scenario wherein images are preprocessed via some segmentation (e.g., thresholding) or classification scheme. In our approach it is used to find the objects or clusters in the binarized image for further segmentation.


3.6 SEGMENTATION

Segmentation is the very first step in almost all image processing applications where the properties of objects in an image need to be analyzed, e.g., in medical imaging problems such as tumor detection and coronary vessel detection, in automotive vision for vehicle detection, and in object recognition for content-based image retrieval. One objective of image segmentation is to extract the dominant colors. Image segmentation is very important for simplifying the extraction of information from images, such as color, texture, shape and structure, and its applications span many fields, including image compression, image retrieval, object detection, image enhancement and medical image processing. Several approaches to image segmentation have already been introduced. The purpose of image segmentation is to partition an image into meaningful regions with respect to a particular application. The segmentation is based on measurements taken from the image, which might be grey level, colour, texture, depth or motion. Usually image segmentation is an initial and vital step in a series of processes aimed at overall image understanding. Applications of image segmentation include:

- Identifying objects in a scene for object-based measurements such as size and shape.
- Identifying objects in a moving scene for object-based video compression (MPEG-4).
- Identifying objects at different distances from a sensor, using depth measurements from a laser range finder, to enable path planning for mobile robots.

Image segmentation is the foundation of object recognition and computer vision. In general, image noise should be eliminated through image preprocessing, and some application-specific work (such as region extraction and image marking) remains to be done after the main operation of image segmentation. The final output of the proposed algorithm is the segmented image.

3.7 CONCLUDING REMARK

The discussion in the previous chapter showed that many problems arise while segmenting an image. This chapter has given the proper methodology and an innovative new approach to solving many of these segmentation problems. Threshold selection is the main difficulty in segmentation, and this approach provides an effective solution: the threshold value is selected automatically for segmenting an image. The morphological operations used in this approach help to improve the final segmented output.

