
An Unsupervised Natural Image Segmentation Algorithm Using Mean Histogram Features

Md. Mahbubur Rahman


Computer Science and Engineering Discipline, Khulna University, Khulna-9208, Bangladesh. mahbubcse@yahoo.com

Abstract: A new natural image segmentation algorithm based on integrated feature distributions is proposed. The proposed scheme uses a new histogram-based color texture extraction method which inherently combines color and texture features rather than extracting them explicitly. The use of nonparametric Bayesian clustering makes the segmentation framework fully unsupervised, requiring no a priori knowledge about the number of color texture regions. The feasibility and effectiveness of the proposed method are demonstrated by various experiments using images of natural scenes. The experimental results reveal that superior segmentation results can be obtained through the proposed unsupervised segmentation framework.

Index Terms: Image segmentation, Natural scene, Color texture feature, Mean histogram, Nonparametric Bayesian clustering

I. INTRODUCTION

Segmentation of images is an important yet challenging early vision task in which pixels with similar features are grouped into homogeneous regions. Segmentation of natural images is especially challenging because such images are complex compositions of both color and texture regions. A difficult problem for segmentation of natural images arises from the fact that those images contain more or less pure textures whose properties are not well defined. Many methods have been proposed for color segmentation and for texture segmentation, but only a few for colored texture segmentation. Although significant progress has been made in texture segmentation and color segmentation separately, the area of combined color and texture segmentation remains open and active [1]. Recently, some methods have been proposed for colored textured image segmentation. In [2], color and texture features were extracted separately and combined for color texture segmentation using the Kolmogorov-Smirnov test. Chen et al. [3] proposed a segmentation method using the distributions of color and local edge patterns, from which a homogeneity measure for color texture segmentation is derived. Jain and Healey [4] introduced a method for color texture classification based on unichrome features computed from the three spectral bands independently and opponent color features that utilize the spatial correlation between spectral bands. It was found that the inclusion of color increases the classification results without significantly complicating the feature extraction algorithms. Pietikäinen et al. [5] presented a color texture classification method based on separate processing of color and pattern information. From the classification results it was concluded that color and texture have complementary roles. This study presents a new framework for natural image segmentation which uses integrated color and texture

features along with an unsupervised segmentation algorithm. Rather than extracting color and texture features separately, we propose a new inherent color texture feature for segmentation of images of natural scenes which, in our opinion, is effective for such cases. From the color and the color texture features, mean histograms are calculated. A fully unsupervised multichannel histogram clustering method is employed for initial segmentation, and the final segmentation is obtained by region merging. The proposed segmentation framework is depicted in Figure 1. This paper is organized as follows. Section 2 presents the proposed feature extraction method. Section 3 discusses the image segmentation algorithm using the proposed features. In Section 4, experimental results and the performance of the proposed method are discussed. Section 5 concludes the paper.

II. PROPOSED FEATURE EXTRACTION METHOD

A. Constructing Feature Vectors: Color

Though RGB is the most common color format for digital images, the RGB space has the major drawback that it is not perceptually uniform. Other color spaces, such as CIE-LAB (Lab) and CIE-LUV, offer improved perceptual uniformity, and the HSV color space is also compatible with human color perception. For color features, we construct histograms over a square window centered on each pixel of an equidistant grid in each image plane, using both the Lab and HSV color spaces. Finally, mean histograms are calculated. We use a 5 x 5 window in extracting histograms for the three image channels.
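As a rough illustration of this step, the sketch below computes the normalized histogram of a square window around every grid pixel of one color plane. The bin count, value range, grid stride and border handling are assumptions made here for illustration (the paper only fixes the 5 x 5 window), and the channel is assumed to have already been converted to Lab or HSV.

```python
import numpy as np

def local_histograms(channel, win=5, bins=16, stride=1, value_range=(0.0, 1.0)):
    """Normalised histogram of a win x win window centred on each grid pixel
    of one colour plane (e.g. the L, a or b channel after conversion).

    Window size, bin count, stride, value range and border handling are
    illustrative choices, not values prescribed by the paper.
    """
    half = win // 2
    padded = np.pad(channel, half, mode="reflect")        # reflect at the image borders
    h, w = channel.shape
    feats = {}
    for y in range(0, h, stride):                          # equidistant grid of window centres
        for x in range(0, w, stride):
            patch = padded[y:y + win, x:x + win]
            hist, _ = np.histogram(patch, bins=bins, range=value_range)
            feats[(y, x)] = hist / hist.sum()              # normalised local histogram
    return feats
```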

B. Constructing Feature Vectors: Color Texture Using Neighbourhood Statistics

In the literature, Gabor filters are mostly used to extract texture features for segmentation. Unfortunately, Gabor filters have the decisive drawback that they induce a lot of redundancy and thus many feature channels. We propose a new color texture feature extraction method based on higher-order image statistics, which defines texture regularity in neighbourhood structures. The image statistics can be recovered through unsupervised learning as proposed in [6]. Here the image is considered as a random field X with a set of lattice points S, where {s}, s in S, is the set of pixels in the image. To extract such features we employed the unsupervised, information-theoretic, adaptive filter (UINTA) [6], which improves the predictability of pixel intensities from their neighbourhoods by decreasing the entropy h(X|Y = y) of the conditional PDF for each pixel-neighbourhood pair (X = x, Y = y), manipulating the value of each center pixel x. For this, in each iteration and for each image region z_m, the gradient

$\partial h(X \mid Y = y_m) / \partial x_m$  (1)

is computed. Then, image I_{m+1} is constructed, using finite forward differences on the gradient descent, with intensities

$x_{m+1} = x_m - \lambda \, \partial h / \partial x_m$  (2)

where $\lambda$ is the time step. We stop the pixel updating process after a few iterations, when $\|I_{m+1} - I_m\|_2 < \epsilon$, a small threshold. For extracting color texture features, this filtering is performed on the three Lab image channels. After that, the pixels in a 5 x 5 window centered on each pixel are extracted and smoothed using a Gaussian. Finally, local histograms are calculated.
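The outer loop of this filtering step can be sketched as below. The entropy-gradient estimator itself (which in [6] comes from Parzen-window estimates of the pixel/neighbourhood joint density) is not reproduced here; `entropy_gradient` is a hypothetical callable standing in for it, and the step size, threshold and iteration cap are illustrative.

```python
import numpy as np

def uinta_style_filter(image, entropy_gradient, step=0.1, eps=1e-3, max_iter=50):
    """High-level sketch of the gradient-descent smoothing of Eq. (2).

    `entropy_gradient(img)` is a hypothetical callable returning, for every
    pixel, an estimate of the derivative in Eq. (1); it is a placeholder
    here, not an implementation of the Parzen-window estimator of [6].
    """
    current = image.astype(float)
    for _ in range(max_iter):
        grad = entropy_gradient(current)                  # d h(X|Y=y_m) / d x_m per pixel
        updated = current - step * grad                   # Eq. (2): x_{m+1} = x_m - step * grad
        if np.linalg.norm(updated - current) < eps:       # stop when ||I_{m+1} - I_m||_2 is small
            return updated
        current = updated
    return current
```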

Fig. 1. Proposed segmentation framework.

C. Constructing Feature Vectors: Mean Weighted Histogram

The final step in the feature extraction process is the calculation of the mean weighted histogram. As color and texture in a color textured image play complementary roles [7], this integration helps improve the final segmentation result. If there are N feature histograms with C channels each, the channel-wise weighted mean histogram H_j can be calculated as

$H_j = \sum_{i=1}^{N} w_i h_{ij}$  (3)

where w_i is the weight assigned to each histogram. The mean histogram is composed of the channel-wise weighted mean histograms, H = {H_j}, j = 1..C. As all the proposed color texture features are extracted using a local window, they have inherent similarity, so we can reasonably assign equal weights to each feature, i.e., w_i = 1/N.
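A small sketch of Eq. (3) is given below, combining the per-feature histograms channel by channel; the (N, C, B) array layout is an assumption made for the example, not something the paper prescribes.

```python
import numpy as np

def mean_weighted_histogram(histograms, weights=None):
    """Channel-wise weighted mean of N feature histograms, as in Eq. (3).

    `histograms` has shape (N, C, B): N feature histograms, C channels,
    B bins each. With no weights given, equal weights w_i = 1/N are used,
    matching the choice made in the text.
    """
    histograms = np.asarray(histograms, dtype=float)
    n = histograms.shape[0]
    if weights is None:
        weights = np.full(n, 1.0 / n)                 # w_i = 1/N
    # H_j = sum_i w_i * h_ij, computed for every channel j at once
    return np.tensordot(weights, histograms, axes=(0, 0))
```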

III. SEGMENTATION

Our segmentation algorithm is composed of two stages. In the first stage, we cluster the image using a nonparametric Bayesian clustering method, which generates an over-segmented image. In the second stage, the over-segmented image is merged using a region-homogeneity-based merging algorithm. We describe the steps in the following sections.

A. Clustering using Dirichlet Process Mixture Model

We select a nonparametric Bayesian approach based on Dirichlet process mixture models (DPMM) [8], which provides a framework for Bayesian clustering with an unknown number of groups. A brief description of the DPMM and Gibbs sampling for clustering is provided here. Let Y = (y_1, ..., y_n) be p-dimensional observations arising from a mixture of distributions f(.|theta_i). The model parameters specific to individual i, theta_i, are assumed to be independent draws from some distribution G, which in turn follows a Dirichlet process (DP) with base distribution G_0 and concentration parameter alpha. The Bayesian hierarchical model with a DP prior can then be written as

$y_i \sim f(\cdot \mid \theta_i), \quad \theta_i \sim G, \quad G \sim \mathrm{DP}(\alpha G_0).$  (4)

From the definition above, a DP is considered as a distribution over all possible distributions. Moreover, the underlying random probability distribution G is discrete with probability one, so that the support of G consists of a countably infinite set of atoms drawn independently from G_0. The representation via the Pólya urn scheme, described by Blackwell and MacQueen [8], shows the cluster formation and sample allocation. In (4), when G is integrated out over its prior distribution, the conditional distribution of theta_i following the Pólya urn scheme may be represented as

$\theta_i \mid \theta_{-i} \sim \frac{\alpha}{\alpha+n-1} G_0 + \frac{1}{\alpha+n-1} \sum_{j=1, j \neq i}^{n} \delta_{\theta_j}(\theta_i)$  (5)

where $\theta_{-i}$ denotes the parameter set $\{\theta_1, ..., \theta_{i-1}, \theta_{i+1}, ..., \theta_n\}$ with $\theta_i$ removed and $\delta_{\theta}$ denotes a Dirac measure concentrated at $\theta$.
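For intuition, the conditional draw in (5) can be simulated as sketched below. `sample_base` is a hypothetical stand-in for drawing a fresh parameter from G_0 and is not part of the paper; the clustering behaviour arises because existing parameter values are reused with probability proportional to how often they already occur.

```python
import numpy as np

def polya_urn_draw(theta_others, alpha, sample_base):
    """One conditional draw for theta_i following the Polya urn scheme of Eq. (5).

    `theta_others` holds the parameters of the other n-1 observations and
    `sample_base()` is a hypothetical function drawing a fresh parameter
    from the base distribution G_0.
    """
    n_others = len(theta_others)
    # With probability alpha / (alpha + n - 1), start a new value drawn from G_0 ...
    if np.random.uniform() < alpha / (alpha + n_others):
        return sample_base()
    # ... otherwise copy one of the existing parameter values uniformly at random.
    return theta_others[np.random.randint(n_others)]
```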

Fig. 2. Color texture feature images obtained from neighborhood statistics. (a) is the original image; (b), (c), (d) show the L, a, b feature images respectively.

As can be seen in equation (5), the Dirichlet process exhibits a clustering property as a result of the discreteness of the random measure G. Posterior expectations for the DP mixture model (4) can be estimated by employing a Gibbs sampler. The above DPMM can be adapted for histogram clustering following [9].

B. Region merging

The final step of our segmentation algorithm is region merging. To enforce that the resulting segmentation respects spatial continuity and consists only of connected segments, we impose the constraint that two regions R_I and R_J can be merged only if they are spatially adjacent in the 2D image and either of the regions is smaller than a pre-specified size threshold. The final segmentation is obtained after the completion of the merging process.
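A simplified sketch of such a merging pass is given below: any region smaller than the size threshold is folded into one of its 4-connected spatial neighbours. Which neighbour is chosen when several are adjacent is a detail the paper does not specify; here the first candidate found is used.

```python
import numpy as np

def adjacent_labels(labels, mask):
    """Labels of pixels 4-adjacent to the region given by `mask` (no wrap-around)."""
    neigh = set()
    neigh.update(np.unique(labels[:-1, :][mask[1:, :] & ~mask[:-1, :]]).tolist())   # above
    neigh.update(np.unique(labels[1:, :][mask[:-1, :] & ~mask[1:, :]]).tolist())    # below
    neigh.update(np.unique(labels[:, :-1][mask[:, 1:] & ~mask[:, :-1]]).tolist())   # left
    neigh.update(np.unique(labels[:, 1:][mask[:, :-1] & ~mask[:, 1:]]).tolist())    # right
    return neigh

def merge_small_regions(labels, min_size):
    """Fold every region smaller than `min_size` into a spatially adjacent region,
    repeating until no such region remains."""
    labels = labels.copy()
    changed = True
    while changed:
        changed = False
        ids, counts = np.unique(labels, return_counts=True)
        for rid, cnt in zip(ids, counts):
            if cnt >= min_size:
                continue
            mask = labels == rid
            neigh = adjacent_labels(labels, mask)
            neigh.discard(rid)
            if neigh:
                labels[mask] = neigh.pop()        # merge the small region into a neighbour
                changed = True
    return labels
```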

IV. EXPERIMENTAL ANALYSIS

In order to illustrate the validity and performance of the proposed scheme, we compare the results of our approach with the image segmentation results achieved using the well-known JSEG method described in [10] and also with those described in [11]. The JSEG results were obtained by applying the images to the programs made available by the JSEG authors online (http://vision.ece.ucsb.edu/segmentation/jseg/software/), which we think are the most relevant to compare with those of our algorithm. JSEG involves three parameters, namely the color quantization threshold, the scale, and the merge threshold, which are to be set by the user. In this study we set the values to 255, 1.0, and 0.4, respectively, as suggested by the authors. In [11], an image is over-segmented using low-level features; the segments are then merged using texture features in such a way that the overall coding length of the feature vectors is minimized. The implementation of the algorithm described in [11] is publicly available at (http://www.eecs.berkeley.edu/~yang/software/lossy_segmentation). The segmentation results shown in column 3 of Figure 4 were obtained with this algorithm using merging parameter CTM = 0.2. For the proposed algorithm, in histogram clustering, we used a Dirichlet distribution as the base measure G_0, with parameter vector (1/B, ..., 1/B) and scale 2T, where T is the total number of data points in the local window used to build the histograms and B is the total number of bins in the histogram. Sampling was performed in parallel for the 3 individual channels. We put equal weights in constructing the final color texture features. The minimum region size threshold was set to values such that the final segmented regions become roughly compatible with the original image regions, for better qualitative comparison. We have applied our technique to the Berkeley [12] and VisTex [13] natural image databases, which include images characterized by nonuniform textures, fuzzy borders, and low image contrast. We present the segmentation steps of the proposed algorithm for natural scenes in Figure 3. Two natural sample images collected from the VisTex [13] database are shown in Figure 3a and Figure 3e. Initial segmentation maps are shown in Figure 3b and Figure 3f respectively. Final segmentation maps are shown in Figure 3c and Figure 3g respectively. The final segmentation results are shown in Figure 3d and Figure 3h respectively.

Fig. 3. Segmentation of natural scenes in different stages. (a),(e) are two sample natural scenes. (b),(f) are the initial segmentation maps. (c),(g) are the final segmentation maps. (d),(h): segment boundaries are superimposed on the original images respectively.

For quantitative measurements we calculated the Probabilistic Rand (PR) index and the Variation of Information (VoI) metric for 30 randomly selected natural images from the Berkeley [12] database, including the images in Fig. 4. The PR index measures the agreement between the segmented result and the manually generated ground truths and takes values in the range [0, 1); a higher PR value indicates a better match between the segmented result and the ground-truth data. The VoI metric, on the other hand, defines the distance between two segmentations as the average conditional entropy of one segmentation given the other, and thus roughly measures the amount of randomness in one segmentation which cannot be explained by the other; VoI ranges over [0, inf), and lower is better. Table I reports the PR index and VoI metric obtained from the segmented results of the proposed method and the performance reported in [11] for similar images.

TABLE I
PERFORMANCE EVALUATION FOR THE PROPOSED ALGORITHM AND THE ALGORITHM PROPOSED IN [11] IN TERMS OF PR INDEX AND VOI

Method           | PR index | VoI
Method in [11]   | 0.7617   | 2.0236
Proposed         | 0.7843   | 1.7941

To demonstrate the superiority of the proposed method, a set of sample segmentation results is presented in Figure 4. These images are composed of regions with fuzzy borders, inhomogeneous texture characteristics, and low color contrast. Yet, the experimental results in Figure 4 and the figures in Table I indicate that the proposed algorithm produces more consistent segmentation results, with more perceptually uniform regions, than the algorithm described in [11]. The results also indicate that the proposed algorithm handles local inhomogeneities in texture and color better than the JSEG algorithm because of the use of mean histogram features. From Fig. 4, it is noticeable that JSEG and the algorithm in [11] perform well in identifying image regions defined by similar color texture properties, but they fail to accurately determine the object borders between regions characterized by low color contrast. The method of [11] even failed to identify the regions in Fig. 4e3. In almost all cases the proposed algorithm achieved far superior segmentation results.
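For reference, the VoI metric between two label maps can be computed as sketched below. This is a generic implementation for intuition, not the code used to produce Table I, and it assumes both segmentations are given as maps of non-negative integer labels.

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """Variation of Information between two label maps (in nats):
    VoI = H(A|B) + H(B|A), computed from the joint label histogram.
    Lower values mean the two segmentations explain each other better.
    Assumes non-negative integer labels in both maps.
    """
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # Joint distribution over (label in A, label in B)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1.0)
    joint /= n
    pa = joint.sum(axis=1)                       # marginal of segmentation A
    pb = joint.sum(axis=0)                       # marginal of segmentation B
    nz = joint > 0
    h_a = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
    h_b = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz]))
    return h_a + h_b - 2.0 * mi                  # equals H(A|B) + H(B|A)
```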


Fig. 4. Segmentation results for some complex natural scenes. The first column shows the sample natural scenes; the second column shows segmentation results by JSEG; the third column shows segmentation results generated by the algorithm proposed in [11]; the fourth column shows segmentation results by the proposed algorithm.

V. CONCLUSIONS

In this paper, we presented a new framework for natural image segmentation that is based on novel color texture features. The proposed method unifies color and texture features in a true sense to solve the color texture segmentation problem, rather than simply extending gray-level texture analysis to color images or analyzing only the spatial interaction of colors in a neighborhood. The method is fully unsupervised, and no a priori knowledge about the number and types of textures or the number of regions is required. The performance of the developed color texture segmentation algorithm has been evaluated quantitatively and qualitatively, and the superiority of the proposed algorithm has been confirmed through experiments using a large number of images with various degrees of color and texture composition complexity.

REFERENCES
[1] J. Chen, T. N. Pappas, A. Mojsilovic and B. E. Rogowitz, Adaptive perceptual color-texture image segmentation, IEEE Transactions on Image Processing, vol. 14, pp. 1524-1536, Oct. 2005.
[2] D. E. Ilea and P. F. Whelan, CTex - An adaptive unsupervised segmentation algorithm based on color-texture coherence, IEEE Transactions on Image Processing, vol. 17, pp. 1926-1939, Oct. 2008.
[3] K. M. Chen and S. Y. Chen, Color texture segmentation using feature distributions, Pattern Recognition Letters, vol. 23(7), pp. 755-771, May 2002.
[4] A. K. Jain and G. Healey, A multiscale representation including opponent color features for texture recognition, IEEE Transactions on Image Processing, vol. 7, pp. 124-128, Jan. 1998.
[5] M. Pietikäinen, T. Mäenpää and J. Viertola, Color texture classification with color histograms and local binary pattern, Proc. 2nd International

Workshop on Texture Analysis and Synthesis, Copenhagen, Denmark, June 1, 2002, pp. 109-112.
[6] S. P. Awate and R. T. Whitaker, Unsupervised, information-theoretic, adaptive image filtering with applications to image restoration, IEEE Trans. Pattern Anal. Machine Intell., vol. 28, pp. 364-376, Mar. 2006.
[7] T. Mäenpää, M. Pietikäinen and J. Viertola, Separating color and pattern information for color texture discrimination, in Proc. IEEE 16th International Conference on Pattern Recognition, 2002, vol. 1, p. 668.
[8] D. Blackwell and J. B. MacQueen, Ferguson distributions via Polya urn schemes, The Annals of Statistics, vol. 1(2), pp. 353-355, 1973.
[9] P. Orbanz and J. M. Buhmann, Smooth image segmentation by nonparametric Bayesian inference, in Proc. ECCV, 2006, vol. 1, p. 444.
[10] Y. Deng and B. S. Manjunath, Unsupervised segmentation of color texture regions in images and video, IEEE Trans. Pattern Anal. Machine Intell., vol. 23(8), pp. 800-810, 2001.
[11] A. Yang, J. Wright, Y. Ma and S. Sastry, Unsupervised segmentation of natural images via lossy data compression, Computer Vision and Image Understanding (CVIU), vol. 110, no. 2, Mar. 2008, pp. 212-225.
[12] D. Martin, C. Fowlkes, D. Tal and J. Malik, A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics, Proc. Eighth International Conference on Computer Vision (ICCV), Vancouver, Canada, July 9-12, 2001, vol. 2, pp. 416-423.
[13] Vision Texture (VisTex) database. [Online]. Available: http://www-white.media.mit.edu/vismod/, 1995.
