
Digital Image Processing, 2nd ed. www.imageprocessingbook.com

Image Segmentation

© 2002 R. C. Gonzalez & R. E. Woods


Segmentation subdivides an image into its constituent regions or objects.

The level of detail to which the subdivision is carried depends on the problem being solved. That is, segmentation should stop when the objects or regions of interest in an application have been detected.


Segmentation algorithms are generally based on one of two basic properties of intensity values: discontinuity and similarity.

In the first category, the approach is to partition an image based on abrupt changes in intensity, such as edges.

The principal approaches in the second category are based on partitioning an image into regions that are similar according to a set of predefined criteria. Thresholding, region growing, and region splitting and merging are examples of methods in this category.


Figure 10.1(a) shows an image of a region of constant intensity superimposed on a darker background, also of constant intensity. These two regions comprise the overall image region.

Figure 10.1(b) shows the result of computing the boundary of the inner region based on intensity discontinuities. Points on the inside and outside of the boundary are black (zero) because there are no discontinuities in intensity in those regions.

To segment the image, we assign one level (say, white) to the pixels on, or interior to, the boundary and another level (say, black) to all points exterior to the boundary.


The predicate in condition (d) is: if a pixel is on, or inside, the boundary, label it white; otherwise, label it black. We see that this predicate is TRUE for the points labeled black and white in Fig. 10.1(c). Similarly, the two segmented regions (object and background) satisfy condition (e).

Figure 10.1(f) shows the result of dividing the original image into square subregions. Each subregion was then labeled white if the standard deviation of its pixels was positive (i.e., if the predicate was TRUE) and black (zero) otherwise. The result has a "blocky" appearance around the edge of the region because groups of squares were labeled with the same intensity.

Point, Line, and Edge Detection

• This section covers segmentation methods that are based on detecting sharp, local changes in intensity.
• The three types of image features in which we are interested are isolated points, lines, and edges.
• Edge pixels are pixels at which the intensity of an image function changes abruptly, and edges (or edge segments) are sets of connected edge pixels.
• Edge detectors are local image processing methods designed to detect edge pixels.
• A line may be viewed as an edge segment in which the intensity of the background on either side of the line is either much higher or much lower than the intensity of the line pixels.


Abrupt, local changes in intensity can be detected using derivatives; first- and second-order derivatives are particularly well suited for this purpose.

(1) First-order derivatives generally produce thicker edges in an image.
(2) Second-order derivatives have a stronger response to fine detail, such as thin lines, isolated points, and noise.
(3) Second-order derivatives produce a double-edge response at ramp and step transitions in intensity.
(4) The sign of the second derivative can be used to determine whether a transition into an edge is from light to dark or dark to light.

Detection of Discontinuities

• The approach of choice for computing first and second derivatives at every pixel location in an image is to use spatial filters. For a 3×3 filter mask, the procedure is to compute the sum of products of the mask coefficients with the intensity values in the region encompassed by the mask. That is, the response of the mask at the center point of the region is

R = w_1 z_1 + w_2 z_2 + \dots + w_9 z_9 = \sum_{i=1}^{9} w_i z_i

where z_k is the intensity of the pixel whose spatial location corresponds to the location of the kth coefficient in the mask.
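As an illustration, here is a minimal NumPy/SciPy sketch (not part of the original slides) of this sum-of-products response computed at every pixel; the coefficient values are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import correlate

# Arbitrary placeholder coefficients w1..w9, laid out row-major as a 3x3 mask.
mask = np.arange(1, 10, dtype=float).reshape(3, 3)

image = np.random.randint(0, 256, size=(5, 5)).astype(float)

# R(x, y) = sum_i w_i * z_i over the 3x3 neighborhood centered at (x, y).
R = correlate(image, mask, mode='nearest')
print(R)
```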

Detection of Isolated Points

Point detection should be based on the second derivative; this implies using the Laplacian:

\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}
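A minimal sketch of Laplacian-based point detection, assuming the standard 8-neighbor Laplacian mask and a user-chosen threshold T (the function name is ours):

```python
import numpy as np
from scipy.ndimage import correlate

def detect_points(image, T):
    """Mark pixels whose Laplacian response magnitude is at least T."""
    lap_mask = np.array([[1,  1, 1],
                         [1, -8, 1],
                         [1,  1, 1]], dtype=float)
    R = correlate(image.astype(float), lap_mask, mode='nearest')
    return (np.abs(R) >= T).astype(np.uint8)
```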


Detection of Discontinuities
Line Detection

• Only slightly more complex than point detection is finding one-pixel-wide lines in an image.
• For digital images, the only three-point straight lines are horizontal, vertical, or diagonal (+45° or -45°).
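The four classic 3×3 line-detection masks can be applied in the same sum-of-products fashion; a small sketch (function name is ours):

```python
import numpy as np
from scipy.ndimage import correlate

# 3x3 masks responding to one-pixel-wide lines in four directions.
LINE_MASKS = {
    'horizontal': np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),
    '+45':        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),
    'vertical':   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),
    '-45':        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),
}

def line_response(image, direction):
    """Large |R| marks pixels lying on a line in the given direction."""
    return correlate(image.astype(float), LINE_MASKS[direction], mode='nearest')
```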


Detection of Discontinuities
Edge Detection

Edge detection is the approach used most frequently for segmenting images based on abrupt (local) changes in intensity.

Edge models are classified according to their intensity profiles. A step edge involves a transition between two intensity levels occurring ideally over the distance of one pixel.

In practice, digital images have edges that are blurred and noisy, with the degree of blurring determined principally by limitations in the focusing mechanism and the noise level determined principally by the electronic components of the imaging system.


In such situations, edges are more closely modeled as having an intensity ramp profile. The slope of the ramp is inversely proportional to the degree of blurring in the edge.

A third model of an edge is the so-called roof edge. Roof edges are models of lines through a region, with the base (width) of a roof edge being determined by the thickness and sharpness of the line.


The image segments in the first column in the figure show close-ups of four ramp edges that transition from a black region on the left to a white region on the right (keep in mind that the entire transition from black to white is a single edge).

The image segment at the top left is free of noise. The other three images in the first column are corrupted by additive Gaussian noise with zero mean and standard deviations of 0.1, 1.0, and 10.0 intensity levels, respectively.

The graph below each image is a horizontal intensity profile passing through the center of the image. All images have 8 bits of intensity resolution, with 0 and 255 representing black and white, respectively.


Three fundamental steps are performed in edge detection:

1. Image smoothing for noise reduction. The need for this step is amply illustrated by the results in the second and third columns of the figure.

2. Detection of edge points. As mentioned earlier, this is a local operation that extracts from an image all points that are potential candidates to become edge points.

3. Edge localization. The objective of this step is to select from the candidate edge points only the points that are true members of the set of points comprising an edge.

Detection of Discontinuities
Gradient Operators

• First-order derivatives:
– The gradient of an image f(x,y) at location (x,y) is defined as the vector

\nabla f = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}

– The magnitude of this vector:

\nabla f = \mathrm{mag}(\nabla f) = \left( G_x^2 + G_y^2 \right)^{1/2}

– The direction of this vector:

\alpha(x, y) = \tan^{-1}\left( G_y / G_x \right)


Detection of Discontinuities
Gradient Operators

Roberts cross-gradient operators

Prewitt operators

Sobel operators
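A short sketch of how these operator masks are used in practice, here with the Sobel pair (axis conventions for Gx and Gy vary between texts; this is one common choice, and the function name is ours):

```python
import numpy as np
from scipy.ndimage import correlate

SOBEL_X = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_gradient(image):
    """Gradient magnitude and direction from the Sobel masks."""
    f = image.astype(float)
    gx = correlate(f, SOBEL_X, mode='nearest')
    gy = correlate(f, SOBEL_Y, mode='nearest')
    magnitude = np.hypot(gx, gy)    # (Gx^2 + Gy^2)^(1/2)
    direction = np.arctan2(gy, gx)  # alpha(x, y)
    return magnitude, direction
```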


Detection of Discontinuities
Gradient Operators

Prewitt masks for detecting diagonal edges

Sobel masks for detecting diagonal edges


Detection of Discontinuities
Gradient Operators: Example

A common, computationally cheaper approximation of the gradient magnitude:

\nabla f \approx |G_x| + |G_y|


Detection of Discontinuities
Gradient Operators

• Second-order derivatives (the Laplacian):
– The Laplacian of a 2D function f(x,y) is defined as

\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}

– Two digital mask forms are used in practice (one including the diagonal neighbors, one not).


Detection of Discontinuities
Gradient Operators

• Consider the Gaussian function

h(r) = e^{-\frac{r^2}{2\sigma^2}}

where r^2 = x^2 + y^2 and \sigma is the standard deviation.

• The Laplacian of h is

\nabla^2 h(r) = -\left[ \frac{r^2 - \sigma^2}{\sigma^4} \right] e^{-\frac{r^2}{2\sigma^2}}

This is known as the Laplacian of a Gaussian (LoG).

• The Laplacian of a Gaussian is sometimes called the Mexican hat function. It can also be computed by smoothing the image with the Gaussian smoothing mask, followed by application of the Laplacian mask.

Detection of Discontinuities
Gradient Operators: Example

[Figures: Sobel gradient; Gaussian smoothing function; Laplacian mask]


The Marr-Hildreth edge-detection algorithm may be summarized as follows:
1. Filter the input image with an n×n Gaussian lowpass filter obtained by sampling Eq. (10.2-21).
2. Compute the Laplacian of the image resulting from Step 1 using, for example, the 3×3 mask in Fig. 10.4(a). [Steps 1 and 2 implement Eq. (10.2-25).]
3. Find the zero crossings of the image from Step 2.
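A minimal SciPy sketch of these three steps, assuming a simple sign-change test for the zero crossings (a fuller implementation would also threshold the zero crossings by the local slope of the LoG):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def marr_hildreth(image, sigma=2.0):
    """Gaussian smoothing (step 1), Laplacian (step 2), zero crossings (step 3)."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    log = laplace(smoothed)
    sign = np.sign(log)
    edges = np.zeros(log.shape, dtype=bool)
    # A zero crossing occurs where the LoG changes sign between
    # horizontally or vertically adjacent pixels.
    edges[:, :-1] |= (sign[:, :-1] * sign[:, 1:]) < 0
    edges[:-1, :] |= (sign[:-1, :] * sign[1:, :]) < 0
    return edges
```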


Edge Linking and Boundary Detection

Local Processing

• Two properties of edge points are useful for edge linking:
– the strength (or magnitude) of the detected edge points
– their directions (determined from gradient directions)
• This is usually done in local neighborhoods.
• Adjacent edge points with similar magnitude and direction are linked.
• For example, an edge pixel with coordinates (x_0, y_0) in a predefined neighborhood of (x, y) is similar to the pixel at (x, y) if

|\nabla f(x, y) - \nabla f(x_0, y_0)| \le E, \quad E \text{ a nonnegative magnitude threshold}

|\alpha(x, y) - \alpha(x_0, y_0)| \le A, \quad A \text{ a nonnegative angle threshold}
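A brute-force sketch of this local linking test over 8-neighborhoods, using the magnitude and direction arrays from a gradient operator (the names and the boolean output format are our choices):

```python
import numpy as np

def link_edges(mag, ang, E, A):
    """Mark pixels that have at least one 8-neighbor with similar
    gradient magnitude (within E) and direction (within A radians)."""
    H, W = mag.shape
    linked = np.zeros((H, W), dtype=bool)
    for x in range(1, H - 1):
        for y in range(1, W - 1):
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (dx, dy) == (0, 0):
                        continue
                    if (abs(mag[x, y] - mag[x + dx, y + dy]) <= E and
                            abs(ang[x, y] - ang[x + dx, y + dy]) <= A):
                        linked[x, y] = True
    return linked
```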


Edge Linking and Boundary Detection

Local Processing: Example

In this example, we can find the license-plate candidate after the edge-linking process.


Edge Linking and Boundary Detection

Global Processing via the Hough Transform

• Hough transform: a way of finding the edge points in an image that lie along straight lines.
• Example: the xy-plane vs. the ab-plane (parameter space), using the line equation

y_i = a x_i + b


Edge Linking and Boundary Detection

Global Processing via the Hough Transform

• The Hough transform consists of finding all pairs of values of \rho and \theta that satisfy the equation

x \cos\theta + y \sin\theta = \rho

for lines passing through (x, y).
• These pairs are accumulated in what is basically a two-dimensional histogram.
• When plotted, the (\rho, \theta) pairs for a single point trace out a sinusoidal curve. The process is repeated for all appropriate (x, y) locations.
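A compact sketch of the accumulation step for a binary edge image (bin counts and ranges are arbitrary choices):

```python
import numpy as np

def hough_accumulate(edges, n_theta=180, n_rho=200):
    """Vote in (rho, theta) space for every edge pixel; peaks in the
    returned accumulator correspond to straight lines in the image."""
    H, W = edges.shape
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    rho_max = np.hypot(H, W)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # one rho per theta
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    return acc, rhos, thetas
```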


Edge Linking and Boundary Detection

Hough Transform Example

The intersections of the curves correspond to the collinear point groups {1, 3, 5}, {2, 3, 4}, and {1, 4}.

Thresholding

The intensity histogram in Fig. 10.35(a) corresponds to an image composed of light objects on a dark background, in such a way that the intensity values are grouped into two dominant modes.

One way to extract the objects from the background is to select a threshold T that separates these modes.

Then, any point in the image at which f(x, y) > T is called an object point; otherwise, the point is called a background point. In other words, the segmented image is given by

g(x, y) = \begin{cases} 1 & \text{if } f(x, y) > T \\ 0 & \text{if } f(x, y) \le T \end{cases}
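In NumPy this is a one-line operation; a tiny sketch:

```python
import numpy as np

def global_threshold(f, T):
    """g(x, y) = 1 where f(x, y) > T, 0 otherwise."""
    return (f > T).astype(np.uint8)
```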

Thresholding

When T is a constant applicable over an entire image, the process given in this equation is referred to as global thresholding.

When the value of T changes over an image, the process is termed variable thresholding.

[Figures: single threshold vs. multiple thresholds]


The term local or regional thresholding is sometimes used to denote variable thresholding in which the value of T at any point (x, y) in an image depends on properties of a neighborhood of (x, y) (for example, the average intensity of the pixels in the neighborhood).

If T depends on the spatial coordinates themselves, then variable thresholding is often referred to as dynamic or adaptive thresholding.


Multiple thresholding classifies a point (x, y) as belonging to the background if f(x, y) ≤ T_1, to one object class if T_1 < f(x, y) ≤ T_2, and to the other object class if f(x, y) > T_2. That is, the segmented image is given by

g(x, y) = \begin{cases} a & \text{if } f(x, y) > T_2 \\ b & \text{if } T_1 < f(x, y) \le T_2 \\ c & \text{if } f(x, y) \le T_1 \end{cases}

where a, b, and c are any three distinct intensity values.


The success of intensity thresholding is directly related to the width and depth of the valley(s) separating the histogram modes. The key factors affecting the properties of the valley(s) are:
(1) the separation between peaks (the further apart the peaks are, the better the chances of separating the modes);
(2) the noise content in the image (the modes broaden as noise increases);
(3) the relative sizes of objects and background;
(4) the uniformity of the illumination source; and
(5) the uniformity of the reflectance properties of the image.


The role of noise in image thresholding


The role of illumination and reflectance


There are three basic approaches to solving the problem.

One is to correct the shading pattern directly. For example, nonuniform (but fixed) illumination can be corrected by multiplying the image by the inverse of the pattern, which can be obtained by imaging a flat surface of constant intensity.

The second approach is to attempt to correct the global shading pattern via processing.

The third approach is to "work around" nonuniformities using variable thresholding.


Basic Global Thresholding Algorithm

The basic global threshold, T, is calculated as follows:
1. Select an initial estimate for T (typically the average gray level in the image).
2. Segment the image using T to produce two groups of pixels: G1, consisting of pixels with gray levels > T, and G2, consisting of pixels with gray levels ≤ T.
3. Compute the average gray levels of the pixels in G1 and G2 to give μ1 and μ2, respectively.
4. Compute a new threshold value:

T = (μ1 + μ2) / 2

5. Repeat steps 2 through 4 until the difference in T in successive iterations is smaller than a predefined limit ΔT.

This algorithm works very well for finding thresholds when the histogram is suitable.
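A direct transcription of steps 1 through 5 into NumPy (the function name and the empty-group guard are our additions):

```python
import numpy as np

def basic_global_threshold(f, dT=0.5):
    """Iterative basic global threshold following the algorithm above."""
    T = float(f.mean())                    # step 1: initial estimate
    while True:
        g1 = f[f > T]                      # step 2: pixels above T
        g2 = f[f <= T]                     # step 2: pixels at or below T
        mu1 = g1.mean() if g1.size else T  # step 3: group means
        mu2 = g2.mean() if g2.size else T
        T_new = 0.5 * (mu1 + mu2)          # step 4: new threshold
        if abs(T_new - T) < dT:            # step 5: convergence test
            return T_new
        T = T_new
```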



Let {0, 1, 2, …, L-1} denote the L distinct gray-level intensities, and let n_i denote the number of pixels with intensity i. The total number of pixels in an M×N image is

MN = n_0 + n_1 + n_2 + \dots + n_{L-1}

The normalized histogram has components p_i = n_i / MN. We select a threshold T(k) = k, 0 < k < L-1, and use it to classify pixels into class C_1, with intensities in the range [0, k], and class C_2, with intensities in [k+1, L-1]. The probabilities of the two classes are

P_1(k) = \sum_{i=0}^{k} p_i, \qquad P_2(k) = \sum_{i=k+1}^{L-1} p_i = 1 - P_1(k)


Thresholding
Basic Adaptive Thresholding

How do we solve the problem of nonuniform illumination? Answer: subdivision. The image is divided into subimages, and a threshold is determined for each subimage individually.

Thresholding
Optimal Global and Adaptive Thresholding

• This method models pixel values with probability density functions (one for objects, one for background).
• The goal of this method is to minimize the probability of misclassifying pixels as either object or background.
• There are two kinds of error:
– mislabeling an object pixel as background, and
– mislabeling a background pixel as object.


Region-Based Segmentation

• Edge detection and thresholding sometimes do not give good segmentation results.
• Region-based segmentation is based on the connectivity of similar pixels in a region.
– Each region must be uniform.
– Connectivity of the pixels within the region is very important.
• There are two main approaches to region-based segmentation: region growing and region splitting (and merging).


Region-Based Segmentation
Basic Formulation

• Let R represent the entire image region.
• Segmentation is a process that partitions R into subregions R_1, R_2, …, R_n such that:

(a) \bigcup_{i=1}^{n} R_i = R
(b) R_i is a connected region, i = 1, 2, …, n
(c) R_i \cap R_j = \varnothing for all i and j, i ≠ j
(d) P(R_i) = TRUE for i = 1, 2, …, n
(e) P(R_i \cup R_j) = FALSE for any adjacent regions R_i and R_j

where P(R_k) is a logical predicate defined over the points in set R_k. For example: P(R_k) = TRUE if all pixels in R_k have the same gray level.

Region-Based Segmentation
Region Growing

• Fig. 10.41 shows the histogram of Fig. 10.40(a). It is difficult to segment the defects by thresholding methods; applying region-growing methods works better in this case.

[Figures: 10.40(a) and 10.41]
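Region growing starts from seed pixels and appends neighbors that satisfy a similarity predicate; a minimal sketch with a single seed and an intensity-difference predicate (both are simplifying assumptions):

```python
import numpy as np
from collections import deque

def region_grow(f, seed, max_diff):
    """Grow a region from `seed`, adding 4-connected neighbors whose
    intensity differs from the seed intensity by at most `max_diff`."""
    H, W = f.shape
    region = np.zeros((H, W), dtype=bool)
    region[seed] = True
    frontier = deque([seed])
    seed_val = float(f[seed])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < H and 0 <= ny < W and not region[nx, ny]:
                if abs(float(f[nx, ny]) - seed_val) <= max_diff:
                    region[nx, ny] = True
                    frontier.append((nx, ny))
    return region
```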


Region-Based Segmentation
Region Splitting and Merging

• Region splitting is the opposite of region growing.
– First there is a large region (possibly the entire image).
– Then a predicate (measurement) is used to determine if the region is uniform.
– If not, the method requires that the region be split into two regions.
– Each of these two regions is then independently tested by the predicate (measurement).
– This procedure continues until all resulting regions are uniform.


Region-Based Segmentation
Region Splitting

• The main problem with region splitting is determining where to split a region.
• One method to divide a region is to use a quadtree structure.
• Quadtree: a tree in which each node has exactly four descendants.


Region-Based Segmentation
Region Splitting and Merging

• The split-and-merge procedure (a minimal sketch of the split phase follows below):
– Split into four disjoint quadrants any region R_i for which P(R_i) = FALSE.
– Merge any adjacent regions R_j and R_k for which P(R_j \cup R_k) = TRUE. (The quadtree structure may not be preserved.)
– Stop when no further merging or splitting is possible.
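A sketch of the quadtree split phase under an example uniformity predicate; the merge phase is omitted for brevity, and all names are ours:

```python
import numpy as np

def quadtree_split(f, predicate, min_size=2):
    """Label each maximal quadrant for which `predicate` holds."""
    labels = np.zeros(f.shape, dtype=int)
    next_label = [1]

    def split(x0, y0, h, w):
        block = f[x0:x0 + h, y0:y0 + w]
        if predicate(block) or h <= min_size or w <= min_size:
            labels[x0:x0 + h, y0:y0 + w] = next_label[0]
            next_label[0] += 1
        else:
            h2, w2 = h // 2, w // 2
            split(x0, y0, h2, w2)                    # top-left quadrant
            split(x0, y0 + w2, h2, w - w2)           # top-right quadrant
            split(x0 + h2, y0, h - h2, w2)           # bottom-left quadrant
            split(x0 + h2, y0 + w2, h - h2, w - w2)  # bottom-right quadrant

    split(0, 0, *f.shape)
    return labels

# Example predicate: uniform if the intensity spread in the block is small.
is_uniform = lambda block: float(block.max()) - float(block.min()) <= 10.0
```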
