Boundary Detection
Dr. Ziya Telatar
Image Analysis
IMAGE ANALYSIS TECHNIQUES
Feature Extraction
- Spatial Features
- Transform Features
- Edges and Boundaries
- Shape Features
- Moments
- Texture
Segmentation
- Template Matching
- Thresholding
- Boundary Detection
- Clustering
- Quad-Trees
- Texture Matching
Classification
- Clustering
- Statistical
- Decision Trees
- Similarity Measures
- Minimum Spanning Trees
Principal approaches
Segmentation-based feature extraction algorithms are generally based on one of two basic properties of intensity values:
discontinuity: partition an image based on abrupt changes in intensity (such as edges)
similarity: partition an image into regions that are similar according to a set of predefined criteria.
Detection of Discontinuities
detect the three basic types of gray-level discontinuities: points, lines, and edges
Point Detection
a point has been detected at the location on which the mask is centered if
|R| ≥ T
where T is a nonnegative threshold and R is the sum of products of the mask coefficients with the gray levels contained in the region encompassed by the mask.
Point Detection
Note that this mask is the same as the mask of the Laplacian operation.
The only differences that are considered of interest are those large enough (as determined by T) to be considered isolated points.
|R| ≥ T
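The |R| ≥ T rule above can be sketched directly with the standard 3×3 Laplacian-style point-detection mask; the function name and toy image below are hypothetical, for illustration only.

```python
import numpy as np

def detect_points(image, T):
    """Flag isolated points: |R| >= T, where R is the response of the
    Laplacian-style point-detection mask (-1 everywhere, +8 at the center)."""
    mask = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]])
    h, w = image.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            R = np.sum(mask * image[y-1:y+2, x-1:x+2])
            out[y, x] = abs(R) >= T
    return out

# hypothetical toy image: uniform background with one isolated bright point
img = np.zeros((5, 5))
img[2, 2] = 100
print(np.argwhere(detect_points(img, T=400)))  # → [[2 2]]
```

With T = 400, only the center pixel (response 8 × 100 = 800) survives; its neighbors respond with |R| = 100 and are rejected.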
Example
Line Detection
Apply all four masks to the image.
Let R1, R2, R3, R4 denote the responses of the horizontal, +45°, vertical, and −45° masks, respectively.
If, at a certain point in the image,
|Ri| > |Rj| for all j ≠ i,
that point is said to be more likely associated with a line in the direction of mask i.
Line Detection
Alternatively, if we are interested in detecting
all lines in an image in the direction defined
by a given mask, we simply run the mask
through the image and threshold the absolute
value of the result.
The points that are left are the strongest
responses, which, for lines one pixel thick,
correspond closest to the direction defined by
the mask.
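The mask-response comparison can be sketched as follows, assuming the standard four 3×3 line-detection masks; the function name and toy window are illustrative.

```python
import numpy as np

# standard 3x3 line-detection masks (horizontal, +45 deg, vertical, -45 deg)
MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    "+45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    "-45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
}

def line_direction(window):
    """Return the mask name i whose response |Ri| is largest for one 3x3 window."""
    responses = {name: abs(np.sum(m * window)) for name, m in MASKS.items()}
    return max(responses, key=responses.get)

# toy window containing a one-pixel-thick vertical line
win = np.array([[0, 9, 0],
                [0, 9, 0],
                [0, 9, 0]], dtype=float)
print(line_direction(win))  # → vertical
```

Here the vertical mask responds with |R| = 54 while the other three respond with 0, so the point is associated with a vertical line.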
Example
Edge Detection
the most common approach for detecting meaningful discontinuities in gray level.
We discuss approaches for implementing:
first-order derivatives (gradient operator)
second-order derivatives (Laplacian operator)
Basic Formulation
an edge is a set of connected pixels that
lie on the boundary between two regions.
an edge is a local concept whereas a
region boundary, owing to the way it is
defined, is a more global idea.
Because of optics, sampling, and other image-acquisition imperfections, edges are blurred ramps rather than ideal steps, producing a thick edge.
The slope of the ramp is inversely proportional to the
degree of blurring in the edge.
We no longer have a thin (one pixel thick) path.
Instead, an edge point now is any point contained in the
ramp, and an edge would then be a set of such points that
are connected.
The thickness is determined by the length of the ramp.
The length is determined by the slope, which is in turn
determined by the degree of blurring.
Blurred edges tend to be thick and sharp edges tend
to be thin
Second derivatives
produce two values for every edge in an image (an undesirable feature)
an imaginary straight line joining the extreme positive and negative values of the second derivative would cross zero near the midpoint of the edge (the zero-crossing property)
quite useful for locating the centers of thick edges
Noisy Images
First column: images and gray-level profiles of a ramp edge corrupted by random Gaussian noise of mean 0 and σ = 0.0, 0.1, 1.0, and 10.0, respectively.
Second column: first-derivative images and gray-level profiles.
Third column: second-derivative images and gray-level profiles.
Keep in mind
Even fairly little noise can have a significant impact on the two key derivatives used for edge detection in images.
Image smoothing should be a serious consideration prior to the use of derivatives in applications where noise is likely to be present.
Edge point
To determine a point as an edge point:
the transition in gray level associated with the point has to be significantly stronger than the background at that point
use a threshold to determine whether a value is significant or not
the point's two-dimensional first-order derivative must be greater than a specified threshold
The gradient vector is
∇f = [Gx  Gy]^T = [∂f/∂x  ∂f/∂y]^T
First derivatives are implemented using the magnitude of the gradient.
Gradient Operator
∇f = mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)
commonly approximated as
∇f ≈ |Gx| + |Gy|
Gradient direction
Let α(x,y) represent the direction angle of the vector ∇f at (x,y):
α(x,y) = tan⁻¹(Gy/Gx)
The direction of an edge at (x,y) is perpendicular to the direction of the gradient vector at that point.
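A minimal sketch of gradient magnitude and direction, assuming Sobel masks for Gx and Gy and using the |Gx| + |Gy| approximation; the function name and toy image are hypothetical.

```python
import numpy as np

def sobel_gradient(image):
    """Compute Gx, Gy with Sobel masks; return the magnitude approximation
    |Gx| + |Gy| and the direction angle alpha = atan2(Gy, Gx) in degrees."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # d/dx
    sy = sx.T                                                         # d/dy
    h, w = image.shape
    mag = np.zeros((h, w))
    ang = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = image[y-1:y+2, x-1:x+2]
            gx = np.sum(sx * win)
            gy = np.sum(sy * win)
            mag[y, x] = abs(gx) + abs(gy)       # approx. of sqrt(gx^2 + gy^2)
            ang[y, x] = np.degrees(np.arctan2(gy, gx))
    return mag, ang

# toy image: vertical step edge (dark left half, bright right half)
img = np.zeros((5, 6))
img[:, 3:] = 10.0
mag, ang = sobel_gradient(img)
print(mag[2, 2], ang[2, 2])  # → 40.0 0.0
```

The gradient points horizontally (α = 0°) across the vertical edge, consistent with the edge direction being perpendicular to the gradient.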
Gradient Masks
Example
Laplacian
Laplacian operator (a linear operator):
∇²f = ∂²f/∂x² + ∂²f/∂y²
Discrete form:
∇²f = [f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)]
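The discrete formula above translates directly into code; the function name and toy image are hypothetical.

```python
import numpy as np

def laplacian(image):
    """Discrete Laplacian: f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)."""
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = (image[y+1, x] + image[y-1, x] +
                         image[y, x+1] + image[y, x-1] - 4 * image[y, x])
    return out

# toy image: a single bright pixel on a dark background
img = np.zeros((5, 5))
img[2, 2] = 1.0
print(laplacian(img)[2, 2])  # → -4.0
```

Note the sign pattern: the bright center gets −4 while each of its four neighbors gets +1, which is why isolated points and thin lines respond so strongly.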
Laplacian of Gaussian
Laplacian combined with smoothing as a precursor to finding edges via zero-crossings:
h(r) = −e^(−r² / 2σ²)
∇²h(r) = −[(r² − σ²) / σ⁴] e^(−r² / 2σ²)
The shape of ∇²h is the familiar "Mexican hat".
Linear Operation
The second derivative is a linear operation.
Thus, computing ∇²[G ∗ f] is the same as convolving the image with the Gaussian smoothing function first and then computing the Laplacian of the result (equivalently, convolving the image with ∇²G directly).
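Because convolution is linear and associative, smoothing then differentiating equals filtering once with the LoG kernel. A small numpy check of that equivalence; the 3×3 Gaussian approximation and the helper name are assumptions for illustration.

```python
import numpy as np

def conv2_full(a, b):
    """Plain 'full' 2D convolution: out[m, n] = sum a[i, j] * b[m-i, n-j]."""
    ah, aw = a.shape
    bh, bw = b.shape
    out = np.zeros((ah + bh - 1, aw + bw - 1))
    for i in range(ah):
        for j in range(aw):
            out[i:i+bh, j:j+bw] += a[i, j] * b
    return out

# discrete Laplacian mask and a common 3x3 Gaussian approximation (assumed here)
LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
G = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

f = np.random.default_rng(0).random((8, 8))   # random test image

smooth_then_laplace = conv2_full(LAP, conv2_full(G, f))   # Laplacian of (G * f)
log_then_filter = conv2_full(conv2_full(LAP, G), f)       # (Laplacian of G) * f
print(np.allclose(smooth_then_laplace, log_then_filter))  # → True
```

The two orderings agree to floating-point precision, which is exactly the linearity argument made above.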
Example
a). Original image
b). Sobel Gradient
c). Spatial Gaussian
smoothing function
d). Laplacian mask
e). LoG
f). Threshold LoG
g). Zero crossing
Drawbacks
Zero crossings create closed loops (the "spaghetti effect").
The computation is relatively sophisticated (expensive).
Line fitting with a unit normal (a, b), a² + b² = 1: the line is ax + by = c, and the distance from a point (x, y) to the line is |(a, b)·(x, y) − c| = |ax + by − c|.
In RANSAC, if p is the inlier fraction, a random sample of n points is outlier-free with probability pⁿ, so after k samples the probability of at least one outlier-free sample is 1 − (1 − pⁿ)ᵏ.
Some comparisons
Complexity of RANSAC: O(n³)
Complexity of Hough: O(n·d)
Error behavior: both can have problems; RANSAC is perhaps easier to understand.
Clutter: RANSAC is very robust; Hough falls apart at some point.
There are endless variations that improve some of Hough's problems.
Boundary Extraction
Boundaries are linked edges that characterize
the shape of an object
Connectivity
Boundaries can be found by tracing the connected
edges
On a rectangular grid, a pixel is said to be 4- or 8-connected when it has the same properties as one of its nearest 4 or 8 neighbors, respectively.
Neighbors of Pixels
4-Neighborhood of (x, y): (x+1, y), (x−1, y), (x, y+1), (x, y−1)
Boundary Extraction
Contour Following
Contour-following algorithms trace boundaries by
ordering successive edge points.
Boundary Extraction
Edge Linking:
edge detection algorithms are typically followed by linking procedures to assemble edge pixels into meaningful edges.
Basic approaches
Local Processing
Global Processing via the Hough Transform
Global Processing via Graph-Theoretic
Techniques
Boundary Extraction
Local Processing
analyze the characteristics of pixels in a small neighborhood (say, 3×3 or 5×5) about every edge pixel (x, y) in an image.
All points that are similar according to a set of predefined criteria are linked, forming an edge of pixels that share those criteria.
Boundary Extraction
Local Processing
Criteria
1. the strength of the response of the gradient operator used to produce the edge pixel:
an edge pixel with coordinates (x0, y0) in a predefined neighborhood of (x, y) is similar in magnitude to the pixel at (x, y) if
|∇f(x, y) − ∇f(x0, y0)| ≤ E
where E is a nonnegative magnitude threshold.
Boundary Extraction
Local Processing
Criteria
2. the direction of the gradient vector:
an edge pixel with coordinates (x0, y0) in a predefined neighborhood of (x, y) is similar in angle to the pixel at (x, y) if
|α(x, y) − α(x0, y0)| < A
where A is a nonnegative angle threshold.
Criteria
A point in the predefined neighborhood of (x, y) is linked to the pixel at (x, y) if both the magnitude and direction criteria are satisfied.
The process is repeated at every location in the image.
A record of linked points must be kept, e.g., simply by assigning a different gray level to each set of linked edge pixels.
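The local-processing procedure can be sketched as a flood fill over the magnitude and angle criteria; the function name, thresholds, and toy data below are hypothetical illustrations of the E and A criteria described above.

```python
import numpy as np
from collections import deque

def link_edges(mag, ang, mag_thresh, E, A):
    """Local edge linking sketch: pixels above mag_thresh are linked to
    8-neighbors whose gradient magnitude differs by <= E and whose gradient
    angle differs by <= A degrees; each linked set gets its own label."""
    h, w = mag.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for y in range(h):
        for x in range(w):
            if mag[y, x] < mag_thresh or labels[y, x]:
                continue
            next_label += 1                      # start a new linked set
            labels[y, x] = next_label
            q = deque([(y, x)])
            while q:
                cy, cx = q.popleft()
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                                and mag[ny, nx] >= mag_thresh
                                and abs(mag[ny, nx] - mag[cy, cx]) <= E
                                and abs(ang[ny, nx] - ang[cy, cx]) <= A):
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
    return labels

# toy data: two separate runs of strong edge pixels with equal angles
mag = np.zeros((3, 6)); mag[0, :2] = 50; mag[2, 3:] = 50
ang = np.zeros((3, 6))
print(link_edges(mag, ang, mag_thresh=10, E=5, A=10).max())  # → 2
```

Assigning a distinct integer label per set plays the role of the "different gray level" bookkeeping mentioned above.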
Example
Boundary Extraction
Hough Transformation (Line)
xy-plane
yi = a·xi + b
Parameter space: b = −a·xi + yi
All points (xi, yi) contained on the same line must have lines in parameter space that intersect at (a, b).
Accumulator cells
(amax, amin) and (bmax, bmin) are the expected ranges of slope and intercept values; all cells are initialized to zero.
If a choice of ap results in solution bq, then we let
A(p, q) = A(p, q) + 1
At the end of the procedure, a value Q in A(i, j) corresponds to Q points in the xy-plane lying on the line y = ai·x + bj.
ρθ-plane:
x cos θ + y sin θ = ρ
θ ∈ [−90°, +90°], measured with respect to the x-axis
ρ has a range of 2D, where D is the distance between corners in the image.
Construct an accumulator array indexed by (θ, ρ).
For each point, render the curve (θ, ρ) into this array, adding one at each cell.
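The accumulator construction can be sketched as follows; the function name, one-degree θ resolution, and toy point set are assumptions for illustration.

```python
import numpy as np

def hough_accumulator(points, shape):
    """Build the (rho, theta) accumulator: for each point, render the curve
    rho = x*cos(theta) + y*sin(theta), adding one vote per theta cell."""
    h, w = shape
    D = int(np.ceil(np.hypot(h, w)))         # distance between image corners
    thetas = np.deg2rad(np.arange(-90, 90))  # one cell per degree
    acc = np.zeros((2 * D + 1, len(thetas)), dtype=int)  # rho in [-D, D]
    for x, y in points:
        for j, t in enumerate(thetas):
            rho = int(round(x * np.cos(t) + y * np.sin(t)))
            acc[rho + D, j] += 1
    return acc, thetas, D

# five collinear points on the vertical line x = 2
pts = [(2, y) for y in range(5)]
acc, thetas, D = hough_accumulator(pts, (5, 5))
print(acc[2 + D, 90])  # cell (rho = 2, theta = 0°) collects all 5 votes → 5
```

Index 90 of `thetas` is θ = 0°; a vertical line has its normal along the x-axis, so all five points vote for the single cell (ρ = 2, θ = 0°).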
Difficulties
how big should the cells
be? (too big, and we
cannot distinguish
between quite different
lines; too small, and
noise causes lines to be
missed)
Continuity
Based on computing the distance between disconnected pixels identified during traversal of the set of pixels corresponding to a given accumulator cell.
A gap at any point is significant if the distance between that point and its closest neighbor exceeds a certain threshold.
Link criteria:
1) the pixels belong to one of the sets of pixels linked according to the highest counts
2) no gaps are longer than 5 pixels
Boundary Representation
Proper representation of object boundaries is
important for analysis and synthesis of shape
Shape analysis is often required for detection
and recognition of objects in a scene.
Shape synthesis is useful in computer-aided
design (CAD) of parts and assemblies, image
simulation applications such as video games,
cartoon movies, environmental modeling of
aircraft-landing testing and training, etc.
Chain Codes
The direction vectors between successive boundary pixels are encoded.
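A minimal sketch of Freeman chain coding of an ordered boundary, using the common 8-direction convention (0 = east, counted counter-clockwise, with image y growing downward); the function name and toy square are hypothetical.

```python
# 8-directional Freeman chain code: direction k encodes the step DIRS[k],
# given as (dx, dy), with 0 = east and directions counted counter-clockwise
# (y grows downward in image coordinates)
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(boundary):
    """Encode an ordered list of boundary pixels as Freeman directions."""
    code = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return code

# hypothetical 2x2 square traced clockwise (closed: first point repeated)
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))  # → [0, 6, 4, 2]
```

Because only direction changes are stored, the code is translation-invariant, which is one reason chain codes suit boundary matching.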
B-Spline Representation
Piecewise polynomial functions that can provide local approximations of contours
of shapes using a small number of parameters.
B-splines are used in shape synthesis and analysis, computer graphics, and recognition of parts from their boundaries.
Fourier Descriptors
Effect of Geometric Transformations
Boundary matching from FD
Autoregressive Models
For an arbitrary class of object boundaries, we have an ensemble of boundaries that can be represented by a stochastic model.
Example:
Boundary superimposed
Region Representation
The shape of an object may be directly
represented by the region it occupies
Run-length codes
Quad-trees
Projections
Moment Representation
The theory of moments provides an interesting and sometimes useful alternative to series expansions for representing the shape of objects.
Structure
In many computer vision applications, the objects in a
scene can be characterized satisfactorily by structures
composed of line or arc patterns like handwritten or
printed characters, circuit diagrams and engineering
drawings, etc.
Transformations useful for analysis of structure of
patterns
Shape Features
The shape of an object refers to its profile
and physical structure
These characteristics can be represented
by the boundary, region, moment, and
structural representations
Shape Features
Shape Representation
- Regenerative Features
  - Boundaries
  - Regions
  - Moments
  - Structural and Syntactic
- Measurement Features
  - Geometry
    - Perimeter
    - Area
    - Max-min radii and eccentricity
    - Corners
    - Roundness
    - Bending energy
    - Holes
    - Euler numbers
    - Symmetry
  - Moments
    - Center of mass
    - Orientation
    - Bounding rectangle
    - Best-fit ellipse
    - Eccentricity
Texture
Texture is observed in the structural patterns of objects
such as wood, grain, sand, grass, and cloth
A texture contains several pixels whose placement could be periodic, quasi-periodic, or random.
Natural textures are generally random, whereas artificial textures are often deterministic or periodic.
Textures are classified into two main categories: statistical and structural.
Texture
Classification of Texture
- Statistical
  - ACF (autocorrelation function)
  - Transforms
  - Edge-ness
  - Co-occurrence matrix
  - Texture transforms
  - Random field models
- Structural
  - Periodic
    - Primitives: gray levels, shape, homogeneity
    - Placement rules: period, adjacency, closest distances
  - Random
    - Edge density
    - Extreme density
    - Run lengths
- Other
  - Mosaic models
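The co-occurrence matrix listed under the statistical methods can be sketched as follows; the function name, offset convention, and toy patch are assumptions for illustration.

```python
import numpy as np

def cooccurrence(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one displacement (dy, dx):
    C[i, j] counts pixel pairs with gray levels (i, j) at that offset."""
    dy, dx = offset
    C = np.zeros((levels, levels), dtype=int)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                C[image[y, x], image[ny, nx]] += 1
    return C

# toy 4-level texture patch; offset (0, 1) counts horizontal neighbor pairs
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
C = cooccurrence(img, levels=4)
print(C[0, 0], C[2, 2])  # → 2 3
```

Texture descriptors such as contrast, energy, and homogeneity are then computed from the normalized version of C.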
Image Subtraction
Template Matching and Area Correlation
Matched Filtering
Direct Search
Two-Dimensional Logarithmic Search
Sequential Search
Hierarchical Search