Digital Image Processing


Introduction

S.SHAIK MAJEETH
Associate Professor/ECE
Saveetha Engineering College, Chennai
09-05-2013 Thursday
Introduction

"One picture is worth more than ten thousand words"
– Anonymous
FUNDAMENTALS OF DIGITAL IMAGES

[Figure: image "After snow storm" shown as f(x,y), with the origin at the top-left corner and x, y spatial axes]

An image is a multidimensional function of spatial coordinates.

Spatial coordinates: (x,y) for the 2D case such as a photograph,
(x,y,z) for the 3D case such as CT scan images,
(x,y,t) for movies.

The function f may represent intensity (for monochrome images),
colour (for colour images), or other associated values.
Digital image: an image that has been discretised in both
spatial coordinates and associated value.
What is Digital Image Processing?
Digital image processing focuses on two major tasks:

– Improvement of pictorial information for human interpretation

– Processing of image data for storage, transmission and
representation for autonomous machine perception
What is DIP? (cont…)
The continuum from image processing to computer vision can be
broken up into low-, mid- and high-level processes:

Low-Level Process          Mid-Level Process             High-Level Process
Input: Image               Input: Image                  Input: Attributes
Output: Image              Output: Attributes            Output: Understanding
Examples: Noise removal,   Examples: Object              Examples: Scene
image sharpening           recognition, segmentation     understanding,
                                                         autonomous navigation
7
of
42
Examples: Image Enhancement
One of the most common uses of DIP techniques:
improve quality, remove noise, etc.
Examples: Artistic Effects
Artistic effects are used to make images more visually appealing,
to add special effects and to make composite images
Examples: Medicine
Take a slice from an MRI scan of a canine heart and find the
boundaries between types of tissue:
– Image with grey levels representing tissue density
– Use a suitable filter to highlight edges

[Figure: original MRI image of a dog heart, and the edge detection image]
Examples: GIS
Geographic Information Systems
– Digital image processing techniques are used extensively to
manipulate satellite imagery
– Terrain classification
– Meteorology
Examples: GIS (cont…)
Night-Time Lights of the World data set
– Global inventory of human settlement
– Not hard to imagine the kind of analysis that might be done
using this data
Examples: Industrial Inspection
Human operators are expensive, slow and unreliable
Make machines do the job instead
Industrial vision systems are used in all kinds of industries
Can we trust them?
Examples: PCB Inspection
Printed Circuit Board (PCB) inspection
– Machine inspection is used to determine that all components are
present and that all solder joints are acceptable
– Both conventional imaging and x-ray imaging are used
Examples: Law Enforcement
Image processing techniques are used extensively in law enforcement:

– Number plate recognition for speed cameras / automated toll systems

– Fingerprint recognition

– Enhancement of CCTV images
Key Stages in Digital Image Processing

[Diagram: the key stages, starting from the Problem Domain: Image Acquisition, Image Enhancement, Image Restoration, Morphological Processing, Segmentation, Representation & Description, Object Recognition, with Colour Image Processing and Image Compression as supporting stages]
Image displayed with various gray levels
Contents
In this lecture we will look at image enhancement point processing techniques:
– What is point processing?
– Negative images
– Thresholding
– Logarithmic transformation
– Power law transforms
– Grey level slicing
Image Enhancement
Aim: improving the interpretation of information from the image

Make the image look clearer

Methodology:
– Point processing
– Mask processing
Basic Spatial Domain Image Enhancement
Most spatial domain enhancement operations can be reduced to the form

    g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the processed image, and
T is some operator defined over some neighbourhood of (x, y).

[Figure: image f(x, y) with the origin at the top-left corner and a
neighbourhood centred on the point (x, y)]
Point Processing
The simplest spatial domain operations occur when the
neighbourhood is simply the pixel itself

In this case T is referred to as a grey level transformation
function or a point processing operation

Point processing operations take the form

    s = T(r)

where s refers to the processed image pixel value and r refers to
the original image pixel value
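As a sketch (assuming an 8-bit greyscale image held in a NumPy array), a point processing operation depends only on the pixel itself, so T can be precomputed once as a 256-entry lookup table; the transformation shown is an illustrative contrast stretch, not one from the slides:

```python
import numpy as np

# s = T(r): each output pixel depends only on the corresponding input
# pixel, so for 8-bit images T can be tabulated over r = 0..255 and
# applied in a single indexing step.
def apply_point_op(image, T):
    lut = np.array([T(r) for r in range(256)], dtype=np.uint8)
    return lut[image]

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
stretched = apply_point_op(img, lambda r: min(2 * r, 255))  # simple stretch
```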
Negative Images

Negative images are useful for enhancing white or grey detail
embedded in dark regions of an image

    s = intensity_max - r
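For an 8-bit image, intensity_max is 255 and the negative is a one-line NumPy operation:

```python
import numpy as np

# Image negative: s = intensity_max - r, with intensity_max = 255
# for an 8-bit image.
img = np.array([[10, 200], [0, 255]], dtype=np.uint8)
negative = 255 - img
```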

The tissue in the negative image of the mammogram shown below is
much clearer

[Figure: original image and negative image of a mammogram]


Intensity Transformations
Thresholding
Thresholding transformations are particularly useful for
segmentation to isolate an object of interest from a background

    s = 1.0 if r > threshold
        0.0 if r <= threshold
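A minimal sketch of the thresholding rule above, assuming grey levels normalised to [0, 1]:

```python
import numpy as np

def threshold(image, t):
    """s = 1.0 where r > t, 0.0 where r <= t (grey levels in [0, 1])."""
    return np.where(image > t, 1.0, 0.0)

img = np.array([[0.2, 0.8], [0.5, 0.95]])
binary = threshold(img, 0.5)   # note that 0.5 itself maps to 0.0
```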
Point Processing Example: Thresholding (cont…)

[Figure: original image and thresholded image]

    s = 1.0 if r > threshold
        0.0 if r <= threshold
Basic Grey Level Transformations
There are many different kinds of grey level transformations.
The most common are shown below:

– Linear
  • Negative / Identity
– Logarithmic
  • Log / Inverse log
– Power law
  • nth power / nth root
Logarithmic Transformations
The general form of the log transformation is

    s = c * log(1 + r)

The log transformation maps a narrow range of low input grey level
values into a wider range of output values

The inverse log transformation performs the opposite transformation
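A short sketch of the log transformation; the choice c = 1/log(2) is an assumption made here so that inputs in [0, 1] also map onto [0, 1], since log(1 + 1) = log(2):

```python
import numpy as np

# s = c * log(1 + r), grey levels in [0, 1]; c = 1/log(2) maps r = 1 to s = 1.
r = np.array([0.0, 0.01, 0.1, 1.0])
c = 1.0 / np.log(2.0)
s = c * np.log1p(r)   # log1p(r) computes log(1 + r)
```

The assertion that s[1] exceeds its input illustrates the slide's point: the narrow range of dark values is expanded.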
Logarithmic Transformations (cont…)
Log functions are particularly useful when the input grey level
values may have an extremely large range of values

In the following example the Fourier transform of an image is put
through a log transform to reveal more detail

    s = log(1 + r)
Logarithmic Transformations (cont…)

[Figure: original image and enhanced image]

    s = log(1 + r)

We usually set c to 1
Grey levels must be in the range [0.0, 1.0]
Power Law Transformations
Power law transformations have the following form

    s = c * r^γ

Map a narrow range of dark input values into a wider range of
output values, or vice versa

Varying γ gives a whole family of curves
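A minimal sketch with c = 1 and grey levels in [0, 1], showing the two directions of the power law:

```python
import numpy as np

# s = c * r**gamma with c = 1 and grey levels in [0, 1].
r = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
brightened = r ** 0.4   # gamma < 1 expands dark values (brightens)
darkened = r ** 3.0     # gamma > 1 compresses dark values (darkens)
```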
Power Law Transformations (cont…)
[Figure: original image and enhanced image]

    s = r^γ

Usually c = 1
Grey levels must be in the range [0.0, 1.0]
Power Law Example
Power Law Example (cont…)

[Plot: power law transformation curve for γ = 0.6, old intensities
(0 to 1) on the horizontal axis, transformed intensities on the
vertical axis]
Power Law Example (cont…)

[Plot: power law transformation curve for γ = 0.4]
Power Law Example (cont…)

[Plot: power law transformation curve for γ = 0.3]
Power Law Example (cont…)
The images to the right show a Magnetic Resonance (MR) image of a
fractured human spine, transformed with s = r^0.6 and s = r^0.4

Different curves highlight different detail
Power Law Example
Power Law Example (cont…)

[Plot: power law transformation curve for γ = 5.0]
Power Law Transformations (cont…)
An aerial photo of a runway is shown

Power law transforms with s = r^3.0 and s = r^4.0 are used to
darken the image

Different curves highlight different detail
Gamma Correction
Gamma correction of computer monitors

The problem is that display devices do not respond linearly to
different intensities

This can be corrected using a power law (gamma) transform
More Contrast Issues
Piecewise Linear Transformation Functions
Rather than using a well-defined mathematical function, use
arbitrary user-defined transforms

The images below show a contrast stretching linear transform used
to add contrast to a poor quality image
Gray Level Slicing
Highlights a specific range of grey levels
– Similar to thresholding
– Other levels can be suppressed or maintained
– Useful for highlighting features in an image
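A sketch of grey level slicing for an 8-bit image; the band [100, 200] and the output value 255 are illustrative assumptions, and the keep_background flag switches between the two variants described above (other levels maintained or suppressed):

```python
import numpy as np

def grey_level_slice(image, lo, hi, high=255, keep_background=True):
    """Highlight grey levels in [lo, hi]; keep or suppress the rest."""
    mask = (image >= lo) & (image <= hi)
    out = image.copy() if keep_background else np.zeros_like(image)
    out[mask] = high
    return out

img = np.array([[10, 120], [150, 240]], dtype=np.uint8)
with_bg = grey_level_slice(img, 100, 200)                         # background kept
without_bg = grey_level_slice(img, 100, 200, keep_background=False)
```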

[Figure: aortic angiogram; gray level slicing with background and
without background]
MASK PROCESSING

• Operates on a group of pixels

• Uses a mask of odd size

SPATIAL FILTERING
Spatial Filtering Methods (or Mask Processing Methods)

[Diagram: a mask sliding over the input image to produce the output image]
Mechanics of spatial filtering
• The process consists simply of moving the filter mask from point
to point in an image.

• At each point (x,y) the response of the filter at that point is
calculated using a predefined relationship.
Linear vs Non-Linear Spatial Filtering Methods

• A filtering method is linear when the output is a weighted sum of
the input pixels.

• Methods that do not satisfy the above property are called
non-linear.
Linear filtering
• In general, linear filtering of an image f of size MxN with a
filter mask of size mxn is given by the expression:

    g(x, y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) f(x + s, y + t)

where a = (m-1)/2 and b = (n-1)/2.
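The expression above can be sketched directly as a double loop over the image; zero padding at the border is an assumption made here (the slides use it in the worked example that follows):

```python
import numpy as np

def linear_filter(f, w):
    """g(x,y) = sum over (s,t) of w(s,t) * f(x+s, y+t), zero-padded borders."""
    m, n = w.shape                     # mask size, odd in both dimensions
    a, b = m // 2, n // 2
    padded = np.pad(f.astype(float), ((a, a), (b, b)))   # zero padding
    g = np.zeros(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * padded[x:x + m, y:y + n])
    return g

f = np.arange(9, dtype=float).reshape(3, 3)
identity = np.zeros((3, 3))
identity[1, 1] = 1.0                   # identity mask: output equals input
g = linear_filter(f, identity)
```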

• The process of linear filtering is similar to a frequency domain
concept called "convolution"

Simplified expression:

    R = w1·z1 + w2·z2 + ... + w_mn·z_mn = Σ_{i=1}^{mn} w_i·z_i

For a 3x3 mask (mn = 9):

    R = w1·z1 + w2·z2 + ... + w9·z9 = Σ_{i=1}^{9} w_i·z_i

    w1 w2 w3
    w4 w5 w6
    w7 w8 w9

where the w's are mask coefficients and the z's are the values of the
image gray levels corresponding to those coefficients
Linear spatial filtering

The result is the sum of products of the mask coefficients with the
corresponding pixels directly under the mask:

Pixels of image:                          Mask coefficients:

    f(x-1,y-1) f(x-1,y) f(x-1,y+1)            w(-1,-1) w(-1,0) w(-1,1)
    f(x,y-1)   f(x,y)   f(x,y+1)              w(0,-1)  w(0,0)  w(0,1)
    f(x+1,y-1) f(x+1,y) f(x+1,y+1)            w(1,-1)  w(1,0)  w(1,1)

g(x, y) = w(-1,-1) f(x-1, y-1) + w(-1,0) f(x-1, y) + w(-1,1) f(x-1, y+1) +
          w(0,-1) f(x, y-1) + w(0,0) f(x, y) + w(0,1) f(x, y+1) +
          w(1,-1) f(x+1, y-1) + w(1,0) f(x+1, y) + w(1,1) f(x+1, y+1)
Convolution (worked example)

Original image:

    7  9 11
   10 50  8
    9  6  8

Input image after zero padding:

    0  0  0  0  0
    0  7  9 11  0
    0 10 50  8  0
    0  9  6  8  0
    0  0  0  0  0

3 by 3 average filter:

          1 1 1
    1/9 · 1 1 1
          1 1 1

The mask is centred on each pixel in turn, and the response is the sum
of products of the mask coefficients with the padded image values under
the mask. Every response is computed from the original padded image;
the input values are not overwritten as the mask moves. For the
top-left pixel:

    1/9·0 + 1/9·0 + 1/9·0 + 1/9·0 + 1/9·7 + 1/9·9 + 1/9·0 + 1/9·10 + 1/9·50 ≈ 8.4

Repeating this at all nine positions gives the result of the mean
filter (rounded to one decimal place):

    8.4  10.6   8.7
   10.1  13.1  10.2
    8.3  10.1   8.0
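The worked example can be checked with a short NumPy sketch. Note that every output value is read from the original padded image, as the definition g(x,y) = Σ w(s,t) f(x+s,y+t) requires; filtering must not be done in place:

```python
import numpy as np

f = np.array([[7.0, 9.0, 11.0],
              [10.0, 50.0, 8.0],
              [9.0, 6.0, 8.0]])

w = np.full((3, 3), 1.0 / 9.0)   # 3x3 average filter
padded = np.pad(f, 1)            # zero padding

g = np.zeros_like(f)
for x in range(3):
    for y in range(3):
        # every response is read from the ORIGINAL padded image
        g[x, y] = np.sum(w * padded[x:x + 3, y:y + 3])

result = np.round(g, 1)
```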
Nonlinear spatial filtering
• Nonlinear spatial filters also operate on neighborhoods, and the
mechanics of sliding a mask past an image are the same as was just
outlined.

• The filtering operation is based conditionally on the values of
the pixels in the neighborhood under consideration.
Smoothing Spatial Filters
• Smoothing filters are used for blurring and for noise reduction.

– Blurring is used in preprocessing steps, such as removal of small
details from an image prior to object extraction, and bridging of
small gaps in lines or curves

– Noise reduction can be accomplished by blurring
Type of smoothing filtering

• There are 2 ways of smoothing spatial filters:

– Smoothing Linear Filters

– Order-Statistics Filters
Smoothing Linear Filters (LPF)
• The output of a smoothing linear spatial filter is simply the
average of the pixels contained in the neighborhood of the filter
mask.

• Sometimes called "averaging filters".

• The idea is replacing the value of every pixel in an image by the
average of the grey levels in the neighborhood defined by the
filter mask.
Application
• Results in an image with reduced sharp transitions in gray levels.

• Used in noise reduction

• Blurs edges

• Reduction of "irrelevant" details
Two 3x3 Smoothing Linear Filters

          1 1 1                 1 2 1
    1/9 · 1 1 1          1/16 · 2 4 2
          1 1 1                 1 2 1

    Standard average         Weighted average
5x5 Smoothing Linear Filters

           1 1 1 1 1
           1 1 1 1 1
    1/25 · 1 1 1 1 1
           1 1 1 1 1
           1 1 1 1 1
Smoothing Linear Filters
• The general implementation for filtering an MxN image with a
weighted averaging filter of size mxn is given by the expression

    g(x, y) = [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) f(x + s, y + t) ]
              / [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) ]
Examples

• Mask size determines the degree of smoothing and loss of detail.

[Figure: original image smoothed with 3x3, 5x5, 7x7, 15x15 and 25x25 masks]
Result of Smoothing Linear Filters

[Figure: original image and results of 3x3, 5x5 and 7x7 smoothing
linear filters]


Process of Median filter

• Crop the region of the neighborhood:

    10  15  20
    20 100  20
    20  20  25

• Sort the values of the pixels in the region:

    10, 15, 20, 20, 20, 20, 20, 25, 100

• In an MxN mask the median is the (MN div 2 + 1)-th value; for a
3x3 mask it is the 5th value, here 20.
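The steps above, applied to the neighbourhood from the slide, need only plain Python:

```python
# Median filtering of one 3x3 neighbourhood: sort the nine values and
# take the middle one (the 5th of 9).
region = [10, 15, 20,
          20, 100, 20,
          20, 20, 25]
ranked = sorted(region)
median = ranked[len(ranked) // 2]   # index 4, i.e. the 5th value
```

The outlier 100 does not survive: the median is 20, which is what makes this filter effective against impulse noise.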
Smoothing Filters: Median Filtering (non-linear)

• Very effective for removing "salt and pepper" noise (i.e., random
occurrences of black and white pixels).

[Figure: noisy image, result of averaging, result of median filtering]
Sharpening Spatial Filters
• The objective of sharpening is to highlight fine detail in an
image, or

• To enhance detail that has been blurred, either in error or as a
natural effect of a particular method of image acquisition.
SHARPENING FILTER
• Image blurring is accomplished in the spatial domain by pixel
averaging in a neighborhood.

• Since averaging is analogous to integration, sharpening can be
accomplished by spatial differentiation.
Sharpening Filters (High Pass Filtering)

• Useful for emphasizing transitions in image intensity (e.g., edges).
Sharpening Filters (cont'd)

• Note that the response of high-pass filtering might be negative.
• Values must be re-mapped to [0, 255]

[Figure: original image and sharpened images]
Unsharp masking

• A process to sharpen images consists of subtracting a blurred
version of an image from the image itself. This process, called
unsharp masking, is expressed as

    f_s(x, y) = f(x, y) - f̄(x, y)

where f̄(x, y) denotes a blurred (smoothed) version of f(x, y).
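A minimal sketch of the unsharp mask; the 3x3 mean filter with edge replication at the border is an assumption made here to stand in for the blur (any smoothing filter could be used):

```python
import numpy as np

# Unsharp masking: f_s = f - blur(f).
f = np.array([[10.0, 10.0, 10.0],
              [10.0, 90.0, 10.0],
              [10.0, 10.0, 10.0]])   # one bright detail on a flat background

padded = np.pad(f, 1, mode='edge')   # replicate borders before averaging
blurred = np.zeros_like(f)
for x in range(3):
    for y in range(3):
        blurred[x, y] = padded[x:x + 3, y:y + 3].mean()

sharp_mask = f - blurred   # large and positive at the bright detail
```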
2-Dimensional Laplacian

• The digital implementation of the 2-Dimensional Laplacian is
obtained by summing the two components:

    ∇²f = ∂²f/∂x² + ∂²f/∂y²

    ∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)

Equivalent mask:

    0  1  0
    1 -4  1
    0  1  0
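The discrete formula can be sketched directly; an isolated bright point (with zero padding, an assumption made here) is a convenient test input, since the Laplacian should respond strongly at the point itself:

```python
import numpy as np

# Discrete Laplacian:
# lap(x,y) = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)
f = np.array([[0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])   # an isolated bright point

padded = np.pad(f, 1)             # zero padding
lap = np.zeros_like(f)
for x in range(3):
    for y in range(3):
        lap[x, y] = (padded[x + 2, y + 1] + padded[x, y + 1] +
                     padded[x + 1, y + 2] + padded[x + 1, y] -
                     4.0 * padded[x + 1, y + 1])
```

The response reproduces the mask itself: -4 at the point, +1 at its four neighbours, and the responses sum to zero.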
Image Segmentation

Edge Detection

Laplacian of Gaussian

• The purpose of edge detection is to locate the boundaries of
objects of interest in an image

• Edges are normally marked by a discontinuity in brightness, so
most edge detection methods are designed to locate these
discontinuities

• Once edge points are located, they can be linked together to form
object boundaries
METHODS OF EDGE DETECTION

• First Order Derivative / Gradient Methods


– Roberts Operator
– Sobel Operator
– Prewitt Operator
• Second Order Derivative
– Laplacian
– Laplacian of Gaussian
– Difference of Gaussian
• Optimal Edge Detection
– Canny Edge Detection
Edge Detection
• Edge information in an image is found by looking at the
relationship a pixel has with its neighborhood.

• If a pixel's gray-level value is similar to those around it,
there is probably not an edge at that point.

• If a pixel has neighbors with widely varying gray levels, it may
represent an edge point.
Edge Model
• An edge model describes the intensity transition between an
object with brightness A and the background with brightness B
– A step edge has a 1 pixel transition from B to A
– A ramp edge has an N pixel linear transition from B to A
• A roof edge has a U pixel transition from B to A followed by a
D pixel transition back to B
Example

    z1 z2 z3
    z4 z5 z6
    z7 z8 z9

To approximate the gradient at z5:

    ∂I/∂x = z6 - z5        Mask:  -1  1

    ∂I/∂y = z5 - z8        Mask:   1
                                  -1

    |∇I| = sqrt((z6 - z5)² + (z5 - z8)²)
First Derivative

• At the point of greatest slope, the first derivative has its
maximum value

– E.g. for a continuous 1-dimensional function f(t)
Gradient

• Gradient equation: ∇f = (∂f/∂x, ∂f/∂y)

• Represents the direction of most rapid change in intensity

• Gradient direction: θ = tan⁻¹((∂f/∂y) / (∂f/∂x))

• The edge strength is given by the gradient magnitude:

    |∇f| = sqrt((∂f/∂x)² + (∂f/∂y)²)
Mathematical Foundation
• The behavior of the derivatives in areas of constant gray level
(flat segments), at the onset and end of discontinuities (step and
ramp discontinuities), and along gray-level ramps is the matter of
interest

• These types of discontinuities can be noise points, lines, and
edges.
Definition of the 1st-order derivative

• A basic definition of the first-order derivative of a
one-dimensional function f(x) is

    ∂f/∂x = f(x + 1) - f(x)
Definition for a first derivative
• Must be zero in flat segments

• Must be nonzero at the onset of a gray-level step or ramp; and

• Must be nonzero along ramps.
Definition of the 2nd-order derivative

• A second-order derivative is defined as the difference

    ∂²f/∂x² = f(x + 1) + f(x - 1) - 2 f(x)
Definition for a second derivative
• Must be zero in flat areas;

• Must be nonzero at the onset and end of a gray-level step or
ramp;

• Must be zero along ramps of constant slope
Gray-level profile

    0 0 0 1 2 3 2 0 0 2 2 6 3 3 2 2 3 3 0 0 0 0 0 0 7 7 6 5 5 3

[Plot: the profile plotted with grey level (0 to 7) on the vertical axis]
Derivative of image profile

profile  0  0  0  1  2  3  2  0  0  2  2  6  3  3  2  2  3  3  0  0  0  0  0  0  7  7  6  5  5  3

first    0  0  1  1  1 -1 -2  0  2  0  4 -3  0 -1  0  1  0 -3  0  0  0  0  0  7  0 -1 -1  0 -2

second   0  1  0  0 -2 -1  2  2 -2  4 -7  3 -1  1  1 -1 -3  3  0  0  0  0  7 -7 -1  0  1 -2
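The two difference rows can be recomputed from the profile with the definitions f'(x) = f(x+1) - f(x) and f''(x) = f(x+1) + f(x-1) - 2 f(x); in this convention the upward step from 0 to 7 gives a first difference of +7 and a second-difference pair of +7 followed by -7:

```python
import numpy as np

profile = np.array([0, 0, 0, 1, 2, 3, 2, 0, 0, 2, 2, 6, 3, 3, 2,
                    2, 3, 3, 0, 0, 0, 0, 0, 0, 7, 7, 6, 5, 5, 3])

first = np.diff(profile)                                   # f(x+1) - f(x)
second = profile[2:] + profile[:-2] - 2 * profile[1:-1]    # f(x+1) + f(x-1) - 2 f(x)
```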
Observation
• The 1st-order derivative is nonzero along the entire ramp,
• while the 2nd-order derivative is nonzero only at the onset and
end of the ramp.
• As edges in an image resemble this type of transition:

    the 1st order makes thick edges and the 2nd order makes thin edges
Observations contd
• The response at and around an isolated point is much stronger for
the 2nd-order than for the 1st-order derivative

• The 2nd order derivative is more active in enhancing sharp edges.
Hence it enhances fine details (including noise).

• A thin line is a fine detail

• If the maximum gray level of the line had been the same as the
isolated point, the response of the second order derivative would
be stronger.

• The second order derivative has a transition from positive back
to negative. In an image this shows as a "double edge".

• If the gray level of the thin line had been the same as the step,
the response of the second derivative would be stronger for the
line than for the step
Conclusion

• The 1st order derivative produces thicker edges and is mainly
used to extract edges.

• The 2nd order derivative has a stronger response to fine detail
such as thin lines and isolated points

• 2nd order derivatives produce a double response at step changes
in gray level

• For similar changes in gray level values in an image, the 2nd
order response is stronger to a line than to a step, and to a
point than to a line
Roberts Operator

    h1 =  1  0        h2 =  0  1
          0 -1             -1  0

• Marks edge points only
• No information about edge orientation
• Works best with binary images
• Primary disadvantage:
– High sensitivity to noise
– Few pixels are used to approximate the gradient

• c = 2 gives the Sobel operator:

    -1  0  1        -1 -2 -1
    -2  0  2         0  0  0
    -1  0  1         1  2  1
The Sobel Edge Detector

    -1 -2 -1        -1  0  1
     0  0  0        -2  0  2
     1  2  1        -1  0  1

    Gx ≈ (z7 + 2z8 + z9) - (z1 + 2z2 + z3)
    Gy ≈ (z3 + 2z6 + z9) - (z1 + 2z4 + z7)


Prewitt Operator

    x0   x1   x2
    x7 [i,j]  x3
    x6   x5   x4

Partial derivatives:

    Mx = [x2 + c·x3 + x4] - [x0 + c·x7 + x6]
    My = [x6 + c·x5 + x4] - [x0 + c·x1 + x2]

c is a weight: pixels closer to the pixel of interest are given more
emphasis. c = 1 gives the Prewitt operator:

    -1 0 1        -1 -1 -1
    -1 0 1         0  0  0
    -1 0 1        -1 -1 -1
The Prewitt Edge Detector

    -1 -1 -1        -1 0 1
     0  0  0        -1 0 1
     1  1  1        -1 0 1

    Gx ≈ (z7 + z8 + z9) - (z1 + z2 + z3)
    Gy ≈ (z3 + z6 + z9) - (z1 + z4 + z7)
The Roberts Edge Detector

    0  0  0         0  0  0
    0 -1  0         0  0 -1
    0  0  1         0  1  0

    Gx ≈ z9 - z5        Gy ≈ z8 - z6

The Roberts edge detector is in fact a 2x2 operator
Roberts Edge Detector

    ∂f/∂x = f(i, j) - f(i+1, j+1)
    ∂f/∂y = f(i+1, j) - f(i, j+1)

Masks:

    Mx =  1  0        My =  0 -1
          0 -1              1  0
Edge Models

[Figure: original image with gradient, Laplacian and Canny edge maps]
The Laplacian (2nd order derivative)

• It was shown by Rosenfeld and Kak [1982] that the simplest
isotropic derivative operator is the Laplacian, defined as

    ∇²f = ∂²f/∂x² + ∂²f/∂y²
Second Order Derivative Methods

• A zero crossing of the second derivative of a function indicates
the presence of a maximum of the first derivative
Linear Operation
• The second derivative is a linear operation

• Thus, smoothing the image with a Gaussian and then computing the
Laplacian of the result is the same as convolving the image with
the Laplacian of the Gaussian
Second Order Derivative Methods - Laplacian

• Defined as  ∇²f = ∂²f/∂x² + ∂²f/∂y²

• Mask:

    0  1  0
    1 -4  1
    0  1  0

• Very susceptible to noise; filtering is required, so use the
Laplacian of Gaussian
Discrete form of derivative

    ∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2 f(x, y)
        (using pixels f(x-1,y), f(x,y), f(x+1,y))

    ∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2 f(x, y)
        (using pixels f(x,y-1), f(x,y), f(x,y+1))
2-Dimensional Laplacian

• The digital implementation of the 2-Dimensional Laplacian is
obtained by summing the two components:

    ∇²f = ∂²f/∂x² + ∂²f/∂y²

    ∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)

Equivalent mask:

    0  1  0
    1 -4  1
    0  1  0
Laplacian

    0  1  0        1  1  1        1  0  1
    1 -4  1        1 -8  1        0 -4  0
    0  1  0        1  1  1        1  0  1
Laplacian

     0 -1  0       -1 -1 -1       -1  0 -1
    -1  4 -1       -1  8 -1        0  4  0
     0 -1  0       -1 -1 -1       -1  0 -1

• As the Laplacian is a derivative, it highlights gray level
discontinuities in an image

• It deemphasizes regions with slowly varying gray level.

• Background features can be recovered, while still preserving the
sharpening effect of the Laplacian operation, by simply adding the
original and Laplacian images
Example

a) Original image
b) Sobel gradient
c) Spatial Gaussian smoothing function
d) Laplacian mask
e) LoG
f) Thresholded LoG
g) Zero crossings
Second Order Derivative Methods - Laplacian of Gaussian
• Also called Marr-Hildreth Edge Detector

• Steps
– Smooth the image using Gaussian filter
– Enhance the edges using Laplacian operator
– Zero crossings denote the edge location
– Use linear interpolation to determine the sub-pixel location
of the edge
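The first three Marr-Hildreth steps can be sketched in one dimension (assumptions made here: a 1-D profile instead of a full image, a sampled 1-D second derivative of a Gaussian as the LoG kernel, and σ = 1.4 as on the mask slide below); the zero crossing of the response marks the edge:

```python
import numpy as np

def log_kernel(sigma, radius):
    """Sampled 1-D Laplacian (second derivative) of a Gaussian."""
    x = np.arange(-radius, radius + 1, dtype=float)
    gaussian = np.exp(-x**2 / (2.0 * sigma**2))
    return (x**2 - sigma**2) / sigma**4 * gaussian

profile = np.array([0.0] * 10 + [1.0] * 10)   # step edge between index 9 and 10
response = np.convolve(profile, log_kernel(1.4, 4), mode='same')

# zero crossings of the response mark the edge location
crossings = np.where(np.signbit(response[:-1]) != np.signbit(response[1:]))[0]
```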
Laplacian of Gaussian – contd.

• Commonly written (up to a normalisation constant) as

    ∇²G(x, y) = [(x² + y² - 2σ²) / σ⁴] · e^(-(x² + y²)/(2σ²))

• The greater the value of σ, the broader the Gaussian filter and
the more the smoothing
• Too much smoothing may make the detection of edges difficult
Laplacian of Gaussian - contd.
• Also called the Mexican Hat operator
Laplacian of Gaussian – contd.
• Mask: a discrete approximation to the LoG function with Gaussian σ = 1.4
