
DIGITAL IMAGE PROCESSING

MODULE 3
IMAGE ENHANCEMENT

References:
• S. Jayaraman, S. Esakkirajan, “DIGITAL IMAGE PROCESSING”
• Rafael C Gonzalez, R Woods, “DIGITAL IMAGE PROCESSING”
Syllabus
• Image enhancement:
– Spatial domain methods
• Point Processing
• Intensity Transformations
• Histogram Processing
• Image Subtraction
• Image Averaging
• Spatial Filtering
– Smoothing filters
– Sharpening filters
– Frequency Domain methods
• Low pass filtering
• High pass filtering
• Homomorphic filter
Image enhancement
• To improve the quality of the image as perceived by a
human observer
• To improve the interpretability of the information
present in images.
• Can be done either by suppressing the noise or by
increasing the image contrast.
• Image enhancement algorithms are employed to
emphasize, sharpen, or smooth image features
for display and analysis, and are application specific.
Image enhancement example
Image enhancement methods: Spatial Domain and Frequency Domain

• Spatial domain methods: operate directly on pixels
– g(x, y) = T[f(x, y)], where f(x, y) is the input image,
g(x, y) is the output image and T is an operator on f.

• Frequency domain methods: operate on the
Fourier Transform of an image
Image enhancement In Spatial Domain
• Point operation: Each pixel is modified by an equation that is independent
of other pixel values
g(m, n) = T[f(m, n)]
f(m, n) → i/p image pixel
g(m, n) → o/p image pixel
T operates on one pixel
• Mask operation:
– Each pixel is modified according to the values in a small neighbourhood.
– Operator 𝑇operates on the neighbourhood of pixels.
– Mask is a small matrix whose values are termed as weights.
– A symmetric mask has its origin at the centre pixel position. For a non-symmetric
mask, any pixel location may be chosen as the origin, depending on the use.
• Global operation: All pixel values in the image are taken into consideration.
Usually, frequency domain operations are global operations
Enhancement through point operation
• Each pixel value is mapped to a new pixel value
• Point operations are basically memoryless operations
• Enhancement at any point depends only on the image value
at that point
g(m, n) = T[f(m, n)]
f(m, n) → i/p image pixel
g(m, n) → o/p image pixel
T operates on one pixel
• Every pixel of f(m, n) with the same gray level maps to a
single gray value in the output image
Intensity Transformation functions
• Image negatives
• Log transformations
• Power-Law (Gamma) transformations
• Piecewise – Linear Transformation Functions
– Contrast Stretching
– Intensity-level slicing
– Bit-plane slicing
Image Negative or Inverse Transform
• A negative image is obtained by subtracting each
pixel from the maximum pixel value
• For an 8 bit image, the negative image can be
obtained by reverse scaling of the gray levels,
according to the transformation,
g(m, n) = 255 − f(m, n)
In general, g(m, n) = (L − 1) − f(m, n), where L is the
number of gray levels.
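The reverse-scaling transform above is a one-line point operation. A minimal sketch in plain Python, treating an image as a nested list of gray levels (function and variable names are illustrative):

```python
def negative(img, L=256):
    """Image negative: map each pixel f to (L - 1) - f."""
    return [[(L - 1) - p for p in row] for row in img]

print(negative([[0, 64], [128, 255]]))  # -> [[255, 191], [127, 0]]
```

Applying the transform twice returns the original image, since (L − 1) − ((L − 1) − f) = f.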
Image negative
Applications
• Used for medical applications (Cancer Detection, diagnosing
diseases)
• Used for producing image equivalent to photographic negatives
• Particularly suited for enhancing white or gray details embedded
in dark regions of an image, especially when black regions are
dominant in size.
• Automatic processing of retinal images (from a fundus or
retinal camera) to aid the diagnosis and treatment of eye
diseases.
Logarithmic Transformation
• g(m, n) = c · log₂(f(m, n) + 1)
• Spreads out lower gray levels
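A sketch of how the log curve spreads the dark range. The slide leaves the constant c unspecified; here, as one common convention, c is chosen so that the maximum gray level maps to itself, and the natural log is used:

```python
import math

def log_transform(img, L=256):
    """g = c * log(1 + f); c scales so that f = L-1 maps back to L-1."""
    c = (L - 1) / math.log(L)
    return [[round(c * math.log(1 + p)) for p in row] for row in img]
```

Low inputs are lifted strongly (a gray level of 10 maps to about 110 in an 8-bit image) while the endpoints 0 and 255 stay fixed.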
Power law Transformation / Gamma Correction
• g(m, n) = c · f(m, n)^γ
• When γ > 1, the image appears darker
• When γ < 1, the image appears brighter
• A variety of devices used for image capture, printing, and display respond
according to a power law.
• The exponent in the power-law equation is referred to as gamma.
• The process used to correct this power-law response phenomenon is called
gamma correction.
• For example, cathode ray tube (CRT) devices have an
intensity-to-voltage response that is a power function,
with exponents varying from approximately 1.8 to 2.5.
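A sketch of gamma correction with intensities normalized to [0, 1] and c = 1 (a common choice; the names are illustrative):

```python
def gamma_correct(img, gamma, L=256):
    """g = c * f**gamma on intensities normalized to [0, 1], with c = 1."""
    m = L - 1
    return [[round(m * (p / m) ** gamma) for p in row] for row in img]
```

With γ = 2.2 a mid-gray of 128 drops to about 56 (darker); with γ = 0.45 it rises to about 187 (brighter). The endpoints 0 and L − 1 are always preserved.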
Brightness Modification
• Brightness of an image depends on the values
associated with the pixels of the image.
• Brightness of an image can be changed by
adding or subtracting a constant value to or
from each and every pixel of the image.
g(m, n) = f(m, n) ± k
Contrast Adjustment
• It is done by scaling all the pixels of the image by
a constant value.
𝑔 𝑚, 𝑛 = 𝑓 𝑚, 𝑛 ∗ 𝑘
• Changing the contrast of an image changes the
range of luminance values present in the image.
• If k > 1: contrast is increased (bright samples
become brighter and dark samples become darker)
• If k < 1: contrast is decreased.
Piecewise – Linear Transformation Functions : Contrast Stretching

The transfer function has three linear segments, with breakpoints (a, y_a) and (b, y_b):

y = α·x                 for 0 ≤ x < a
y = β·(x − a) + y_a     for a ≤ x < b
y = γ·(x − b) + y_b     for b ≤ x ≤ L − 1

Contrast stretching is a process that expands the range of intensity levels in
an image so that it spans the full intensity range of the recording medium or
display device.
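The three-segment transfer function can be sketched directly. The breakpoints (a, y_a) and (b, y_b) used below are illustrative; the slopes α, β, γ follow from requiring the segments to join at the breakpoints:

```python
def contrast_stretch(x, a, ya, b, yb, L=256):
    """Piecewise-linear map through (0,0), (a,ya), (b,yb) and (L-1,L-1)."""
    if x < a:
        return ya / a * x                             # slope alpha
    if x < b:
        return (yb - ya) / (b - a) * (x - a) + ya     # slope beta
    return (L - 1 - yb) / (L - 1 - b) * (x - b) + yb  # slope gamma
```

With a = 50, y_a = 30, b = 200, y_b = 220 the middle slope β is greater than 1, so gray levels between a and b are spread apart, which is the stretching effect.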
Contrast Stretching

Thresholding
Piecewise – Linear Transformation Functions :
Intensity level slicing or Gray-level slicing
• Highlight a specific range of gray levels
• Gray-level slicing
– Without preserving background- High values for
a range of interest and low values in other areas
• Gray-level slicing
– With preserving background- High values for a
range of interest and original gray levels in other
areas
Thresholding (Hard Thresholding, Soft Thresholding)
• Hard Thresholding
– Pixels having intensity lower than the threshold T are set
to zero
– Pixels with intensity ≥ T → high intensity (e.g. 255)
– Application: to obtain a binary image from a gray scale image

g(m, n) = 0 for f(m, n) < T
g(m, n) = 255 otherwise
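Hard thresholding is a one-line point operation; a minimal sketch:

```python
def hard_threshold(img, T, high=255):
    """Binarize: pixels below T become 0, all others become `high`."""
    return [[0 if p < T else high for p in row] for row in img]

print(hard_threshold([[10, 200], [127, 128]], 128))  # -> [[0, 255], [0, 255]]
```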
Piecewise – Linear Transformation Functions :
Bit-Plane Slicing
• Instead of highlighting gray-level ranges, highlighting the
contribution made to total image appearance by specific bits
might be desired.
• For an 8 bit image, imagine that the image is composed of eight 1-
bit planes, ranging from bit-plane 0 for the least significant bit to
bit plane 7 for the most significant bit.
• Plane 0 contains all the lowest order bits in the bytes comprising
the pixels in the image and plane 7 contains all the high-order bits.
Bit-plane slicing example: the original image decomposed into bit planes
7 (MSB) down to 0 (LSB); useful for image compression.
Bit plane slicing advantages
• The higher-order bits (especially the top four) contain the
majority of the visually significant data.
• The other bit planes contribute to more subtle details in the
image.
• Separating a digital image into its bit planes is useful for
analyzing the relative importance played by each bit of the
image, a process that aids in determining the adequacy of
the number of bits used to quantize each pixel.
• This type of decomposition is useful for image compression
• Goals of bit plane slicing are
• Converting a gray level image to a binary image
• Representing an image with fewer bits and compressing
the image to a smaller size
• Enhancing the image by focusing on selected bit planes
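Both goals can be sketched with bit operations on an 8-bit image stored as a nested list (function names are illustrative):

```python
def bit_plane(img, k):
    """Extract bit-plane k (0 = LSB, 7 = MSB) as a binary image."""
    return [[(p >> k) & 1 for p in row] for row in img]

def keep_top_planes(img, n):
    """Zero all but the n highest-order bit planes (crude compression)."""
    mask = (0xFF >> (8 - n)) << (8 - n)
    return [[p & mask for p in row] for row in img]
```

For example, keeping only the top four planes of the pixel value 130 (binary 10000010) gives 128: the visually significant structure survives while the low-order detail bits are dropped.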
Bit planes of Image Bit planes removed image (Compressed)
Histogram Manipulation
• The histogram of a digital image with intensity levels in the range
[0, L − 1] is a discrete function h(r_k) = n_k.
r_k → kth intensity value
n_k → number of pixels with intensity r_k
• Normalized histogram: p(r_k) = n_k / MN,
where M × N is the dimension of the image
• Histogram equalization can be used
for image enhancement
Plot of Histogram (n_k versus r_k)
• Darker Image  Components of histogram are
concentrated on low side of intensity scale
• Lighter Image  to the high side
• High contrast image  Components cover a wide
range of the intensity scale.
• An image whose pixels tend to occupy entire
range of possible intensity levels will have an
appearance of high contrast and will exhibit a
large variety of gray tones.
• It is possible to develop a transformation function
that can automatically achieve this effect, based
only on information available in the histogram of
the input image  Histogram equalization
Histogram Equalization
• Transformation, 𝑠 = 𝑇 𝑟 produces an output intensity level
s for every pixel value r in the original image.
• r lies in [0, L − 1], where L − 1 is the maximum grey level
value
• 𝑇(𝑟) satisfies the following conditions:
– 𝑇(𝑟) is single-valued and monotonically increasing in the
interval 0 ≤ 𝑟 ≤ 𝐿 − 1 and
– 0 ≤ 𝑇 𝑟 ≤ 𝐿 − 1for 0 ≤ 𝑟 ≤ 𝐿 − 1.
• The inverse transformation from s back to r is
denoted as
r = T⁻¹(s), 0 ≤ s ≤ L − 1

The transformation function is given by

s_k = T(r_k) = (L − 1) Σ_{j=0}^{k} p_r(r_j) = ((L − 1)/MN) Σ_{j=0}^{k} n_j ;
k = 0, 1, …, L − 1
• Histogram Equalization Problem  Refer Notebook
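The summation above translates directly into a lookup table; a minimal sketch in plain Python:

```python
def equalize(img, L=256):
    """Histogram equalization: s_k = round((L-1)/MN * sum_{j<=k} n_j)."""
    MN = sum(len(row) for row in img)
    hist = [0] * L
    for row in img:
        for p in row:
            hist[p] += 1
    lut, cum = [], 0
    for n in hist:                 # cumulative mapping T(r_k)
        cum += n
        lut.append(round((L - 1) * cum / MN))
    return [[lut[p] for p in row] for row in img]
```

For a tiny 2×2 image with L = 4 gray levels, [[0, 0], [1, 3]] equalizes to [[2, 2], [2, 3]]: the crowded dark levels are pushed toward the middle of the scale and the maximum level maps to L − 1.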
Images After Histogram Equalization
Image Subtraction
• 𝑐 𝑚, 𝑛 = 𝑓 𝑚, 𝑛 − 𝑔 𝑚, 𝑛
• To access only a particular area in the image,
image subtraction can be used
Local or neighbourhood operation
• Pixels in an image are modified based on
some function of the pixels in their
neighbourhood.
• Spatial filtering is one of the important
neighbourhood operations.
• The subimage that is used in spatial filtering
is called a filter, mask, kernel, template, or
window
Spatial filtering
• The process of spatial filtering is similar to convolution.
• The pixel corresponding to the center of the kernel is modified as
the sum of products of pixel values and kernel weights.
• Spatial filtering = convolving a mask with an image
• For a 3×3 mask, the response at any point (x, y) in the image is
given by R = Σ_{i=1}^{9} w_i z_i,
where w_i is the mask weight and z_i is the image pixel value
Smoothing Filter or Mean filter or Averaging filter
or Low-pass filter or box filter

• The mean filter replaces each pixel by the average of
all the values in the neighbourhood.
• The low pass filter behaviour removes sharp
variations, leading to a blurring effect.
• Can be used to reduce salt-and-pepper noise.
Smoothing Linear filters

R = (1/9) Σ_{i=1}^{9} z_i

• The average of the intensity levels of the pixels in the 3×3
neighbourhood defined by the mask is given to the centre
pixel.
• A spatial averaging filter in which all coefficients are equal is
called a box filter.
• A second mask (b) performs a weighted average.
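A sketch of the 3×3 box filter on interior pixels (border pixels are simply left unchanged here, one of several common border policies):

```python
def box_filter_3x3(img):
    """Replace each interior pixel by the mean of its 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9
    return out
```

A single impulse of 255 in a flat region of 9s is only attenuated, to (8·9 + 255)/9 ≈ 36, rather than removed; this illustrates the limitation of averaging under impulse noise.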
Limitations of averaging filter
• Averaging operation leads to blurring of the image
• If the averaging operation is applied to an image
corrupted by impulse noise, then the impulse noise
is attenuated and diffused but not removed.
• A single pixel, with a very unrepresentative value
can affect the mean value of all the pixels in the
neighbourhood significantly.
Non-linear smoothing filter: Median Filter
• Statistical non-linear filters
• Median filter smoothens the image by utilizing the median
of the neighbourhood.
• Median filters are quite popular because, for certain types
of random noise, they provide excellent noise-reduction
capabilities, with considerably less blurring than linear
smoothing filters of similar size.
• Median filters are particularly effective in the presence of
impulse noise, also called salt-and-pepper noise
• Method-
– All pixels in the neighbourhood are sorted in the ascending or
descending order
– The median of the sorted value is computed and is chosen as
the pixel value of the processed image
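The two steps above in plain Python (interior pixels only; borders left unchanged):

```python
def median_filter_3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            nb = sorted(img[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = nb[4]      # middle of the 9 sorted values
    return out
```

A salt impulse of 255 in a flat region of 9s is removed outright (the median of eight 9s and one 255 is 9), whereas a mean filter would only diffuse it.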
Comparison between mean filter and median filter
Sharpening Spatial Filters
• The objective of sharpening is to highlight transitions in intensity
• Averaging is analogous to integration, so sharpening can be
accomplished by spatial differentiation
• Sharpening filters are based on first- and second-order
derivatives.
• First derivative will be
– Zero in areas of constant intensity
– Nonzero at the onset of an intensity step or ramp
– Nonzero along ramps
• Second derivative will be
– Zero in constant areas
– Nonzero at the onset and end of an intensity step or ramp
– Zero along ramps of constant slope
• The formula for the 1st derivative of a function is as
follows:

∂f/∂x = f(x + 1) − f(x)

– It's just the difference between subsequent values and
measures the rate of change of the function

• The formula for the 2nd derivative of a function is as
follows:

∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)

– It simply takes into account the values both before and after
the current value
Using second derivative for image sharpening – The Laplacian

The Laplacian is defined as follows:

∇²f = ∂²f/∂x² + ∂²f/∂y²

where the partial 2nd order derivative in the x direction is
defined as follows:

∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y)

and in the y direction as follows:

∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y)

• So, the Laplacian can be given as follows:

∇²f = [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1)] − 4f(x, y)

• We can easily build a filter based on this:

 0  1  0
 1 -4  1
 0  1  0
Applying the Laplacian to an image we get a new image that highlights
edges and other discontinuities.

Original Image | Laplacian Filtered Image | Laplacian Filtered Image Scaled for Display

There are lots of slightly different versions of the Laplacian that can be
used:

Simple Laplacian:    Variant of Laplacian:
 0  1  0              1  1  1
 1 -4  1              1 -8  1
 0  1  0              1  1  1
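Applying the simple (4-neighbour) mask is a direct translation of the discrete Laplacian; in this sketch border pixels are set to 0:

```python
def laplacian(img):
    """4-neighbour Laplacian: f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1)-4f(x,y)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = (img[i - 1][j] + img[i + 1][j]
                         + img[i][j - 1] + img[i][j + 1] - 4 * img[i][j])
    return out
```

Constant regions give 0; the response is nonzero only where the intensity changes, which is why the output highlights edges and other discontinuities.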
Unsharp Masking (Image sharpening)
• A smoothened version of the image is subtracted
from the original image, hence, tipping the image
balance towards the sharper content of the image.
• Procedure-
– Blur the original image
– Subtract the result of step 1 from the original image
– Multiply the result of step 2 by some weighting function
– Add the result of step 3 to the original image
• f′(m, n) = f(m, n) + α[f(m, n) − f̄(m, n)]
where f(m, n) is the original image, f̄(m, n) is the
blurred version of the image, α is the weighting factor
and f′(m, n) is the sharpened result.
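The four-step procedure collapses into one expression per pixel. A sketch using a 3×3 box blur as the smoothing step (the choice of blur is an assumption; any lowpass filter works):

```python
def unsharp_mask(img, alpha=1.0):
    """f' = f + alpha * (f - blurred f); 3x3 box blur, interior pixels only."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            blur = sum(img[i + di][j + dj]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9
            out[i][j] = img[i][j] + alpha * (img[i][j] - blur)
    return out
```

Flat regions are untouched (each pixel equals its own blur), while pixels near edges overshoot or undershoot, which is the sharpening; in practice results are clipped back to [0, 255].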
High-Boost Filtering (Image sharpening)
• Also known as high frequency emphasis filter.
• Used to retain some of the low frequency
components to aid in the interpretation of an image.
High boost image = A × f(m, n) − low pass
where A is the amplification factor and the low pass
component is the blurred version of the original image.
• High boost image = (A − 1) × f(m, n) + [f(m, n) − low pass]
• High boost image = (A − 1) × f(m, n) + high pass
1st Derivative Filtering for image sharpening
• Implementing 1st derivative filters is difficult in practice
• For a function f(x, y) the gradient of f at coordinates (x, y)
is given as the column vector:

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

• The magnitude of this vector is given by:

∇f = mag(∇f) = (Gx² + Gy²)^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)

• For practical reasons this can be simplified as:

∇f ≈ |Gx| + |Gy|
1st Derivative Filtering (cont…)
• There is some debate as to how best to calculate these
gradients, but we will use:

∇f ≈ |(z₇ + 2z₈ + z₉) − (z₁ + 2z₂ + z₃)| + |(z₃ + 2z₆ + z₉) − (z₁ + 2z₄ + z₇)|

• which is based on these coordinates:

z1 z2 z3
z4 z5 z6
z7 z8 z9
Sobel Operators
• Based on the previous equations we can derive the Sobel
operators:

-1 -2 -1        -1  0  1
 0  0  0        -2  0  2
 1  2  1        -1  0  1

• To filter an image, it is filtered using both operators and the
results are added together.
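The two masks and the |Gx| + |Gy| combination in plain Python (interior pixels only; the z-numbering follows the 3×3 coordinate grid shown earlier):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| using the Sobel masks."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            z = [img[i + di][j + dj]          # z[0]..z[8] = z1..z9
                 for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            gx = (z[6] + 2 * z[7] + z[8]) - (z[0] + 2 * z[1] + z[2])
            gy = (z[2] + 2 * z[5] + z[8]) - (z[0] + 2 * z[3] + z[6])
            out[i][j] = abs(gx) + abs(gy)
    return out
```

On a vertical step edge Gx is zero and Gy carries the full response, and vice versa for a horizontal edge; flat regions give zero.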
Image Enhancement in the frequency domain

• Selective enhancement or suppression of
frequency components is termed Fourier
filtering or frequency domain filtering
• The spatial representation describes the
adjacency relationship between the pixels
• The frequency domain representation clusters
the image data according to their frequency
distribution
• In the spatial domain, filtering is done by convolving the
input image f(x, y) with the kernel h(x, y)
• In the frequency domain, filtering corresponds to the
multiplication of the image spectrum F(k, l) by the
Fourier transform of the kernel H(k, l)
• Convolution in the spatial domain is the same as
multiplication in the frequency domain:

f(x, y) ∗ h(x, y) ↔ F(k, l)H(k, l)


Types of Frequency domain Filters
Filtered Image: G(k, l) = H(k, l)F(k, l), where H(k, l) is the filter
response and F(k, l) is the original signal spectrum

• Lowpass
– Ideal Lowpass Filters
– Butterworth Lowpass Filters
– Gaussian Lowpass Filters
• Highpass
– Ideal Highpass Filters
– Butterworth Highpass Filters
– Gaussian Highpass Filters
The central part of the FT, i.e. the low frequency
components, is responsible for the general gray-level
appearance of an image.

The high frequency components of the FT are
responsible for the detail information of an image.
(Figure: images reconstructed from 5%, 10%, 20% and 50% of the frequency
domain (log magnitude) coefficients — detail vs. general appearance;
non-separable and separable filter transfer functions.)
2-D ideal Lowpass Filter

H(u, v) = 1 if D(u, v) ≤ D₀
H(u, v) = 0 if D(u, v) > D₀

– H(u, v) is the ideal lowpass filter transfer function
– D₀ is a specified non-negative value, the cutoff frequency
– D(u, v) is the distance from the point (u, v) to the origin of the frequency rectangle
Ideal Lowpass Filtering

Spatial domain

Ringing Effect is present in the filtered Image


Butterworth Lowpass filter

H(u, v) = 1 / (1 + [D(u, v)/D₀]^(2n))

– H(u, v) is the Butterworth lowpass filter transfer function
– D₀ is the cutoff frequency
– n is the filter order
– D(u, v) = √(u² + v²) is the distance from the point (u, v) to the origin of the
frequency rectangle
Butterworth Low-pass Filters (BLPF) in spatial domain

Spatial responses h(x) for n = 1, 2, 5 and 20:

H(u, v) = 1 / (1 + [(u² + v²)^(1/2) / D₀]^(2n))

No serious ringing artifacts.
Gaussian Lowpass Filters

H(u, v) = (1 / 2πσ²) e^(−(u² + v²)/(2σ²))

• where σ² = D₀² is the variance of the distribution

No ringing artifacts.
Use of Lowpass Filtering
High-pass Filters
• H_hp(u, v) = 1 − H_lp(u, v)

• Ideal:
H(u, v) = 0 if D(u, v) ≤ D₀
H(u, v) = 1 if D(u, v) > D₀

• Butterworth:
|H(u, v)|² = 1 / (1 + [D₀/D(u, v)]^(2n))

• Gaussian:
H(u, v) = 1 − e^(−(u² + v²)/(2σ²))
High-pass Filters in spatial domain
Result of Highpass filtering with D₀ = 15, 30 and 80

• Ideal Highpass Filter: strong ringing artifacts
• Butterworth Highpass Filter: less ringing
• Gaussian Highpass Filter: no ringing artifacts
Steps to perform Frequency domain Filtering

• Find the Fourier Transform of the input image, F(u, v)
• Find the filter transfer function (LP or HP), H(u, v)
• Perform element-by-element multiplication,
G(u, v) = F(u, v)H(u, v)
• Take the inverse Fourier Transform of the result
G(u, v) to get the filtered image
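The four steps can be sketched with NumPy's FFT routines (NumPy is an assumption here, and an ideal filter is used for concreteness):

```python
import numpy as np

def freq_filter(img, D0, highpass=False):
    """FFT -> multiply by ideal H(u,v) -> inverse FFT.

    D(u,v) is measured from the centre of the shifted spectrum."""
    F = np.fft.fftshift(np.fft.fft2(img))            # centred spectrum F(u,v)
    M, N = img.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance to the origin
    H = (D > D0) if highpass else (D <= D0)          # ideal transfer function
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
```

A constant image passes a lowpass filter unchanged (only its DC term is nonzero) and is wiped out by a highpass filter that blocks DC.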
Homomorphic Filter

• An image can be modeled as the product of an
illumination function and a reflectance function at
every point:
f(x, y) = i(x, y)r(x, y)
• This model is known as the illumination-reflectance
model
• i(x, y) is the primary contributor to the dynamic
range and varies slowly in space
• r(x, y) represents the details of the object and
varies rapidly in space
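A sketch of how this model is exploited, following the standard homomorphic recipe: take the log so the product becomes a sum, attenuate the slowly varying illumination with a high-frequency-emphasis filter, then exponentiate. NumPy and the parameters D0, gamma_l, gamma_h (the low- and high-frequency gains) are assumptions here:

```python
import numpy as np

def homomorphic(img, D0=10.0, gamma_l=0.5, gamma_h=2.0):
    """log -> FFT -> Gaussian high-frequency emphasis -> inverse FFT -> exp."""
    z = np.log1p(np.asarray(img, float))   # ln(1 + i*r), roughly ln i + ln r
    Z = np.fft.fftshift(np.fft.fft2(z))
    M, N = z.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # gain gamma_l at low frequencies (illumination), gamma_h at high (reflectance)
    H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * D0 ** 2))) + gamma_l
    s = np.real(np.fft.ifft2(np.fft.ifftshift(Z * H)))
    return np.expm1(s)                     # undo the log
```

With gamma_l < 1 < gamma_h the filter compresses the dynamic range due to illumination while enhancing reflectance detail; setting both gains to 1 leaves the image unchanged.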