
Chapter 8

Analog Optical Information Processing

Weimin Sun
College of Science
Harbin Engineering University
Image restoration
• In contrast to optimum correlation detection, which uses the maximum signal-to-noise ratio criterion, optimum image restoration uses the minimum mean-square error criterion.
• The example provided in the following is by no means optimum.
Assumptions
• Let o(x,y) represent the intensity distribution associated with an incoherent object, and let i(x,y) represent the intensity distribution associated with a blurred image of that object.
• For simplicity we assume that the magnification of the imaging system is unity, and we define image coordinates in such a way as to remove any effects of image inversion.
• We assume that the blur to which the image has been subjected is a linear, space-invariant transformation, describable by a known space-invariant point-spread function s(x,y).
• Thus the object and image are related by

  i(x, y) = \iint_{-\infty}^{\infty} o(\xi, \eta)\, s(x - \xi,\, y - \eta)\, d\xi\, d\eta = o(x, y) * s(x, y)
• We seek to obtain an estimate ô(x,y) of o(x,y), based on the measured image intensity i(x,y) and the known point-spread function s(x,y).
• In other words, we wish to invert the blurring operation and recover the original object.
• An unsophisticated solution to this problem starts from the relationship

  I(f_X, f_Y) = \mathcal{F}\{i(x,y)\} = \mathcal{F}\{o(x,y) * s(x,y)\} = O(f_X, f_Y)\, S(f_X, f_Y)
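The convolution relation and its Fourier-domain product form can be checked numerically. The sketch below is not from the text: it uses NumPy and a circular convolution (for which the DFT product identity holds exactly) to blur a random object with a small point-spread function in both domains and compare.

```python
import numpy as np

rng = np.random.default_rng(0)
o = rng.random((32, 32))            # object intensity o(x, y)
s = np.zeros((32, 32))
s[:3, :3] = 1.0 / 9.0               # a 3x3 uniform blur as s(x, y)

# Fourier domain: I = O . S, then back to space
i_spec = np.fft.ifft2(np.fft.fft2(o) * np.fft.fft2(s)).real

# Spatial domain: the same circular convolution, summed directly
i_direct = np.zeros_like(o)
for dx in range(3):
    for dy in range(3):
        i_direct += np.roll(o, (dx, dy), axis=(0, 1)) / 9.0

assert np.allclose(i_spec, i_direct)   # I(fX, fY) = O(fX, fY) S(fX, fY)
```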
Inverse filter
• It seems obvious that the spectrum of the original object can be obtained by simply dividing the image spectrum by the known OTF of the imaging system:

  \hat{O}(f_X, f_Y) = \frac{I(f_X, f_Y)}{S(f_X, f_Y)}

• An equivalent statement of this solution is that we should pass the detected image i(x,y) through a linear space-invariant filter with transfer function

  H(f_X, f_Y) = \frac{1}{S(f_X, f_Y)}

• Such a filter is commonly referred to as an "inverse filter".
Sample: Smeared-point (blurred) image
• Let us now assume that the transmission function of a linearly smeared point (blurred) image can be written as

  f(\xi) = \begin{cases} 1, & -\Delta\xi/2 \le \xi \le \Delta\xi/2 \\ 0, & \text{otherwise} \end{cases}

  where Δξ is the smeared length.
• To restore the image, we seek an inverse filter given by

  H(f_X) = \frac{1}{F(f_X)} = \frac{\pi f_X}{\sin(\pi f_X\, \Delta\xi)}
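The trouble with this inverse filter can be seen numerically. A quick sketch (assumed notation: smear length d = Δξ, frequency in cycles per unit length, F = sin(π f_X d)/(π f_X)): F has periodic zeros at f_X = n/d, so 1/F diverges near them.

```python
import numpy as np

d = 2.0                                  # smear length (Delta xi), assumed
fX = np.linspace(-3, 3, 601)             # frequency, cycles per unit length
F = d * np.sinc(fX * d)                  # np.sinc(x) = sin(pi x)/(pi x)

# |1/F| stays moderate away from the zeros ...
assert abs(1.0 / F[np.argmin(np.abs(fX))]) < 1.0
# ... but diverges near the first zero at fX = 1/d:
near_zero = np.abs(fX - 1.0 / d) < 0.05
assert np.abs(1.0 / F[near_zero]).max() > 1e6
```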
• To retrieve the information lost to blurring, there is a price to pay in terms of entropy, and it is usually very costly.
• A practical, although not optimum, inverse filter can be synthesized, as given by

  H(f_X) = A(f_X)\, \exp[i\phi(f_X)]

  where A(f_X) and φ(f_X) are the corresponding amplitude and phase filters.
• The restored Fourier spectrum that we would like to achieve is the rectangular spectral distribution bounded by T_m and the bandwidth Δf_X; the restored spectral distribution is the shaded area shown in the figure.
• It is evident that the blurred image can be restored to within some degree of restoration error.
• The degree of restoration is defined by

  D(T_m) = \frac{1}{T_m\, \Delta f_X} \int_{\Delta f_X} F(f_X)\, H(f_X)\, df_X

  where Δf_X is the spatial bandwidth of interest; with this definition a plot of D(T_m) can be drawn.
• The degree of restoration increases rapidly as T_m decreases.
• However, as T_m approaches zero, the transmittance of the inverse filter also approaches zero, so the filter passes almost no light.
• A perfect degree of restoration, even within the bandwidth Δf_X, cannot actually be obtained in practice.
• Aside from this consequence, the weak diffracted light field from the inverse filter would also cause poor noise performance.
• The effects on image restoration due to the amplitude filter, the phase filter, and the combination of amplitude and phase filters are shown in the figure.
• In view of these results, we see that using the phase filter alone gives a reasonably good restoration result compared with the complex (amplitude-and-phase) filtering.
• This is a consequence of image formation (whether in the spatial or in the Fourier domain):
• the phase distribution turns out to be the major quantity in image processing, as compared with the effect of amplitude filtering alone.
• In other words, in image processing, as well as in image formation, the amplitude variation can in some cases actually be ignored.
• A couple of such examples are optimum linearization in holography and phase-preserving matched filters.
An image restoration result obtained from an inverse filter
• A linearly blurred image can indeed be restored.
• In addition, we have also seen that the restored image is corrupted by speckle noise, also known as coherent noise.
• This is one of the major concerns in using coherent light for processing.
• Nevertheless, coherent noise can actually be suppressed by using an incoherent light source.
Serious defects of the inverse filter
• Diffraction limits the set of frequencies over which the transfer function S(f_X, f_Y) is nonzero to a finite range. Outside this range S = 0, and its inverse is ill-defined.
• Within the range of frequencies for which the diffraction-limited transfer function is nonzero, the transfer function S may have isolated zeros.
• The inverse filter takes no account of the fact that noise is inevitably present in the detected image, along with the desired signal.
The Wiener filter
(least-mean-squared-error filter)
• A new model for the imaging process is adopted.
• The detected image is represented by

  i(x, y) = o(x, y) * s(x, y) + n(x, y)

  where n(x,y) is the noise associated with the detection process.
• The object o(x,y) is regarded as a random process, as is the random noise.
• We assume that the power spectral densities (i.e. the distributions of average power over frequency) of the object and the noise are known, and are represented by Φ_o(f_X, f_Y) and Φ_n(f_X, f_Y).
• The mean-squared difference between the true object o(x,y) and the estimate ô(x,y) is

  \varepsilon = \mathrm{Average}\left\{ \left| o(x,y) - \hat{o}(x,y) \right|^2 \right\}
• The transfer function of the optimum restoration filter is given by

  H(f_X, f_Y) = \frac{S^*(f_X, f_Y)}{\left| S(f_X, f_Y) \right|^2 + \dfrac{\Phi_n(f_X, f_Y)}{\Phi_o(f_X, f_Y)}}

• This type of filter is often referred to as a Wiener filter, after its inventor, Norbert Wiener.
• If the SNR is high (Φ_n/Φ_o ≪ 1),

  H \approx \frac{S^*}{|S|^2} = \frac{1}{S}

• If the SNR is low (Φ_n/Φ_o ≫ 1),

  H \approx \frac{\Phi_o}{\Phi_n}\, S^*
Wiener filter
• Diffraction, rather than absorption, is used to attenuate frequency components.
• Only a single interferometrically generated filter is required, albeit one with an unusual set of recording parameters.
• The filter is bleached and therefore introduces only phase shifts in the transmitted light.
• Certain postulates underlie this method of recording a filter.
• The maximum phase shift introduced by the filter is much smaller than 2π radians, and therefore

  t_A = e^{j\phi} \approx 1 + j\phi

• The phase shift of the transparency after bleaching is linearly proportional to the silver present before bleaching.
• The filter is exposed and processed such that operation is in the linear part of the H&D curve, where density D is linearly proportional to the logarithm of exposure:

  D = \gamma \log E + D_o

  so that

  \Delta t_A \propto \Delta D = \gamma\, \Delta(\log E) \propto \frac{\Delta E}{E}
• The exposure produced by this interferometric recording is

  E(x, y) = T\left\{ A^2 + a^2 \left| S\!\left(\tfrac{x}{\lambda f}, \tfrac{y}{\lambda f}\right) \right|^2 + 2Aa \left| S\!\left(\tfrac{x}{\lambda f}, \tfrac{y}{\lambda f}\right) \right| \cos\!\left[ 2\pi\alpha x + \phi\!\left(\tfrac{x}{\lambda f}, \tfrac{y}{\lambda f}\right) \right] \right\}

  where A is the square root of the intensity of the reference wave at the film plane, a is the square root of the intensity of the object wave at the origin of the film plane, α is the carrier frequency introduced by the off-axis reference wave, φ is the phase distribution associated with the blur transfer function S, and T is the exposure time.
• If A² ≪ a², the exposure separates into a bias and a varying part:

  E_0(x, y) = \left[ A^2 + a^2 \left| S\!\left(\tfrac{x}{\lambda f}, \tfrac{y}{\lambda f}\right) \right|^2 \right] T

  \Delta E(x, y) = 2AaT \left| S\!\left(\tfrac{x}{\lambda f}, \tfrac{y}{\lambda f}\right) \right| \cos\!\left[ 2\pi\alpha x + \phi\!\left(\tfrac{x}{\lambda f}, \tfrac{y}{\lambda f}\right) \right]

• Since Δt_A ∝ ΔE/E, retaining one sideband of the cosine yields

  \Delta t_A \propto \frac{S^*}{K + |S|^2}, \qquad K = \frac{A^2}{a^2}

  which is precisely the Wiener form, with the beam-ratio constant K playing the role of Φ_n/Φ_o.
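A small numeric check of the step above, with hypothetical recording numbers (the values of T, A, a, α, |S|, and φ below are illustrative only): the sideband amplitude of ΔE/E₀ equals 2Aa|S|/(A² + a²|S|²), i.e. it is proportional to |S|/(K + |S|²).

```python
import numpy as np

# Hypothetical recording numbers; Smag and phi stand for |S| and its
# phase at one point of the film plane.
T, A, a, alpha = 1.0, 0.1, 1.0, 8.0
Smag, phi = 0.3, 0.7
x = np.linspace(0, 1, 4096, endpoint=False)

E0 = T * (A**2 + a**2 * Smag**2)                         # bias exposure
dE = 2 * A * a * T * Smag * np.cos(2 * np.pi * alpha * x + phi)
ratio = dE / E0                                          # Delta t_A ~ ratio

amp = np.max(np.abs(ratio))                              # sideband amplitude
K = A**2 / a**2
assert np.isclose(amp, (2 * A / a) * Smag / (K + Smag**2), rtol=1e-3)
```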
Image subtraction
• Image subtraction may be of value in many applications, such as urban development, highway planning, earth-resources studies, remote sensing, meteorology, automatic surveillance, and inspection.
• Image subtraction can also be applied to communication as a means of bandwidth compression;
• for example, when it is necessary to transmit only the differences among images in successive cycles, rather than the entire image in each cycle.
• Two images, o₁(x − a, y) and o₂(x + a, y), are generated at the input spatial-domain SLM1.
• The corresponding joint-transform spectrum can be shown to be

  O(f_X, f_Y) = O_1(f_X, f_Y)\, e^{-i 2\pi a f_X} + O_2(f_X, f_Y)\, e^{i 2\pi a f_X}

  where O₁(f_X, f_Y) and O₂(f_X, f_Y) are the Fourier spectra of o₁(x, y) and o₂(x, y), respectively.
• If a bipolar Fourier-domain filter,

  H(f_X) = \sin(2\pi a f_X)

  is generated in SLM2, the output complex light field can be shown to be

  g(x, y) = C_1\left[ o_1(x, y) - o_2(x, y) \right] + C_2\left[ o_2(x + 2a, y) - o_1(x - 2a, y) \right]

  in which we see that the subtracted image can be observed around the origin of the output plane.
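The subtraction can be verified with a 1-D FFT sketch (a discrete stand-in for the optical transforms, with shifts implemented circularly): multiplying the joint spectrum by sin(2πaf_X) leaves o₁ − o₂ around the origin, carried here in the imaginary part because sin contributes a factor 1/(2i).

```python
import numpy as np

N = 1024
x = np.arange(N) - N // 2
o1 = np.exp(-((x / 6.0) ** 2))                  # sample 1-D "image" o1
o2 = 0.5 * np.exp(-(((x - 3) / 4.0) ** 2))      # sample 1-D "image" o2
a = 100                                         # half separation, in samples

inp = np.roll(o1, a) + np.roll(o2, -a)          # o1(x - a) + o2(x + a)
fX = np.fft.fftfreq(N)                          # cycles per sample
g = np.fft.ifft(np.fft.fft(inp) * np.sin(2 * np.pi * a * fX))

center = np.abs(x) < 50                         # around the output origin
assert np.allclose(-2 * g.imag[center], (o1 - o2)[center], atol=1e-6)
```

The cross terms appear at x = ±2a, well away from the checked central region.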
• We note that the preceding image-subtraction processing is, in fact, a combination of joint transformation and Fourier-domain filtering.
• Whether combining joint transformation with Fourier-domain filtering can lead to a more efficient optical processor remains to be seen.
• Now consider an image subtraction result obtained by the preceding processing strategy.
• Once again, we see that the subtracted image is severely corrupted by coherent artifact noise, which is primarily due to the sensitivity of coherent illumination.
Broadband signal processing
• Because of the high space-bandwidth product of optics, a one-dimensional large time-bandwidth signal can be analyzed by a two-dimensional optical processor.
• To do so, a broadband signal is first raster-scanned onto the input SLM.
• This raster-scanning process is, in fact, an excellent example of time-to-spatial-signal conversion.
• A one-dimensional time signal can thus be converted into a two-dimensional spatial format for optical processing.
• If we assume the return-sweep speed is adequately high compared with the maximum frequency content of the time signal, a two-dimensional raster-scanned format, which represents a long string of the time signal, can be written as

  s(x, y) = \sum_{n=1}^{N} g_n(x)\, h_n(y)

  where N = h/b is the number of scanned lines within the two-dimensional input format.
• g_n(x) is the transmittance function proportional to the time signal, written as

  g_n(x) = g\!\left[ x + (2n - 1)\frac{w}{2} \right], \qquad |x| \le \frac{w}{2}, \quad n = 1, 2, \ldots, N

  and

  h_n(y) = \mathrm{rect}\!\left[ \frac{y - nb}{a} \right]

  where rect[·] represents a rectangular function.
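The time-to-space conversion described by these definitions amounts to a reshape; a minimal sketch (with assumed sample counts standing in for w, h, and b) in NumPy:

```python
import numpy as np

samples_per_line = 200                 # plays the role of the line width w
n_lines = 50                           # number of scanned lines, N = h/b
t = np.arange(n_lines * samples_per_line)
g = np.cos(2 * np.pi * 0.031 * t)      # a sample one-dimensional time signal

s = g.reshape(n_lines, samples_per_line)   # s(x, y): row n <-> g_n(x) h_n(y)
assert np.allclose(s[2], g[2 * samples_per_line:3 * samples_per_line])
```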
• Taking the Fourier transform of the input raster-scanned format, the complex light distribution in the Fourier domain is given by

  S(f_X, f_Y) = C \operatorname{sinc}\!\left( \frac{f_Y a}{2} \right) \sum_{n=1}^{N} \tilde{S}_n(f_X)\, e^{i f_X (2n-1) w/2}\, e^{-i f_Y n b}

  where C is a complex constant and

  \tilde{S}_n(f_X) = \int s(x')\, e^{-i f_X x'}\, dx', \qquad x' = x + (2n - 1)\frac{w}{2}
• For simplicity of illustration, we assume s(x') = exp(i f_{X0} x'), a complex sinusoidal signal, with w ≃ h, b ≃ a, and N ≫ 1; the corresponding intensity distribution can then be shown to be

  I(f_X, f_Y) = K \operatorname{sinc}^2\!\left[ \frac{w}{2}(f_X - f_{X0}) \right] \operatorname{sinc}^2\!\left( \frac{f_Y a}{2} \right) \left( \frac{\sin N\gamma}{\sin \gamma} \right)^2, \qquad \gamma = \frac{w f_{X0} - b f_Y}{2}

• The first sinc factor represents a narrow spectral line located at f_X = f_{X0}.
• The second sinc factor represents a relatively broad spectral band in the f_Y direction, due to the narrow channel width a.
• The last factor deserves special mention: for large values of N, it approaches a sequence of narrow pulses located at

  f_Y = \frac{w f_{X0} - 2n\pi}{b}

  which yields a fine spectral resolution in the f_Y direction.
• The half bandwidth is

  \Delta f_Y = \frac{2\pi}{w}

  which is equal to the resolution limit of the transform lens.
• The displacement in the f_Y direction is proportional to the displacement in the f_X direction.
• As the input signal frequency changes, the position of the spectral points also changes, by the amount

  df_Y = \frac{w}{b}\, df_{X0}
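The coarse/fine readout can be demonstrated with a 2-D FFT of the raster format (a sampled stand-in for the optical transform; grid sizes are assumed): a small change in the tone frequency leaves the coarse f_X column fixed while the fine f_Y row moves, in line with df_Y = (w/b) df_{X0}.

```python
import numpy as np

def peak_bin(f0, n_lines=64, line_len=256):
    """Locate the 2-D spectral peak (row, col) of a raster-scanned tone f0."""
    t = np.arange(n_lines * line_len)
    s = np.cos(2 * np.pi * f0 * t).reshape(n_lines, line_len)
    spec = np.abs(np.fft.fft2(s))
    half = spec[:, 1:line_len // 2]   # keep one of each conjugate pair
    iy, ix = np.unravel_index(np.argmax(half), half.shape)
    return iy, ix + 1

p1 = peak_bin(0.1000)
p2 = peak_bin(0.1001)
assert p1[1] == p2[1]   # coarse position (fX column) unchanged
assert p1[0] != p2[0]   # fine position (fY row) shifted
```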
• To conclude this section, we note that one interesting application of optical processing of broadband signals is synthetic-aperture radar.
• A broadband microwave signal is first converted into a two-dimensional raster-scanned format, similar to the preceding example.
• If the raster-scanned format is presented at the input plane of a specially designed optical processor, an optical radar image can be observed at the output plane.