Codruta Orniana Ancuti, Cosmin Ancuti, Tom Haber and Philippe Bekaert
Hasselt University - tUL -IBBT, EDM, Belgium
ABSTRACT
In this paper we introduce a novel strategy that effectively
enhances the visibility of underwater images. Our method
builds on a fusion strategy that takes a sequence of inputs
derived from the initial image. Practically, our fusion-based
method aims to yield a final image that overcomes the deficiencies of the degraded input by employing
several weight maps that discriminate regions characterized by poor visibility. Extensive experiments demonstrate the utility of our solution: the visibility range of
underwater images is significantly increased by improving both the scene contrast and the color appearance.
Fig. 1. Underwater image and our result.
1. INTRODUCTION
When photographs are taken in turbid media such as underwater, hazy or foggy conditions, the visibility of the scene is
degraded significantly. This is due to the fact that the radiance
of a point in the scene is directly influenced by the medium
scattering. Practically, distant objects and parts of the scene
suffer from poor visibility, loss of contrast and faded colors.
Recently, there has been growing interest in restoring the
visibility of images altered by such atmospheric conditions. Recovering such degraded images is important
for various applications such as oceanic engineering and research in marine biology, archeology, surveillance etc.
Underwater visibility has typically been investigated with
acoustic imaging and optical imaging systems.
Acoustic sensors have the major advantage of penetrating water much more easily, despite their lower spatial resolution in
comparison with optical systems [1]. However, acoustic
sensors become very large when aiming for high resolution
outputs [1]. On the other hand, optical systems, despite several shortcomings [2] such as poor underwater visibility, have
recently been applied by analyzing the physical effects of visibility degradation. Mainly, the existing techniques employ
several images of the same scene registered with different
states of polarization, both for underwater images [1, 3] and
for hazy inputs [4]. Dehazing techniques [5, 6, 7]
have also been related to the underwater restoration problem,
but in our experiments these techniques showed limitations in
tackling it (see figure 1).
In this paper we introduce a novel technique to restore
underwater images that differs from most of the existing techniques.
2. FUSION-BASED RESTORATION
Our single image approach is built on a multi-scale fusion
technique that defines several inputs derived from the
original input image. To obtain an image with enhanced visibility, each region of the image needs to be characterized
by its optimal appearance somewhere in the input sequence. To generate suitable inputs we searched for appropriate enhancement
methods. Although most image enhancement methods are
able to improve the visibility in some areas to a certain degree,
they may introduce limitations such as loss of contrast or
clipping of details in other regions
of the image. Our fusion-based approach has the advantage of
selecting, based on the weight-map characteristics, the appropriate pixels from each input and blending them into a final enhanced
version.
The proposed technique consists of three main steps.
Firstly, we derive the sequence of input images characterized
by the desired details that need to be preserved in the restored result. Secondly, the weight maps that rate the locally important information are defined. Finally, the final output is composed by employing a classical
multi-scale fusion strategy. An important advantage of
our strategy is that underwater image enhancement may be performed reliably even when the distance map (transmission) has
not previously been estimated.
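As a rough illustration only, the three steps can be sketched as follows; the input derivation, weight computation, and blending used here are simplified stand-ins for the operators described in the remainder of this section, not the authors' actual implementation:

```python
import numpy as np

def restore(img):
    """End-to-end sketch of the three-step pipeline on a grayscale
    float image in [0, 1]; every operator here is a simplified stand-in."""
    # Step 1: derive the inputs (here: the image and a contrast-stretched copy).
    lo, hi = img.min(), img.max()
    inputs = [img, (img - lo) / max(hi - lo, 1e-6)]
    # Step 2: weight maps rating locally important information
    # (here: a simple gradient-magnitude stand-in).
    weights = [np.hypot(*np.gradient(i)) + 1e-3 for i in inputs]
    # Normalize so the weights sum to one at every pixel.
    total = np.sum(weights, axis=0)
    weights = [w / total for w in weights]
    # Step 3: blend the inputs according to the normalized weights
    # (single-scale here; the paper uses multi-scale fusion).
    return np.sum([w * i for w, i in zip(weights, inputs)], axis=0)
```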
2.1. Inputs
In our restoration approach the first input is the white balanced version of the initial image. To obtain the color corrected
image the algorithm equalizes the mean values of
the basic R, G, B color channels. This step is important since
the color channels of underwater images are rarely
balanced. We perform a linear adjustment of the histogram
by stretching the original mean value to the desired average
value of the scene. Additionally, the mean reference value
(default 0.5) is increased by a small fraction (0.15) of
the actual scene mean in order to preserve the gray value
and to obtain the desired appearance of the white objects existing in the scene. More sophisticated white balancing
techniques may be applied as well, but in our experiments
this simple and effective white balancing technique yielded
accurate results while also having the advantage of
low processing costs.
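A minimal sketch of this first input, assuming an RGB float image in [0, 1]; the parameter names `ref` and `delta` are illustrative, not from the paper:

```python
import numpy as np

def white_balance(img, ref=0.5, delta=0.15):
    """Gray-world style white balance sketch of the paper's first input.

    Each channel is linearly stretched so that its mean matches a common
    reference value; the reference (default 0.5) is raised by a small
    fraction `delta` of the overall scene mean, as described in the text.
    """
    scene_mean = img.mean()
    target = ref + delta * scene_mean
    out = np.empty_like(img)
    for c in range(3):
        ch_mean = img[..., c].mean()
        # Scale the channel so its mean moves to the shared target value.
        out[..., c] = img[..., c] * (target / max(ch_mean, 1e-6))
    return np.clip(out, 0.0, 1.0)
```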
However, white balancing alone cannot entirely solve the
problem of contrast loss. Therefore, we define a second input obtained by applying the classical global min-max
windowing method [8], which enhances the image appearance
within a selected intensity window. This simple technique effectively exploits the image coherence by enhancing the contrast within a subrange of the intensity values at the expense
of the remaining intensity values. To compute the parameters
Imin and Imax required for the normalization process, we search for the median value within a small percentage of the minimal and maximal values (default 10%). Since we observed that with only
these two inputs the information outside the window is not
well depicted, we introduce a third input defined as the
arithmetic mean of the two previously defined inputs. Practically, this third input, together with the first one, aims to
compensate the loss of contrast for the outlier details.
2.2. Weights
Besides the inputs, a crucial step of fusion-based techniques is the definition of the weights. Our algorithm is
guided by four weights, explained in the following:
Luminance weight map controls the luminance gain of
the final result, since the general appearance of the degraded
input photo tends to become flat. This weight is computed as
the standard deviation of the R, G and B color channels around the luminance L of the input. It generates high values
correlated with the preservation degree of each input region,
while the multi-scale blending ensures a seamless transition
between the inputs. Although this map may enhance the degraded input, it may also reduce the image contrast and
the colorfulness. These undesired effects are balanced in our
strategy by three additional weights: contrast (local
contrast), saliency (global contrast) and chromatic (colorfulness).
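The luminance weight described above can be sketched as follows; using the channel mean as the luminance L is an assumption for illustration:

```python
import numpy as np

def luminance_weight(img):
    """Luminance weight map sketch: per-pixel standard deviation of the
    R, G, B values around the luminance L, as described in the text."""
    lum = img.mean(axis=2)  # simple luminance estimate (assumption)
    # Deviation of each color channel from the luminance, averaged per pixel.
    return np.sqrt(((img - lum[..., None]) ** 2).mean(axis=2))
```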
Contrast weight map yields high values for image elements such as edges and texture. To generate this map we
rely on an effective contrast indicator built on a Laplacian
filter computed on the grayscale version of each input. A similar local contrast estimator has been employed for tasks such
as multi-focus fusion and extended depth-of-field [9].
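A minimal sketch of this contrast indicator, using a 4-neighbour discrete Laplacian (the exact filter size is not specified in the text):

```python
import numpy as np

def contrast_weight(img):
    """Contrast weight map sketch: absolute value of a Laplacian filter
    applied to the grayscale input, so edges and texture get high weight."""
    gray = img.mean(axis=2)
    pad = np.pad(gray, 1, mode="edge")
    # 4-neighbour discrete Laplacian.
    lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2]
           + pad[1:-1, 2:] - 4.0 * gray)
    return np.abs(lap)
```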
Chromatic weight map is designed to control the saturation gain of the result. This map may be seen as a basic
saturation indicator that computes for every pixel the distance
between the saturation value S and the maximum of the saturation range using a Gauss curve: d = exp(-(S - Smax)^2 / (2 sigma^2)),
with a standard deviation sigma = 0.3. Since images with increased saturation are preferred, this chromatic map assigns higher values to well-saturated pixels.
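A sketch of the chromatic weight; taking saturation as the standard deviation of the R, G, B values is an assumption here (an HSV saturation channel would also fit the description):

```python
import numpy as np

def chromatic_weight(img, sigma=0.3, s_max=1.0):
    """Chromatic weight map sketch: a Gauss curve on the distance between
    each pixel's saturation S and the maximum of the saturation range,
    with sigma = 0.3 as in the text."""
    sat = img.std(axis=2)  # saturation stand-in (assumption)
    return np.exp(-((sat - s_max) ** 2) / (2.0 * sigma ** 2))
```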
Saliency weight map is a quality map that estimates the
degree of conspicuousness with respect to the neighborhood
regions. This value is effectively computed based on the formulation introduced by Achanta et al. [10]. Their strategy is
motivated by the biological concept of center-surround contrast. The saliency weight at pixel position (x, y) of input Ik
is defined as:

WS(x, y) = ||Īk - Ik,ωhc(x, y)||   (1)
where Īk represents the arithmetic mean pixel value of the input Ik, while Ik,ωhc is the blurred version of the same input that
aims to remove high frequency noise and textures. Ik,ωhc is obtained by employing a small 5×5 separable binomial kernel
(1/16 [1, 4, 6, 4, 1]) with the high frequency cut-off value
ωhc = π/2.75. For small kernels the binomial kernel is a
good approximation of its Gaussian counterpart, but it can be
computed more efficiently. Besides being fast to compute, the
obtained maps are characterized by well-defined boundaries
and uniformly highlighted salient regions, even at high resolution scales.

Fig. 2. Underwater restoration results. The top row presents the input underwater images. Our restored results (bottom row)
demonstrate a significant improvement of the visibility range even compared with the white balanced versions (middle row).
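The saliency weight can be sketched as below, applying the 1/16 [1, 4, 6, 4, 1] binomial kernel separably and measuring the distance to the per-channel mean; edge padding is an implementation assumption:

```python
import numpy as np

def saliency_weight(img):
    """Saliency weight sketch following Achanta et al.'s formulation:
    distance between the mean image value and a blurred version of the
    input, blurred with the separable 5-tap binomial kernel."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

    def blur_1d(a, axis):
        pad = [(0, 0)] * a.ndim
        pad[axis] = (2, 2)
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for i, k in enumerate(kernel):
            sl = [slice(None)] * a.ndim
            sl[axis] = slice(i, i + a.shape[axis])
            out += k * ap[tuple(sl)]
        return out

    blurred = blur_1d(blur_1d(img, 0), 1)       # Ik,ωhc
    mean_val = img.mean(axis=(0, 1))            # Īk, per channel
    return np.sqrt(((blurred - mean_val) ** 2).sum(axis=2))
```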
Finally, once the weights are obtained, we employ the
normalized weight values W̄k, obtained by constraining the sum
of the weight maps at each pixel location to equal one.
This processing step is required to yield consistent results and
also ensures that the final result fits into the original scale.
The result F is obtained by blending the inputs Ik weighted by the
corresponding normalized weight maps W̄k:

F(i, j) = Σk W̄k(i, j) Ik(i, j)   (2)

In the multi-scale fusion strategy, each input Ik is decomposed into a Laplacian pyramid Ll{Ik} and each normalized weight map W̄k into a Gaussian pyramid Gl{W̄k}, and the blending of equation (2) is performed independently at every pyramid level l:

Fl(i, j) = Σk Gl{W̄k(i, j)} Ll{Ik(i, j)}   (3)
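A compact NumPy sketch of this multi-scale fusion for grayscale inputs; the pyramid depth, the binomial blur, and the repeat-based upsampling are implementation assumptions, not details from the paper:

```python
import numpy as np

def _blur(a):
    """5-tap binomial blur (1/16 [1,4,6,4,1]) applied separably."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    for axis in (0, 1):
        pad = [(0, 0)] * a.ndim
        pad[axis] = (2, 2)
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for i, w in enumerate(k):
            sl = [slice(None)] * a.ndim
            sl[axis] = slice(i, i + a.shape[axis])
            out += w * ap[tuple(sl)]
        a = out
    return a

def _down(a):
    return _blur(a)[::2, ::2]

def _up(a, shape):
    out = np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return _blur(out)

def fuse(inputs, weights, levels=3):
    """Multi-scale fusion sketch of Eq. (2)/(3): Gaussian pyramids of the
    normalized weights blend the Laplacian pyramids of the inputs, then
    the fused pyramid is collapsed."""
    # Normalize the weights so they sum to one at every pixel.
    wsum = np.sum(weights, axis=0) + 1e-6
    weights = [w / wsum for w in weights]
    pyr = []
    for img, w in zip(inputs, weights):
        gp_i, gp_w = [img], [w]
        for _ in range(levels - 1):
            gp_i.append(_down(gp_i[-1]))
            gp_w.append(_down(gp_w[-1]))
        # Laplacian pyramid of the input: band-pass levels + low-pass top.
        lp = [gp_i[l] - _up(gp_i[l + 1], gp_i[l].shape)
              for l in range(levels - 1)] + [gp_i[-1]]
        contrib = [gp_w[l] * lp[l] for l in range(levels)]
        pyr = contrib if not pyr else [p + c for p, c in zip(pyr, contrib)]
    # Collapse the fused pyramid from coarse to fine.
    out = pyr[-1]
    for l in range(levels - 2, -1, -1):
        out = pyr[l] + _up(out, pyr[l].shape)
    return np.clip(out, 0.0, 1.0)
```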
Fig. 3. Initial image and our result.