
Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Korea, August 20-22, 2008. Paper TM3-1.

Enhancement of Image Degraded by Fog Using Cost Function Based on Human Visual Model
Dongjun Kim, Changwon Jeon, Bonghyup Kang and Hanseok Ko

Abstract: In foggy weather conditions, images become degraded due to the presence of airlight, which is generated by the scattering of light by fog particles. In this paper, we propose an effective method to correct the degraded image by subtracting an estimated airlight map from the degraded image. The airlight map is generated using multiple linear regression, which models the relationship between the regional airlight and the coordinates of the image pixels. Airlight is estimated using a cost function based on the human visual model, wherein a human is less sensitive to variations of luminance in bright regions than in dark regions. For this purpose, the luminance image, generated by an appropriate fusion of the R, G, and B components, is employed for airlight estimation. Representative experiments on real foggy images confirm significant enhancement in image quality over the degraded image.
I. INTRODUCTION

Fog is a phenomenon caused by tiny droplets of water in the air; it reduces visibility to less than 1 km. In foggy weather, images are also degraded by additive light produced when light is scattered by fog particles. This additive light is called airlight. There have been some notable efforts to restore images degraded by fog. The most common method for enhancing degraded images is histogram equalization. However, even though global histogram equalization is simple and fast, it is not suitable because the fog's effect on an image is a function of the distance between the camera and the object. Subsequently, a partially overlapped sub-block histogram equalization was proposed in [1], but the physical model of fog was not adequately reflected in that effort. While Narasimhan and Nayar were able to restore images using a scene-depth map [2], their method required two images taken under different weather conditions. Grewe and Brooks suggested a wavelet-based method to enhance pictures blurred by fog [3]; once again, this approach required several images. Polarization filtering has also been used to reduce the effect of fog on images [4, 5]. It assumes that natural light is not polarized and that

scattered light is polarized. However, this method does not guarantee significant improvement for images with dense fog. Oakley and Bu suggested a simple correction of contrast loss in foggy images [6], in which the airlight is estimated from a color image using a cost function applied to the RGB channels. However, it assumes that the airlight is uniform over the whole image. In this paper, we improve the method of [6] to make it applicable even when the airlight distribution is not uniform over the image. In order to estimate the airlight, a cost function based on the human visual model is applied to the luminance image, which is obtained by an appropriate fusion of the R, G, and B components. The airlight map is then estimated by least squares fitting, which models the relationship between the regional airlight and the coordinates of the image pixels. The structure of this paper is as follows. In Section II, we propose a method to estimate the airlight map and restore the foggy image. We present experimental results and conclusions in Sections III and IV respectively. The structure of the algorithm is shown in Fig. 1.

Fig. 1 Structure of the algorithm

II. PROPOSED ALGORITHM

A. Fog Effect on Image and Fog Model

A foggy image is degraded by airlight caused by the scattering of light by fog particles in the air, as depicted in Fig. 2 (right).

Manuscript received May 1, 2008. Dongjun Kim, Changwon Jeon and Hanseok Ko (Corresponding author) are with School of Electrical Engineering, Korea University, Seoul, Korea. (e-mail: djkim@ispl.korea.ac.kr; cwjeon@ispl.korea.ac.kr; hsko@ispl.korea.ac.kr). Bonghyup Kang is with Samsung Techwin CO., LTD. (e-mail : bh47.kang@samsung.com)

Fig. 2 Comparison of the clear image (left) and the fog image (right)

978-1-4244-2144-2/08/$25.00 © 2008 IEEE

Airlight acts as an additional source of light, as modeled in [6] and Eqn (1) below.

I'_{R,G,B} = I_{R,G,B} + λ_{R,G,B}    (1)

where I'_{R,G,B} is the degraded image, I_{R,G,B} is the original image, and λ_{R,G,B} represents the airlight for the Red, Green, and Blue channels. This relationship applies when the airlight is uniform throughout the whole image. However, the contribution of airlight is usually not uniform over the image, because it is a function of the visual depth, i.e., the distance between the camera and the object. Therefore, the model can be modified to reflect the depth dependence as follows.

I'_{R,G,B}(d) = I_{R,G,B}(d) + λ_{R,G,B}(d)    (2)

where d represents depth. Unfortunately, it is very difficult to estimate the depth from a single image taken in foggy weather, so we instead build an airlight map that models the relationship between the coordinates of the image pixels and the airlight. Since the amounts of scattering of visible light by large particles such as fog droplets are almost identical across wavelengths, the luminance component alone is used to estimate the airlight instead of estimating the R, G, and B components separately. The luminance image is obtained by a fusion of the R, G, and B components; accordingly, the color space is transformed from RGB to YCbCr. Therefore Eqn (2) can be re-expressed as follows.

Y'(i,j) = Y(i,j) + λ_Y(i,j)    (3)

where Y' and Y are the degraded and clear luminance images respectively at position (i,j), and λ_Y is the estimated airlight map for the luminance image. The shift of mean(Y) can be confirmed in Fig. 3.

B. Region Segmentation

We suggest estimating the airlight for each region and modeling the relationship between the regional airlight and the coordinates within the image to generate the airlight map. In an image with varying depth, the contribution of airlight differs from region to region, so estimating the airlight per region can reflect the variation of depth within the image. Regions are segmented uniformly to estimate the regional contribution of airlight.
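The additive fog model of Eqns (1)-(3) can be sketched numerically. The snippet below is an illustrative numpy sketch, not the authors' code; the BT.601 luma weights used for the RGB-to-Y fusion are a standard assumption, since the paper does not state its conversion.

```python
import numpy as np

def add_airlight(img_rgb, airlight):
    """Degrade a clear image by adding airlight, as in Eqn (1)/(2).

    img_rgb: float array in [0, 255] of shape (H, W, 3).
    airlight: scalar, or an (H, W) airlight map varying with position.
    """
    airlight = np.asarray(airlight, dtype=float)
    if airlight.ndim == 2:          # per-pixel map: broadcast over channels
        airlight = airlight[..., None]
    return np.clip(img_rgb + airlight, 0.0, 255.0)

def luminance(img_rgb):
    """Fuse the R, G, B components into the luminance Y (BT.601 weights)."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# A flat gray scene under uniform airlight: mean(Y) shifts upward (cf. Fig. 3).
clear = np.full((4, 4, 3), 80.0)
foggy = add_airlight(clear, 60.0)
print(luminance(foggy).mean() > luminance(clear).mean())  # True
```

Passing an (H, W) array as `airlight` gives the position-dependent model of Eqn (2).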

Fig. 4 Region segmentation

C. Estimate Airlight

In order to estimate the airlight, we improve the cost function method of [6] using a compensation based on the human visual model. In Eqn (3), the airlight must be estimated to restore the image degraded by fog; to estimate it, the human visual model is employed. As described by Weber's law, a human is less sensitive to variations of luminance in bright regions than in dark regions.

ΔS = k · ΔR / R    (5)

where R is an initial stimulus, ΔR is the variation of the stimulus, and ΔS is the resulting variation of sensation. In foggy weather conditions, when the luminance is already high, a human is insensitive to variations in the luminance. We can estimate the existing stimulus in the image signal by the mean of the luminance within a region, and the variation of the stimulus by the standard deviation within the region. Thus the human visual model estimates the variation of sensation as

STD(Y) / mean(Y) = √( (1/n) Σ_{i=1}^{n} (y_i − Ȳ)² ) / Ȳ    (6)

where Ȳ is the mean value of Y. Note that the value of Eqn (6) for a foggy image, STD(Y')/mean(Y'), is relatively small, since the numerator is small and the denominator is large.

Fig. 3 Comparison of the Y Histogram

In order to restore the image blurred by fog, we need to estimate the airlight map and subtract the airlight from the foggy image as follows.

Y(i,j) = Y'(i,j) − λ_Y(i,j)    (4)

In this model, Y represents the restored image and λ_Y is the estimated airlight map.
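Eqn (6) can be sanity-checked on synthetic data. In this hypothetical numpy sketch, fog is simulated by compressing contrast and adding a uniform luminance offset, which shrinks the numerator STD(Y) and inflates the denominator mean(Y); the 0.4 and 120 constants are arbitrary choices of the sketch.

```python
import numpy as np

def sensation_variation(y):
    """Eqn (6): standard deviation of the luminance divided by its mean."""
    return y.std() / y.mean()

rng = np.random.default_rng(0)
y_clear = rng.uniform(16, 235, size=(64, 64))   # well-spread luminance
y_foggy = 0.4 * y_clear + 120.0                 # compressed contrast + airlight

# The foggy image yields the smaller value, as noted for Eqn (6).
print(sensation_variation(y_foggy) < sensation_variation(y_clear))  # True
```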

A(λ) = STD(Y' − λ) / mean(Y' − λ)    (7)

In Eqn (7), increasing λ causes an increase in A(λ), which means that a human can perceive the variation in the luminance. However, if the absolute value of the luminance is too small, the image is not only too dark, but the human visual sense also becomes insensitive to the variations in the luminance that still exist. To compensate for this, a second function is generated as follows.

B'(λ) = mean(Y') − λ    (8)

Eqn (8) carries information about the mean of the luminance. For a foggy image, the result of Eqn (8) is relatively large, and increasing λ causes a decrease in B'(λ), which means that the overall brightness of the image decreases. Functions (7) and (8) are on different scales, so Function (8) is re-scaled to produce Eqn (9), chosen so that A(λ) − B(λ) = 0 when the input image is Ideal. Here Ideal denotes an ideal image whose luminance is uniformly distributed from the minimum to the maximum of the luminance range; in general, the maximum value is 235 and the minimum value is 16.

B(λ) = (mean(Y') − λ) · STD(Ideal) / mean(Ideal)²    (9)

To correct the blurring due to fog, edge enhancement is also performed.

Y_deblur(i,j) = Y(i,j) + s · g(i,j)    (11)

where g(i,j) is the inverse-Fourier-transformed signal after high-pass filtering, s is a constant that determines the strength of the enhancement, and Y_deblur(i,j) is the de-blurred luminance image.
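Eqn (11) can be illustrated with a frequency-domain high-pass filter. The paper does not specify its filter, so the Gaussian high-pass and the strength s = 0.5 below are assumptions of this sketch.

```python
import numpy as np

def high_pass(y, sigma=5.0):
    """g(i,j): inverse FFT of the luminance after a Gaussian high-pass filter."""
    h, w = y.shape
    spec = np.fft.fftshift(np.fft.fft2(y))
    ii, jj = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2,
                         indexing="ij")
    low = np.exp(-(ii ** 2 + jj ** 2) / (2.0 * sigma ** 2))  # Gaussian low-pass
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * (1.0 - low))))

def deblur(y, s=0.5):
    """Eqn (11): Y_deblur(i,j) = Y(i,j) + s * g(i,j)."""
    return y + s * high_pass(y)

y = np.zeros((32, 32))
y[:, 16:] = 200.0                      # a single vertical edge
enhanced = deblur(y)
# Overshoot and undershoot around the edge widen the dynamic range.
print(enhanced.max() - enhanced.min() > y.max() - y.min())  # True
```

Zeroing the DC bin means g has (numerically) zero mean, so the enhancement sharpens edges without shifting overall brightness.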

Fig. 5 Generation of airlight map

For a dense foggy image, the result of A(λ) − B(λ) is relatively large when λ is small. Increasing λ causes a decrease in A(λ) − B(λ); if λ is too large, |A(λ) − B(λ)| increases again, which means the image becomes dark. The λ satisfying Eqn (10) is the estimated airlight.

λ* = arg min_λ | A(λ) − B(λ) |    (10)

D. Estimate Airlight Map Using Multiple Linear Regression

Objects in the image are usually located at different distances from the camera, so the contribution of the airlight in the image also differs with depth. In most cases, the depth varies with the row or column coordinates of the image scene. This paper suggests modeling the relationship between the coordinates and the airlight values obtained from each region. The airlight map is generated by multiple linear regression using least squares (Fig. 5).

E. Restoration of luminance image

In order to restore the luminance image, the estimated airlight map is subtracted from the degraded image as in Eqn (4).

F. Post-Processing

The fog particles absorb a portion of the light in addition to scattering it. By changing the color space from YCbCr back to RGB, I_{R,G,B} can be obtained. After the color space conversion, histogram stretching is performed as a post-processing step.

Ĩ_{R,G,B} = 255 · ( I_{R,G,B} − min(I_{R,G,B}) ) / ( max(I_{R,G,B}) − min(I_{R,G,B}) )    (12)

where Ĩ_{R,G,B} is the result of histogram stretching, max(I_{R,G,B}) is the maximum value of I_{R,G,B} (the input to post-processing), and min(I_{R,G,B}) is its minimum value.
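The pipeline of Sections B-F can be tied together in a compact sketch: split the luminance into a uniform grid, choose each region's airlight by minimizing |A(λ) − B(λ)| over candidate values (Eqns (7)-(10)), fit the airlight map as a linear function of the pixel coordinates by least squares, subtract it, and stretch the histogram (Eqn (12)). This is an illustrative numpy reconstruction; the grid size, the candidate step, and the plane model λ(i,j) ≈ a·i + b·j + c are assumptions of the sketch, not constants from the paper.

```python
import numpy as np

IDEAL = np.linspace(16, 235, 220)            # ideal uniform luminance histogram
K = IDEAL.std() / IDEAL.mean() ** 2          # rescaling factor from Eqn (9)

def region_airlight(y):
    """Eqn (10): argmin over lambda of |A(lambda) - B(lambda)| for one region."""
    lams = np.arange(0.0, y.mean(), 1.0)     # candidate airlight values
    a = y.std() / (y.mean() - lams)          # Eqn (7): A(lambda)
    b = (y.mean() - lams) * K                # Eqn (9): B(lambda)
    return lams[np.argmin(np.abs(a - b))]

def airlight_map(y, grid=4):
    """Fit lambda(i,j) ~ a*i + b*j + c to per-region estimates (least squares)."""
    h, w = y.shape
    coords, vals = [], []
    for r in range(grid):
        for c in range(grid):
            block = y[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid]
            coords.append(((r + 0.5) * h / grid, (c + 0.5) * w / grid, 1.0))
            vals.append(region_airlight(block))
    coef, *_ = np.linalg.lstsq(np.array(coords), np.array(vals), rcond=None)
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return coef[0] * ii + coef[1] * jj + coef[2]

def stretch(img):
    """Eqn (12): histogram stretching to the full 0-255 range."""
    return 255.0 * (img - img.min()) / (img.max() - img.min())

# Synthetic check: a clear scene plus a uniform airlight of 60 (no clipping).
rng = np.random.default_rng(1)
y_clear = rng.uniform(16, 235, size=(64, 64))
y_foggy = y_clear + 60.0
restored = stretch(y_foggy - airlight_map(y_foggy))
```

On this synthetic image the fitted map recovers an airlight close to 60 in each region, because the clear luminance is roughly uniform over [16, 235], which is exactly the Ideal distribution the cost function is calibrated to.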


Fig. 6 Histogram stretching

III. RESULTS

The experiment is performed on a 3.0 GHz Pentium 4 using MATLAB. The experimental results for images taken in foggy weather are shown in Fig. 8. In order to evaluate the performance, we calculate the contrast, the colorfulness, and the sum of the gradient, which is based on the importance of edges. Contrast and colorfulness are improved by 147% and 430% respectively over the foggy image. In addition, the sum of the gradient is improved by 201% compared to the foggy image.
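The three evaluation measures can be approximated as follows. The paper does not give their formulas, so this sketch assumes RMS contrast, the Hasler-Süsstrunk colourfulness statistic, and a plain sum of gradient magnitudes as stand-ins.

```python
import numpy as np

def rms_contrast(y):
    """RMS contrast: standard deviation of the luminance image."""
    return float(y.std())

def colourfulness(img_rgb):
    """Hasler-Suesstrunk colourfulness statistic on opponent channels."""
    r, g, b = (img_rgb[..., k].astype(float) for k in range(3))
    rg, yb = r - g, 0.5 * (r + g) - b
    return float(np.hypot(rg.std(), yb.std())
                 + 0.3 * np.hypot(rg.mean(), yb.mean()))

def sum_of_gradient(y):
    """Sum of gradient magnitudes, weighting strong edges."""
    gi, gj = np.gradient(y.astype(float))
    return float(np.hypot(gi, gj).sum())

# A featureless gray patch scores zero on contrast and edge strength.
flat = np.full((8, 8), 128.0)
print(rms_contrast(flat), sum_of_gradient(flat))  # 0.0 0.0
```

A successful defogging should raise all three scores relative to the foggy input.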


Fig. 7 Result of the evaluation (foggy vs. enhanced image)

Fig. 8 Results of Image Enhancement by Defogging

IV. CONCLUSIONS

In this paper, we propose to estimate the airlight using a cost function based on the human visual model and to generate the airlight map by modeling the relationship between the image coordinates and the airlight. The image blurred by fog is restored by subtracting the airlight map from the degraded image. In order to evaluate the performance, we calculated the contrast, colorfulness, and sum of gradient; the results confirm a significant improvement over the degraded image. In the future, we plan to investigate a methodology for estimating the depth map from a single image. In addition, the enhancement of images degraded by bad weather other than fog will be investigated.

ACKNOWLEDGMENT

This work is supported by Samsung Techwin CO., LTD. Thanks to Professor Colin Fyfe for his valuable advice and comments.

REFERENCES

[1] Y. S. Zhai and X. M. Liu, "An improved fog-degraded image enhancement algorithm," Proc. International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR '07), vol. 2, 2007.
[2] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 713-724, 2003.
[3] E. Namer and Y. Y. Schechner, "Advanced visibility improvement based on polarization filtered images," Proc. SPIE, vol. 5888, pp. 36-45, 2005.
[4] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Polarization-based vision through haze," Applied Optics, vol. 42, pp. 511-525, 2003.
[5] J. P. Oakley and H. Bu, "Correction of simple contrast loss in color images," IEEE Transactions on Image Processing, vol. 16, pp. 511-522, 2007.
[6] Y. Yitzhaky, I. Dror, and N. S. Kopeika, "Restoration of atmospherically blurred images according to weather-predicted atmospheric modulation transfer functions," Optical Engineering, vol. 36, p. 3064, 1997.
[7] K. K. Tan and J. P. Oakley, "Physics-based approach to color image enhancement in poor visibility conditions," Journal of the Optical Society of America A, vol. 18, pp. 2460-2467, 2001.
[8] R. S. Sirohi, "Effect of fog on the colour of a distant light source," Journal of Physics D: Applied Physics, vol. 3, pp. 96-99, 1970.
[9] S. Shwartz, E. Namer, and Y. Y. Schechner, "Blind haze separation," Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2006.
