
Optimizing Low Resolution Image Through Multi-Scale Retinex Algorithm, Gaussian Filtering with Edge Preservation and Face Hallucination Super Resolution Algorithm

Ian Dave T. Padilla, Madelaine D. Quines, and Micah Angela A. Sarte
School of Information Technology, Mapua Institute of Technology, Makati City

Abstract. Digital cameras are very prevalent nowadays, and almost everyone uses them to capture people and scenes. However, not all cameras are capable of producing high resolution images. Because of this, the proponents set out to improve low resolution images into high resolution images. The main problem addressed by this study is the unrecognizable subject in low quality images taken with standard resolution cameras. Such images may contain pixelated details, excessive noise, and imbalanced brightness and contrast. The proponents used three algorithms: Gaussian Filtering with Edge Preservation for noise reduction, the Multi-Scale Retinex Algorithm for uneven illumination, and a Super Resolution Algorithm to reconstruct the facial features of the low resolution images. Experimental results showed that the combination of the Multi-Scale Retinex Algorithm, Gaussian Filtering with Edge Preservation, and the Face Hallucination Super Resolution Algorithm significantly improved the quality of face images taken with a low resolution camera. Results also showed that high resolution versions of the low resolution inputs significantly helped the reconstruction of their facial features. 75% of the low resolution inputs were reconstructed into better quality images, which shows that the combination of the algorithms used is effective.
Keywords: Face hallucination; Super-resolution; Residue compensation.

Image Processing
Image processing is the computer manipulation of images: enhancing, editing, restoring and otherwise processing an image with different types of algorithms. Image processing has several branches: image enhancement, image reconstruction and image compression. Image enhancement is subdivided into two methods, the contrast enhancement method and the spatial filtering method. Contrast enhancement improves the perceptibility of objects in a scene by enhancing their brightness [1]. The spatial filtering method consists of a neighborhood and a predefined operation that is performed on the image pixels encompassed by the neighborhood [1].
Image processing by reconstruction is concerned with filtering the observed image to minimize the effect of degradations [2]. Image processing by compression involves the discrete cosine transform (DCT), which converts data into sets of frequencies and minimizes the number of bits required to represent an image [3].
Some traditional algorithms used in image enhancement can only enhance a single feature of an image, such as compressing its dynamic range or enhancing its edges [4]. The Retinex algorithm will be used in this study since it can balance dynamic range compression and edge enhancement at the same time. The algorithm is based on a model of the lightness and color perception of human vision [5] and is used to improve the illumination and reflectance of images captured by a camera under uneven lighting. Different Retinex techniques have been developed, including Single Scale Retinex (SSR) and Multi-Scale Retinex (MSR). SSR is defined for a point in an image, while the output of MSR is the weighted sum of several SSR outputs [31]. In comparison, MSR balances dynamic range compression and color rendition better than SSR but is not desirable for colored pictures on its own. Further improvements on MSR have been developed, such as Multi-Scale Retinex with Color Restoration (MSRCR) and Multi-Scale Retinex with canonical gain/offset. MSRCR improves the application of MSR to colored pictures, while MSR with canonical gain/offset is used for better contrast [5].
This thesis research will combine three different methods to solve the problem: Gaussian filtering with edge preservation for noise reduction, the Multi-Scale Retinex Algorithm for image illumination, and hallucination by super resolution for recovering the high-frequency information lost during the image formation process.

Proposed Method
a. Pre-Processing
Generally, pre-processing is composed of three steps. The first step is to normalize the image; the next step is to remove noise and provide a shadow effect using Gaussian filtering with edge preservation. Afterwards, the image will be converted to greyscale. The converted greyscale image will then be passed to the illumination process, which uses the Multi-Scale Retinex algorithm.

The image captured by the camera will be normalized and set to a constant resolution. It will then be pre-processed to remove noise and adjust its contrast.
For noise reduction, the noise standard deviation of the image is first estimated using Immerkaer's fast method. Each element in the filtering window is then subtracted from the center pixel to obtain the absolute difference between the center pixel and the surrounding pixels in the window; this difference is large when the image is heavily corrupted. The difference is compared with a threshold, which is obtained by multiplying the smoothing factor by the noise standard deviation. A smoothing factor of two is indicated for best performance; a higher smoothing factor gives stronger noise reduction at the cost of image detail. The number of pixels taken into consideration in a filtering window (those whose difference falls within the threshold) should be at least five. If this condition is not satisfied, the window size is increased and the procedure repeated until at least five pixels in the window qualify. The center pixel is then replaced by the mean of the qualifying pixels, and the procedure is repeated over the whole image.
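
As a rough illustration, the following Python/NumPy sketch implements this adaptive filtering step under the assumption that the pixels "taken into consideration" are those within the threshold of the center pixel; the Immerkaer estimate uses its standard 3x3 operator. The function names and the maximum window size are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def immerkaer_noise_std(img):
    """Fast noise standard deviation estimate (Immerkaer's method)."""
    # 3x3 operator that cancels image structure and responds mostly to noise.
    M = np.array([[1, -2, 1],
                  [-2, 4, -2],
                  [1, -2, 1]], dtype=float)
    h, w = img.shape
    response = convolve(img.astype(float), M, mode='reflect')
    return np.sqrt(np.pi / 2.0) * np.abs(response).sum() / (6.0 * (w - 2) * (h - 2))

def adaptive_mean_filter(img, smoothing_factor=2.0, min_pixels=5,
                         start_size=3, max_size=11):
    """Replace each pixel by the mean of window pixels whose absolute
    difference from the centre is within smoothing_factor * noise_std,
    growing the window until at least min_pixels of them qualify."""
    thresh = smoothing_factor * immerkaer_noise_std(img)
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            size = start_size
            while True:
                r = size // 2
                win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                ok = win[np.abs(win - img[y, x]) <= thresh]
                if ok.size >= min_pixels or size >= max_size:
                    break
                size += 2  # grow the window and try again
            if ok.size:
                out[y, x] = ok.mean()
    return out
```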

Fig. 2 shows the input image used and the result after applying edge preservation and noise reduction. After the edge-preserving Gaussian filtering is applied, the output image is filtered and retains a shadow of the input image. This shadow is used by the next algorithm, the Retinex Algorithm.

For the illumination process, the algorithm to be used is the Multi-Scale Retinex Algorithm, given by

$$R_{MSR_i}(x, y) = \sum_{n=1}^{N} w_n \, R_{n_i}(x, y)$$

where $R_{MSR_i}$ is the Retinex output for the $i$-th spectral band, $(x, y)$ is a point in the image, $N$ is the number of scales, $S$ is the number of spectral bands ($i = 1, \ldots, S$), $R_{n_i}$ represents the Retinex output associated with the $n$-th scale for an input image, and the gain $w_n$ is set to satisfy the condition $\sum_{n=1}^{N} w_n = 1$. For the $n$-th scale of the surround and the normalized surround function $F_n$, the equation is given by

$$R_{n_i}(x, y) = \log I_i(x, y) - \log\left[ F_n(x, y) * I_i(x, y) \right]$$

To be able to compute the surround function,

$$F_n(x, y) = K_n \, e^{-(x^2 + y^2)/\sigma_n^2}$$

where $\sigma_n$ are the scales that control the extent of the surround and $K_n$ is the normalization factor chosen so that $\iint F_n(x, y)\, dx\, dy = 1$. The smaller the value of $\sigma_n$, the narrower the surround will be. The output is shown in the figure below.

Figure 3.8 Multi-Scale Retinex Algorithm Output
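
A minimal MSR sketch for a single-channel image, assuming Gaussian surrounds implemented with a Gaussian blur and equal weights w_n = 1/N; the scale values below are common illustrative choices, not necessarily those used by the proponents.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), weights=None, eps=1.0):
    """Multi-Scale Retinex on a single-channel image: each scale subtracts
    the log of a Gaussian-blurred surround from the log of the input, and
    the per-scale outputs are combined with weights that sum to one."""
    img = img.astype(float) + eps                 # avoid log(0)
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img)
    for w, sigma in zip(weights, sigmas):
        surround = gaussian_filter(img, sigma)    # F_n * I
        out += w * (np.log(img) - np.log(surround))
    # rescale to the displayable 0..255 range
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (out * 255).astype(np.uint8)
```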

b. Main Process

Before the main process, the proponents used 25 low resolution images from the 100-image face database to test for the best parameter to be used in the main process. The proponents used the equation XOUT = XLR(M) + XHR(M) + XS(M), where XOUT is the output, XLR is the low resolution input, XHR is the high resolution input, XS is the linear combination of the similar images whose similarity is greater than the assigned threshold, and M denotes the pixel ratios (the parameter being tested). The proponents tested the images by changing M to 50%-10%-40%, 50%-20%-30%, 50%-30%-20% and 50%-40%-10%. To find the best ratio, the outputs were compared against high resolution images using the SIFT algorithm.
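
A hypothetical sketch of this parameter sweep in Python, assuming the three source images are aligned float arrays of the same size and that similarity() stands in for the SIFT-based comparison against the high resolution reference; the ratio ordering (low resolution input, linearly combined similar images, third term) follows the 50%-10%-40% naming used later.

```python
# Candidate pixel ratios (M): (low-res input, combined similar images, third term).
RATIOS = [(0.5, 0.1, 0.4), (0.5, 0.2, 0.3), (0.5, 0.3, 0.2), (0.5, 0.4, 0.1)]

def combine(x_lr, x_s, x_third, m):
    """Pixel-wise linear combination of the three source images with ratio m."""
    return m[0] * x_lr + m[1] * x_s + m[2] * x_third

def best_ratio(x_lr, x_s, x_third, reference, similarity):
    """Return the ratio whose combined output is most similar to the
    high resolution reference, according to the supplied similarity function."""
    return max(RATIOS, key=lambda m: similarity(combine(x_lr, x_s, x_third, m), reference))
```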
Finding Highest Similarity Image (XHS)
The pre-processed input image is taken as XL. The next step is to find the database image with the highest similarity to the input image. The proponents used the EyeOpen library to compare image similarities. In this process, if the similarity output is greater than 50%, the database image has a similar structure to the input image [6]. All images whose output is greater than 50% are placed in a separate folder, while the one with the highest score becomes XHS. For the pixels to be linearly combined, 50% will come from the low resolution input image, 40% will come from the highest similarity image, and 10% will come from the images whose similarity is greater than 50%. The SIFT algorithm was used to compute the similarity percentage of the images. The first step is to recognize the face of the subject using SIFT features, which are invariant features extracted for matching between two different images. The locations of potential interest points are computed by detecting extrema in a set of Difference of Gaussian (DoG) filters. A DoG extremum is identified by comparing a center pixel to its 26 neighbors in the 3x3 regions of the current and adjacent scales, as seen in Fig. 6.

After obtaining the interest points, a local feature descriptor is computed at each key point. Each key point is assigned a vector of features describing the distribution of local gradients in the neighborhood of that key point [6]. The features extracted from the input image are compared with the features from each image in the face database.

In comparing the features of two images, the distance between two descriptors must be computed first. After computing the distances, Simple Graph Matching (SGM) is used to search for the best match for each feature vector in the query image. If two points P11 and P12 of image 1 are matched to points P21 and P22 of image 2, then the geometric relation between P11 and P12 and the one between P21 and P22 should also be similar [6]. The similarity percentage is then calculated from the resulting matches [6].
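
A minimal, non-authoritative sketch of a SIFT-based similarity score using OpenCV: it uses a brute-force matcher with Lowe's ratio test rather than the SGM step described above, and the "fraction of matched keypoints" metric is an assumption, not the exact measure of the EyeOpen library or of [6].

```python
import cv2

def sift_similarity(path1, path2, ratio=0.75):
    """Rough similarity percentage between two face images, based on the
    fraction of SIFT keypoints surviving Lowe's ratio test."""
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for p in pairs if len(p) == 2
            for m, n in [p] if m.distance < ratio * n.distance]
    return 100.0 * len(good) / max(len(kp1), 1)
```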

Finding Linear Combination of Similar Images (XS)

To find XS, the proponents must take the linear combination of the similar images using the computation XS = (m1)YS1 + (m2)YS2 + ... + (mM)YSM, where the reconstruction weights m must sum to 1 and the YS are the similar images from the database.

Step 1 Output
For the final part of step one, the proponents used the linear combination XOUT1 = XL(.5) + XS(.1) + XHS(.4) to obtain the output. The estimated image XOUT1 is the step-one result, as shown in Figure 8.
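
A small sketch of these two combinations in Python/NumPy, assuming all images are aligned arrays of the same size; the equal weights used for XS when none are given are an assumption, since the paper only requires the reconstruction weights to sum to one.

```python
import numpy as np

def linear_similar_combination(similar_images, weights=None):
    """X_S: weighted sum of the similar images, with weights summing to one."""
    imgs = [img.astype(float) for img in similar_images]
    if weights is None:
        weights = [1.0 / len(imgs)] * len(imgs)   # assumed equal weights
    assert abs(sum(weights) - 1.0) < 1e-6
    return sum(w * img for w, img in zip(weights, imgs))

def step_one_output(x_l, x_s, x_hs):
    """X_OUT1 = 0.5*X_L + 0.1*X_S + 0.4*X_HS (the 50%-10%-40% pixel ratio)."""
    return 0.5 * x_l.astype(float) + 0.1 * x_s + 0.4 * x_hs.astype(float)
```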

Step 2 Output

In step two, the proponents used the Residue Compensation algorithm to patch up the residual pixels left over from the result of step one. To accomplish step two, the output of step one, XOUT1, is taken as the estimate, and the step-two result TOUT1 is obtained using the equation TOUT1 = (m1)TL + (m2)THS + (m3)TS, where m1, m2, m3 are the reconstruction weights, which must sum to one.

To obtain TOUT1, the residues of the input image (TL), the highest similarity image (THS) and the linearly combined similar-feature images (TS) are computed.

Fig. 9 Step 2 Output

Final Process
For the last process, the proponents combined the outputs of step one and step two. The final result XF is obtained by adding the step-one result XOUT1 and the step-two result TOUT1: XF = (.5)XOUT1 + (.5)TOUT1. Figure 10 below shows the final output.

Fig. 10. Final Output
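
A minimal sketch of the residue compensation and final fusion, with the reconstruction weights passed in explicitly since the paper only constrains them to sum to one.

```python
def residue_compensation(t_l, t_hs, t_s, m):
    """T_OUT1 = m1*T_L + m2*T_HS + m3*T_S, with reconstruction weights m summing to one."""
    assert abs(sum(m) - 1.0) < 1e-6
    return m[0] * t_l + m[1] * t_hs + m[2] * t_s

def final_output(x_out1, t_out1):
    """X_F = 0.5*X_OUT1 + 0.5*T_OUT1: fuse the step-one estimate with the
    residue-compensated step-two estimate."""
    return 0.5 * x_out1 + 0.5 * t_out1
```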

Experimental Results
For the pre-processing, 25 low resolution images out of 100 face database were tested for
the parameters to be used. Based from the figure above, it shows that 50% -10% -40% parameter
has the highest number of result which is 56% (14 out of 25) of the input. The 50% -20% -30%
parameter and 50% -30% -20% is 16% (4 out of 25) and the 50% -40% -10% has the lowest
number of result which is 12% (3 out of 25) of the input. Thus, the parameter that was used for
the whole process was 50% -10% -40%.
For the main processing, Test Set A, the proponents used 50% low resolution input, 10%
linearly combined similar images, and 40% highest similarity percentage for pixel ratio. For Test

Set B, the proponents used the 50% low resolution input, 10% linearly combined similar images,
and 40% high resolution version of the input for pixel ratio.
Fig. 11. Similarity Percentage Results Test Set A and Test Set B

Figure 11 above shows that the results of Test Set A and Test Set B differ significantly from each other. The image results of Test Set B have better quality, and the facial features were reconstructed more completely than in Test Set A, because in Test Set B the input image has a corresponding high resolution version that was used for the 40% pixel ratio of the output. In Test Set A, the input image has no corresponding high resolution image for the 40% pixel ratio of the output; instead, the image with the highest similarity percentage from the database was used.

Conclusion
The aim of this thesis is to improve the quality of low resolution images by reducing noise, improving illumination affected by uneven lighting, finding the parameter or pixel ratio that best improves the quality of the output, and reconstructing the low quality image into a good quality one. The approach has two parts: pre-processing and main processing. Pre-processing includes noise reduction and illumination correction using Gaussian Filtering with Edge Preservation and the Multi-Scale Retinex Algorithm, respectively. The main process has two steps: linear combination and residue compensation. Based on the experimental results, high resolution versions of the low resolution inputs significantly helped the reconstruction of facial features. 75% of the low resolution inputs were reconstructed into better quality images, which shows that the combination of the algorithms used is effective.

References
[1] ACS-7205-001 Digital Image Processing (2012-13), p. 120.
[2] Morris, T. (2004). Computer Vision and Image Processing. Palgrave Macmillan.
[3] Chen, W. H. and Pratt, W. K. (1984). Scene Adaptive Coder. IEEE Transactions on Communications, vol. COM-32, pp. 225-232.
[4] Weizhen, S., Fei, L., and Qinzhen, Z. (2012). The Applications of Improved Retinex Algorithm for X-Ray Medical Image Enhancement. 2012 International Conference on Computer Science and Service System.
[5] Bian, Z. and Zhang, Y. (2002). Retinex Enhancement Techniques: Algorithm, Application and Advantages.
[6] Fraczek, R., Cyganek, B., and Wiatr, K. (2013). Parallelized Algorithms for Finding Similar Images and Object Recognition.
