
4th International Conference on Electrical Engineering (ICEE 2015)

IGEE, Boumerdes, December 13th -15th, 2015

PERCEPTUAL BLUR DETECTION AND ASSESSMENT IN THE DCT DOMAIN


Fatma Kerouh (1)(2) and Amina Serir (2)

(1) Institute of Electrical and Electronic Engineering, Université M'Hamed BOUGARA, Algeria.
(2) USTHB, Laboratoire de Traitement d'Images et de Rayonnement (LTIR)
f.kerouh@usthb.dz, aserir@usthb.dz
ABSTRACT

This paper develops an approach able to blindly detect and assess perceptual blur degradation in images. The idea relies on a statistical modelling of perceptual blur degradation in the frequency domain, using the discrete cosine transform (DCT) and the Just Noticeable Blur (JNB) concept. A machine learning system is then trained on the considered statistical features to detect the perceptual blur effect in the acquired image and eventually produce a quality score, denoted BBQM for Blind Blur Quality Metric. The efficiency of the proposed BBQM is tested objectively by evaluating its performance against some existing metrics in terms of correlation with subjective scores.

Index Terms— Blurring; blind quality metric; statistical features; Support Vector Machines.

1. INTRODUCTION

Digital image and video compression can introduce visible distortions in the coded image, with dominant artefacts such as blockiness and blurriness. This work focuses on a particular compression artefact, namely blurring. This artefact mostly affects salient features such as edges and can cause drastic quality degradation [1]. The fine details lost to blurring correspond to high frequencies in the image. Over the years, considerable effort has been devoted to objective quality metrics that detect and quantify the blur effect. Depending on whether the original image is used completely, partially, or not at all, these metrics are classified as full-reference, reduced-reference, or no-reference. Our work addresses the no-reference case, which is required in practical applications where the original image is expensive to obtain or simply unavailable.
Different approaches have been proposed in the literature to quantify the blur effect in the DCT domain. Some methods propose a statistical modelling of the blur degradation over the whole DCT image [2, 3], whereas others characterize blurring specifically at edge pixels in the frequency domain [4, 5]. The proposed blur detection idea differs from these approaches in that it is based on extracting relevant perceptual statistical features. The idea turns on constructing a perceptual edge map using a psychometric function and the Just Noticeable Blur (JNB) concept [6][7], then estimating statistical features sensitive to the blur effect. Finally, an SVM classifier is trained on these features to label a test image as perceptually sharp or blurred. From this classification, a quality score is derived. The proposed approach is evaluated on different databases: the Gblur and JPEG2000 subsets of LIVE [8], IVC [9], TID2008 [10], TID2013 [11] and CISQ [12]. Experimental results show that the proposed statistical features correlate highly with human judgement when assessing blur distortion. The next section details the proposed approach for blur detection. Section 3 presents the experiments carried out and the results obtained. The last section concludes this work with some perspectives.

2. DCT DOMAIN FEATURES


The performance of a learning model depends on the robustness of the features used; hence, feature selection is a crucial step. We therefore conducted a series of experiments to choose the statistical features best able to faithfully characterize blur degradation. Based on observations of the DCT of a sharp and a blurred image (figure 1), the following statistical features are considered: mean μ, variance σ², entropy En, kurtosis K, skewness S and maximum energy E. In addition, the total number of non-zero values N is considered. The selected statistical features are then used in the blur detection and assessment process. Let us detail the algorithm used for perceptual edge map construction.
- Apply the wavelet transform to the test image (im) at J resolutions (j = 1 : J). Each resolution level yields one approximation image and three detail images representing the high-frequency components of the image in three directions: horizontal Dh_j, vertical Dv_j and diagonal Dd_j.
- Construct the contour map Cont_j using the following equations:

\[
Cont_j(k,l) = \begin{cases} E_j(k,l) & \text{if } E_j(k,l) > Th_j \\ 0 & \text{otherwise} \end{cases} \tag{1}
\]

2015 IEEE

Fig. 1. DCT of the perceptual edge map of a sharp and a blurred image.

with

\[
E_j(k,l) = \sqrt{Dh_j^2(k,l) + Dv_j^2(k,l)}, \tag{2}
\]

where E_j(k,l) represents the magnitude of the detected edge pixel at the j-th resolution. While evaluating the blur level at different resolutions of the wavelet transform, it can be observed that, for a fixed threshold, edge detection becomes less efficient as the resolution decreases. This is due to the smoothing introduced by the wavelet transform filters. Hence, for better edge detection, we found it useful to use a set of thresholds depending on the resolution level j, as follows:

\[
Th_j = \frac{2^{j-1}}{N_j M_j} \sum_{k=1}^{N_j} \sum_{l=1}^{M_j} E_j(k,l), \tag{3}
\]

where N_j M_j corresponds to the edge map size at the j-th resolution.

- Construct the perceptual edge map: the perceptual concept is introduced to take the properties of the human visual system into account. The idea is to sense the perceptual blur in the extracted edge map Cont_j. For this purpose, the psychometric function defined by equation (4) is used:

\[
P = 1 - \exp\left( -\left| \frac{\sigma}{\sigma_{jnb}} \right|^{\beta} \right), \tag{4}
\]

where σ stands for the standard deviation of the Gaussian blurring filter and σ_jnb represents the Just Noticeable Blur threshold, defined as the minimum amount of blurring that can be perceived by the human visual system. The aim is to estimate the probability that an edge pixel is perceptual. The likelihood ℓ_j of detecting perceptual edge pixels at each resolution level j is derived from the psychometric function and defined as follows:

\[
\ell_j(k,l) = 1 - \exp\left( -\left| \frac{w(Cont_j(k,l))}{w_{jnb}} \right|^{\beta} \right), \tag{5}
\]

where w(Cont_j(k,l)) stands for the spread of the detected edge pixel, estimated by counting the total number of pixels between the two consecutive maxima or minima around the edge pixel [13], and w_jnb denotes the just noticeable edge pixel spread. According to [6], the w_jnb value depends on the contrast C: it is measured to be 5 for C ≤ 50 and 3 for C ≥ 51. The parameter β is fixed at 3.6. A pixel is declared a perceptual edge pixel if its probability is higher than 63% [14]. Let us denote the detected perceptual edge map by Pcont_j. It is constructed as follows:

\[
Pcont_j(k,l) = \begin{cases} Cont_j(k,l) & \text{if } \ell_j(k,l) > 0.63 \\ 0 & \text{otherwise} \end{cases} \tag{6}
\]

The edge map Pcont_j, at each resolution j, contains only perceptual edge pixels. Once the perceptual edge map is obtained, the DCT is applied and only the significant DCT coefficients are kept, by thresholding against the mean value (figure 1). Finally, the considered statistical features are computed.
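Under stated assumptions, the pipeline above can be sketched in Python. The detail images Dh_j, Dv_j and the edge-pixel spreads are taken as precomputed inputs (the paper obtains them from a wavelet transform and the extrema-counting procedure of [13]); all function names are ours, and this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np
from scipy.fftpack import dct
from scipy.stats import kurtosis, skew

BETA = 3.6  # psychometric exponent beta, fixed at 3.6 in the paper

def contour_map(Dh, Dv, j):
    """Eqs. (1)-(3): edge magnitude, resolution-dependent threshold, contour map."""
    E = np.sqrt(Dh ** 2 + Dv ** 2)        # eq. (2)
    Th = (2 ** (j - 1)) * E.mean()        # eq. (3): (2^{j-1} / (Nj Mj)) * sum of E
    return np.where(E > Th, E, 0.0)       # eq. (1)

def perceptual_edge_map(Cont, widths, w_jnb=3.0):
    """Eqs. (5)-(6): keep edge pixels whose JNB likelihood exceeds 0.63."""
    lik = 1.0 - np.exp(-np.abs(widths / w_jnb) ** BETA)  # eq. (5)
    return np.where(lik > 0.63, Cont, 0.0)               # eq. (6)

def hist_entropy(v, bins=64):
    """Shannon entropy (bits) of a histogram of the coefficients."""
    hist, _ = np.histogram(v, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def dct_features(Pcont):
    """The seven statistics, computed on the significant 2-D DCT coefficients."""
    C = dct(dct(Pcont, axis=0, norm='ortho'), axis=1, norm='ortho')
    C = np.where(np.abs(C) > np.abs(C).mean(), C, 0.0)   # keep significant coeffs
    v = C.ravel()
    return {'mean': float(v.mean()), 'variance': float(v.var()),
            'entropy': hist_entropy(v), 'kurtosis': float(kurtosis(v)),
            'skewness': float(skew(v)), 'max_energy': float(np.max(v ** 2)),
            'nonzero': int(np.count_nonzero(v))}
```

A design note: passing the spreads as an array keeps the sketch independent of the particular edge-width estimator, which the paper delegates to [13].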
3. EXPERIMENTS, TESTS AND RESULTS
In this section, we first present the tests conducted on the proposed descriptor. Then the classification robustness is evaluated on different databases. Finally, the proposed blind blur quality metric BBQM is defined and evaluated against some existing perceptual metrics in terms of correlation with subjective scores.
3.1. Proposed statistical features evaluation
We report here how each considered statistical feature, taken alone, correlates with the subjective DMOS provided with the Gblur LIVE database. The considered features are tested on the DCT of the whole image, the DCT of the non-perceptual edge map (eq. 1) and the DCT of the perceptual one (eq. 6). The Spearman correlation is computed to evaluate the performance of each considered statistical feature for blur characterization. The obtained results are reported in Table 1. Accordingly, we can conclude that applying the DCT to the perceptual edge map provides the best correlation values compared to the other cases. By analysing all the obtained SROCC values, we decided to characterize the blur effect by applying the DCT to the perceptual edge map, using the features that correlate best with subjective

Table 1. Statistical features evaluation against subjective DMOS of the Gblur LIVE database

SROCC   DCT      DCT Edge map   DCT P.Edge map
μ       0.7341   0.8552         0.8710
σ²      0.048    0.8238         0.8410
K       0.1925   0.4692         0.4850
S       0.2164   0.46           0.4766
E       0.011    0.8298         0.8321
En      0.1977   0.3809         0.9257
N       0.5765   0.4619         0.9297


Table 2. Accuracy rate on different datasets

D.B        JPEG2000   Gblur   TID2008   TID2013   IVC
Accuracy   87.89      85.08   80.95     78.33     82.76

scores, namely: mean, variance, energy, entropy and the total number of non-zero values. Each test image is thus characterized by only five features. Figure 2 depicts the evolution of each selected feature against the subjective DMOS values of the Gblur LIVE database. Accordingly, the selected features are consistent for perceptual blur detection in the DCT domain.
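As an illustration of this evaluation protocol, the sketch below computes a Spearman rank correlation (SROCC) between one feature and subjective scores using scipy; the feature and DMOS values here are made-up placeholders, not data from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical placeholder values: six images with increasing blur, their
# subjective DMOS, and the corresponding value of one candidate feature.
dmos = np.array([10.2, 25.7, 33.1, 48.9, 60.4, 72.8])
feature = np.array([0.91, 0.74, 0.66, 0.45, 0.31, 0.18])

# SROCC depends only on ranks; |rho| close to 1 indicates a monotonic
# relation between the feature and human judgement.
rho, pval = spearmanr(feature, dmos)
print(round(abs(rho), 4))
```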

Fig. 2. Considered statistical features versus DMOS on the Gblur database. (a) Mean μ. (b) Variance σ². (c) Entropy. (d) Energy. (e) Non-zero values.

Table 3. Accuracy rate comparative study on Gblur database


3.2. Test of the classification robustness
To test the robustness of the classification step, we make the training and test sets completely content-independent, in the sense that no image appears in both sets. The training step is carried out on the CISQ database (150 images), while the tests are performed on the other datasets: JPEG2000 and Gblur LIVE, TID2008, TID2013 and IVC. The classification accuracy is evaluated as the recognition ratio, calculated as follows:

\[
\text{Accuracy} = 100 \times \frac{\text{total number of correctly classified images}}{\text{total number of test images}}. \tag{7}
\]
The proposed method is tested on all images from the IVC, TID2008, TID2013, LIVE Gblur and JPEG2000 databases, using the same SVM parameters fixed during training. The obtained results are summarized in Table 2. High recognition ratios are obtained on all considered datasets, especially on Gblur and JPEG2000. The obtained classification ratios are compared with those obtained using the same statistical features extracted from the DCT of the spatial image and from the DCT of the non-perceptual (N-P) edge map, as well as with the method proposed in [15]. We recall that the authors in [15] use the full gradient histogram (256 features) as a descriptor to classify images as blurred or sharp. According to Table 3, the proposed descriptor clearly provides the best performance, in particular compared with the descriptor of [15], despite using far fewer features (5 against 256).
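The content-independent protocol and the accuracy ratio of eq. (7) can be sketched as below. The feature vectors are synthetic stand-ins for the 5-D descriptors, and a minimal nearest-centroid classifier stands in for the SVM used in the paper (so no external ML library is required); the protocol, not the classifier, is the point of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 5-D feature vectors (mean, variance, energy, entropy, N),
# standing in for descriptors from disjoint training and test corpora.
train_sharp = rng.normal(1.0, 0.1, size=(75, 5))
train_blur  = rng.normal(0.0, 0.1, size=(75, 5))
test_sharp  = rng.normal(1.0, 0.1, size=(40, 5))
test_blur   = rng.normal(0.0, 0.1, size=(40, 5))

# Minimal nearest-centroid stand-in for the SVM classifier.
c_sharp = train_sharp.mean(axis=0)
c_blur  = train_blur.mean(axis=0)

def classify(x):
    """Return 1 for 'sharp', 0 for 'blurred'."""
    return int(np.linalg.norm(x - c_sharp) < np.linalg.norm(x - c_blur))

preds  = [classify(x) for x in np.vstack([test_sharp, test_blur])]
labels = [1] * 40 + [0] * 40

# Eq. (7): accuracy = 100 * correctly classified / total test images.
accuracy = 100.0 * sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(f"accuracy = {accuracy:.2f}%")
```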

         [15]    DCT     DCT Edge map   DCT P.Edge map
Gblur    61.36   65.98   70.03          87.89

3.3. Blind Blur Quality Metric BBQM evaluation


After the classification step, we obtain the class of each image (sharp or blurred) and the corresponding confidence value conf. Ranging between 0 and 1, conf represents the distance between the test image and the training data; a higher confidence value implies a more faithful classification result. Using the conf information and the total number of non-zero values N, the proposed BBQM is defined as follows:

\[
BBQM(im) = \frac{N}{n_l \, n_c} \, conf(im), \tag{8}
\]

where n_l n_c stands for the image size. The obtained quality score BBQM is normalized between 0 and 1; it is close to 0 if the image is sharp and tends to increase with the blur amount.
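Eq. (8) amounts to scaling the classifier confidence by the density of retained DCT coefficients. A minimal sketch, assuming a precomputed significant-coefficient map and a hypothetical confidence value:

```python
import numpy as np

def bbqm(sig_dct, conf):
    """Eq. (8): BBQM = (N / (nl * nc)) * conf, with N the number of
    non-zero significant DCT coefficients and nl * nc the image size."""
    nl, nc = sig_dct.shape
    N = np.count_nonzero(sig_dct)
    return (N / (nl * nc)) * conf

# Hypothetical toy coefficient map: 16 non-zero entries out of 64*64,
# with a classifier confidence of 0.9.
coeffs = np.zeros((64, 64))
coeffs[:4, :4] = 1.0
score = bbqm(coeffs, conf=0.9)
print(round(score, 6))
```

Since N ≤ nl·nc and conf ≤ 1, the score is guaranteed to stay in [0, 1] with no extra normalization step.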
Let us now evaluate the proposed BBQM on all images from Gblur, JPEG2000, TID2008, TID2013 and IVC. Table 4 shows the quantitative evaluation in terms of the Spearman (SROCC) and Pearson (CC) correlations and the Root Mean Square Error (RMSE). Accordingly, the proposed metric correlates highly with subjective scores. To validate the proposed metric, a comparative study against some existing perceptual blur quality metrics is performed in terms of SROCC and CC on the Gblur LIVE dataset. According to Table 5, the proposed metric provides competitive correlation values compared with the considered perceptual metrics.

Table 4. BBQM evaluation on different datasets.

Databases   SROCC    CC       RMSE
Gblur       0.9332   0.9297   5.8145
JPEG2000    0.9407   0.9353   0.1543
IVC         0.9031   0.9004   0.2341
TID2008     0.8687   0.8702   0.3326
TID2013     0.8602   0.8798   0.3532

Table 5. Comparative study on the Gblur LIVE dataset

Gblur          SROCC    CC
CPBD [16]      0.8889   0.9097
JNB [6]        0.8203   0.8939
S [17]         0.8578   0.8789
M [18]         0.8010   0.8630
BIQI [19]      0.8463   0.8293
IQA SVM [20]   0.9217   0.8997
PBIQA [7]      0.9071   0.9028
BBQM           0.9332   0.9297

4. CONCLUSION
In this paper, a simple blind blur quality metric is defined. The idea turns on modelling the perceptual blur effect in the DCT domain using the JNB concept. Experimental results show that the suggested metric provides consistent blur quality prediction performance across different datasets. Future research involves developing a region-based segmentation method using perceptual blur detection in each patch of the image.
5. REFERENCES
[1] F. Kerouh, A. Serir. An adaptive deblurring method based on the multiplicative multiresolution decomposition. IEEE EUVIP, 88-93, 2013.
[2] X. Marichal, W. Y. Ma, H. Zhang. Blur determination in the compressed domain using DCT information. IEEE ICIP, 2:386-390, Oct. 1999.
[3] N. Zhang, A. Vladar, M. Postek, B. Larrabee. A kurtosis-based statistic for two-dimensional processes and its application to image sharpness. Section of Physical and Engineering Sciences of the American Statistical Society, 4730-4736, 2003.
[4] J. Caviedes, F. Oberti. A new sharpness metric based on local kurtosis, edge and energy information. Signal Processing: Image Communication, 19:147-163, 2004.
[5] M. A. Saad, A. C. Bovik, C. Charrier. A DCT statistics-based blind image quality index. IEEE Signal Processing Letters, 17(6):583-586, June 2010.
[6] R. Ferzli, L. J. Karam. A no-reference objective image sharpness metric based on the notion of Just Noticeable Blur (JNB). IEEE Trans. Image Processing, 18:717-728, 2009.
[7] F. Kerouh, A. Serir. A perceptual blind blur image quality metric. ICASSP, Florence, Italy, 2784-2788, May 2014.
[8] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Processing, 13(4):600-612, April 2004.
[9] P. Le Callet, F. Autrusseau. Subjective quality assessment IRCCyN/IVC database.
[10] N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, F. Battisti. TID2008 - A database for evaluation of full-reference visual quality assessment metrics. Advances of Modern Radioelectronics, 10:30-45, 2009.
[11] N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, F. Battisti, C.-C. Jay Kuo. Color image database TID2013: particularities and preliminary results. EUVIP, Paris, 106-111, 2013.
[12] http://vision.okstate.edu/cisq/.
[13] P. Marziliano, F. Dufaux, S. Winkler, T. Ebrahimi. A no-reference perceptual blur metric. IEEE ICIP, 3:57-60, 2002.
[14] I. Höntsch, L. J. Karam. Adaptive image coding with perceptual distortion control. IEEE Trans. Image Processing, 11(3):213-222, March 2002.
[15] M. J. Chen, A. C. Bovik. No-reference image blur assessment using multiscale gradient. EURASIP Journal on Image and Video Processing, 70-74, 2011.
[16] N. D. Narvekar, L. J. Karam. A no-reference perceptual image sharpness metric based on a cumulative probability of blur detection. International Workshop on Quality of Multimedia Experience, 2949-2952, 2009.
[17] S. Varadarajan, L. J. Karam. An improved perception-based no-reference objective image sharpness metric using iterative edge refinement. ICIP, 401-404, 2008.
[18] N. G. Sadaka, L. J. Karam, R. Ferzli, G. P. Abousleman. A no-reference perceptual image sharpness metric based on saliency-weighted foveal pooling. ICIP, 369-372, Oct. 2008.
[19] A. K. Moorthy, A. C. Bovik. A two-step framework for constructing blind image quality indices. IEEE Signal Processing Letters, 17(5):513-516, 2010.
[20] F. Kerouh, A. Serir. A classification-based perceptual blind blur quality metric. ISIVC, Marrakech, 2014.
