
Engineering Science and Technology, an International Journal xxx (xxxx) xxx


Full Length Article

Spatial frequency discrete wavelet transform image fusion technique for remote sensing applications
Joy Jinju a,*, N. Santhi a, K. Ramar b, B. Sathya Bama c

a Electronics & Communication Engineering, Noorul Islam Centre for Higher Education, Kumaracoil, Kanyakumari District, Thuckalay, Tamil Nadu 629180, India
b Computer Science & Engineering, Anna University, Einstein College of Engineering, Sir C V Raman Nagar, Seethaparpanalur, Tirunelveli, Tamil Nadu 627012, India
c Electronics & Communication Engineering, Anna University, Thiagarajar College of Engineering, Thiruparankundaram, Madurai, Tamil Nadu 625015, India

* Corresponding author at: Department of Electronics and Communication, Noorul Islam Centre for Higher Education, Kumaracoil, Kanyakumari District, Thuckalay, Tamil Nadu, India. E-mail address: jinjujoy1@gmail.com (J. Jinju). Peer review under responsibility of Karabuk University.

Article info

Article history:
Received 13 May 2018
Revised 24 November 2018
Accepted 9 January 2019
Available online xxxx

Keywords:
SFDWT
PAN image
MS image
Spatial frequency

Abstract

In remote sensing, fusion of a Panchromatic (PAN) image and a Multispectral (MS) image is an important technique. This paper presents a multiresolution image fusion algorithm based on the proposed Spatial Frequency Discrete Wavelet Transform (SFDWT) technique. In the SFDWT technique, the low-resolution MS image is resampled to the resolution of the high-resolution PAN image, and fusion is performed by injecting the spectral and spatial information present in the PAN and MS images into each other through their corresponding DWT coefficients, weighted by the spatial frequency (SF). Evaluation is carried out using Pléiades satellite images having a resolution ratio of 1:4. Fusion simulations, carried out on data for which pan-sharpened reference images are available, indicate that the proposed SFDWT technique performs better than standard DWT methods and the IHS method in terms of different evaluation indexes, including the Structural Similarity Measure (SSIM) and the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS). SFDWT produces fused images of high spectral and spatial quality for remote sensing applications.

© 2019 Karabuk University. Publishing services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

In remote sensing, information about an area or object is obtained with the help of sensors carried on either aircraft or satellites. Two parameters that are important when it comes to remote sensing sensors are spatial resolution and spectral resolution, which determine the quality of the images. The spatial and spectral resolution of these sensors are related in an inverse manner, i.e. a sensor capable of capturing images with high spatial resolution does not have good spectral resolution and vice versa. This inverse relation may be due to several factors, such as design, structural or observational constraints. In addition, these sensors are located on either aircraft or satellites, where the on-board bandwidth and storage capabilities are limited, which adds to this effect.

The sensors capable of acquiring the radiant flux reflected by various types of earth cover in an ample number of bands extending over a wide range of the electromagnetic spectrum have better spectral resolution but poor spatial resolution. The different earth covers include water, grass, trees, asphalt, bare land etc. Hence, in spite of the spectral quality, such sensors may not perform well in identification applications. On the other hand, sensors that are capable of capturing images with good spatial resolution can easily identify geometric information [6].

Spatial resolution can be thought of as the ability of the sensor to recognize two closely spaced objects as two separate objects, while spectral resolution shows the sensitivity of the sensor to the various bands of wavelength present in the electromagnetic spectrum. Panchromatic (PAN) images have better spatial resolution and poor spectral resolution, while Multispectral (MS) images have better spectral resolution and poor spatial resolution. Remote sensing sensors are capable of producing either MS or PAN images. This complementary behaviour of panchromatic and multispectral images, together with the limitation on the amount of data that can be stored and transmitted to the ground station, is the motivation behind different image fusion techniques [6]. Image fusion is the technique in which a PAN image having high spatial quality and low spectral quality and an MS image with low spatial quality and high spectral quality are combined in order to obtain fused images with excellent spatial and spectral quality. The image fusion technique should not introduce spectral distortion in the fused image.


Images having good spectral as well as spatial quality are very important in many remote sensing applications like feature identification, land cover classification and urban area classification.

There are several factors [6] to be considered during image fusion in order to obtain good quality fused images. Color distortion is one of the problems that can affect the fusion process. One method to avoid this problem is to ensure that the spectrum of the panchromatic image spreads over the entire spectral range of all the bands of the multispectral image. Also, the spectra of the high spatial resolution PAN image and the low spatial resolution MS image should be similar during the fusion process. Artifacts and radiometric distortions during the fusion process can be avoided if the registration between the PAN image and the MS image has a precision better than 0.5 pixels. Another point to be noted to avoid artifacts is that both the replaced component and the panchromatic image should be globally contrast matched. One more point worth noting is that the spatial resolution ratio between the PAN image and the MS image should not be greater than 4, as fusion operations like resampling and registration become difficult beyond that [6].

Image fusion techniques can be classified into several types. Based on whether fusion is performed in the spatial domain or in a transform domain, image fusion techniques can be divided into two groups, spatial domain and spectral domain techniques. Even though there are many image fusion techniques, the most commonly used ones are based on the Intensity Hue Saturation (IHS) transformation [3,4] and Principal Component Analysis (PCA) [8], which are spatial domain techniques. In these techniques, the panchromatic image having high spatial resolution is injected into the low-resolution MS image; they are often known as component substitution methods. These techniques have the drawback of introducing spectral distortion in the fused image [8].

There are many remote sensing applications using spectral classification techniques that involve the extraction of thematic information. Image fusion techniques that result in spectral distortion cannot be used for such applications. The spectral distortion can be avoided by making use of spectral domain image fusion techniques. The basic principle of multiresolution image fusion is that the high frequency components present in the PAN image are injected into the resampled version of the MS image. Two aspects determine the efficiency of such methods: firstly, how effectively the spatial information contained in the PAN image can be extracted and, secondly, how effectively the extracted information can be injected into the original multispectral image without introducing spectral distortion, i.e. the effectiveness of the fusion rule.

As discussed earlier, the effective extraction of the spatial information contained in the panchromatic image is an important step in determining the effectiveness of an image fusion technique. In the past few decades, several image fusion algorithms based on the multiresolution framework, using the Discrete Wavelet Transform [15], Laplacian pyramid algorithms [15] and other wavelet transforms for the extraction and injection processes, have been discussed. The multiresolution image fusion technique based on the wavelet transform is a method in which the wavelet coefficients of the PAN image are calculated in order to extract the spatial information, which is then combined with the corresponding coefficients of the MS image. The coefficients are combined using different fusion rules like substitution, maximum selection, averaging etc. As there is a wide range of wavelet transform based image fusion techniques [7,12,13,17,21], they are not reviewed exhaustively in this paper; only a general overview of wavelet transform based image fusion and recent developments in DWT based image fusion are given. The fusion rule is another important aspect that improves the quality of an image fusion technique. Thus, effective extraction and injection of the spectral and spatial information contained in the PAN and MS images, without causing any artifacts or color distortion, is very important in image fusion. The main contribution of this paper is the development of a multiresolution image fusion algorithm based on the proposed Spatial Frequency DWT (SFDWT) technique, which results in images with better spectral quality. The quality of the proposed technique and of existing methods is analyzed visually and quantitatively using several reference and non-reference performance parameters [16].

The organization of this paper is as follows. Section 2 discusses the standard wavelet-based image fusion technique and some recent developments in DWT based image fusion, and explains the proposed SFDWT image fusion technique. Section 3 defines the various performance parameters used for quantitative analysis. Section 4 gives the details of the images used in this paper. Section 5 presents the experimental results and discussion based on visual and quantitative analysis. The conclusion of this paper is given in Section 6.

2. Methodology

2.1. Standard wavelet transform based image fusion technique

The schematic block diagram of the multiresolution image fusion technique based on the wavelet transform [6] is given in Fig. 1. In this technique, the two images, i.e. the PAN and MS images, are separated into approximation and detail coefficients. Next, the coefficients are combined, either by substitution, by addition or depending on the maximum value of the coefficients. The inverse wavelet transform is then taken to obtain the final fused image. As the detail band coefficients have zero mean, the radiometry of the MS image will not be affected by the fusion process. There is a wide variety of wavelet transforms [7,12,13,17,21] used in remote sensing applications. The level of wavelet decomposition required for acceptable performance depends on the application as well as on the ratio of the spatial resolution of the MS image to that of the PAN image [19]. One of the basic multiresolution image fusion techniques using wavelets is based on the Discrete Wavelet Transform (DWT).

Fig. 1. Schematic Block Diagram of Image Fusion Technique based on Wavelet Transform.
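The coefficient-combination step of Fig. 1 can be sketched in a few lines of Python using the PyWavelets package. The snippet below is an illustrative assumption of this editorial pass, not code from the paper: the function name, the db2 default wavelet and the single-band input are all choices made for the example. It shows the substitution variant of the standard scheme.

```python
# Minimal sketch of standard single-level DWT substitution fusion (Section 2.1).
# Assumes the MS band has already been resampled to the PAN image size;
# names are illustrative, not taken from the paper.
import pywt

def dwt_substitution_fusion(ms_band, pan, wavelet="db2"):
    # Decompose both images into approximation and detail coefficients.
    ms_approx, _ms_details = pywt.dwt2(ms_band, wavelet)
    _pan_approx, pan_details = pywt.dwt2(pan, wavelet)
    # Keep the MS approximation (spectral content), substitute the PAN
    # detail coefficients (spatial content), then invert the transform.
    return pywt.idwt2((ms_approx, pan_details), wavelet)
```

The averaging (DWT-A) and maximum-selection (DWT-MS) variants compared later in the paper would replace the substitution with an element-wise mean or maximum of the corresponding MS and PAN detail sub-bands.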
2.2. Recent developments in DWT based image fusion technique

Some of the recent developments in DWT based image fusion are mentioned in this section. One of the major problems associated with the DWT is shift variance, i.e. a mismatch in image registration can result in poor performance.


Fig. 2. Block Diagram representation of SF-DWT image Fusion technique.

Different versions of the DWT, like the undecimated DWT, have been developed to overcome such problems. Another version, the Dual Tree Complex Wavelet Transform (DT-CWT), has also been developed with the aim of overcoming shift variance; a DT-CWT image fusion technique based on a noticeability index, or perceptual importance, is discussed in [10]. Another variant of the DWT, mentioned in [28], fuses the low frequency coefficients based on regional energy and the high frequency coefficients based on the weighted sum of differences of neighbouring coefficients; it is applied to infrared images. Image fusion based on the DWT, using local visibility for the low frequency coefficients and variance for fusing the high frequency components, is given in [26]. However, these techniques are implemented for grey scale images. Images obtained in remote sensing applications may contain regions of clouds which are less informative, mainly due to the occlusion caused by the clouds. As these cloud regions do not contain any information, they can be removed, and once removed the image is no longer rectangular; the Shape Adaptive DWT (SA-DWT) [24] is used in such situations. With the aim of improving the time and space complexity and also of identifying which image contributes more to the fusion process, the DWT combined with a Genetic Algorithm is implemented in [11]; this efficient DWT based fusion technique is applied to brain images. To overcome the problem of unclear textural information that occurs with the standard DWT based image fusion technique, a feature residual and statistical matching image fusion technique [9], applied to visible and infrared images, has been developed.

2.3. Proposed Spatial Frequency DWT (SFDWT) image fusion technique

Spatial frequency (SF) is a measure of the amount of frequency content present in an image; in other words, it indicates the clarity or sharpness of the image. Moreover, urban land covers, which include buildings, transportation, parks, stadiums etc., can be considered as high frequency content. Hence, in such applications, preserving the high frequency content is important in addition to improving the spectral quality [4]. As explained earlier, SF is a parameter directly related to the high frequency content of the image, so SF can be used in the fusion of urban areas in remote sensing applications. The effect of SFDWT will be more predominant in images with large high frequency content.

Spatial frequency [21] is defined as follows. For an M×N image I_f with gray value I_f(x, y) at position (x, y),

Spatial frequency: F_s = \sqrt{F_R^2 + F_C^2}    (1)

Row frequency: F_R = \sqrt{\frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=1}^{N-1} \left[ I_f(x,y) - I_f(x,y-1) \right]^2}    (2)

Column frequency: F_C = \sqrt{\frac{1}{MN} \sum_{y=0}^{N-1} \sum_{x=1}^{M-1} \left[ I_f(x,y) - I_f(x-1,y) \right]^2}    (3)

where I_f represents the fused image and M and N denote the dimensions of the fused image.
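A minimal NumPy sketch of Eqs. (1)–(3) follows; the function name and the float conversion are assumptions of this illustration rather than code from the paper.

```python
# Spatial frequency of an image, following Eqs. (1)-(3).
import numpy as np

def spatial_frequency(img):
    img = np.asarray(img, dtype=np.float64)
    m, n = img.shape
    # Row frequency, Eq. (2): horizontal first differences, normalised by MN.
    row_freq = np.sqrt(np.sum(np.diff(img, axis=1) ** 2) / (m * n))
    # Column frequency, Eq. (3): vertical first differences, normalised by MN.
    col_freq = np.sqrt(np.sum(np.diff(img, axis=0) ** 2) / (m * n))
    # Overall spatial frequency, Eq. (1).
    return np.sqrt(row_freq ** 2 + col_freq ** 2)
```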
The proposed technique makes use of the DWT to extract the spatial information contained in the PAN image and the MS image, which is then combined using a new fusion rule based on spatial frequency to obtain the high-resolution fused image. In standard wavelet-based image fusion techniques, after finding the wavelet coefficients associated with the MS image and the PAN image, the detail coefficients of the PAN image and the approximation coefficients of the MS image are combined to obtain the fused image. It can be seen that, in most image fusion techniques, the high frequency components of the MS image are replaced with the high frequency components of the PAN image, and thus the detail coefficients of the MS image are often discarded. There may be some useful information present in the detail coefficients of the MS image which can be made use of. This is the motivation behind the proposed SFDWT image fusion technique.


Fig. 3. Block diagram of Proposed Fusion Rule.

Fig. 2 shows the outline of the SFDWT image fusion technique used to combine an MS image and a PAN image having a spatial resolution ratio of 1:4. In this paper, a 2-level SFDWT using 2nd order Daubechies filter banks is applied to both the MS and PAN images. It is assumed that the PAN image and the MS image are geometrically registered and no pre-processing is performed. A few papers have discussed the effect of misregistration on the pan-sharpened image [2].

The proposed algorithm consists of 11 steps. The method starts by resampling the MS image, as the resolutions of the MS and PAN images are different. After resampling, the approximation coefficients and detail (horizontal, vertical and diagonal) coefficients associated with both the PAN and MS images are calculated by applying the DWT. As the approximation coefficients contain the low frequency information and the PAN image lacks low frequency information, the approximation coefficients of the PAN image are replaced with those of the MS image. The horizontal, vertical and diagonal coefficients of both images are combined separately using the new fusion rule given in steps (6)–(10). Fusion is done as follows: in order to avoid any artifacts or distortion and to improve the fusion process, the PAN detail coefficients are first histogram matched with the intensity values of the corresponding MS detail coefficients. Then the spatial frequencies of the histogram matched PAN detail coefficients and of the intensity values of the MS detail coefficients are calculated and normalized using Eqs. (4) and (5). Once the normalized spatial frequencies are calculated, they are used to fuse the detail coefficients of the MS and PAN images using Eq. (6). This fusion is done for the horizontal, vertical and diagonal coefficients individually to obtain the corresponding fused detail coefficients. Finally, the fused detail coefficients and the replaced MS approximation coefficients are combined using the inverse DWT to obtain the final fused image.

For the N-level SFDWT image fusion technique, the DWT is applied to the MS and PAN images to calculate the approximation and detail coefficients; the DWT is then applied again to the level-1 approximation coefficients to obtain the level-2 approximation and detail coefficients, and so on, until the Nth level approximation and detail coefficients are obtained. Once the N-level coefficients are obtained, fusion is performed at each level using steps (6)–(10) and the inverse DWT is taken, starting from the Nth level down to the 1st level, to obtain the fused coefficients and finally the fused image. The Nth level approximation coefficients are obtained by replacing the Nth level approximation coefficients of the PAN image with those of the MS image. Then the Nth level detail coefficients are obtained by fusing the Nth level detail coefficients of the PAN and MS images using the SF fusion rule. The Nth level fused image, which serves as the (N-1)th level approximation coefficient, is obtained by combining the Nth level approximation and detail coefficients by taking the inverse DWT. The same operation is performed at the (N-1)th level of decomposition to get the corresponding fused image at the (N-1)th level, and so on. At the 1st level of decomposition, the fused approximation coefficients and fused detail coefficients so obtained are combined using the inverse DWT to get the final fused image.
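The multi-level procedure just described maps naturally onto PyWavelets' multilevel 2-D transform. The following sketch is an assumption of this editorial pass (the names, the db2 default and the fuse_details callback are illustrative, not the authors' code); the spatial-frequency fusion rule itself is sketched after the algorithm listing below.

```python
# Sketch of the N-level SFDWT loop described above, using PyWavelets.
# fuse_details() stands for the spatial-frequency rule of steps (6)-(10).
import pywt

def sfdwt_fuse_band(ms_band, pan, fuse_details, wavelet="db2", levels=2):
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=levels)
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=levels)
    # Keep the MS approximation at the coarsest (Nth) level, as in step (4).
    fused = [ms_coeffs[0]]
    # Fuse each level's (horizontal, vertical, diagonal) detail triple.
    for ms_det, pan_det in zip(ms_coeffs[1:], pan_coeffs[1:]):
        fused.append(tuple(fuse_details(m, p) for m, p in zip(ms_det, pan_det)))
    # Inverse transform from the Nth level down to level 1.
    return pywt.waverec2(fused, wavelet)
```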
Algorithm of the SFDWT image fusion technique:

(1) The MS image is resampled so that its spatial resolution is equal to that of the PAN image, in order to get a perfectly superimposable image.
(2) Apply the DWT to the MS image and calculate the approximation and detail coefficients.
(3) Similarly, apply the DWT to the PAN image and decompose it into its approximation and detail coefficients.
(4) The approximation coefficients of the PAN image are replaced with those of the MS image.
(5) Each pair of detail coefficients obtained in steps (2) and (3) is fused using the proposed fusion rule based on spatial frequency, given in steps (6)–(10).
(6) The PAN detail coefficients (P_{DC}) are histogram matched with the intensity component of the MS detail coefficients (M_{DC}).
(7) The spatial frequencies of the histogram matched PAN detail coefficients (F_{SPC}) and of the intensity values of the MS detail coefficients (F_{SMC}) are calculated using Eqs. (1)–(3).
(8) Let \bar{F}_{SMC} be the normalized spatial frequency of the intensity values of the MS detail coefficients:

\bar{F}_{SMC} = \frac{F_{SMC}}{F_{SMC} + F_{SPC}}    (4)


(9) Let \bar{F}_{SPC} be the normalized spatial frequency of the histogram matched PAN detail coefficients:

\bar{F}_{SPC} = \frac{F_{SPC}}{F_{SMC} + F_{SPC}}    (5)

(10) The fused detail coefficients (I_{fdc}) are then given by

I_{fdc} = M_{DC} \cdot \bar{F}_{SMC} + P_{DC} \cdot \bar{F}_{SPC}    (6)

(11) Apply the inverse DWT to the fused detail coefficients (I_{fdc}) and the replaced MS approximation coefficients to get the fused image (I_f).

The block diagram of the spatial frequency based fusion rule is given in Fig. 3; a sketch of this rule is given below. The result of the SFDWT image fusion technique is an image with good spatial and spectral quality, which is evaluated in the following section both qualitatively and quantitatively.

Fig. 4. Multispectral and Panchromatic Images: (a) MS Image of 1st Set, (b) PAN Image of 1st Set, (c) MS Image of 2nd Set, (d) PAN Image of 2nd Set.
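The following Python sketch shows one way to implement steps (6)–(10) for a single detail sub-band. Using scikit-image's match_histograms for step (6) and applying the rule per band and per sub-band are assumptions of this illustration, not details stated by the authors; it relies on the spatial_frequency() helper sketched after Eqs. (1)–(3).

```python
# Spatial-frequency fusion rule for one detail sub-band, steps (6)-(10).
import numpy as np
from skimage.exposure import match_histograms

def fuse_details(ms_dc, pan_dc):
    # Step (6): histogram match the PAN detail coefficients to the MS ones.
    pan_matched = match_histograms(pan_dc, ms_dc)
    # Step (7): spatial frequencies of both coefficient sets, Eqs. (1)-(3).
    f_smc = spatial_frequency(ms_dc)
    f_spc = spatial_frequency(pan_matched)
    # Steps (8)-(9): normalized spatial frequencies, Eqs. (4)-(5).
    total = f_smc + f_spc
    w_ms = f_smc / total if total > 0 else 0.5
    w_pan = f_spc / total if total > 0 else 0.5
    # Step (10): weighted combination of the detail coefficients, Eq. (6).
    return ms_dc * w_ms + pan_matched * w_pan
```

This fuse_details function is what the multi-level loop sketched in Section 2.3 expects for each horizontal, vertical and diagonal sub-band.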
3. Quality assessment parameters

Quality evaluation [14] is the process of determining the effective quality of the fused image with respect to different quality parameters, which usually depend on the application. Qualitative and quantitative methods are the two types of quality assessment. In many situations a pan-sharpened reference image may not be available, and different techniques [23] are therefore employed in such cases to evaluate the quality of the fused image. To get a clear picture of the performance of the proposed technique, it is applied in a case where a pan-sharpened reference image is available. The qualitative method generally involves visual comparison between the fused image and the reference image, whereas the quantitative method determines the spectral and spatial qualities of the fused image based on some quality parameters. Both reference and non-reference indexes [16] are considered in this paper.

The quality metrics used in this paper are defined as follows.


Fig. 5. Stage by Stage Outputs of 2 Level SF-DWT Image Fusion: (a) MS Image, (b) DWT Coefficients of MS–Level 1, (c) DWT Coefficients of MS–Level 2, (d) PAN Image, (e) DWT
Coefficients of PAN–Level 1, (f) DWT Coefficients of PAN–Level 2, (g) Fused Image, (h) Fused Coefficients–Level 1 and (i) Fused Coefficients–Level 2.

(1) Standard Deviation (SD) [18]

It is defined as a measure of the contrast of the fused image. A fused image with high spectral quality will have a low standard deviation.

SD = \sqrt{\frac{1}{MN} \sum_{y=0}^{N-1} \sum_{x=1}^{M-1} \left[ I_f(x,y) - U_{I_f} \right]^2}    (7)

where the mean is denoted as

U_{I_f} = \frac{1}{MN} \sum_{y=0}^{N-1} \sum_{x=1}^{M-1} \left| I_f(x,y) \right|

(2) Correlation Coefficient (CC) [22]

The similarity between the fused and reference images can be calculated using the correlation coefficient. A CC of unity indicates that both images are the same. It is one of the reference quality metrics and is defined as

CC(I_r, I_f) = \frac{\sum_{i,j} (I_{f_{i,j}} - \bar{I}_f)(I_{r_{i,j}} - \bar{I}_r)}{\sqrt{\sum_{i,j} (I_{f_{i,j}} - \bar{I}_f)^2 \sum_{i,j} (I_{r_{i,j}} - \bar{I}_r)^2}}    (8)

where I_f and I_r are the fused image and the reference image, the mean values of I_f and I_r are \bar{I}_f and \bar{I}_r respectively, and I_{f_{i,j}} and I_{r_{i,j}} are the pixel values corresponding to the (i, j)th pixel of the images I_f and I_r respectively.

(3) Entropy (E) [18]

It gives the information content in the image. A higher value of entropy indicates a higher amount of information present in the image.

E = -\sum_{i=0}^{G-1} p_i \log_2 p_i    (9)

where G is the total number of grey levels and the probability of each level is given by p_i.
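The first three metrics can be sketched with NumPy as follows; the helper names and the 256-level histogram used for the entropy estimate are assumptions of this illustration, not the authors' code.

```python
# Standard deviation (Eq. 7), correlation coefficient (Eq. 8) and
# entropy (Eq. 9) for single-band images; illustrative helpers only.
import numpy as np

def std_dev(fused):
    fused = np.asarray(fused, dtype=np.float64)
    # Eq. (7) with the mean taken over absolute pixel values, as printed.
    return np.sqrt(np.mean((fused - np.mean(np.abs(fused))) ** 2))

def correlation_coefficient(ref, fused):
    ref = np.asarray(ref, dtype=np.float64).ravel()
    fused = np.asarray(fused, dtype=np.float64).ravel()
    num = np.sum((fused - fused.mean()) * (ref - ref.mean()))
    den = np.sqrt(np.sum((fused - fused.mean()) ** 2) *
                  np.sum((ref - ref.mean()) ** 2))
    return num / den

def entropy(img, levels=256):
    # Assumes an 8-bit grey-level image; histogram gives p_i of Eq. (9).
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```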


Fig. 6. Original and fused images of Set 1 Pléiades Image (Sub-Image portion is shown in red box): (a) PAN, (b) Sub-Image of PAN, (c) MS, (d) Sub-Image of MS, (e) DWT-S, (f)
Sub-Image result of DWT-S, (g) DWT-A, (h) Sub-Image result of DWT-A, (i) DWT-MS, (j) Sub-Image result of DWT-MS, (k) SF-DWT and (l) Sub-Image result of SF-DWT.

(4) Relative Average Spectral Error (RASE) [22]

The global spectral quality of the image can be evaluated using RASE. A lower value of RASE indicates better spectral quality.

RASE = \frac{100}{M_{Rad}} \sqrt{\frac{1}{N_B} \sum_{i=1}^{N_B} RMSE^2(B_i)}    (10)

where RMSE denotes the root mean square error and M_{Rad} represents the mean radiance of the N_B spectral bands B_i of the original MS image.

(5) Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) [22,25]

It is the relative dimensionless global error in synthesis. A smaller value of ERGAS shows that the fused image and the reference image are similar [20].

ERGAS = \frac{100}{M_R} \sqrt{\frac{1}{N_B} \sum_{i=1}^{N_B} \left( \frac{RMSE(I_r(i), I_f(i))}{\mu(I_r)} \right)^2}    (11)

where RMSE represents the root mean square error, \mu(I_r) denotes the mean of the reference image, I_r is the reference image, I_f is the fused image, M_R denotes the resolution ratio of the PAN and MS images and N_B denotes the number of spectral bands.

(6) Root Mean Square Error (RMSE) [27]

RMSE is defined as

RMSE = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} (I_{r_{ij}} - I_{f_{ij}})^2}    (12)

where I_r denotes the reference image, I_f is the fused image, i and j are the row and column indices, and M and N denote the horizontal and vertical dimensions. A smaller value indicates better performance.

(7) Peak Signal to Noise Ratio (PSNR) [16]

The mathematical expression for PSNR is given by

PSNR(dB) = 20 \log_{10} \frac{255}{\sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} (I_{r_{ij}} - I_{f_{ij}})^2}}    (13)

where I_r represents the reference image, I_f is the fused image, and i and j are the row and column indices.
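A compact sketch of Eqs. (10)–(13) is given below; treating the images as 8-bit (peak value 255), using the reference image's global mean for M_Rad, and taking the resolution ratio as 4 for the 1:4 Pléiades data are assumptions of this illustration.

```python
# RMSE (Eq. 12), PSNR (Eq. 13), RASE (Eq. 10) and ERGAS (Eq. 11) for
# multi-band images shaped (bands, rows, cols); illustrative only.
import numpy as np

def rmse(ref, fused):
    diff = np.asarray(ref, dtype=np.float64) - np.asarray(fused, dtype=np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(ref, fused, peak=255.0):
    return 20.0 * np.log10(peak / rmse(ref, fused))

def rase(ref, fused):
    # Mean radiance of the reference bands stands in for M_Rad.
    band_rmse2 = [rmse(r, f) ** 2 for r, f in zip(ref, fused)]
    return 100.0 / np.mean(ref) * np.sqrt(np.mean(band_rmse2))

def ergas(ref, fused, resolution_ratio=4.0):
    # resolution_ratio = 4 corresponds to the paper's 1:4 MS:PAN data.
    terms = [(rmse(r, f) / np.mean(r)) ** 2 for r, f in zip(ref, fused)]
    return 100.0 / resolution_ratio * np.sqrt(np.mean(terms))
```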


Fig. 7. Original and fused images of Set 2 Pléiades Image (Sub-Image portion is shown in red box): (a) PAN, (b) Sub-Image of PAN, (c) MS, (d) Sub-Image of MS, (e) DWT-A, (f)
Sub-Image result of DWT-A, (g) DWT-S, (h) Sub-Image result of DWT-S, (i) DWT-MS, (j) Sub-Image result of DWT-MS, (k) SF-DWT and (l) Sub-Image result of SF-DWT.

Table 1
Comparison of IHS, DWT-S, DWT-A, DWT-MS and SF-DWT image fusion techniques in terms of Entropy, RASE and ERGAS.

Algorithm    Entropy   RASE      ERGAS
Set 1 Image
IHS          7.6460    56.3562   75.6294
DWT-S        7.5244    33.7876   27.2659
DWT-A        7.5167    32.4434   25.3269
DWT-MS       7.5287    33.9927   27.7336
SF-DWT       7.5152    32.3371   25.1699
Set 2 Image
IHS          7.6595    81.0709   133.324
DWT-S        7.3683    43.7873   38.9306
DWT-A        7.3855    41.7249   35.6626
DWT-MS       7.3870    44.0414   39.5912
SF-DWT       7.3853    41.5840   35.4326

(8) Structural Similarity Measure (SSIM) [16]

The structural similarity between the fused image and the reference image is determined using SSIM. It is a better indicator than PSNR. A higher value of SSIM indicates better structural quality and hence better image quality.

SSIM = \frac{(2\mu_r\mu_f)(2\sigma_{rf})}{(\mu_r^2 + \mu_f^2)(\sigma_r^2 + \sigma_f^2)}    (14)

where the mean values of the reference image I_r and the fused image I_f are denoted by \mu_r and \mu_f respectively, their variances are given by \sigma_r^2 and \sigma_f^2, and the covariance of the images is represented by \sigma_{rf}.

(9) Edge Stability Mean Square Error (ESMSE) [1]

Edges present in an image represent high-frequency content. The consistency of the edges present in the fused image and the reference image can be evaluated using ESMSE.

ESMSE = \frac{1}{N_B} \sum (E_r - E_f)^2    (15)

where E_r and E_f denote the edge maps of the reference image I_r and the fused image I_f, and N_B denotes the number of spectral bands. A lower value indicates better performance.

The performance of the proposed technique is evaluated in terms of these nine quality metrics. Quality metrics like RASE and ERGAS give the global spectral performance [6] of the fused image.
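Eqs. (14) and (15) can be sketched as follows. The global (non-windowed) SSIM form mirrors Eq. (14) as printed, while the Sobel edge maps and the per-pixel averaging within each band for ESMSE are assumptions of this illustration, since the paper does not specify the edge detector used.

```python
# Global SSIM (Eq. 14) and edge-stability MSE (Eq. 15); illustrative only.
import numpy as np
from scipy import ndimage

def ssim_global(ref, fused):
    ref = np.asarray(ref, dtype=np.float64)
    fused = np.asarray(fused, dtype=np.float64)
    mu_r, mu_f = ref.mean(), fused.mean()
    var_r, var_f = ref.var(), fused.var()
    cov_rf = np.mean((ref - mu_r) * (fused - mu_f))
    return (2 * mu_r * mu_f) * (2 * cov_rf) / ((mu_r**2 + mu_f**2) * (var_r + var_f))

def esmse(ref_bands, fused_bands):
    # Edge maps via Sobel gradient magnitude (an assumed choice of detector).
    def edges(img):
        return np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    # Average the squared edge-map difference per band, then over the N_B bands.
    diffs = [np.mean((edges(r) - edges(f)) ** 2)
             for r, f in zip(ref_bands, fused_bands)]
    return float(np.mean(diffs))
```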
panchromatic images of resolution of 50 cm, multispectral image


Table 2
Comparison of IHS, DWT-S, DWT-A, DWT-MS and SF-DWT image fusion techniques in terms of SD, CC and RMSE (per R, G, B band).

             SD                       CC                       RMSE
Algorithm    R      G      B          R      G      B          R      G      B
Set 1 Image
IHS          0.234  0.235  0.222      0.905  0.921  0.927      0.173  0.158  0.154
DWT-S        0.242  0.246  0.244      0.944  0.931  0.911      0.090  0.095  0.105
DWT-A        0.236  0.240  0.238      0.952  0.934  0.911      0.083  0.091  0.103
DWT-MS       0.237  0.241  0.239      0.945  0.927  0.905      0.088  0.095  0.107
SF-DWT       0.235  0.240  0.237      0.952  0.935  0.912      0.083  0.090  0.103
Set 2 Image
IHS          0.247  0.228  0.211      0.902  0.901  0.900      0.186  0.168  0.161
DWT-S        0.233  0.219  0.215      0.939  0.916  0.890      0.087  0.090  0.101
DWT-A        0.226  0.211  0.207      0.949  0.921  0.890      0.079  0.085  0.099
DWT-MS       0.228  0.213  0.209      0.940  0.912  0.881      0.085  0.090  0.103
SF-DWT       0.226  0.211  0.207      0.949  0.921  0.890      0.079  0.085  0.099

Table 3
Comparison of IHS, DWT-S, DWT-A, DWT-MS and SF-DWT image fusion techniques in terms of PSNR, SSIM and ESMSE (per R, G, B band).

             PSNR (dB)                SSIM                     ESMSE
Algorithm    R      G      B          R      G      B          R      G      B
Set 1 Image
IHS          87.47  88.24  88.48      0.795  0.804  0.784      0.014  0.015  0.037
DWT-S        93.09  92.68  91.79      0.687  0.655  0.606      0.013  0.020  0.042
DWT-A        93.76  93.04  91.91      0.731  0.662  0.583      0.013  0.020  0.042
DWT-MS       93.27  92.61  91.58      0.684  0.625  0.554      0.013  0.020  0.042
SF-DWT       93.80  93.07  91.93      0.736  0.665  0.585      0.013  0.020  0.042
Set 2 Image
IHS          86.85  87.72  88.07      0.729  0.726  0.701      0.012  0.017  0.032
DWT-S        93.41  93.14  92.15      0.662  0.615  0.555      0.009  0.019  0.035
DWT-A        94.19  93.58  92.30      0.715  0.625  0.533      0.009  0.020  0.036
DWT-MS       93.61  93.08  91.93      0.661  0.585  0.505      0.009  0.020  0.036
SF-DWT       94.23  93.61  92.32      0.720  0.629  0.537      0.009  0.020  0.036

Fig. 8. Performance metrics vs level of decomposition of SF-DWT in comparison with existing methods in the case of set 1 image: (a) Entropy, (b) RASE and (c) ERGAS.

4. Materials

Experimental analysis to evaluate the quality of the proposed SFDWT image fusion technique is carried out using images available from the Pléiades sensor. The Pléiades satellites, designed under the French-Italian ORFEO program, are capable of providing panchromatic images with a resolution of 50 cm and multispectral images with a resolution of 2 m, and they can also provide pan-sharpened images having a resolution of 50 cm. The MS image has 4 spectral bands, of which the RGB bands are used in this paper. In order to make the processing faster, a section of the image is taken: a PAN image of size 4096 × 4096 and an MS image of size 1024 × 1024, with a resolution ratio of 1:4, are considered.

In this paper the proposed image fusion technique is applied to urban images, as urban areas are characterized by high frequency content and the SFDWT is based on spatial frequency, which performs image fusion without losing high frequency content.

The experimental area includes buildings, roads and green patches of land. Before applying the proposed algorithm, the MS image should be enlarged to match the size of the PAN image. The resizing of the MS image is achieved by up-sampling the MS image 4 times using the lanczos3 interpolation technique [5]. The scaled MS image and the PAN image are then given to the proposed algorithm, to the standard IHS method and to the DWT image fusion methods based on substitution (DWT-S), averaging (DWT-A) and maximum selection (DWT-MS).
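One possible way to perform this 4× up-sampling in Python is shown below; the use of OpenCV's INTER_LANCZOS4 flag as a stand-in for the MATLAB-style lanczos3 kernel is an assumption of this sketch, not something stated in the paper.

```python
# Up-sample the MS image by the 1:4 resolution ratio before fusion.
import cv2

def upsample_ms(ms, factor=4):
    # ms is expected as (rows, cols, bands); Lanczos interpolation keeps
    # edges sharper than bilinear resampling.
    rows, cols = ms.shape[:2]
    return cv2.resize(ms, (cols * factor, rows * factor),
                      interpolation=cv2.INTER_LANCZOS4)
```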
5. Results and discussion

In this section, the results of the proposed image fusion technique are presented. Two sets of MS and PAN images, shown in Fig. 4, are used to obtain the results.


Fig. 9. Performance metrics vs level of decomposition of SF-DWT in comparison with existing methods in the case of set 1 image: (a) SD-Red, (b) SD-Green, (c) SD-Blue, (d)
CC-Red, (e) CC-Green, (f) CC-Blue, (g) RMSE-Red, (h) RMSE-Green and (i) RMSE-Blue.

A 2-level SFDWT image fusion technique, along with the existing techniques, is applied and their performances are compared. Both qualitative and quantitative analysis is carried out to get a better idea of the performance of the proposed image fusion technique. The outputs of SFDWT at different stages, for the images of Fig. 4(a) and (b), are shown in Fig. 5.

5.1. Qualitative analysis

The visual quality of the proposed technique on the Pléiades data can be verified using Figs. 6 and 7. The SFDWT image fusion technique produces high-quality images, which implies that the fused image has good geometric content along with rich radiometric information.

5.2. Quantitative analysis

A clear idea of the effectiveness of the proposed algorithm is obtained by evaluating the quality of the fused image using the nine quality metrics given in Section 3. Tables 1–3 (bold values indicate the best result) show the comparison of the various parameters for the existing and proposed techniques. From the tables, it is evident that the SFDWT image fusion technique outperforms the traditional fusion methods.


Fig. 10. Performance metrics vs level of decomposition of SF-DWT in comparison with existing methods in the case of set 1 image: (a) PSNR-Red, (b) PSNR-Green, (c) PSNR-
Blue, (d) SSIM-Red, (e) SSIM-Green, (f) SSIM-Blue, (g) ESMSE-Red, (h) ESMSE-Green and (i) ESMSE-Blue.

From Table 1, it can be noted that entropy is better in the case of the IHS image fusion technique, it being a spatial domain technique. The RASE obtained using SFDWT is only 32.3371 and 41.5840 for the two sets, which is a considerable improvement, and ERGAS is reduced by the proposed technique to 25.1699 and 35.4326. Thus, from Table 1, it is clear that the performance metrics of SFDWT show a significant improvement.

Table 2 shows the comparison of SFDWT with the existing methods in terms of standard deviation, correlation coefficient and root mean square error. The standard deviation of the three bands is reduced in the case of SFDWT, which indicates that the spectral performance has improved. The correlation coefficient of the proposed method is closer to unity, and a considerable reduction in RMSE is achieved in the case of SFDWT. Thus, from Table 2, the correlation coefficient closer to unity indicates that the fused image closely resembles the reference image, the reduced SD shows that the spectral quality has improved, and the RMSE with respect to the reference image is also reduced.

From Table 3, it can be noted that the PSNR in the case of SFDWT is much better than for the other methods. SSIM is better in the case of the IHS method, it being a spatial domain method, while SSIM has improved when compared with the standard DWT, from 0.687 to 0.736 in the case of the Red band for the set 1 image.


SFDWT has almost the same ESMSE as the other existing methods. Here it is worth noting that the PSNR and SSIM parameters have improved when compared with the standard DWT methods. Similar results are obtained in the case of the set 2 image.

From Tables 1–3, most of the quality metrics have improved in the case of the proposed technique. Another important factor that determines the quality of an image fusion method is the level of decomposition. In order to get a clear idea of the effect of the level of decomposition on the performance of SFDWT, all nine quality metrics are evaluated for different levels of decomposition and plotted. From this analysis, it is clear that SFDWT, which is already better than the existing techniques, becomes even better as the level of decomposition is increased.

Fig. 8 shows the effect of the level of decomposition on entropy, RASE and ERGAS. It is clear that, as the level of decomposition increases above 2, the proposed technique works better than the other versions of DWT image fusion. It is interesting to note that, among the four methods, the DWT substitution technique is the worst performing one. It can also be noted that DWT-Average has a performance close to that of the proposed technique, but the difference starts to increase when the level of decomposition reaches 5.

The effect of the level of decomposition on standard deviation, correlation coefficient and root mean square error is shown in Fig. 9. Similar to the result obtained in Fig. 8, DWT-S is the worst performing technique, and the proposed technique is better than the others as the level of decomposition increases. The effect grows as the level of decomposition increases, and the performance of DWT-A is comparable with that of the proposed technique for levels of decomposition less than 4.

From Fig. 10, in the case of PSNR the order of performance is SF-DWT, DWT-A, DWT-MS and DWT-S. In the case of SSIM, the proposed technique is again better than the others, but here DWT-MS is the worst performing technique. In the case of ESMSE, all techniques except DWT-S have comparable performance.

Thus, from Figs. 8–10, it is clear that as the level of decomposition increases, all performance metrics improve compared with the existing methods. It can therefore be concluded that the performance of SFDWT becomes even better as the level of decomposition is increased.

6. Conclusion

In remote sensing applications, the major issue is how effectively spectral information can be preserved while simultaneously improving the spatial information. In order to address this problem, a novel image fusion technique based on the Discrete Wavelet Transform is developed in this paper. The proposed technique, based on spatial frequency, is found to be an improved version of the existing standard DWT image fusion technique. The quality of the proposed technique is analyzed visually and quantitatively, using reference and non-reference performance indexes. From the experimental analyses, it can be clearly seen that the proposed technique yields better spectral and spatial quality than the standard DWT methods.

Declarations of interest

None.

References

[1] I. Avcıbas, B. Sankur, K. Sayood, Statistical evaluation of image quality measures, J. Electron. Imaging 11 (2002) 206–223, https://doi.org/10.1117/1.1455011.
[2] S. Baronti, B. Aiazzi, M. Selva, A. Garzelli, L. Alparone, A theoretical analysis of the effects of aliasing and misregistration on pansharpened imagery, IEEE J. Sel. Top. Signal Process. 5 (2011) 446–453, https://doi.org/10.1109/JSTSP.2011.2104938.
[3] M. Chikr El-Mezouar, N. Taleb, K. Kpalma, J. Ronsin, An IHS-based fusion for color distortion reduction and vegetation enhancement in IKONOS imagery, IEEE Trans. Geosci. Remote Sens. 49 (2011) 1590–1602, https://doi.org/10.1109/TGRS.2010.2087029.
[4] B. Deng, H. Guo, C. Wang, Y. Nie, A comparison study on SPOT5 image fusion and quality assessment, in: Proceedings of SPIE, 2009, pp. 714618-1–714618-8, https://doi.org/10.1117/12.813135.
[5] S. Fadnavis, Image interpolation techniques in digital image processing: an overview, Int. J. Eng. Res. Appl. 4 (2014) 70–73.
[6] L. Fonseca, L. Namikawa, E. Castejon, L. Carvalho, C. Pinho, A. Pagamisse, Image fusion for remote sensing applications, in: D.Y. Zheng (Ed.), Image Fusion and Its Applications, InTech, 2011, pp. 153–178, https://doi.org/10.5772/22899.
[7] M. Ghahremani, H. Ghassemian, Remote sensing image fusion using ripplet transform and compressed sensing, IEEE Geosci. Remote Sens. Lett. 12 (2015) 502–506, https://doi.org/10.1109/LGRS.2014.2347955.
[8] M. Gonzalez-Audicana, J.L. Saleta, R.G. Catalan, R. Garcia, Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition, IEEE Trans. Geosci. Remote Sens. 42 (2004) 1291–1299, https://doi.org/10.1109/TGRS.2004.825593.
[9] J. Han, Y. Zhang, L. Wang, L. Bai, Image fusion via feature residual and statistical matching, IET Comput. Vis. 10 (2016) 551–558, https://doi.org/10.1049/iet-cvi.2015.0280.
[10] P. Hill, M.E. Al-Mualla, D. Bull, Perceptual image fusion using wavelets, IEEE Trans. Image Process. 26 (2017) 1076–1088, https://doi.org/10.1109/TIP.2016.2633863.
[11] S. Kavitha, K.K. Thyagharajan, Efficient DWT-based fusion techniques using genetic algorithm for optimal parameter estimation, Soft Comput. 21 (2017) 3307–3316, https://doi.org/10.1007/s00500-015-2009-6.
[12] Y. Kim, C. Lee, D. Han, Y. Kim, Y. Kim, Improved additive-wavelet image fusion, IEEE Geosci. Remote Sens. Lett. 8 (2011) 263–267, https://doi.org/10.1109/LGRS.2010.2067192.
[13] X. Luping, G. Guorong, F. Dongzhu, Multi-focus image fusion based on non-subsampled shearlet transform, IET Image Process. 7 (2013) 633–639, https://doi.org/10.1049/iet-ipr.2012.0558.
[14] J. Marcello, A. Medina, F. Eugenio, Evaluation of spatial and spectral effectiveness of pixel-level fusion techniques, IEEE Geosci. Remote Sens. Lett. 10 (2013) 432–436, https://doi.org/10.1109/LGRS.2012.2207944.
[15] H.O.S. Mishra, S. Bhatnagar, Survey on different image fusion techniques, Int. J. Sci. Eng. Res. 5 (2014) 167–172.
[16] V.P.S. Naidu, J.R. Raol, Pixel-level image fusion using wavelets and principal component analysis, Def. Sci. J. 58 (2008) 338–352, https://doi.org/10.14429/dsj.58.1653.
[17] F. Palsson, J.R. Sveinsson, M.O. Ulfarsson, J.A. Benediktsson, Model-based fusion of multi- and hyperspectral images using PCA and wavelets, IEEE Trans. Geosci. Remote Sens. 53 (2015) 2652–2663, https://doi.org/10.1109/TGRS.2014.2363477.
[18] V.R. Pandit, R.J. Bhiwani, Image fusion in remote sensing applications: a review, Int. J. Comput. Appl. 120 (2015) 975–8887, https://doi.org/10.1007/3-540-29711-1.
[19] P.S. Pradhan, R.L. King, N.H. Younan, D.W. Holcomb, Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion, IEEE Trans. Geosci. Remote Sens. 44 (2006) 3674–3686, https://doi.org/10.1109/TGRS.2006.881758.
[20] H.R. Shahdoosti, H. Ghassemian, Fusion of MS and PAN images preserving spectral quality, IEEE Geosci. Remote Sens. Lett. 12 (2015) 611–615, https://doi.org/10.1109/LGRS.2014.2353135.
[21] S. Singh, M.V. Patil, Multi focus image fusion based on spatial frequency and contrast based analysis under stationary wavelet transform domain, Int. J. Sci. Eng. Res. 7 (2016) 225–230.
[22] D. Sylla, A. Minghelli-Roman, P. Blanc, A. Mangin, O.H.F. D'Andon, Fusion of multispectral images by extension of the pan-sharpening ARSIS method, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7 (2014) 1781–1791, https://doi.org/10.1109/JSTARS.2013.2271911.
[23] G. Vivone, L. Alparone, J. Chanussot, M.D. Mura, A. Garzelli, G.A. Licciardi, R. Restaino, L. Wald, A critical comparison among pansharpening algorithms, IEEE Trans. Geosci. Remote Sens. 53 (2015) 2565–2586, https://doi.org/10.1109/TGRS.2014.2361734.
[24] H. Wang, J. Wang, Architecture and implementation of shape-adaptive discrete wavelet transform for remote sensing image onboard compression, in: 2017 3rd IEEE International Conference on Computer and Communications, 2017, pp. 1803–1808.
[25] Q. Wei, J. Bioucas-Dias, N. Dobigeon, J.-Y. Tourneret, Hyperspectral and multispectral image fusion based on a sparse representation, IEEE Trans. Geosci. Remote Sens. 53 (2015) 3658–3668, https://doi.org/10.1109/TGRS.2014.2381272.
[26] Y. Yang, D.S. Park, S. Huang, N. Rao, Medical image fusion via an effective wavelet-based approach, EURASIP J. Adv. Signal Process. (2010), https://doi.org/10.1155/2010/579341.
[27] Y. Yang, W. Wan, S. Huang, F. Yuan, S. Yang, Y. Que, Remote sensing image fusion based on adaptive IHS and multiscale guided filter, IEEE Access 4 (2016) 4573–4582, https://doi.org/10.1109/ACCESS.2016.2599403.
[28] L. Zhan, Y. Zhuang, L. Huang, Infrared and visible images fusion method based on discrete wavelet transform, J. Comput. 28 (2017) 57–71, https://doi.org/10.3966/199115592017042802005.

