
Hyperspectral Image: Fundamentals

and Advances

V. Sowmya, K. P. Soman and M. Hassaballah

Abstract Hyperspectral remote sensing has received considerable interest in recent years for a variety of industrial applications including urban mapping, precision agriculture, environmental monitoring, and military surveillance, as well as computer vision applications. It captures hyperspectral images (HSI) carrying a large amount of land-cover information. With the increasing industrial demand for HSI, there is a need for more efficient and effective methods and data analysis techniques that can deal with the vast data volume of hyperspectral imagery. The main goal of this chapter is to provide an overview of the fundamentals and advances in hyperspectral imaging. Hyperspectral image enhancement, denoising and restoration, classical classification techniques, and the most recently popular classification algorithms are discussed in detail. In addition, the standard hyperspectral datasets used for research purposes are covered in this chapter.

1 Introduction

Over the last decades, satellites have been successfully used for many applications
such as earth observation, remote sensing, communication, and navigation. The data
measured from an object without any physical contact is known as remote sensing

V. Sowmya (B) · K. P. Soman


Amrita School of Engineering, Center for Computational Engineering
and Networking (CEN), Amrita Vishwa Vidyapeetham, Coimbatore, India
e-mail: v_sowmya@cb.amrita.edu
K. P. Soman
e-mail: kp_soman@amrita.edu
M. Hassaballah
Faculty of Computers and Information, Computer Science Department,
South Valley University, Luxor, Egypt
e-mail: m.hassaballah@svu.edu.eg

© Springer Nature Switzerland AG 2019
M. Hassaballah and K. M. Hosny (eds.), Recent Advances in Computer Vision, Studies
in Computational Intelligence 804, https://doi.org/10.1007/978-3-030-03000-1_16

[1]. As different objects vary in their molecular composition, each object has a unique pattern of absorption and emission of the electromagnetic radiation incident on its surface. The measurement of the absorbed or reflected radiation over a detailed wavelength range results in a pattern known as the spectral signature. The spectral signature can potentially be used to identify any object, as it is unique for every material present on the Earth's surface. In other words, the materials or objects present in the obtained hyperspectral image are identified from their spectral signature or spectral response with great precision. Hence, hyperspectral sensors are developed to capture radiation over a wide wavelength range of the electromagnetic spectrum covering the visible, short-, mid- and long-wave infrared regions, in bands each of which is about 10 nm wide [2, 3].
The radiation emission of a scene at a particular wavelength is captured as an image, and these images are arranged layer by layer (one per wavelength) to form a hyperspectral data-cube as shown in Fig. 1. The spatial information of the hyperspectral data-cube is represented by the x-y plane and the spectral content is represented along the z-axis. Each band of a hyperspectral image is an array in which each pixel holds a digital number (DN) corresponding to the radiance value collected by the sensor over its instantaneous field of view (IFOV), and each band corresponds to a particular wavelength. Generally, the HSI data cube (3D hypercube) is represented as χ ∈ R^(n1×n2×nb), where n = n1 × n2 represents the number of pixels and nb represents the number of bands.
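As an illustration of this representation, the cube can be flattened into an n × nb matrix whose rows are pixel spectra. A minimal NumPy sketch (the array sizes here are made up for illustration):

```python
import numpy as np

# Hypothetical data cube: n1 x n2 spatial pixels, nb spectral bands
n1, n2, nb = 145, 145, 200          # sizes chosen only for illustration
cube = np.random.rand(n1, n2, nb)   # stands in for a real HSI data cube

# Flatten the spatial dimensions: each row is one pixel's spectral signature
pixels = cube.reshape(n1 * n2, nb)

print(pixels.shape)  # (21025, 200): n = n1*n2 pixels, nb bands
```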
Each pixel is represented as a one-dimensional vector in the spectral space formed by the bands. Similar types of materials, whose spectral properties are close to each other, are grouped using clustering algorithms. The popular clustering algorithms used in hyperspectral image analysis are k-means clustering, fuzzy c-means clustering and spectral unmixing based clustering methods. As the

Fig. 1 Hyperspectral data cube with spectral signature



correlation in the spectral space is high, the data can be represented in a lower dimensional space whose dimension is less than the number of spectral bands. This dimensionality reduction is obtained using techniques [4] such as principal component analysis (PCA) [5] or independent component analysis (ICA) [6]. In this context, an image is represented as a matrix in the spatial space. Similar to the spectral property, the spatial properties of similar materials are close to each other. The grouping of materials based on the spatial property is known as segmentation, while the simultaneous processing of a pixel based on its neighboring pixels in the spatial space and of a band based on its neighboring bands in the spectral space is known as spectral-spatial representation [7, 8].
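Since only the first few principal components typically capture most of the spectral variance, PCA-based dimensionality reduction can be sketched as follows (NumPy only; the cube dimensions and the number of retained components are illustrative assumptions):

```python
import numpy as np

n1, n2, nb = 100, 100, 200                 # hypothetical cube dimensions
cube = np.random.rand(n1, n2, nb)
X = cube.reshape(-1, nb)                   # pixels as rows, bands as columns

# Center the data and compute principal directions via SVD
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 10                                     # retain the first k components
X_reduced = Xc @ Vt[:k].T                  # project onto a k-dim subspace

print(X_reduced.shape)  # (10000, 10)
```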
Hyperspectral imaging covers a broad range of imaging systems, such as medical hyperspectral imaging, atmospheric sounding, and close-range hyperspectral imaging. Though hyperspectral imagery was originally developed for mining and geology applications, it is now considered a valuable source of information for several potential applications such as mineralogy, environmental monitoring, precision agriculture, defense and security, chemical imaging, astronomy, and ecological sciences, as well as for the food industry to characterize product quality [9–11]. Further applications of hyperspectral imaging include forensic examination of artworks, historic and questioned documents, defense and homeland security, counter-terrorism, food quality assessment and image guided surgery.
Hyperspectral imaging systems [12] have several advantages compared to color and multispectral imaging (MSI). Color and multispectral imaging systems generally comprise three to ten bands [13], whereas a hyperspectral imaging system has hundreds of co-registered bands. For MSI, the spectral bands are spaced irregularly and widely, but HSI has a contiguous and regularly spaced spectrum, which is continuous in nature. The continuous nature of the HSI spectrum provides much more information about the surface compared to MSI [14]. Figure 2 shows a comparison between the number of multispectral and hyperspectral bands over the same area.
In general, the platform of a hyperspectral sensor can be either airborne or spaceborne. The airborne sensors include AVIRIS, HYDICE, CASI and HyMap, which fly on fixed-wing (airplane) or rotary (helicopter) platforms, while the spaceborne sensors include Hyperion, HySI, MODIS and CHRIS. A description of these hyperspectral sensors is given in Table 1. The advantages of hyperspectral imagery are:
• Continuous spectral reflectance curve and hence better material identification
• High spectral resolution
• Moderate spatial resolution
• Large area coverage
Data analysts face various challenges in hyperspectral data interpretation, since a hyperspectral image provides a large volume of data due to its high spectral resolution. Since there is only a minor difference in the spectral information of two adjacent bands, the grayscale images of neighboring wavelength bands appear similar. Therefore, much of the information in the scene appears redundant, but the bands often contain the

Fig. 2 Number of multispectral and hyperspectral bands in the same area

Table 1 Description of hyperspectral sensors

1. Airborne visible infrared imaging spectrometer (AVIRIS): 224 spectral bands, 400–2500 nm
2. Hyperspectral digital imagery collection experiment (HYDICE): 210 spectral bands, 400–2500 nm
3. Compact airborne spectrographic imager (CASI): 288 spectral bands, 400–900 nm
4. HyMap: 126 spectral bands, 400–2500 nm
5. Hyperion: 220 spectral bands, 400–2500 nm
6. Hyperspectral imaging camera (HySI): 32 spectral bands, 400–950 nm
7. Moderate resolution imaging spectroradiometer (MODIS): 36 spectral bands, 400–1440 nm
8. Compact high resolution imaging spectrometer (CHRIS): 19 spectral bands, 400–1050 nm

critical information used to identify the surface materials. Active researchers are conducting experiments to determine the proper approaches and tools for information analysis [15]. Due to the high dimensionality of hyperspectral data, most of the traditional classification techniques used for multispectral images cannot be applied directly to HSI [16], though they can be modified to handle the high dimensionality. The major challenges in the processing of hyperspectral data are noise, huge data dimension and spectral mixing [17]. A particular challenge is how to deal with the large amount of data produced by HSI systems. In this regard, several techniques aim at reducing the data volume by choosing only a subset of wavelengths, or linear combinations of them, that carry the most information for certain tasks. Approaches that handle all the information available in the HSI are still rare, which means that the full potential of HSI is not yet fully explored [18]. Modern tools for hyperspectral image analysis include the MATLAB hyperspectral image analysis toolkit (HIAT), HYPER-tools, ENVI, etc.
The chapter is organized as follows: Sect. 2 discusses hyperspectral image enhancement techniques. Hyperspectral image denoising and restoration are presented in Sect. 3. Section 4 describes the most commonly used classifiers for hyperspectral image classification, while Sect. 5 presents a description of the hyperspectral datasets, followed by the conclusion in the final section.

2 Hyperspectral Image Enhancement

Image enhancement is a process that transforms an original image whose contrast is insufficient, or which has a high level of noise, into another image that can be used for further analysis [19]. Methods used for enhancement vary according to the chosen imaging modality. For instance, the methods used to enhance MRI images [20] are unlikely to represent the best approach for improving hyperspectral images taken in the visible and near-infrared bands of the electromagnetic spectrum. On the other hand, as mentioned before, HS images (HSIs) frequently possess high spectral resolution, and there is a tradeoff between spatial and spectral resolution due to the radiometric sensitivity in the design of sensors [21]. Figure 3 illustrates three images with different spatial resolutions for the same region. The problem of high spectral resolution and many other factors, such as imperfect imaging optics, secondary illumination effects, atmospheric scattering, and sensor noise, degrade the acquired image quality and hence limit the performance of approaches used in analyzing the input HSIs. In many cases, modifying the imaging optics or the sensor array is not an available option, which highlights the necessity of post-processing or enhancement techniques [22].
In this context, hyperspectral resolution enhancement can be considered as the joint processing of such data in order to derive (or reconstruct) a hyperspectral image product that exhibits, ideally, the spectral characteristics of the observed hyperspectral image at the spatial resolution and sampling of the higher resolution image [24].
In the literature, there are various methods that can be used for hyperspectral resolu-

Fig. 3 Images with different spatial resolutions for the same region [23]

tion enhancement [25, 26]. In [24], a maximum a posteriori estimation method that employs the stochastic mixing model is presented to improve the spatial resolution of a hyperspectral image using a higher resolution auxiliary image. Ghasrodashti et al. [26] proposed a method for spatial resolution enhancement using spectral unmixing and a Bayesian sparse representation, combining the high spectral resolution of the HSI with the high spatial resolution of a multispectral image (MSI) of the same scene and high resolution images from unrelated scenes. In [27], a number of spectra of different materials are extracted from both HSI and MSI data, and a set of transformation matrices is generated based on linear relationships between the HSI and MSI of specific materials. Another spectral unmixing-based HS and MS image fusion method is presented in [28]. In this method, linear spectral unmixing with a sparsity constraint is utilized, taking into consideration the impact of the linear observation model on the linear mixing model; that is, the method is based on the combination of the spectral mixing model and the observation model.
Pansharpening is another well-known process to enhance HSIs, which aims at fusing a panchromatic image with a multispectral one to generate an image with the high spatial resolution of the former and the high spectral resolution of the latter. Many sharpening methods are designed to merge multispectral or hyperspectral data with a high-spatial-resolution panchromatic image [23, 29, 30]. Pansharpening using multispectral images may help in achieving high classification accuracy and a more detailed analysis of scene composition. In fact, most of the initial efforts on hyperspectral image enhancement are extensions of the pansharpening of multispectral data. Wavelet based methods [31], machine learning based methods [32], unmixing based methods [33], and purely statistical methods [34] are examples of pansharpening approaches. Other methods such as superresolution mapping [35] and vector bilateral filtering [36] aim at enhancing hyperspectral data without using auxiliary high resolution data. According to [37], most of the existing pansharpening approaches can be categorized into four classes: component projection-substitution [38], multiresolution analysis, Bayesian methods, and variational methods. In [39], a regression-based high-pass modulation pansharpening method is introduced. Wang et al. [40] explored the intrinsic tensor structure and local sparsity of MS images: the MS image is formulated as a set of spectral tensors, and each tensor together with its nearest neighbor tensors is assumed to lie in a low-dimensional manifold. These tensors are sparsely coded over their neighbor tensors, and a joint sparse coding assumption is cast on the bands to develop an n-way block pursuit algorithm for solving the sparse tensor coefficients. MS tensors of high resolution can be obtained by weighting the panchromatic image with these sparse tensor coefficients. Besides, following the recent technological and theoretical advances in computer vision achieved using machine learning, neural network and deep learning based methods have recently been applied to pansharpening tasks [41–43].

3 Hyperspectral Image Denoising and Restoration

Although hyperspectral images provide an abundance of information, the presence of noise hampers data analysis for potential applications. In recent years, several researchers have initiated data pre-processing tasks such as denoising, dimensionality reduction, etc. In hyperspectral images, the noise level varies from band to band. Therefore, the denoising technique must adapt to the level of noise present in each band: the detailed information in bands with a low noise level is preserved, while the noise in bands with a high noise level is removed. As the spatial properties of the pixels vary, the level of denoising must also differ for different pixels. Therefore, both spectral and spatial noise differences are considered in the noise reduction process.
Generally, the fine features of HSI data cubes are destroyed during the denoising process. Therefore, the image information of the HSI data cube is decorrelated from the noise using techniques such as principal component analysis [44]. Most of the total energy of HSI data lies in the first principal component and a small amount of energy is vested in the rest of the channels, which carry a large amount of noise. Two-dimensional bivariate wavelet thresholding is used to remove the noise in the low energy PCA channels, and to remove the noise at each pixel level, one-dimensional dual tree complex wavelet transform denoising is applied [45]. Two denoising algorithms are proposed by Zelinski and Goyal [46], which apply simultaneous sparsity on the wavelet representation to exploit the correlation between bands. The first algorithm denoises the entire data cube and outperforms wavelet based global soft thresholding; the second denoises a set of noisy bands (junk bands) by analyzing the correlated information between the bands of the same scene. In [47], a spectral-spatial adaptive total variation denoising model is developed by incorporating both the spatial and spectral information. Legendre-Fenchel denoising is proposed for edge preservation during denoising [48, 49]. Also, the least squares denoising technique gives better classification accuracy when compared to other denoising techniques [50]. For the denoising task, the strong dependencies across spatial and spectral neighbors have proved to be very useful: Zhong and Wang [51] introduced a multiple-spectral-band CRF (MSB-CRF) to simultaneously model and use the spatial and spectral dependencies in a unified probabilistic framework, which can significantly remove the noise while maintaining the important image details. Recently, several other algorithms for denoising hyperspectral images have been proposed in the literature [52–56].
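The PCA-plus-thresholding idea described above can be sketched roughly as follows. This is a simplified illustration, not any cited method: the cube, the number of high-energy components, and the threshold are all invented for the example, and soft-thresholding the minor PCA channels stands in for the wavelet-domain step.

```python
import numpy as np

n1, n2, nb = 64, 64, 100                    # hypothetical cube dimensions
cube = np.random.rand(n1, n2, nb)           # stands in for a noisy HSI cube
X = cube.reshape(-1, nb)

# Decorrelate the bands with PCA (via SVD of the centered data)
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
scores = (X - mean) @ Vt.T                  # PCA channel images

# Keep the high-energy leading components; soft-threshold the rest
k, thr = 5, 0.1                             # illustrative choices
minor = scores[:, k:]
scores[:, k:] = np.sign(minor) * np.maximum(np.abs(minor) - thr, 0.0)

denoised = (scores @ Vt + mean).reshape(n1, n2, nb)
print(denoised.shape)  # (64, 64, 100)
```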

4 Hyperspectral Image Classification

Machine learning [57] is a method of analyzing data and building an analytical model which is capable of independently adapting to new datasets. Algorithms are used to iteratively learn from the data and optimize performance to produce a reliable output. It is an area of artificial intelligence which does not require explicit programming. Machine learning algorithms can be predictive or descriptive: a regression model is an example of a predictive method, whereas a classification model is an example of a descriptive method. In regression, a prediction of a certain phenomenon is made and we get a continuous valued output; in classification problems, different classes are distinguished to give a discrete valued output. Applications of classification in the field of remote sensing include environmental monitoring, agriculture, military surveillance, soil type analysis, etc.
According to [58], the main categories of classification are:
• Supervised Classification: In this type of classification, class labels of the samples are available. A mapping function between the training samples and the output class labels is found, and this mapping is then used on new input samples to predict their classes. Examples of supervised classification include support vector machines, k-nearest neighbors, decision trees, neural networks, kernel estimation, etc.
• Unsupervised Classification: Class labels of the training data are unknown in unsupervised classification; the method discovers how the data are clustered or grouped. Clustering and mixture models come under unsupervised classification.
• Semi-supervised Classification: Here, some of the training samples are labeled while the class labels of the rest are unknown. It falls between supervised and unsupervised classification. Most real-world problems are semi-supervised.
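The supervised case above can be illustrated with a minimal nearest-neighbor classifier (pure NumPy; the toy data points are invented for the example):

```python
import numpy as np

# Toy labeled training samples: two classes in a 2-D feature space
train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])

def predict_1nn(x):
    """Assign the label of the closest training sample (1-NN)."""
    dists = np.linalg.norm(train - x, axis=1)
    return labels[np.argmin(dists)]

print(predict_1nn(np.array([0.05, 0.1])))  # 0: close to class-0 samples
print(predict_1nn(np.array([0.95, 1.0])))  # 1: close to class-1 samples
```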
Several hyperspectral data analysis tasks, such as classification, unmixing, etc., require a pre-processing step. A large number of classifiers are used for hyperspectral image classification; Orthogonal Matching Pursuit (OMP), Support Vector Machines (SVM), and Independent Component Discriminant Analysis (ICDA) are among the most widely used. In [59], Multinomial Logistic Regression (MLR) was developed to determine the posterior class probability. Melgani and Bruzzone used SVM for the classification of hyperspectral images; to show the effectiveness of this method, the authors compared the performance of SVM with that of artificial neural networks and the k-nearest neighbor classifier. The problem of extending binary SVM to multiclass problems in high dimensional hyperspectral data has also been studied [60–62].

Pixels in the HSI can be represented sparsely as a linear combination of a small number of training samples from a well organized dictionary matrix. This has led to the development of sparsity based classification algorithms, which represent an unknown pixel as a sparse vector whose nonzero entries correspond to the weights of the chosen training samples. The class label of the test pixel vector can be directly determined from the sparse vector, which is obtained by solving a sparsity-constrained optimization problem. Chen et al. [63] improved the classification performance by embedding contextual information into the sparse recovery optimization problem; this sparsity based algorithm is used for HSI classification. Composite kernel machines can also be used for HSI classification: the properties of Mercer's kernels are utilized by Camps-Valls et al. [64] to create a family of composite kernels which simply integrate spectral and spatial information. This gives better classification accuracy compared to conventional methods that consider only the spectral information; the approach is also flexible and has good computational efficiency. A novel semi-supervised segmentation algorithm for higher dimensional hyperspectral data is presented by Li et al. [65], which is implemented using semi-supervised learning of the posterior class probability distribution followed by segmentation. Here, the regressors are modelled by labeled samples and graph based methods, and the unlabeled samples are selected based on the entropy of the corresponding class labels. An expansion min-cut-based integer optimization algorithm is used to compute the maximum a posteriori segmentation.

4.1 Orthogonal Matching Pursuit

Orthogonal matching pursuit (OMP) [66–69] is one of the iterative greedy algorithms used for sparse approximation. The main highlight of this algorithm is its simplicity. It decides the category of a pixel vector based on the residue, without prior knowledge of the labels. In this algorithm, the column of the dictionary matrix that has the highest correlation with the current residue is selected at each iteration. While doing so, the orthogonality property is maintained; i.e., columns once selected will not be repeated. The main goal of OMP is to find the sparse coefficient vector x which has only K non-zero elements; i.e., the sparsity is K.
Consider a dictionary matrix A of size b × t, where b is the number of bands and t is the number of training pixel vectors; y is the test pixel vector of size b × 1; x is the sparse vector of size t × 1 with sparsity level K. The problem formulation is given by:

min ‖x‖₀ , such that Ax = y    (1)

Optimization of the l0 norm is an NP-hard problem. Consequently, the problem can be reformulated with the l2 norm as given below:

min ‖y − Ax‖₂ , such that ‖x‖₀ ≤ K    (2)



The algorithm for OMP based classification is explained below.

Input: Dictionary matrix A = [A1, A2, · · · , At], where Ai ∈ R^b
       Testing pixel vector y ∈ R^b
       Stopping criterion.
Output: Sparse vector x ∈ R^t with sparsity K

Algorithm:
• Initialize the residual r0 = y, activeset = [ ], Ã0 = [ ], k = 1
• Set rk = rk−1
• Find the column of A which has the greatest correlation with the residual vector. This is achieved by computing the inner product of Aj and rk:

  newindex = arg max_j ⟨Aj, rk⟩

• activeset = [activeset newindex]
• Update the dictionary matrix with the column of A corresponding to newindex:

  Ãk = [Ãk−1 A_newindex]

• Find the least squares solution using the updated dictionary matrix Ãk to estimate the value of x:

  xk = (Ãk^T Ãk)^−1 Ãk^T y

• Estimate the new residue:

  rk = Ãk xk − y

• Increment k and repeat from step 2 until the stopping criterion is reached.
• The vector x obtained after the above process has K non-zero elements and x ∈ R^t. The residue vectors for all the classes are found using:

  ri = Bi x − y,   i = 1 to c

where c is the total number of classes and Bi is the b × t matrix which keeps the columns of A belonging to the ith class and sets the rest of the columns to zero vectors.
• Estimate the l2 norm of the residues for all classes; the index of the minimum value gives the class label of the given test vector:

  class(y) = arg min_{i = 1 to c} ‖ri‖₂
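The steps above can be sketched in a few lines of NumPy. This is a simplified illustration with an invented toy dictionary; function and variable names are our own, and in practice the dictionary columns would be normalized spectra of labeled training pixels.

```python
import numpy as np

def omp_classify(A, class_ids, y, K):
    """Classify test pixel y via OMP over dictionary A (bands x training pixels).

    class_ids[i] is the class of training column i; K is the sparsity level.
    """
    residual, active = y.copy(), []
    for _ in range(K):
        # Column most correlated with the current residual (not yet selected)
        corr = np.abs(A.T @ residual)
        corr[active] = -np.inf
        active.append(int(np.argmax(corr)))
        # Least squares fit on the active columns, then update the residual
        x_active, *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
        residual = y - A[:, active] @ x_active
    # Per-class residue: reconstruct using only that class's coefficients
    x = np.zeros(A.shape[1])
    x[active] = x_active
    classes = np.unique(class_ids)
    residues = [np.linalg.norm(y - A[:, class_ids == c] @ x[class_ids == c])
                for c in classes]
    return classes[int(np.argmin(residues))]

# Toy dictionary: 4 bands, two training pixels per class
A = np.array([[1, 0.9, 0, 0],
              [1, 1.1, 0, 0],
              [0, 0, 1, 0.9],
              [0, 0, 1, 1.1]], dtype=float)
class_ids = np.array([0, 0, 1, 1])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(omp_classify(A, class_ids, y, K=2))  # 1: y matches the class-1 atoms
```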

4.2 Regularized Least Square

Regularized least squares (RLS) reduces all problems to solving a linear system and thus uses numerical linear algebra to exploit the latest mathematical and software tools. This means RLS can easily handle large, massive datasets. RLS has given good performance in many learning tasks [70], and from the experimental results discussed in [70], RLS classification is a good alternative to other classification techniques such as SVM. It is favorable to use this technique in multiclass classification, since many of the existing algorithms become complicated in the case of multiclass datasets.
RLS minimizes the L2 norm of the error and calculates the weight matrix. Let (t1, t2, ..., tn) denote the training samples and let the training labels be represented by (l1, l2, ..., ln), where ti ∈ R^d, li ∈ {1, 2, ..., T} for i = 1, 2, ..., n. Let M be an n × n matrix containing the kernel functions Mij = m(ti, tj). O is an n × T output matrix with Oij = 1 if the ith training sample belongs to the jth class and −1 otherwise. The optimization problem can be formulated as:

min_{C ∈ R^{n×T}} (1/n) ‖O − MC‖²_F + λ C^T M C    (3)

where λ is the regularization parameter. The formulation can be modified based on the linear model as:

min_{W ∈ R^{d×T}} (1/n) ‖O − XW‖²_F + λ ‖W‖²_F    (4)

where X = [t1, t2, ..., tn]^T is an n × d matrix. Using this optimum value, the class


labels for the testing samples are estimated. The regularization parameter addresses the problem of overfitting, and also helps in simplifying and smoothing the hypothesis. In regularization, we keep all the features but reduce the magnitude of the parameter values; this works well when there are many features, each contributing a little to the prediction.
Regularized least squares for regression and multiclass classification can be implemented using a software library called Grand Unified Regularized Least Squares (GURLS) [71]. Random features (randfeats) and the radial basis function (RBF) are among the kernels present in the GURLS library. The cross validation techniques in GURLS are Hold-Out (HO) and Leave-One-Out (LOO). Hold-Out cross validation fits the function using the training data and validates the fitted function on held-out data. Leave-One-Out cross validation fits the function using all the data except one sample, which is left out for validation; the process is repeated a number of times equal to the number of training data points.
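The linear formulation in Eq. (4) has a closed-form solution, W = (XᵀX + nλI)⁻¹ XᵀO, which can be sketched as follows. This is a bare-bones illustration with invented toy data, not the GURLS library itself.

```python
import numpy as np

def rls_train(X, y, n_classes, lam=0.1):
    """Linear RLS per Eq. (4): W = (X^T X + n*lam*I)^{-1} X^T O."""
    n, d = X.shape
    O = -np.ones((n, n_classes))
    O[np.arange(n), y] = 1.0          # +1 for the sample's class, -1 otherwise
    return np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ O)

def rls_predict(W, X):
    return np.argmax(X @ W, axis=1)   # class with the largest score

# Toy two-class data in 2-D
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y = np.array([0, 0, 1, 1])
W = rls_train(X, y, n_classes=2)
print(rls_predict(W, X))  # [0 0 1 1] on the training points
```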

4.3 Support Vector Machine

Support vector machine (SVM) [72–74] is a supervised learning algorithm mainly used for data mining and machine learning. The main applications of SVMs include pattern recognition, bioinformatics, image classification, and text categorization [75]. It was originally formulated by Vladimir Vapnik and colleagues in 1979, but remained largely unnoticed until 1992. In SVM, a hyperplane is used to linearly separate the data; non-linear data in the original dimension are mapped to a linearly separable higher dimensional space. The concept of kernels is used for this higher dimensional mapping into the feature space, which mitigates the issue of overfitting. The optimal hyperplane is estimated in such a way that the two classes are maximally separable, as illustrated in Fig. 4.
A constraint to maximize the separation between the two classes is considered in order to minimize the generalization error. Reducing the generalization error means that when a new test sample is given to the classifier, the chance of misclassification should be

Fig. 4 Optimal hyperplane with maximum margin



Fig. 5 Non-linear data in 2D is mapped to 3D

minimum. In SVM, two parallel planes equidistant from the classifier pass through one or more data points. During training, a hyperplane is determined such that it has the maximum separable margin. The data points used to determine the bounding planes are called support vectors.
Suppose there is a two class dataset in the R² space as shown in Fig. 5. It is clear from the figure that the data is not linearly separable in this space: a non-linear classifier would be required to classify this data in 2D, making the problem complex in the lower dimensional space. So, the data is mapped to the R³ space, i.e., a higher dimensional space, where it can be linearly classified. The original space in which the data lie is called the input space, and the higher dimensional space into which the data is mapped is called the feature space.
Let (xi, yi), where xi = [xi1, xi2, ..., xin]^T, represent a training set of linearly separable binary class n-dimensional data points, i.e., xi ∈ R^n is a training sample. The training label is denoted by yi, and i = 1, 2, ..., m. The number of training data points and the number of features are denoted by m and n respectively. The hyperplane for binary classification is given by:

w^T xi − ξ = 0    (5)

where w = [w1, w2, ..., wn]^T and ξ ∈ R. Thus, the classification function will be sign(w^T xi − ξ). The data points belong to the positive class if the classification function value is greater than 0; if the classification function value is less than 0, the data points belong to the negative class. The two bounding planes are given by:

w^T xi − ξ ≥ +1    (6)

w^T xi − ξ ≤ −1    (7)

These two equations can be combined and written as:

d(w^T xi − ξ) ≥ 1    (8)

where d = +1 or −1. This can be represented in matrix form as:

D(Xw − ξe) ≥ e    (9)

where e is the column vector of ones, D is a diagonal matrix with −1 and +1 as diagonal elements, w = [w1, w2, · · · , wn]^T and X = [x1^T, x2^T, · · · , xm^T]^T. The perpendicular distance, i.e., the shortest distance between the two bounding planes, is 2/‖w‖. Maximizing this distance is equivalent to minimizing ‖w‖/2. Hence, the matrix form of the SVM problem formulation is:

min_{w,ξ} (1/2) w^T w   s.t.   D(Xw − ξe) ≥ e    (10)

In the case of non-linear data, φ(x) is utilized instead of x, so the equation for the hyperplane becomes:

w^T φ(xi) − ξ = 0    (11)

SVMs are formulated for binary classification, but hyperspectral image classification involves multiple classes, so SVM must be extended to handle multiclass data effectively [76]. Two approaches are currently available: one constructs many binary classifiers and combines their results, while the other frames a single optimization problem over the whole data. The latter is computationally complex and expensive, so it is preferable to convert the multiclass problem into several binary problems. The two methods for decomposing a multiclass problem into binary classifications are the one-against-one method and the one-against-all method. In earlier days, the one-against-all method was used to implement SVM classification on multiclass data. In this method, the data in the jth class are given positive labels and the rest of the data are labeled negative, so k binary classifications are performed for k-class data. The SVM formulation for the jth class is:

min_{w^j, ξ^j} (1/2) (w^j)^T w^j
s.t. (w^j)^T φ(x_i) − ξ^j ≥ +1, if y_i = j      (12)
     (w^j)^T φ(x_i) − ξ^j ≤ −1, if y_i ≠ j
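The relabeling behind Eq. (12) is mechanical: for each class j, a fresh binary label vector is built with +1 for class j and −1 for everything else, yielding k binary problems. A minimal sketch (the label values are hypothetical):

```python
def one_vs_all_labels(y, classes):
    """For each class j, relabel the data: +1 for class j, -1 for the rest.
    Returns one binary label vector per class, i.e. k binary problems."""
    return {j: [1 if yi == j else -1 for yi in y] for j in classes}

y = [0, 2, 1, 0, 2]               # hypothetical multiclass labels
problems = one_vs_all_labels(y, classes=[0, 1, 2])
print(problems[0])                # -> [1, -1, -1, 1, -1]
print(len(problems))              # k = 3 binary problems
```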

In the one-against-one method, by contrast, a binary classification is performed for each class against every other class, so k(k−1)/2 binary classifiers are framed. The SVM formulation for the jth class and the pth class is given by:

min_{w^{jp}, ξ^{jp}} (1/2) (w^{jp})^T w^{jp}
s.t. (w^{jp})^T φ(x_i) − ξ^{jp} ≥ +1, if y_i = j     (13)
     (w^{jp})^T φ(x_i) − ξ^{jp} ≤ −1, if y_i = p

The LibSVM library [77] can be used to perform classification with SVM. It is a kernel-based library that uses the one-against-one technique for multiclass SVM classification. The kernels available in LibSVM are linear, polynomial, radial basis function (RBF) and sigmoid. The tunable parameters of SVM include the cost parameter, gamma, the polynomial degree and the kernel type.
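The one-against-one scheme used by LibSVM can be sketched as pair enumeration plus majority voting over the k(k−1)/2 classifiers. The pairwise rule below is a stand-in for a trained binary SVM, used only to exercise the voting logic:

```python
from itertools import combinations
from collections import Counter

def one_vs_one_predict(classes, pairwise_predict, x):
    """Majority vote over the k(k-1)/2 pairwise classifiers.
    pairwise_predict(j, p, x) returns the winning class (j or p) for sample x."""
    votes = Counter(pairwise_predict(j, p, x) for j, p in combinations(classes, 2))
    return votes.most_common(1)[0][0]

classes = [0, 1, 2, 3]
pairs = list(combinations(classes, 2))
print(len(pairs))                           # k(k-1)/2 = 6 for k = 4

# Stand-in pairwise rule: pretend every classifier involving class 2 picks 2.
toy_rule = lambda j, p, x: 2 if 2 in (j, p) else min(j, p)
print(one_vs_one_predict(classes, toy_rule, x=None))   # -> 2
```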

Fig. 6 Architecture of VCNN [78]



4.4 Vectorized Convolutional Neural Network

Recently, deep neural networks have outperformed all the traditional classifiers. A vectorized convolutional neural network (VCNN) for hyperspectral image classification is proposed in [78]. In general, the VCNN contains a convolution layer followed by a pooling layer. The convolution filters are parameters learned by the network; the number and size of the filters are chosen experimentally based on the data used for classification. For example, the network architecture shown in Fig. 6 consists of a convolution layer, a pooling layer and a fully connected layer. Each pixel sample is represented as a vector whose length equals the number of bands, so the length of the input layer is n_1, where n_1 is the number of bands. The number of filters in the convolution layer is experimentally fixed at 20. The length of each filter is denoted by k_1, and the length of the convolution output by n_2. The number of learnable parameters between the input and the convolution layer is 20 × (k_1 + 1). The numbers of neurons in the pooling layer and in the layer prior to the output are 20 × 1 × n_3 and n_4, respectively. The number of neurons in the final layer is n_5, with (n_4 + 1) × n_5 trainable parameters.
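These counts follow from simple arithmetic and can be checked directly. In the sketch below, the values of n_1, k_1, the pooling size and the class count are illustrative assumptions, not the actual settings of [78]:

```python
def vcnn_param_counts(n1, k1, pool=2, n4=100, n5=16, n_filters=20):
    """Layer sizes and parameter counts for the VCNN described in the text.
    n1: input length (number of bands); k1: filter length; n4: neurons
    before the output layer; n5: number of classes. pool, n4, n5 are
    example values only."""
    n2 = n1 - k1 + 1                       # valid 1-D convolution output length
    n3 = n2 // pool                        # pooling layer output length
    conv_params = n_filters * (k1 + 1)     # 20 x (k1 + 1): weights + bias per filter
    out_params = (n4 + 1) * n5             # (n4 + 1) x n5: final layer
    return n2, n3, conv_params, out_params

# e.g. a 220-band input (as in Indian Pines) with an assumed filter length of 21
n2, n3, conv_params, out_params = vcnn_param_counts(n1=220, k1=21)
print(n2, n3, conv_params, out_params)     # -> 200 100 440 1616
```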

5 Hyperspectral Datasets

Several hyperspectral datasets are publicly available for testing and evaluating algorithms, such as Botswana, Pavia University, Indian Pines, Kennedy Space Center (KSC) and Salinas-A. The following subsections give a brief description of the most widely used of these datasets.
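Each of these datasets is distributed as a three-dimensional cube (rows × columns × bands), whereas the pixel-wise classifiers discussed above expect one spectral vector per pixel. A minimal flattening sketch in plain Python (in practice, NumPy reshaping of the loaded cube would be used):

```python
def cube_to_pixels(cube):
    """Flatten an HSI cube, indexed as cube[row][col][band], into a
    list of per-pixel spectral vectors (one vector per ground pixel)."""
    return [cube[r][c] for r in range(len(cube)) for c in range(len(cube[0]))]

# Tiny synthetic 2 x 3 scene with 4 bands per pixel
cube = [[[r * 10 + c + b / 10 for b in range(4)] for c in range(3)]
        for r in range(2)]
pixels = cube_to_pixels(cube)
print(len(pixels), len(pixels[0]))   # -> 6 4  (6 pixels, 4 bands each)
```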

5.1 Botswana

The Botswana hyperspectral dataset [79–81] was acquired over the Okavango Delta in Botswana in May 2001 using the NASA EO-1 satellite. The Hyperion sensor captured the image in the 400–2500 nm wavelength range over a 7.7 km strip, in 10 nm windows at 30 m pixel resolution. The dataset used in this work has 145 bands with 1476 × 256 pixels and 14 classes. The dataset is illustrated in Fig. 7.

Fig. 7 Botswana dataset

5.2 Pavia University

The optical sensor ROSIS-3 (Reflective Optics System Imaging Spectrometer) acquired the data over Pavia University in Italy, with a spectral coverage range of 430–860 nm and a geometric resolution of 1.3 m. The Pavia University dataset [82, 83] has 9 classes in 610 × 340 pixels and 103 spectral bands. The dataset of Pavia University is shown in Fig. 8.

Fig. 8 Pavia university dataset

5.3 Indian Pines

The Indian Pines dataset [82, 84] was acquired in June 1992 using AVIRIS (Airborne Visible/InfraRed Imaging Spectrometer). In the wavelength range of 400–2500 nm, the data consist of 220 spectral bands with 145 × 145 pixels. The dataset, shown in Fig. 9, includes 16 different classes of crops.

Fig. 9 Indian pines dataset

5.4 Kennedy Space Center (KSC)

The KSC dataset [79, 80], covering the Kennedy Space Center in Florida, was acquired by the NASA AVIRIS instrument in March 1996. It has 13 classes representing various land-cover types, 176 spectral bands and 512 × 614 pixels in the 400–2500 nm range of the electromagnetic spectrum. The KSC dataset is illustrated in Fig. 10.

5.5 Salinas-A

This hyperspectral dataset was acquired in 1998 using the AVIRIS sensor, captured at a lower altitude with a 3.7 m spatial resolution. The Salinas data [84] comprise 512 scan lines, 217 samples, 16 classes and 224 spectral bands (400–2500 nm) of the electromagnetic spectrum. A subscene of the Salinas dataset, called Salinas-A, comprises 83 × 86 pixels and 6 classes. The Salinas-A dataset is given in Fig. 11.

Fig. 10 KSC dataset

Fig. 11 Salinas-A dataset

6 Conclusion

Hyperspectral imaging is a trending technique in remote sensing. The applications of HSI have moved from traditional remote sensing (e.g., urban mapping, precision agriculture, mining and environmental monitoring) to more industry-based applications, including military surveillance, food quality inspection, medical applications and even computer vision applications. This chapter provides a general view and the fundamentals of HSI. It discusses the most recent approaches and directions in HSI enhancement, classification, denoising and restoration. The most commonly used classifiers, namely support vector machines (SVM), regularized least squares (RLS) and orthogonal matching pursuit (OMP), are presented in some detail. Furthermore, the chapter explains the recently popular deep convolutional neural network (DCNN) for hyperspectral image classification. Finally, the standard hyperspectral datasets used for research purposes are presented.

References

1. Thenkabail, P.S., Lyon, J.G.: Hyperspectral Remote Sensing of Vegetation. CRC Press (2016)
2. Manolakis, D., Shaw, G.: Detection algorithms for hyperspectral imaging applications. IEEE
Signal Process. Mag. 19(1), 29–43 (2002)
3. Pohl, C., van Genderen, J.: Remote Sensing Image Fusion: A Practical Guide. CRC Press
(2016)
4. Deng, Y.J., Li, H.C., Pan, L., Shao, L.Y., Du, Q., Emery, W.J.: Modified tensor locality pre-
serving projection for dimensionality reduction of hyperspectral images. IEEE Geosci. Remote
Sens. Lett. (2018)
5. Du, Q., Fowler, J.E.: Low-complexity principal component analysis for hyperspectral image
compression. Int. J. High Perform. Comput. Appl. 22(4), 438–448 (2008)
6. Wang, J., Chang, C.I.: Independent component analysis-based dimensionality reduction with
applications in hyperspectral image analysis. IEEE Trans. Geosci. Remote Sens. 44(6), 1586–
1600 (2006)
7. Vakalopoulou, M., Platias, C., Papadomanolaki, M., Paragios, N., Karantzalos, K.: Simultane-
ous registration, segmentation and change detection from multisensor, multitemporal satellite
image pairs. In: IEEE International Conference on Geoscience and Remote Sensing Sympo-
sium (IGARSS), pp. 1827–1830. IEEE (2016)
8. Ferraris, V., Dobigeon, N., Wei, Q., Chabert, M.: Detecting changes between optical images
of different spatial and spectral resolutions: a fusion-based approach. IEEE Trans. Geosci.
Remote Sens. 56(3), 1566–1578 (2018)
9. ElMasry, G., Kamruzzaman, M., Sun, D.W., Allen, P.: Principles and applications of hyper-
spectral imaging in quality evaluation of agro-food products: a review. Crit. Rev. Food Sci.
Nutr. 52(11), 999–1023 (2012)
10. Lorente, D., Aleixos, N., Gómez-Sanchis, J., Cubero, S., García-Navarrete, O.L., Blasco, J.:
Recent advances and applications of hyperspectral imaging for fruit and vegetable quality
assessment. Food Bioprocess Technol. 5(4), 1121–1142 (2012)
11. Xiong, Z., Sun, D.W., Zeng, X.A., Xie, A.: Recent developments of hyperspectral imaging
systems and their applications in detecting quality attributes of red meats: a review. J. Food
Eng. 132, 1–13 (2014)
12. Kerekes, J.P., Schott, J.R.: Hyperspectral imaging systems. Hyperspectral Data Exploit. Theory
Appl. 19–45 (2007)
13. Liang, H.: Advances in multispectral and hyperspectral imaging for archaeology and art con-
servation. Appl. Phys. A 106(2), 309–323 (2012)
14. Fischer, C., Kakoulli, I.: Multispectral and hyperspectral imaging technologies in conservation:
current research and potential applications. Stud. Conserv. 51, 3–16 (2006)
15. Du, Q., Yang, H.: Similarity-based unsupervised band selection for hyperspectral image anal-
ysis. IEEE Geosci. Remote Sens. Lett. 5(4), 564–568 (2008)
16. Chang, N.B., Vannah, B., Yang, Y.J.: Comparative sensor fusion between hyperspectral and
multispectral satellite sensors for monitoring microcystin distribution in lake erie. IEEE J. Sel.
Top. Appl. Earth Obs. Remote Sens. 7(6), 2426–2442 (2014)

17. Bioucas-Dias, J.M., Plaza, A., Camps-Valls, G., Scheunders, P., Nasrabadi, N., Chanussot, J.:
Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens.
Mag. 1(2), 6–36 (2013)
18. Plaza, A., Benediktsson, J.A., Boardman, J.W., Brazile, J., Bruzzone, L., Camps-Valls, G.,
Chanussot, J., Fauvel, M., Gamba, P., Gualtieri, A.: Recent advances in techniques for hyper-
spectral image processing. Remote Sens. Environ. 113, S110–S122 (2009)
19. Bhabatosh, C., et al.: Digital Image Processing and Analysis. PHI Learning Pvt. Ltd. (2011)
20. Bankman, I.: Handbook of Medical Image Processing and Analysis. Elsevier (2008)
21. Bendoumi, M.A., He, M., Mei, S.: Hyperspectral image resolution enhancement using high-
resolution multispectral image based on spectral unmixing. IEEE Trans. Geosci. Remote Sens.
52(10), 6574–6583 (2014)
22. Akgun, T., Altunbasak, Y., Mersereau, R.M.: Super-resolution reconstruction of hyperspectral
images. IEEE Trans. Image Process. 14(11), 1860–1875 (2005)
23. Amro, I., Mateos, J., Vega, M., Molina, R., Katsaggelos, A.K.: A survey of classical methods
and new trends in pansharpening of multispectral images. EURASIP J. Adv. Signal Process.
2011(1), 79 (2011)
24. Eismann, M.T., Hardie, R.C.: Hyperspectral resolution enhancement using high-resolution
multispectral imagery with arbitrary response functions. IEEE Trans. Geosci. Remote Sens.
43(3), 455–465 (2005)
25. Yokoya, N., Grohnfeldt, C., Chanussot, J.: Hyperspectral and multispectral data fusion: a com-
parative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 5(2), 29–56 (2017)
26. Ghasrodashti, E.K., Karami, A., Heylen, R., Scheunders, P.: Spatial resolution enhancement
of hyperspectral images using spectral unmixing and Bayesian sparse representation. Remote
Sens. 9(6), 541 (2017)
27. Sun, X., Zhang, L., Yang, H., Wu, T., Cen, Y., Guo, Y.: Enhancement of spectral resolution for
remotely sensed multispectral image. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 8(5),
2198–2211 (2015)
28. Zhang, Y.: Spatial resolution enhancement of hyperspectral image based on the combination
of spectral mixing model and observation model. In: Image and Signal Processing for Remote
Sensing XX, vol. 9244, p. 924405. International Society for Optics and Photonics (2014)
29. Vivone, G., Alparone, L., Chanussot, J., Dalla Mura, M., Garzelli, A., Licciardi, G.A., Restaino,
R., Wald, L.: A critical comparison among pansharpening algorithms. IEEE Trans. Geosci.
Remote Sens. 53(5), 2565–2586 (2015)
30. Loncan, L., de Almeida, L.B., Bioucas-Dias, J.M., Briottet, X., Chanussot, J., Dobigeon, N.,
Fabre, S., Liao, W., Licciardi, G.A., Simoes, M.: Hyperspectral pansharpening: a review. IEEE
Geosci. Remote Sens. Mag. 3(3), 27–46 (2015)
31. Amolins, K., Zhang, Y., Dare, P.: Wavelet based image fusion techniques: an introduction,
review and comparison. ISPRS J. Photogramm. Remote Sens. 62(4), 249–263 (2007)
32. Fechner, T., Godlewski, G.: Optimal fusion of TV and infrared images using artificial neural
networks. In: Applications and Science of Artificial Neural Networks, vol. 2492, pp. 919–926.
International Society for Optics and Photonics (1995)
33. Gross, H.N., Schott, J.R.: Application of spectral mixture analysis and image fusion techniques
for image sharpening. Remote Sens. Environ. 63(2), 85–94 (1998)
34. Khan, M.M., Chanussot, J., Alparone, L.: Pansharpening of hyperspectral images using spatial
distortion optimization. In: 16th IEEE International Conference on Image Processing (ICIP),
pp. 2853–2856. IEEE (2009)
35. Mianji, F.A., Zhang, Y., Gu, Y., Babakhani, A.: Spatial-spectral data fusion for resolution
enhancement of hyperspectral imagery. In: IEEE International Conference on Geoscience and
Remote Sensing Symposium (IGARSS), vol. 3, pp. III–1011. IEEE (2009)
36. Peng, H., Rao, R.: Hyperspectral image enhancement with vector bilateral filtering. In: 16th
IEEE International Conference on Image Processing (ICIP), pp. 3713–3716. IEEE (2009)
37. Karoui, M.S., Deville, Y., Benhalouche, F.Z., Boukerch, I.: Hypersharpening by joint-criterion
nonnegative matrix factorization. IEEE Trans. Geosci. Remote Sens. 55(3), 1660–1670 (2017)

38. Qu, J., Li, Y., Dong, W.: Guided filter and principal component analysis hybrid method for
hyperspectral pansharpening. J. Appl. Remote Sens. 12(1), 015003 (2018)
39. Vivone, G., Restaino, R., Chanussot, J.: A regression-based high-pass modulation pansharp-
ening approach. IEEE Trans. Geosci. Remote Sens. 56(2), 984–996 (2018)
40. Wang, M., Zhang, K., Pan, X., Yang, S.: Sparse tensor neighbor embedding based pan-
sharpening via N-way block pursuit. Knowl.-Based Syst. 149, 18–33 (2018)
41. Yuan, Q., Wei, Y., Meng, X., Shen, H., Zhang, L.: A multiscale and multidepth convolutional
neural network for remote sensing imagery pan-sharpening. IEEE J. Sel. Top. Appl. Earth Obs.
Remote Sens. 11(3), 978–989 (2018)
42. Yang, J., Zhao, Y.Q., Chan, J.C.W.: Hyperspectral and multispectral image fusion via deep
two-branches convolutional neural network. Remote Sens. 10(5), 800 (2018)
43. Xing, Y., Wang, M., Yang, S., Jiao, L.: Pan-sharpening via deep metric learning. ISPRS J.
Photogramm. Remote Sens. (2018)
44. Chen, G., Qian, S.E.: Denoising of hyperspectral imagery using principal component analysis
and wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 49(3), 973–980 (2011)
45. Rasti, B., Sveinsson, J.R., Ulfarsson, M.O.: Wavelet-based sparse reduced-rank regression for
hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 52(10), 6688–6698 (2014)
46. Zelinski, A., Goyal, V.: Denoising hyperspectral imagery and recovering junk bands using
wavelets and sparse approximation. In: IEEE International Conference on Geoscience and
Remote Sensing Symposium, pp. 387–390. IEEE (2006)
47. Yuan, Q., Zhang, L., Shen, H.: Hyperspectral image denoising employing a spectral–spatial
adaptive total variation model. IEEE Trans. Geosc. Remote Sens. 50(10), 3660–3677 (2012)
48. Santhosh, S., Abinaya, N., Rashmi, G., Sowmya, V., Soman, K.: A novel approach for denois-
ing coloured remote sensing image using Legendre Fenchel transformation. In: International
Conference on Recent Trends in Information Technology (ICRTIT), pp. 1–6. IEEE (2014)
49. Reshma, R., Sowmya, V., Soman, K.: Effect of Legendre-Fenchel denoising and SVD-based
dimensionality reduction algorithm on hyperspectral image classification. Neural Comput.
Appl. 29(8), 301–310 (2018)
50. Srivatsa, S., Ajay, A., Chandni, C., Sowmya, V., Soman, K.: Application of least square denois-
ing to improve ADMM based hyperspectral image classification. Procedia Comput. Sci. 93,
416–423 (2016)
51. Zhong, P., Wang, R.: Multiple-spectral-band CRFs for denoising junk bands of hyperspectral
imagery. IEEE Trans. Geosci. Remote Sens. 51(4), 2260–2275 (2013)
52. Li, Q., Li, H., Lu, Z., Lu, Q., Li, W.: Denoising of hyperspectral images employing two-phase
matrix decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7(9), 3742–3754
(2014)
53. He, W., Zhang, H., Zhang, L., Shen, H.: Total-variation-regularized low-rank matrix factoriza-
tion for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 54(1), 178–188
(2016)
54. Ma, J., Li, C., Ma, Y., Wang, Z.: Hyperspectral image denoising based on low-rank represen-
tation and superpixel segmentation. In: IEEE International Conference on Image Processing
(ICIP), pp. 3086–3090. IEEE (2016)
55. Bai, X., Xu, F., Zhou, L., Xing, Y., Bai, L., Zhou, J.: Nonlocal similarity based nonnegative
tucker decomposition for hyperspectral image denoising. IEEE J. Sel. Top. Appl. Earth Obs.
Remote Sens. 11(3), 701–712 (2018)
56. Zhuang, L., Bioucas-Dias, J.M.: Fast hyperspectral image denoising and inpainting based on
low-rank and sparse representations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 11(3),
730–742 (2018)
57. Camps-Valls, G., Bruzzone, L.: Kernel Methods for Remote Sensing Data Analysis. Wiley
Online Library (2009)
58. Ang, J.C., Mirzal, A., Haron, H., Hamed, H.: Supervised, unsupervised and semi-supervised
feature selection: A review on gene selection. IEEE/ACM Trans. Comput. Biol. Bioinform.
13(5), 971–989 (2016)

59. Li, J., Bioucas-Dias, J.M., Plaza, A.: Semisupervised hyperspectral image classification using
soft sparse multinomial logistic regression. IEEE Geosci. Remote Sens. Lett. 10(2), 318–322
(2013)
60. Foody, G.M., Mathur, A.: A relative evaluation of multiclass image classification by support
vector machines. IEEE Trans. Geosci. Remote Sens. 42(6), 1335–1343 (2004)
61. Ghamisi, P., Yokoya, N., Li, J., Liao, W., Liu, S., Plaza, J., Rasti, B., Plaza, A.: Advances in
hyperspectral image and signal processing: a comprehensive overview of the state of the art.
IEEE Geosci. Remote Sens. Mag. 5(4), 37–78 (2017)
62. Wang, M., Wan, Y., Ye, Z., Lai, X.: Remote sensing image classification based on the optimal
support vector machine and modified binary coded ant colony optimization algorithm. Inf. Sci.
402, 50–68 (2017)
63. Chen, Y., Nasrabadi, N.M., Tran, T.D.: Sparse representation for target detection in hyperspec-
tral imagery. IEEE J. Sel. Top. Signal Process. 5(3), 629–640 (2011)
64. Camps-Valls, G., Bruzzone, L.: Kernel-based methods for hyperspectral image classification.
IEEE Trans. Geosci. Remote Sens. 43(6), 1351–1362 (2005)
65. Li, J., Bioucas-Dias, J.M., Plaza, A.: Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 48(11), 4085–4098 (2010)
66. Cai, T.T., Wang, L.: Orthogonal matching pursuit for sparse signal recovery with noise. IEEE
Trans. Inf. Theory 57(7), 4680–4688 (2011)
67. Davenport, M.A., Wakin, M.B.: Analysis of orthogonal matching pursuit using the restricted
isometry property. IEEE Trans. Inf. Theory 56(9), 4395–4401 (2010)
68. Tropp, J.A., Gilbert, A.C.: Signal recovery from random measurements via orthogonal match-
ing pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007)
69. Chen, Y., Nasrabadi, N.M., Tran, T.D.: Hyperspectral image classification using dictionary-
based sparse representation. IEEE Trans. Geosci. Remote Sens. 49(10), 3973–3985 (2011)
70. Nikhila, H., Sowmya, V., Soman, K.: Gurls vs libsvm: performance comparison of kernel
methods for hyperspectral image classification. Indian J. Sci. Technol. 8(24), 1–10 (2015)
71. Tacchetti, A., Mallapragada, P.S., Santoro, M., Rosasco, L.: GURLS: A Toolbox for Regular-
ized Least Squares Learning (2012)
72. Soman, K., Loganathan, R., Ajay, V.: Machine Learning with SVM and Other Kernel Methods.
PHI Learning Pvt. Ltd. (2009)
73. Soman, K., Diwakar, S., Ajay, V.: Data Mining: Theory and Practice. PHI Learning Pvt. Ltd.
(2006)
74. Gualtieri, J., Chettri, S.R., Cromp, R., Johnson, L.: Support vector machine classifiers as applied
to AVIRIS data. In: Proceedings of Eighth JPL Airborne Geoscience Workshop (1999)
75. Steinwart, I., Christmann, A.: Support Vector Machines. Springer Science & Business Media
(2008)
76. Hsu, C.W., Lin, C.J.: A comparison of methods for multiclass support vector machines. IEEE
Trans. Neural Netw. 13(2), 415–425 (2002)
77. Chang, C.C., Lin, C.J.: LIBSVM: A library for support vector machines. ACM Trans. Intell.
Syst. Technol. (TIST) 2(3), 27 (2011)
78. Slavkovikj, V., Verstockt, S., De Neve, W., Van Hoecke, S., van de Walle, R.: Hyperspectral image classification with convolutional neural networks. In: Proceedings of the 23rd ACM International Conference on Multimedia, pp. 1159–1162 (2015)
79. Ham, J., Chen, Y., Crawford, M.M., Ghosh, J.: Investigation of the random forest framework
for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 43(3), 492–501
(2005)
80. Rajan, S., Ghosh, J., Crawford, M.M.: Exploiting class hierarchies for knowledge transfer in
hyperspectral data. IEEE Trans. Geosci. Remote Sens. 44(11), 3408–3417 (2006)
81. Jun, G., Ghosh, J.: Spatially adaptive semi-supervised learning with Gaussian processes for
hyperspectral data analysis. Stat. Anal. Data Min. 4(4), 358–371 (2011)
82. Dópido, I., Li, J., Marpu, P.R., Plaza, A., Bioucas Dias, J.M., Benediktsson, J.A.: Semisuper-
vised self-learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens.
51(7), 4032–4044 (2013)

83. Fauvel, M., Benediktsson, J.A., Chanussot, J., Sveinsson, J.R.: Spectral and spatial classifica-
tion of hyperspectral data using svms and morphological profiles. IEEE Trans. Geosci. Remote
Sens. 46(11), 3804–3814 (2008)
84. Li, J., Bioucas-Dias, J.M., Plaza, A.: Semisupervised hyperspectral image segmentation using
multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 48(11),
4085–4098 (2010)
