Processing
Introduction
Hyperion (http://eo1.usgs.gov/sensors/hyperion) (Pearlman et al., 2003) is a hyperspectral instrument on board the Earth Observing-1 (EO-1) satellite (http://eo1.gsfc.nasa.gov/, http://eo1.usgs.gov/). NASA launched EO-1 on the 21st of November, 2000 as a one-year investigation into advanced Earth observing techniques. The mission was extended to the present day under the control of the United States Geological Survey (USGS). The EO-1 mission included two other optical instruments, the Linear Etalon Imaging Spectral Array (LEISA) Atmospheric Corrector (LAC) and the Advanced Land Imager (ALI) (Ungar et al., 2003). Initially the ALI and Hyperion were going to be integrated to share the same fore-optics, but time constraints meant that these were designed and built as separate instruments (Pearlman et al., 2003). Hyperion was designed, built, tested and delivered in less than 12 months to make the launch date, a remarkable achievement in such a short time.
As the Hyperion mission was only designed as a technical demonstrator and had such a fast turn-around time, the data produced require a number of processing steps if they are to be used for scientific investigation.
Hyperion has 242 spectral bands ranging from 355.59 to 2577.08 nm. Due to poor signal-to-noise in some of these bands, only 198 were actually processed (Pearlman et al., 2003). The sensor operates in 'push-broom' mode, which means that the sensor is not scanned across the surface like other sensors but only sees what is in the instantaneous field of view.
The instrument consists of two spectrometers, the Visible Near InfraRed (VNIR) and the Short Wave InfraRed (SWIR). The processed bands consist of 50 VNIR and 148 SWIR. The detector arrays for the spectrometers have 256 pixels in the across-track direction, and 128 and 256 in the spectral direction. Not all of the spectral pixels are used, with 70 and 172 pixels used for each spectrometer respectively. The arrays are illuminated by a diffraction grating lined up in the across-track direction. As light leaves the grating it illuminates pixels on the arrays based on the wavelength of the light. This gives a single across-track line of 256 pixels with a spectral signature of 242 bands. As the satellite progresses along track, more lines are built up, generating what is known as a data cube.
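The data cube described above can be pictured as a three-dimensional array. A minimal sketch, assuming NumPy and an axis order of (along-track line, spectral band, across-track sample), which matches the in_cube[x, z, y] indexing used by the code segments later in this document:

```python
import numpy as np

# Hypothetical dimensions: 100 along-track lines, 242 bands, 256 samples.
n_lines, n_bands, n_samples = 100, 242, 256
cube = np.zeros((n_lines, n_bands, n_samples), dtype=np.int16)

# One across-track line: 256 pixels, each with a 242-band spectral signature.
one_line = cube[0]               # shape (242, 256)

# The full spectral signature of a single ground pixel.
one_spectrum = cube[0, :, 128]   # shape (242,)
```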
Hyperion ATBD 1
Figure (i): Illustration of how the Hyperion data cube is built.
The data cube is constructed as shown in figure (i) (CSIRO, 2011). The detector grid is illuminated with one dimension (across track) being the field of view of the sensor and the other dimension creating the spectral range. The image is built up over time as successive across-track lines are sampled. The Time axis in figure (i) is in the along-track direction.
The diffraction grating is aligned with the field of view axis. This push-broom configuration can lead to problems in the data. A dead detector in the array will cause a line to appear in the along-track direction, at the across-track position and band where the dead pixel is located.
Figure (ii): Band 11 (left) and band 96 (right) from the same Hyperion scene. Band 11 shows that the 1st and 6th across-track detectors on the 11th spectral line are dead. This results in the lines in the image for band 11. Band 96 is unaffected.
The missing lines are repaired by interpolating values from pixels on either side of the line and replacing the missing values. This process is explained in more detail in the section on python modules.
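A minimal sketch of this repair, assuming a NumPy band image and a known dead across-track column away from the array edges (the real module, described in the python modules section, works from a bad-pixel list and treats the edge columns separately):

```python
import numpy as np

def repair_dead_column(band_image, dead_col):
    """Replace a dead across-track column with the mean of its neighbours.

    band_image: 2-D array (along-track lines x across-track samples).
    dead_col:   index of the dead detector column (not at the array edge).
    """
    fixed = band_image.copy()
    fixed[:, dead_col] = (band_image[:, dead_col - 1]
                          + band_image[:, dead_col + 1]) / 2
    return fixed

# The dead column (all zeros here) is filled from its neighbours.
img = np.array([[1.0, 0.0, 3.0],
                [4.0, 0.0, 6.0]])
repaired = repair_dead_column(img, 1)  # column 1 becomes [2.0, 5.0]
```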
The other major problem associated with hyperspectral push-broom sensors is spectral smile (Richter et al., 2011). This is caused by misalignment or aberration of the diffraction apparatus with the sensor grid along the field of view axis. The effect of this is a shifting of the centre wavelengths of the light that lands on the detector grid from one side of the field of view to the other. This is illustrated in figure (iii), which shows the wavelengths in the Hyperion header file subtracted from the actual wavelengths for each position as reported in the Auxiliary file. The actual values were determined pre-launch and again post-launch by examining absorption features in the spectra.
As most applications that allow viewing and processing of hyperspectral data expect a single set of wavelengths for every pixel, this affects any processing done with the data. The method employed here to remedy this problem is to interpolate every pixel spectrum in the Hyperion files to a common wavelength range. Here we have chosen to use the wavelength range reported in each Hyperion header file. The algorithms to do this are explained in the python modules section.
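A minimal sketch of this per-pixel resampling, with made-up wavelength values; numpy's linear interpolation stands in here for the scipy interp1d call used by the actual module:

```python
import numpy as np

# Hypothetical values: the wavelengths actually sensed at one across-track
# position (from the Auxiliary file) and the common header wavelengths.
actual_wl = np.array([500.0, 510.0, 520.0, 530.0])
header_wl = np.array([501.0, 511.0, 521.0, 529.0])
spectrum = np.array([10.0, 20.0, 30.0, 40.0])   # radiance sensed at actual_wl

# Resample this pixel's spectrum onto the common header wavelength grid.
resampled = np.interp(header_wl, actual_wl, spectrum)
# -> [11.0, 21.0, 31.0, 39.0]
```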
Figure (iii) shows spectra from the VNIR spectrometer (bands 1 and 18) and the SWIR spectrometer (bands 117 and 215). This shows that the problem is only minor with the SWIR spectrometer. As described in the section on Python modules, each spectrometer is treated separately and the data recombined in the output product.
Figure (iii): Centre wavelength difference across the detector array for various bands.
A number of other processing steps can be applied to the Hyperion data. Many of these steps are described in the paper by Datt et al., 2003. Private communication with Dr. David Jupp, Principal Investigator for the Australian membership of the NASA EO-1 science validation team (retired), identified 11 processing steps that can be applied to Hyperion data.
These steps are:
1. Fix bad pixels (low SNR bands and pixel dropouts)
2. Gain and Offset correction (to radiance)
3. Fix out-of-range values (artifact of process 2)
4. Interpolate wavelengths (corrects spectral 'smile')
5. De-spike (adjust extreme values based on image mean)
6. De-streak (accounts for differences in detector sensitivity; performed separately for each spectrometer)
7. Atmospheric correction
8. Sun angle correction
9. Cross-track illumination correction
10. Minimum noise fraction (reduces uncorrelated spatial noise)
11. Empirical line correction to enable comparison between swaths
Steps 2 and 3 relate to Hyperion Level 1B1 data, where radiometric corrections have not been done. The AusCover data archive consists of Level 1R data, which have been radiometrically corrected already. Of the remaining 9 steps, the 4 of most importance (in descending order) for achieving an adequate surface reflectance product are: 1.) atmospheric correction, 2.) fixing bad bands and pixels, 3.) smile correction and 4.) de-streaking.
The 4 steps following atmospheric correction were not considered for inclusion in this work. De-streaking was considered and recommended by Dr. Jupp, but it was decided to implement this module as time permitted, which it hasn't.
The atmospheric correction algorithm used for this work was developed using techniques in the literature and with the help of colleagues well versed in hyperspectral atmospheric correction. The algorithm and its technical basis are described in some detail in the python modules section. As with all remote sensing products, these data should be used with caution given their possible limitations.
Python Modules

1.0) Introduction
This document will describe the python modules used to process the archive of L1R files currently on the iVEC cortex system.
2.0) Pre-processing

At the time of the preparation of this document there were only two pre-processing modules. These are designed to 1.) replace missing or bad pixels in the data and ensure that bands that are known to be 'dead' are zeroed, and 2.) correct for spectral smile. The following section will detail how the python modules address these issues within the data.
Module: fix_bad_bands_and_pixels.py

Synopsis: This module emulates the behavior of the IDL Envi package "Workshop" function "Apply Bad Pixel List". The Envi package and the underlying IDL code were provided by David Jupp, retired fellow, Division of Marine and Atmospheric Science, based in Canberra. Instructions on how to install and use this Envi package can be found in Datt and Jupp (2004).
The function "Apply Bad Pixel List" is created in the Envi package by the IDL program "bad_pix_fix.pro". The python module "fix_bad_bands_and_pixels.py" is designed to exactly emulate the code segments from "bad_pix_fix.pro" which work on the Hyperion data cube to fix bad pixels and write zeros over bad bands.
Algorithm: The module reads in a list of bad bands and bad pixels. A list entry indicating a bad band results in all the pixel values in that band being set to zero. A bad pixel (element) in the detector array results in a column of bad pixels in a band.
The instrument consists of two spectrometers, each with a diffraction grating. The detector arrays consist of 256 pixels across track and a number of pixels in the spectral dimension. As the satellite moves over the surface, lines are built up to form an image.
Energy reaching the diffraction grating diffracts dependent on the wavelength, so that pixels (detectors) in the spectral direction of the array are energized. If the along-track direction is the x direction and the across-track line is the y direction, then the spectral direction is the z direction. The detector array produces the y and z directions, while the x direction is created by each successive activation of the sensing array.
What this means is that a bad detector (pixel) at a given across-track position will appear as a line in the along-track direction for a particular band.
If a pixel in a particular band is shown as bad, then the response of the algorithm depends on the across-track position of the bad pixel. This can be expressed in pseudo-code (where x is along track, z is spectral band and y is across track) as:
(Case 1) If bad pixel at position [x, z, 0] then position [x, z, 0] = position [x, z, 1]
(Case 2) If bad pixel at position [x, z, 255] then position [x, z, 255] = position [x, z, 254]
(Case 3) If bad pixel at position [x, z, y] then position [x, z, y] = (position [x, z, y-1] + position [x, z, y+1]) / 2, where y is a number from 1 to 254.
The python code segment which executes this algorithm is:

    if sort_pixels[indx] == 0:
        # If band is bad, set all values in the band to zero
        in_cube[:, z, :] = 0
    elif sort_pixels[indx] == 1:
        # Case 1 from above
        in_cube[:, z, 0] = in_cube[:, z, 1]
    elif sort_pixels[indx] == 256:
        # Case 2 from above
        in_cube[:, z, 255] = in_cube[:, z, 254]
    elif (sort_pixels[indx] > 1) and (sort_pixels[indx] < 256):
        # Case 3 from above
        for x in range(0, colm_size - 1):
            in_cube[x, z, sort_pixels[indx] - 1] = int(round(
                (in_cube[x, z, sort_pixels[indx] - 2]
                 + in_cube[x, z, sort_pixels[indx]]) / 2))
Module execution: The module is designed to be executed from the command line and requires two inputs. These are the Hyperion L1R file to be processed and the list of bad bands and pixels. These inputs are entered on the command line using switch options, so the order is not essential.
The command line execution is:

python fix_bad_bands_and_pixels.py -i {Hyperion L1R path and filename} -l {bad band and pixel list path and filename}
When this module is run on another system the call to python can be removed, as long as the python path is included in the python script. Calling the script with the switch -h will display the script usage.
The module has four distinct parts. The first part reads in the file that contains the bad band and pixel locations. It then sorts the list by band number. The band numbers and pixel positions are stored in separate arrays, which are passed to the code segment that performs the fixes (see above).
The second part reads in the Hyperion L1R hdf file. This uses Pyhdf to read in the data cube as in_cube, the file attributes as file_attributes and the data attributes as data_attributes. The third part performs the corrections to the data cube as shown in the code segment above.
The fourth part writes out a new L1R file, which is designated with "_BP_FIX" at the end of the filename, just before the file suffix ".L1R". This also uses Pyhdf functionality. The repaired data cube is written into the new file along with the data attributes and file attributes from the original file, as nothing has really changed. The only addition to the file attributes is the line "Post processing: Bad bands and pixels fixed, processed by Curtin RSSRG on {system date and time inserted here}".
Result: This module results in a new file being produced where, hopefully, all of the bad band and pixel errors in the data cube have been addressed. As this is only a temporary file, a header file has not been produced. To view this file in something like Envi, copy the header file from the original file and change the name to correspond to the new file name.
When this module is integrated into a processing system, the "_BP_FIX" file will be deleted as soon as it has undergone further processing.
Module: fix_spectral_smile.py

Synopsis: This module emulates the behavior of the IDL Envi package "Workshop" function "Desmile". The function "Desmile" is created in the Envi package by the IDL program "Desmile.pro". This module reduces the effect of band centre shifting across the spatial direction of the data cube, which results in spectral frown or smile depending on which way you look at it.
While there are other methods to reduce spectral smile, interpolation to a common set of band centres (Jupp et al., 2003) produces the best results without completely contaminating the spectra.
Algorithm: The Hyperion instrument has two spectrometers that measure from around 400 to 1000 nm and 900 to 2500 nm respectively. This means that there are overlapping bands. Bands at either end of both spectrometers are set to zero, due mainly to low signal levels. This gives spectra as shown in figure 1.
Figure 1.) Hyperion spectra with the VNIR spectrometer output shown as blue, the SWIR shown as red, and the wavelength in nm displayed on the x-axis.
Given the overlap it was decided to perform two spectral interpolations, one for each spectrometer. This is sensible anyway, as the smile effect is more prevalent in the VNIR spectrometer.
The other issue is that bands at either end of the spectral regions are set to zero. If an interpolation is performed on these data as is, then there is significant "ringing" caused by rapid changes in values. This is evident in figure 2.
Figure 2.) The original Hyperion spectra in blue and the interpolated spectra (dashed green line). The red circles indicate the areas where ringing can occur due to rapid changes in value.
To combat this, the zero values at either end of the spectra are set to the first non-zero value that follows or precedes them, depending on which end of the spectrum they occur at. Once this is done the spectra are interpolated.
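The end padding can be sketched as follows; a minimal version assuming NumPy, with hypothetical values:

```python
import numpy as np

def pad_zero_ends(spectrum):
    """Replace leading and trailing zeros with the nearest non-zero value,
    suppressing interpolation ringing at the ends of the spectrum."""
    s = np.asarray(spectrum, dtype=float).copy()
    nonzero = np.nonzero(s)[0]
    if nonzero.size == 0:
        return s
    s[:nonzero[0]] = s[nonzero[0]]        # leading zeros -> first non-zero
    s[nonzero[-1] + 1:] = s[nonzero[-1]]  # trailing zeros -> last non-zero
    return s

padded = pad_zero_ends([0.0, 0.0, 5.0, 7.0, 3.0, 0.0])
# -> [5.0, 5.0, 5.0, 7.0, 3.0, 3.0]
```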
The Python module Scipy has a few inbuilt interpolation functions. Here the function interp1d is used. For further explanation of the function and the source code refer to http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html.
This function has a few different interpolation methods. Tests were performed for both linear interpolation and cubic spline interpolation. The results of this are shown in figure 3.
Figure 3.) Linear interpolation (a) and cubic spline interpolation (b) are compared with the original spectra.
From inspection neither method appears to be vastly superior to the other. The linear interpolation does seem to produce less ringing around rapid changes of value.
The actual band centres for each Hyperion scene are located in the auxiliary file. This gives the band centre locations for each position in the across-track pixels of the scene. Each pixel spectrum is then interpolated to the band centre values from the image header file, as these do not change from file to file. This way all scenes will be interpolated to the same centres and can be compared directly.
Module execution: The module is designed to be executed from the command line and requires three inputs. These are the Hyperion L1R file (or, preferably, the L1R file that has had bad lines and pixels fixed), the auxiliary file and the header file. These inputs are entered on the command line using switch options, so the order is not essential.
The command line execution is:

python fix_spectral_smile.py -i {Hyperion L1R path and filename} -a {Hyperion auxiliary path and filename} -e {Hyperion header path and filename}
This module has three parts, the first of which reads in the three input files, parses out the required information and loads this into arrays. The second part does the actual processing, and the final part writes out a new file.
The interpolation is done for every pixel in the scene. This means that a 256 pixel wide by 5000 pixel long scene will require 1.28 million interpolations.
After some tweaking of the code, this process takes less than 5 minutes on a 6500 line file when run on a dual-core iMac, and is about 20% quicker when run on the iVEC system.
3.0) Atmospheric Correction

The atmospheric correction of the Hyperion data involves a number of steps that are divided over a number of python modules. Due to a total lack of imagination, most of these modules explain their function in the file name. The process can be broadly divided into a few basic steps:
1.) Extract information from the Hyperion files to set up radiative transfer (RT) runs and execute the RT program,
2.) use these files to produce a water vapour (WV) product from the Hyperion file,
3.) create lookup files from the RT files,
4.) using the lookup files and the WV product, perform atmospheric correction (AC) on the file.
This section will detail the operation of the python modules that perform the functions described in the previous paragraph. These modules are designed to be run stand-alone, but may require previous modules to be run to produce the appropriate input files.
Module: CIBR_calculate_ratios.py

Synopsis: This module takes a Hyperion L1R scene and calculates a flat file (ENVI dat file with associated header) of Continuum Interpolated Band Ratios (CIBR) using the equations from Schlapfer et al., 1998. The CIBR ratio values are determined from the L1R data using a water vapour absorption feature. The ratio is determined using a band centred on the absorption feature and one band either side that does not respond to the level of water vapour in the atmosphere. The deepening of the feature indicates a higher level of atmospheric water vapour and thus can be exploited to estimate this level.
Algorithm: The algorithm is an implementation of the equations for the CIBR method shown on page 354 of the paper by Schlapfer et al., 1998. This is,

    CIBR = L_m / (c_1·L_1 + c_2·L_2)    (1)

where

    c_1 = (λ_2 − λ_m) / (λ_2 − λ_1)

    c_2 = (λ_m − λ_1) / (λ_2 − λ_1) *

Here L_m is the radiance of the band centred on the absorption feature, L_1 and L_2 are the radiances of the reference bands on either side, and λ_m, λ_1 and λ_2 are the corresponding band centre wavelengths.
Figure 4: This shows band averaged radiance values for Hyperion bands around the 1132 nm water vapour feature. The radiance values were produced by the MODTRAN5 RT model.
* The actual equation in Schlapfer et al., 1998 has the terms in the numerator reversed and is incorrect.
Figure 4 shows where the bands are situated in regard to the absorption feature. These data are modeled and show how the ratio determined from the Hyperion data should behave for different levels of atmospheric water vapour. This modeling will be discussed more thoroughly in later sections.
Figure 1 shows the sharp absorption feature in actual Hyperion data. This is seen in the red trace, which shows radiance data (as counts) for the second spectrometer.
Each pixel in the scene is processed, which results in a single band data file with a ratio value for each along-track by cross-track pixel position. The output file is written out as an ENVI format .dat file with associated header.
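A minimal sketch of the ratio calculation, assuming hypothetical radiance values and band-centre wavelengths for the measurement band (on the 1132 nm feature) and the two reference bands; the symbol names follow equation 1:

```python
import numpy as np

def cibr(l_m, l_1, l_2, wl_m, wl_1, wl_2):
    """Continuum Interpolated Band Ratio (after Schlapfer et al., 1998).

    l_m is the radiance of the band centred on the absorption feature,
    l_1 and l_2 the reference-band radiances at wavelengths wl_1 and wl_2.
    """
    c_1 = (wl_2 - wl_m) / (wl_2 - wl_1)
    c_2 = (wl_m - wl_1) / (wl_2 - wl_1)
    return l_m / (c_1 * l_1 + c_2 * l_2)

# Applied per pixel over whole band images (arrays broadcast element-wise).
band_m = np.array([40.0, 30.0])   # absorption-band radiances (hypothetical)
band_1 = np.array([50.0, 50.0])   # reference band below the feature
band_2 = np.array([60.0, 70.0])   # reference band above the feature
ratios = cibr(band_m, band_1, band_2, 1132.0, 1100.0, 1164.0)
```

A deeper feature (smaller l_m relative to the interpolated continuum) gives a smaller ratio, i.e. more atmospheric water vapour.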
Module execution: This module requires three input files. These are a Hyperion file containing radiance values, the associated header file and a template file for the .dat file's associated header. This module should work with either the original L1R file, the output of the band and pixel fix process, the output of the smile correction process, or any Hyperion file where the format of the file and header remain unchanged. The template file contains the following information:
    ENVI
    description = { Continuum Interpolated Band Ratio}
    samples = 256
    lines = 3400
    bands = 1
    header offset = 0
    file type = ENVI Standard
    data type = 5
    interleave = bsq
    sensor type = Unknown
    byte order = 0
    wavelength units = Unknown
    band names = { CIBR ratios}
The only changes made for each Hyperion file are to the output name and the number of lines and pixels. The CIBR data is written out as a flat binary file. This file can be read into ENVI but can also be read by Spectral Python. The module can be run on the command line or in a script like:

python CIBR_calculate_ratios.py -i {hyperion_radiance_file} -e {hyperion_hdr_file} -t {output_template_file}
The module will output a .dat file and a .hdr file with the naming convention {file_base_name}_CIBR_ratios.hdr and {file_base_name}_CIBR_ratios.dat, e.g. EO1H0890832006187110K1_CIBR_ratios.dat.
Module: Create_tape5_files.py

Synopsis: Downstream processes such as water vapour retrieval and atmospheric correction require radiative transfer model output based on the conditions of viewing from Hyperion. This module gathers information from metadata files associated with the Hyperion radiance file and uses this to create a set of tape5 files based on the viewing conditions of the overpass. The tape5 files are used by MODTRAN5 to execute radiative transfer modeling based on the viewing conditions.
Algorithm: There is no algorithm as such, so a brief description of the module operation will be given. This module requires a template tape5 file, as it only changes a few key parameters and does not write the tape5 file from scratch. The form of the file is extremely important. The module is not at all clever, as it removes and replaces text based on line and position, so changing the template file will in all likelihood cause the entire processing chain to fail.
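The replace-by-line-and-position behaviour can be sketched as follows, with entirely hypothetical card contents, line numbers and column positions (the real tape5 card layout must match the template exactly):

```python
def patch_field(lines, line_no, start, text):
    """Overwrite a fixed-width field in a tape5-style template.

    lines:   list of template lines.
    line_no: 0-based index of the card to edit.
    start:   0-based column where the field begins.
    text:    replacement text, already padded to the field width.
    """
    old = lines[line_no]
    lines[line_no] = old[:start] + text + old[start + len(text):]
    return lines

template = ["CARD1  XXXX", "CARD2  YYYY"]
# e.g. write a look angle of 20.0 into the field of the second card:
patched = patch_field(list(template), 1, 7, "20.0")
# patched[1] -> "CARD2  20.0"
```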
The module also requires a .MET file for each Hyperion scene. The .MET file contains information on the geographic position of the swath and the day and time of sampling (although the day can be derived from the file name).
Using the USGS Earth Explorer tool (http://earthexplorer.usgs.gov/) it's possible to place a bounding box around the area of interest and generate a .csv file that contains information about Hyperion overpasses within the bounding box for the archive of Hyperion files. This contains a field for look angle. The Hyperion imager can look up to approximately 20 degrees from nadir at the equator. This look angle is extracted from the .csv file to be used in the creation of the tape5 file.
There are situations where an overpass in the AusCover archive is not included in the .csv file. If this is the case then nadir viewing is assumed and the look angle is 0 degrees.
The four parameters that are changed based on metadata are the look angle, day of year, swath centre latitude and swath centre longitude. These are derived from the .MET file and the .csv file.
Attempts have been made to account for variations in the .MET file (missing lines and values), but a few Hyperion files have required that the .MET files be manually edited.
The other parameters that are changed are the surface reflectance and the column water vapour (CWV) amount. Two surface reflectance figures are used (0.5 and 1.0) and 34 CWV amounts are used, ranging from 0.05 to 5.0 g/cm².
The CWV amounts are chosen to give a reasonably smooth progression on the curve that is produced when the CWV amount is plotted against the CIBR ratio that results from the MODTRAN modeling. Interpolated plots are shown for various surface reflectance values in figure 5.
Figure 5: Interpolated CIBR vs CWV for various surface reflectance values

In all, 68 tape5 files are created.
This module does two other important tasks. The first of these is to create a mod5root.in file. This file lists all of the tape5 files that need to be processed. When MODTRAN5 is run from a directory that does not contain the executable, it searches that directory for a mod5root.in file. If it doesn't find it, it will search in the directory that contains the executable.
The other important task when executing in a remote directory is to create a symbolic link to the MODTRAN DATA directory. I think you can explicitly write the path to the required data files in the tape5 file, but I have never been able to get it to work. Creating the symbolic link means that MODTRAN will sort all of this out for you.
MODTRAN is executed from the command line (which could be done from this module), but at the time I decided to do this from within the perl module that links the python modules; more on this later.
Module execution: This module can be run independently, like all of the other modules. It is run like:

python Create_tape5_files.py -i {tape5 template file} -m {.MET file} -e {Hyperion overpass .csv file} -t {directory containing the tape5 files} -d {path to the MODTRAN DATA directory} -l {processing log file name}
Module: Make_final_lookups.py

Synopsis: Continuing on with the lack-of-imagination naming convention, this module, as the name suggests, makes a final set of lookup files. Each lookup file is produced from two output files from MODTRAN with the same input values apart from surface reflectance. Each lookup file is for a different water vapour value and is matched to the observational geometry of each overpass. For each scene, then, 34 lookup files are produced.
Algorithm: The production of the lookup files is based on the work of Guanter et al, (2009), Rodger, (2008) and Rodger, (2011). The aim of the Guanter et al. approach is to decouple the surface radiative effects from those of the atmosphere. The surface reflectance can then be estimated from the at-sensor radiance, as the atmospheric radiative effects are estimated from the MODTRAN outputs.
Equations 2 and 3 (Guanter et al, 2009) show the at-sensor atmospheric path radiance L_p(λ) and the total at-sensor radiance L_TOA(λ) respectively:

    L_p(λ) = L_0(0) + T_dif↑ · E_g↓(λ) · ρ / π    (2)

    L_TOA(λ) = L_p(λ) + T_dir↑ · E_g↓(λ) · ρ / π    (3)

Here ρ is the surface reflectance, and T_dif↑ and T_dir↑ are the spectral transmittances for diffuse and direct radiation respectively. The term E_g↓(λ) represents the total downwelling flux at the surface and can be expressed as,

    E_g↓(λ) = E_dir↓ · μ_s + E_dif↓    (4)

where E_dir↓ and E_dif↓ represent the direct and diffuse downwelling flux at the surface and μ_s is the cosine of the solar zenith angle. E_dif↓ has a component that represents the multiple scattering events that occur between the surface and the atmosphere and is still coupled to the surface.
Combining equations 2 and 3 and re-writing allows the downwelling flux to be expressed without multiple scattering (E_g↓(0)), essentially de-coupling this from the surface reflectance, where this effect is expressed in a term for the spherical albedo (S) of the atmosphere as,

    L_TOA(λ) = L_0(0) + T↑ · E_g↓(0) · ρ / (π · (1 − S·ρ))    (5)

where T↑ = T_dir↑ + T_dif↑. Now that the surface reflectance has been decoupled from all of the other radiative components, equation 5 can be re-written to express ρ as (Rodger, 2008),

    ρ = π · (L_TOA(λ) − L_0(0)) / (E_g↓(0) · T↑ + S · π · (L_TOA(λ) − L_0(0)))    (6)
If A is defined as,

    A = π · (L_TOA(λ) − L_0(0)) / (E_g↓(0) · T↑)    (7)

then the equation to determine the surface reflectance can be written as,

    ρ = A / (1 + S·A)    (8)
Using equations 7 and 8 we now have a method to estimate surface reflectance from the at-sensor radiance (L_TOA) if we can determine L_0(0), E_g↓(0), T↑ and S. Using the output tape7 or tape7sc files from MODTRAN5, these parameters can be calculated from two runs with different reflectances. Here the tape7sc file has been used, as it outputs data at wavelength-selectable intervals (Berk et al, 2011). An example of the output from this type of file is shown in figure 6. The columns, their explanations and how these parameters were calculated are shown in figure 7.
To calculate the required parameters from the two-run method the following equations are used,

    L_0(0) = (L_s(ρ_1) · E_g↓(ρ_2) · ρ_2 − L_s(ρ_2) · E_g↓(ρ_1) · ρ_1) / (E_g↓(ρ_2) · ρ_2 − E_g↓(ρ_1) · ρ_1)    (9)

    E_g↓(0) = π · (L_TOA(ρ_i) − L_0(0)) · (1 − S·ρ_i) / (ρ_i · T↑)    (10)

    S = (E_g↓(ρ_2) − E_g↓(ρ_1)) / (E_g↓(ρ_2) · ρ_2 − E_g↓(ρ_1) · ρ_1)    (11)

    T_dif↑ = π · (L_s(ρ_i) − L_0(0)) / (ρ_i · E_g↓(ρ_i))    (12)

Here ρ_1 = 0.5 and ρ_2 = 1.0 are the surface reflectances of the two runs, L_s(ρ_i) is the path-scattered radiance of run i and L_TOA(ρ_i) its total radiance.
As T_dir↑ can be directly read from the tape7 files (the TRAN column), we can calculate T↑ = T_dir↑ + T_dif↑. To calculate equation 9, data is required from the SOL SCAT column for both runs. Equation 10 requires data from the TOTAL RAD column and equation 12 requires data from the SOL SCAT column.
Equations 10 and 12 can be evaluated with data from either MODTRAN run, as long as the retrieved data is matched to the radiative components for that run.
To evaluate equation 11 we need to calculate both E_g↓(ρ_1) and E_g↓(ρ_2) using,

    E_g↓(ρ_i) = π · L_g(ρ_i) / (ρ_i · T_dir↑)    (13)

where L_g(ρ_i) is retrieved from the GRND RFLT column for both MODTRAN runs.
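The two-run retrieval can be sketched numerically as follows. The equation numbers refer to equations 9-13 and the column names to figure 6; the input values are hypothetical single-wavelength stand-ins for the per-wavelength columns parsed from the two tape7sc files (run 1 at reflectance 0.5, run 2 at 1.0). A useful property of the algebra is that inverting a run's own total radiance through equations 7 and 8 recovers that run's input reflectance, which the sketch uses as a sanity check:

```python
import numpy as np

rho1, rho2 = 0.5, 1.0                # the two MODTRAN surface reflectances
t_dir_up = 0.8                       # TRAN column (hypothetical value)
grnd_rflt1, grnd_rflt2 = 2.0, 4.4    # GRND RFLT column
sol_scat1, sol_scat2 = 3.0, 3.5      # SOL SCAT column
total_rad1 = sol_scat1 + grnd_rflt1  # TOTAL RAD column (path + ground term)

# Eq 13: downwelling flux at the surface for each run
e1 = np.pi * grnd_rflt1 / (rho1 * t_dir_up)
e2 = np.pi * grnd_rflt2 / (rho2 * t_dir_up)
# Eq 11: spherical albedo of the atmosphere
s = (e2 - e1) / (rho2 * e2 - rho1 * e1)
# Eq 9: path radiance for zero surface reflectance
l0 = (sol_scat1 * e2 * rho2 - sol_scat2 * e1 * rho1) / (e2 * rho2 - e1 * rho1)
# Eq 12: diffuse upward transmittance, then the total transmittance
t_dif_up = np.pi * (sol_scat1 - l0) / (rho1 * e1)
t_up = t_dir_up + t_dif_up
# Eq 10: downwelling flux decoupled from the surface (using run 1)
e0 = np.pi * (total_rad1 - l0) * (1 - s * rho1) / (rho1 * t_up)

# Eqs 7 and 8: invert run 1's own radiance; this recovers rho1 exactly
a = np.pi * (total_rad1 - l0) / (e0 * t_up)
rho = a / (1 + s * a)   # -> 0.5
```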
Each of the 34 lookup files will contain values for L_0(0), E_g↓(0), T↑ and S at each of the wavelength values within the tape7sc file. This gives 2200 values at 1 nm resolution over the range of wavelengths where Hyperion retrieves data.
This module is written in such a way that it will accept any number of input files, as long as the file naming convention remains the same. This means that either more runs with a greater range of CWV can be run, or, if a more complicated type of lookup file system was envisaged with other variable atmospheric parameters (such as aerosol optical depth), then this module will adapt, as long as each pair of input tape7sc files ends with _0.50.7sc and _1.00.7sc.
Module execution syntax: This module is capable of being run independently, as are the other modules. It can be executed from the command line using:

python Make_final_lookups.py -i {working directory}
The working directory will be the directory that contains the tape7sc files produced from the multiple MODTRAN runs. The script will search for and list all files ending with _0.50.7sc and _1.00.7sc. A loop is executed for each set of files, with the values from each column of the tape7sc files (see figure 6) read into multi-dimensional arrays, one array for each file. The appropriate column values are used to calculate parameters as described above. For each set of files a lookup file is written, which contains columns for the 4 required values. A separate file is produced which contains the wavelength values. Subsequent modules use both of these files.
    MMF 1 2 2 -1 1 1 1 1 1 1 1 0 0 300.000 0.50
    1 1 1 3 0 0 50.00000 0.00000 0.00000 0.00000 0.30000
    -99.000 -99.000 -99.000 -99.00000 -99.00000 -99.00000
    0.050000 0.000592 ! H2O & O3 COLUMNS [GM/CM2]
    36TROPICAL MODEL
    100.00000 0.30000
    169.46964 101.43521 0.16651 0.00000 0 0.00000 0
    2 187 0
    -32.24763 208.65001 22.73250 171.41467 23.50100 0.00000 0.00000 0.00000
    3905.0 29415.0 1.0 1.0RN NGAA
    0 0 0.000 0 0.000 0 1.000
    WAVLEN(NM) TRAN PTH_THRML THRML_SCT SURF_EMIS SOL_SCAT SING_SCAT GRND_RFLT DRCT_RFLT TOTAL_RAD REF_SOL SOL@OBS DEPTH DIR_EM TOA_SUN BBODY_T[K]
    350.000 0.4182395 0.0000E+00 0.0000E+00 0.0000E+00 5.7267E+00 1.7976E+00 2.0193E+00 4.0986E-01 7.7460E+00 6.17E+00 1.14E+02 0.872 0.5000 1.1364E+02 2108.620
    351.000 0.4215315 0.0000E+00 0.0000E+00 0.0000E+00 5.8589E+00 1.8438E+00 2.1001E+00 4.3255E-01 7.9590E+00 6.51E+00 1.17E+02 0.864 0.5000 1.1703E+02 2107.085
    352.000 0.4248399 0.0000E+00 0.0000E+00 0.0000E+00 5.5999E+00 1.7670E+00 2.0423E+00 4.2717E-01 7.6422E+00 6.43E+00 1.13E+02 0.856 0.5000 1.1262E+02 2098.253
    353.000 0.4283030 0.0000E+00 0.0000E+00 0.0000E+00 5.6811E+00 1.7977E+00 2.1108E+00 4.4878E-01 7.7919E+00 6.75E+00 1.15E+02 0.848 0.5000 1.1492E+02 2095.923
    ~
    1847.000 0.2381447 5.2464E-07 1.3824E-08 3.4637E-07 2.7209E-03 1.8792E-03 6.6952E-02 6.4898E-02 6.9674E-02 9.77E-01 1.45E+01 2.805 0.5000 1.4504E+01 490.268
    1848.000 0.1661969 5.4503E-07 1.2811E-08 2.4670E-07 1.9604E-03 1.6168E-03 2.7353E-02 2.6573E-02 2.9314E-02 4.00E-01 1.46E+01 2.999 0.5000 1.4567E+01 464.758
    1849.000 0.2973246 5.4172E-07 1.8017E-08 4.4746E-07 3.0324E-03 2.0842E-03 8.0784E-02 7.8468E-02 8.3817E-02 1.18E+00 1.48E+01 1.896 0.5000 1.4833E+01 495.672
    1850.000 0.4262851 5.2997E-07 2.1766E-08 6.4671E-07 4.1936E-03 2.5397E-03 1.4330E-01 1.3921E-01 1.4749E-01 2.10E+00 1.49E+01 1.066 0.5000 1.4946E+01 513.996
    1851.000 0.5159544 4.8220E-07 2.2684E-08 7.9184E-07 5.5922E-03 2.8628E-03 2.1949E-01 2.1274E-01 2.2508E-01 3.20E+00 1.49E+01 0.785 0.5000 1.4936E+01 528.578
    ~
    2546.000 0.3044265 1.9863E-04 2.7837E-06 1.1183E-04 3.7034E-04 2.6503E-04 1.7267E-02 1.6912E-02 1.7948E-02 2.55E-01 4.62E+00 1.448 0.5000 4.6189E+00 361.315
    2547.000 0.2616541 1.9062E-04 2.4399E-06 9.6345E-05 3.3779E-04 2.4260E-04 1.5285E-02 1.4968E-02 1.5910E-02 2.25E-01 4.59E+00 2.118 0.5000 4.5881E+00 358.456
    2548.000 0.1446764 1.8110E-04 1.8624E-06 5.3624E-05 2.2524E-04 1.8483E-04 6.7648E-03 6.6229E-03 7.2248E-03 9.97E-02 4.60E+00 3.151 0.5000 4.5982E+00 341.263
    2549.000 0.2101394 1.9165E-04 2.2853E-06 7.8573E-05 2.7501E-04 2.1692E-04 9.7362E-03 9.5220E-03 1.0281E-02 1.43E-01 4.61E+00 2.473 0.5000 4.6129E+00 348.604
    2550.000 0.2665780 2.0561E-04 2.5799E-06 9.9933E-05 3.1993E-04 2.4588E-04 1.2347E-02 1.2073E-02 1.2973E-02 1.82E-01 4.61E+00 1.919 0.5000 4.6060E+00 353.588
    -9999.

Figure 6: Example of an excerpt of a MODTRAN5 tape7sc file.
Figure 7: Copy of table 1 from Rodger, 2009 (with permission)
Module: Calculate_wv_V2.py

Synopsis: This module produces a water vapour field for the Hyperion scene. It does this by comparing the CIBR file produced from the Hyperion data with CIBR data generated from modeled MODTRAN data. It interpolates the modeled CIBR vs CWV values to match the actual CIBR values from the Hyperion data.
Algorithm: This module has three basic steps. The first step is to calculate CIBR ratios from the tape7sc files. These files are described in figures 6 and 7. Calculation of the CIBR is exactly as expressed by equation 1. Figure 4 shows graphical output of the water vapour feature used for this determination. This figure shows data from MODTRAN where, for a set of geometric parameters (sun/satellite orientation, surface target location), a number of different CWV values are used. Each CWV value has an associated CIBR calculated for it. The data here are total radiance values taken from the TOTAL_RAD column of the tape7sc file. These data are then band averaged to the spectral resolution of Hyperion. Each of the Hyperion input files has been corrected for spectral smile, which means that they all have the same set of band centres. The spectral response of the Hyperion spectrometers is reported as being Gaussian around the band centres, with each band having a specified FWHM. These values are contained in the L1R header file. A normalised Gaussian envelope is produced around the wavelength closest to the band centre so that a scaling factor is produced for each wavelength of the TOTAL_RAD data. This is shown graphically in figure 8.
Figure 8: Normalised band response function overlaid with the total radiance
The points shown in the graph are multiplied together and the result is averaged to produce a band-equivalent value. As the Gaussian function is continuous, this would result in every TOTAL_RAD value being included in the average. The actual range of each radiometer band will be finite (though its exact extent is not documented here), so a cut-off is employed to ignore all values where the Gaussian function is less than or equal to 0.001. This approach is applied to the three bands required to produce the CIBR, so that the CIBR results are equivalent to the Hyperion bands. This gives a set of 36 CIBR values spanning the range of CWV values that were used to produce the tape7sc files and the lookup files.
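The band-averaging step can be sketched as below. This is a minimal sketch assuming a Gaussian-weighted mean with the 0.001 cut-off; the module's exact normalisation is not reproduced here:

```python
import numpy as np

def band_average(wavelengths, radiance, centre, fwhm, cutoff=1e-3):
    """Weighted average of a fine-resolution spectrum over one Hyperion band.

    A normalised Gaussian response is built around the band centre; weights
    at or below the cutoff (0.001 in the module) are ignored.
    """
    # Convert FWHM to the Gaussian standard deviation.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * ((wavelengths - centre) / sigma) ** 2)
    keep = g > cutoff
    return np.sum(radiance[keep] * g[keep]) / np.sum(g[keep])

# Example: a flat spectrum at 1 nm resolution averaged over a 10 nm FWHM band.
wl = np.arange(800.0, 1300.0, 1.0)
spec = np.ones_like(wl)
val = band_average(wl, spec, centre=1050.0, fwhm=10.0)
```

A flat input spectrum should band average to its own value, which is a quick sanity check on the weighting.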
The second step of the water vapour retrieval process is to interpolate this set from 36 to 200 values using a cubic spline interpolation function within python. The results of the interpolation are shown in figure 5.
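The interpolation step can be sketched as follows. The curve values here are synthetic stand-ins for the modelled MODTRAN data, and the use of scipy's interp1d is an assumption; any cubic spline routine available within python would do:

```python
import numpy as np
from scipy.interpolate import interp1d

# 36 modelled (CWV, CIBR) pairs; this curve shape is synthetic.
cwv = np.linspace(0.05, 5.0, 36)
cibr = np.exp(-0.5 * np.sqrt(cwv))        # stand-in for the MODTRAN curve

# Interpolate the 36 modelled points up to 200 values with a cubic spline.
fine_cwv = np.linspace(cwv[0], cwv[-1], 200)
spline = interp1d(cwv, cibr, kind="cubic")
fine_cibr = spline(fine_cwv)
```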
All of the CWV retrievals done with this module use the curve associated with a 0.5 surface reflectance. Figure 5 shows that there is a difference in the CIBR vs CWV relationship for different values of surface reflectance.
Figure 9: CIBR vs water vapour uncertainty for various surface reflectance values
Figure 9 shows the differences in CWV values at a selection of CIBR values for various surface reflectance values. Here all of the CWV values have had the CWV from the 1.0 surface reflectance case subtracted. This gives an indication of the error in the retrieved CWV value for a given CIBR value if the surface reflectance is not 0.5 (which it is not likely to be for the three bands that are used to calculate the CIBR). It is expected that surface reflectance would be in the region of 0.10 to 0.50 and that the majority of the CIBR values would be around 0.2 to 0.6. This would give a maximum error of around 0.5 for the water vapour retrieval (assuming the Hyperion CIBR retrieval was without error, which it is not). This is considered to be acceptable. If a more accurate CWV retrieval were required then a more sophisticated method would need to be used.
The final step in the process is to actually retrieve the CWV data. This is done using the CIBR field generated for the Hyperion scene by the CIBR_calculate_ratios.py module. Each pixel's CIBR value is compared to the curve generated from the MODTRAN data. The two nearest curve values are chosen and a linear interpolation between these points is performed. The module will also try to compensate for pixels where the retrieval fails. Failures can result in NaN and infinite values as well as values outside the range of modeled CWVs. These failed values are replaced with the mean CWV from those parts of the scene where the retrieval has not failed.
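The retrieval and failure-compensation steps can be sketched as below. This mirrors, but does not reproduce, the module's logic, and the curve values are synthetic:

```python
import numpy as np

def retrieve_cwv(cibr_field, curve_cibr, curve_cwv):
    """Per-pixel CWV from the modelled CIBR-vs-CWV curve.

    np.interp linearly interpolates between the two nearest curve points;
    pixels whose retrieval fails (NaN, infinite, or outside the modelled
    range) are replaced with the mean CWV of the successful pixels.
    """
    order = np.argsort(curve_cibr)            # np.interp needs ascending x
    cwv = np.interp(cibr_field, curve_cibr[order], curve_cwv[order],
                    left=np.nan, right=np.nan)
    bad = ~np.isfinite(cwv)
    if bad.any() and (~bad).any():
        cwv[bad] = np.mean(cwv[~bad])         # scene mean of valid pixels
    return cwv

# Modelled curve (CIBR decreases as CWV increases) and a tiny CIBR "field".
curve_cwv = np.array([0.0, 1.0, 2.0, 3.0])
curve_cibr = np.array([1.0, 0.8, 0.6, 0.4])
field = retrieve_cwv(np.array([0.9, 0.5, np.nan]), curve_cibr, curve_cwv)
```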
The CWV field is saved as an ENVI .dat file complete with a header file, so it can be opened either in ENVI or with Spectral Python. This module (and a few others) also writes data into a log file. This particular module reports the name of the water vapour file, the mean, median and standard deviation, as well as the water vapour feature centre wavelength.
Module execution: This module is executed from the command line using: python Calculate_wv_V2.py -i {CIBR_file_header} -e {Hyperion_file_header} -d {tape5_directory} -l {log_file_name}. The CIBR_file_header is generated by the CIBR_calculate_ratios.py module. The header file for the CIBR data file already contains the appropriate line, pixel, data type, endian etc., so it is renamed to match the WV file name and the field description in the file is changed to Water vapour g.cm^2. It is also used to open the CIBR file for generation of the CWV values.
The Hyperion header file is the original L1R header file that contains the band centres and FWHMs. The tape5 directory is essentially the working directory. This option exists so that the module can be run independently by specifying where the tape7sc files are located, if these data have been generated previously. The log file is generated by the perl script which links all of the processing modules together. For this module to be run independently, the few lines that add data to the log file would need to be removed, or a further run-time option added to turn this on or off.
Module: Atmospheric_correction_V3.py

Synopsis: This module, as the name suggests, performs atmospheric correction, or compensation as some refer to it. Compensation is the more apt description of this process, as the radiative transfer effects through the atmosphere are not completely corrected, only compensated for. This module takes the lookup files and the CWV data and uses these to calculate an estimate of the surface reflectance.
Algorithm: This module first opens all of the data required to calculate a surface reflectance product. This includes the water vapour file, the lookup files and the Hyperion L1R data. It will work with an L1R file that has been corrected for bad lines and spectral smile, or with a virgin file, as long as the header file and file formats are the same.
The second function of the module is to create two masks based on the CWV field for the scene. For each pixel a number between 1 and 36 is placed in each mask: one mask holds the index of the lookup-file CWV value just below the retrieved pixel CWV, and the other holds the index of the value just above it. These per-pixel mask values are used later to determine which pair of lookup values will be interpolated between to obtain the final set of values used to determine the surface reflectance.
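One way to build the two bracketing masks, assuming the lookup CWV grid is sorted in increasing order, is with numpy's searchsorted (a sketch, not the module's code):

```python
import numpy as np

# Lookup CWV grid (36 values) and a small retrieved per-pixel CWV field.
lookup_cwv = np.linspace(0.05, 5.0, 36)
cwv_field = np.array([[0.3, 1.7], [4.9, 2.2]])

# 0-based index of the lookup value just above each pixel's CWV, and of
# the one just below it; clip keeps both indices inside the grid.
upper = np.searchsorted(lookup_cwv, cwv_field).clip(1, 35)
lower = upper - 1

# The module stores 1-based positions (1..36), so shift both masks by one.
upper_mask = upper + 1
lower_mask = lower + 1
```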
The third step is to take the lookup table values and convert these into band equivalent values. This is done in the same manner as the band equivalent calculations performed by the Calculate_wv_V2.py module. Here, however, values are calculated for every band of Hyperion. This results in four arrays of 242 x 36 values for L0 (the path radiance), E↓ (the downwelling irradiance at the surface), T↑ (the upward transmittance) and S (the spherical albedo).
As these arrays are indexed by the CWV amount, and we have previously constructed two masks with the positions of the upper and lower CWV for each pixel, we can now select the values of the four components from the L0, E↓, T↑ and S arrays that will be interpolated and used in the final surface reflectance calculation.
The surface reflectance is now calculated for each pixel (x, y) using equations 7 and 8, so that

A(x, y) = π [L_obs(x, y) − L0(x, y)] / [E↓(x, y) T↑(x, y)]    (14)

ρ(x, y) = A(x, y) / [1 + S(x, y) A(x, y)]    (15)

where L0(x, y), E↓(x, y), T↑(x, y) and S(x, y) have all been band averaged from the MODTRAN data and interpolated based on the retrieved CWV value.
The L_obs(x, y) data are read from the corrected L1R data and converted to the same radiance units as the MODTRAN values. This involves dividing the Hyperion data for the VNIR spectrometer by 400.0 and for the SWIR spectrometer by 800.0. This also converts the integer values within the L1R file to float values for the reflectance calculations.
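Equations 14 and 15 can be sketched per band as follows. The identification of the four lookup quantities as path radiance, downwelling surface irradiance, upward transmittance and spherical albedo is an assumption based on the standard formulation, and the input values below are synthetic:

```python
import numpy as np

def surface_reflectance(L_obs, L0, E_down, T_up, S):
    """Apply equations 14 and 15 for one band.

    L_obs:  at-sensor radiance from the L1R data (MODTRAN units)
    L0:     atmospheric path radiance
    E_down: downwelling irradiance at the surface
    T_up:   upward (ground-to-sensor) transmittance
    S:      spherical albedo of the atmosphere
    """
    A = np.pi * (L_obs - L0) / (E_down * T_up)   # equation 14
    return A / (1.0 + S * A)                     # equation 15

# Synthetic single-pixel values, for illustration only.
rho = surface_reflectance(L_obs=80.0, L0=10.0, E_down=1500.0, T_up=0.8, S=0.1)
```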
The last data processing step is to convert all of the calculated reflectance values to unsigned integer values. All reflectance values are first multiplied by 10000.0 and then rounded to the nearest whole number. This gives a data cube with values ranging from 0 to 10000, representing reflectance values of 0.0000 - 1.0000. Converting the float reflectance values to unsigned integer values saves a significant amount of space.
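The scaling step can be sketched as below. Clipping to the valid 0-10000 range is an assumption, as the module's handling of out-of-range values is not described:

```python
import numpy as np

# Float reflectance cube scaled to 0-10000 unsigned integers.
reflectance = np.array([[0.0, 0.1234], [0.9999, 1.0]])

# Multiply by 10000, round to the nearest whole number, clip (assumed)
# and store as unsigned 16-bit integers to save space.
scaled = np.clip(np.rint(reflectance * 10000.0), 0, 10000).astype(np.uint16)
```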
Finally, the data are output in both HDF and ENVI BIL formats.
Module execution: This module can be executed from the command line as: python Atmospheric_correction_V3.py -i {input L1R file} -e {input L1R file hdr} -q {band wavelengths file} -l {working directory - where the lookup files are}/ -x {water vapor file hdr} -f {envi hdr template}
As stated above, this module will work on the original L1R file, one that has been corrected for bad pixels, one that has been smile corrected, or one that has had both steps done. If further modules are added to perform other corrections then it will work on these as well, as long as the format of the file and the header file do not change.
Module: produce_wv_quality_mask_V2.py

Synopsis: This module is not in any way essential and could be removed if desired. All it does is give an indication of where the wrong CWV amount was retrieved for each pixel. It does this by examining features in the spectra that are introduced when the CWV amount used in the calculation of reflectance is wrong. It creates a mask of the good and bad pixels (good = 1, bad = 0). Multiplying the data by the mask sets all bad pixels to zero. This mask is not saved, but the values are used to give a percentage of pixels per scene that failed the quality test.
Algorithm: This quality mask is provided as an indication of the success of the water vapour retrieval module. There is no startling scientific basis for the tests involved, as they were created from visual inspection of the spectra. From inspection, the behaviour of the bands centred on 1104.19 nm, 1124.28 nm and 1134.38 nm (Hyperion bands 96, 98 and 99) is reasonably distinct if the CWV amount used for the reflectance calculation is incorrect.
Figure 10: Three spectra showing where the CWV used for reflectance calculations was too high, too low and in the vicinity
Figure 11: Three spectra where the CWV used for reflectance calculations was too high, too low and in the vicinity. The "About right" spectrum has been offset to sit with the other spectra
Figures 10 and 11 show three spectra where the CWV retrieved is too high, too low and about right. This means that when the surface reflectance is calculated, the atmospheric transmittance is overestimated (too low condition), underestimated (too high condition) or about right. This produces an underestimation of the surface reflectance (too low condition), an overestimation of the surface reflectance (too high condition) or no effect. As the atmospheric transmission of some bands is not affected by CWV, this gives a way to assess the success of the CWV retrieval.
With the very simple method used here, bands 98 and 99 are subtracted from band 96. These bands are shown as the magenta, yellow and grey dots respectively in figure 11.
If the resultant is less than or equal to -1000 (a reflectance of -0.1) then the CWV amount is deemed too high; if the resultant is greater than or equal to 1000 then the CWV amount is deemed too low.
where (B96 − B98) ≤ −1000 then CWV = too high    (16)
where (B96 − B99) ≥ 1000 then CWV = too low
The threshold value of 1000 (difference from band 96) was chosen to be conservative and hopefully outside of any natural variation due to surface type. This could be tested using MODTRAN with a number of different surface types, CWV values and whatever other variables one wishes to employ, but the tests performed here are not based on this type of investigation and should be used cautiously.
Two masks (one for too high, one for too low) are constructed by performing the subtractions from equation 16 and then setting the values that fail each test to 0. The masks are then multiplied together, which sets all values outside of either threshold to zero. The values in the resultant mask that are not zero are then set to 1. If required, each band of the data could be multiplied by the final mask. Any mask value of zero would zero out the data for a bad pixel, and the good pixel values would remain unchanged.
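The two tests of equation 16 can be sketched in a combined form as follows (a sketch only; the module builds the two masks separately and multiplies them, which is logically equivalent):

```python
import numpy as np

def wv_quality_mask(b96, b98, b99, threshold=1000):
    """Good/bad pixel mask from the band differences of equation 16.

    Pixels where the CWV used was deemed too high (b96 - b98 <= -1000)
    or too low (b96 - b99 >= 1000) are flagged bad (0); the rest are
    flagged good (1).
    """
    too_high = (b96 - b98) <= -threshold
    too_low = (b96 - b99) >= threshold
    return (~(too_high | too_low)).astype(np.uint8)

# Three synthetic pixels in scaled (0-10000) reflectance units.
b96 = np.array([5000, 2000, 3000])
b98 = np.array([5100, 3500, 3100])
b99 = np.array([4900, 2100, 1800])
mask = wv_quality_mask(b96, b98, b99)
failure_rate = 100.0 * (mask == 0).sum() / mask.size
```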
The percentage of bad pixels is calculated and a failure rate is reported within the processing log file. From observation, a high failure rate indicates that the bands used to retrieve the CWV are likely to be very noisy and streaky.
Module execution: The module is run with the following usage: python produce_wv_quality_mask_V2.py -i {Reflectance_file_(hdf format)} -e {hyperion_file_hdr} -t {cibr_template_hdr} -l {log_file}.
Module: image_swath.py

Synopsis: This module makes an RGB image of the Hyperion swath. It uses a package called pyhdf to open the HDF reflectance file created previously. It selects the bands closest to red, green and blue and saves greyscale images of these bands as .png files. The calling perl script (discussed later) executes bash commands to run ImageMagick, which combines the three images into the RGB image.
Algorithm: The data cube is loaded into a 3-dimensional array in the BIL format. Three slices are extracted from the cube, one each for bands 29, 20 and 12 (640.5 nm, 548.92 nm and 467.52 nm). For reasons I am unable to explain, these slices result in a 3-dimensional array where the third dimension contains a single element. This is equivalent to a 2-d array but causes trouble when trying to save these slices as images. A numpy function called squeeze is used, which converts the data cube slices to 2-d arrays.
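The singleton-dimension behaviour and the squeeze fix can be demonstrated as below; the cube dimensions are illustrative and the 0-based index 28 for band 29 is an assumption:

```python
import numpy as np

# BIL-ordered cube: (lines, bands, samples); dimensions are illustrative.
cube = np.zeros((100, 242, 256), dtype=np.int16)

# Slicing with a one-element range keeps a singleton band dimension...
band29 = cube[:, 28:29, :]

# ...which numpy.squeeze removes, giving the 2-D image expected by the
# image-writing step.
band29_2d = np.squeeze(band29)
```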
The three images are then saved as .png images. The perl script calls the ImageMagick functions convert and composite. The first three calls to convert perform a linear stretch on the three .png files created previously. This 'brightens' the images, which tend to be quite dark. If the image has cloud in it, these areas tend to saturate.
Composite is called to create the RGB image by compositing the three images together. Finally, convert is called once again to sharpen the final RGB image. This does not always create the best image possible, but in general it does give a good visual indication of the surface being imaged by Hyperion.
Module execution: This module is run from the command line as: python image_swath.py -i {reflectance file in hdf format}
Module: make_quicklook.py

Synopsis: This module uses basemap to plot the Hyperion swath location on an Australian map. It embeds the Hyperion RGB image on the map.
Algorithm: This module makes extensive use of basemap, which is a toolkit for matplotlib. This allows data to be transformed spatially so that it can be plotted on various map projections. It will also read shape files and allow these to be plotted as well. The module sets up a basemap, which defines a set of extents for the map. These are currently hardcoded to latitudes 6S - 45S and longitudes 108E - 156E.
The module draws a landmask and colours it so that it is distinct from the ocean. There are lines commented out in the file that replace the landmask with a blue marble image. These had to be removed because, for some reason, they ceased to function properly on the iVEC system. This functionality can be added back in if the software is installed on another system.
A shape file that contains a high-resolution coastline and Australian state boundaries is opened and added to the map. Each .MET file has the corner coordinates for the extent of the Hyperion swath it is associated with. This module opens the .MET file and extracts these corner coordinates.
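The corner-coordinate extraction can be sketched as follows. The key names used here are placeholders, not the actual EO-1 .MET field names, which should be checked against a real file:

```python
import re

def read_met_corners(met_text):
    """Extract corner lat/lon values from .MET-style 'KEY = value' lines.

    The key names below are hypothetical placeholders; the real EO-1
    .MET keys should be checked against an actual file.
    """
    corners = {}
    for key in ("UL_LAT", "UL_LON", "LR_LAT", "LR_LON"):
        m = re.search(rf"{key}\s*=\s*(-?\d+\.?\d*)", met_text)
        if m:
            corners[key] = float(m.group(1))
    return corners

# Synthetic .MET-style fragment for illustration.
sample = "UL_LAT = -31.50\nUL_LON = 116.20\nLR_LAT = -32.40\nLR_LON = 116.60\n"
corners = read_met_corners(sample)
```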
A polygon is drawn on the base map using the corner extents derived from the .MET file. The .MET file also contains the site centre location. This is read from the file and written into a label on the map next to the polygon that has been drawn. The module also draws grid lines on the map. The swath image created earlier is embedded in the map on the left-hand side. Each map has a title added that specifies the date of capture of the image.
Module execution: Command line execution usage is: python make_quicklook.py -q {name of the swath image} -m {hyperion .MET file} -i {hyperion header file} -o {overpass csv file} -s {shape file name}
Perl Script - Execute_Atmospheric_Correction.pl

This script is used to link all of the python modules and associated processes together. It executes bash commands as required, runs ImageMagick commands as required, moves files around and cleans up working directories. There are many paths and file locations hard coded into this script. If this were to be run on another system then this is the place to change these locations as required. All of the hard coded file locations and paths are passed through to the python modules, so adapting to another system should only require rewriting this script and not a collection of python modules.
This script is designed to be executed in a directory that contains the extracted data from an L1R .tgz file. This includes the L1R file in HDF format, the associated header file, the .MET file and the .AUX file.
Figure 12: Module execution order. The red boxes indicate commands/functions executed in bash and the black boxes represent the python modules. The flow, in direction of execution, is: Find *.L1R file; Execute fix_bad_bands_and_pixels.py; Execute fix_spectral_smile.py; Execute CIBR_calculate_ratios.py; Execute Create_tape5_files.py; Launch Mod90_5.2.1.exe; Execute Make_final_lookups.py; Execute Calculate_wv_V2.py; Execute Atmospheric_correction_V3.py; Execute produce_wv_quality_mask_V2.py; Execute image_swath.py.
Figure 12 shows the execution order within the perl script. Most of the input files for the python modules are located using the find command in bash. Some files are intrinsically named using the Hyperion file base name (the .L1R file name with the .L1R extension stripped off) plus whatever extension is assigned to a new file by the various python modules.
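The find-based file location can be sketched in bash as below (illustrative only; the actual perl script embeds its own paths and patterns):

```shell
#!/bin/sh
# Locate the L1R file in the current directory and derive the base name
# that the python modules use when naming their output files.
l1r_file=$(find . -maxdepth 1 -name "*.L1R" | head -n 1)
base=$(basename "$l1r_file" .L1R)
# A bad-pixel-corrected file, for example, would then be "${base}_BP_FIX.L1R".
echo "$base"
```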
If there are multiple files of a particular type in a directory (such as *_BP_FIX.L1R) then the processing could become unstable or fail, as some modules may simply select the first file or try to run a second process. The perl script performs no error checking, so it is a good idea to get this right.
In practice, when this is used with the iVEC HPC system, the script is copied into a directory where the L1R archive has been untarred and unzipped. In such cases exactly the required files are present in the directory.
The Mod90_5.2.1.exe executable runs the MODTRAN5 radiative transfer code. This takes the tape5 files created by the Create_tape5_files.py module. The MODTRAN RT code generates a number of output files for each run. The tape7sc files are used by the Make_final_lookups.py and Calculate_wv_V2.py modules.
The Launch ImageMagick commands box refers to a number of bash commands that produce an RGB swath image from the output of the image_swath.py module.
The final command box also represents a number of bash commands. After all of the processing is complete, a number of files are tarred and zipped and copied into subdirectories under an output directory. The script first checks whether these directories exist and, if not, the subdirectories are created.
Two archives are created. The first contains the .L1R file in ENVI BIL format, the .hdr file, the .AUX file, the .MET file and the quicklook .png. The second contains the reflectance product as an ENVI .bil file, the water vapour product, a header file for each and the quicklook image. The HDF files are copied without alteration. Once the files are created and copied elsewhere, the working directories are deleted.
As stated previously, all of the python modules can be run independently as long as they have access to the correct files. If there were previously enhanced files where bad pixel, smile and other corrections had been done, then you need only run the modules that retrieve water vapour and the atmospheric correction module. If you do not wish to produce quicklook images and the quality mask, then only run up to the atmospheric correction module.
This code has been run on an Apple iMac and on the linux-based epic system at iVEC. The iVEC processing has taken less than 30 min per Hyperion scene for the majority of the scenes. If a conservative number of processes, say 20, were run on a High Performance Computing system, it would be possible to process the entire archive in just over a day. For this work the workflow was a bit more hands-on, so the processing took 3 days.
Some processes failed when there were errors in the .MET file. These were generally errors of omission, with spatial coordinates missing from the file. The formatting of some of the numbers also changes from file to file, which caused issues. These errors were identified and fixes for them are programmed into the code. Other files that are not in the current archive may have other problems that have not yet been encountered. If a file fails to process, examine the .MET file first to see whether it has a format that differs from the expected one.
Conclusion

This document provides the theoretical basis and algorithm descriptions for a number of python modules. It also provides information on the workflow structure required to link these modules into the correct processing chain. The code described herein has been used to process all of the 1276 files currently in the AusCover Hyperion database to a surface reflectance product. The processing is not as extensive as the multitude of steps suggested by Dr. David Jupp, but it should provide an adequate surface reflectance product in which the data are altered as little as possible. If further processing modules are developed in the future, it will be easy to slot them into the current workflow structure. The relatively short time period required to re-process the data makes this an achievable task.
Bibliography

Berk, A., Anderson, G., Acharya, P., Shettle, A., 2011, "MODTRAN®5.2.1 User's Manual", Spectral Sciences Inc., Air Force Research Laboratory

CSIRO, 2011, "Information from the CSIRO Data Users Workshop", received 2011

Datt, B., McVicar, T., Van Niel, T., Jupp, D., Pearlman, J., 2003, "Preprocessing EO-1 Hyperion Hyperspectral Data to Support the Application of Agricultural Indexes", IEEE Transactions on Geoscience and Remote Sensing, Vol. 41, No. 6, pp 1246-1259

Datt, B., Jupp, D., 2004, "Hyperion Data Processing Workshop: Hands-on processing instructions", CSIRO Earth Observation Centre

Guanter, L., Richter, R., Kaufmann, H., 2009, "On the application of the MODTRAN4 Atmospheric Radiative Transfer Code to Optical Remote Sensing", International Journal of Remote Sensing, 30 (6), pp 1407-1424

Jupp, D., Datt, B., McVicar, T., Van Niel, T., Pearlman, J., Lovel, J., King, E., 2003, "Proceedings: SPIE Third International Asia-Pacific Remote Sensing Symposium", Image Processing and Pattern Recognition in Remote Sensing, Vol. 4898, pp 78-92

Pearlman, J., Barry, P., Segal, C., Shepanski, J., Beiso, D., Carman, S., 2003, "Hyperion, a Space-Based Imaging Spectrometer", IEEE Transactions on Geoscience and Remote Sensing, Vol. 41, No. 6, pp 1160-1173

Richter, R., Schläpfer, D., Müller, A., 2011, "Operational Atmospheric Correction for Imaging Spectrometers Accounting for the Smile Effect", IEEE Transactions on Geoscience and Remote Sensing, Vol. 49, No. 5, pp 1772-1780

Rodger, A.P., 2009, "Getting to Grips with MODTRAN4's Tape7 output Files", CSIRO Exploration and Mining, private communiqué

Rodger, A.P., 2011, "SODA: A new method of in-scene atmospheric water vapour estimation and post-flight spectral recalibration for hyperspectral sensors, Application to the HyMap sensor at two locations", Remote Sensing of Environment, 115, pp 536-547

Schläpfer, D., Borel, C., Keller, J., Itten, K., 1998, "Atmospheric Precorrected Differential Absorption Technique to Retrieve Columnar Water Vapour", Remote Sensing of Environment, 65, pp 353-366

Ungar, S., Pearlman, J., Mendenhall, J., Reuter, D., 2003, "Overview of the Earth Observing One (EO-1) Mission", IEEE Transactions on Geoscience and Remote Sensing, Vol. 41, No. 6, pp 1149-1159