
J. theor. Biol. (2002) 214, 619–631
doi:10.1006/jtbi.2001.2484, available online at http://www.idealibrary.com

Insects Could Exploit UV–Green Contrast for Landmark Navigation


RALF MÖLLER*†‡
*Artificial Intelligence Laboratory, Department of Computer Science, University of Zurich,
Winterthurerstrasse 190, 8057 Zurich, Switzerland and †Department of Zoology, University of Zurich,
Winterthurerstrasse 190, 8057 Zurich, Switzerland
‡Now with the Max Planck Institute for Psychological Research, Amalienstr. 33, D-80799 Munich, Germany.
E-mail: moeller@mpipf-muenchen.mpg.de
(Received on 26 February 2001, Accepted in revised form on 22 October 2001)

Illumination-invariant detection of landmark features is a prerequisite for landmark navigation in insects. It is suggested that a contrast mechanism involving the UV and green receptors of insect eyes could guarantee a robust separation between natural objects as foreground and sky as background. Using a sensor with a UV and a green channel that in their spectral characteristics are close to the corresponding insect photoreceptors, data of natural objects and sky were collected. The data show that the two classes can be separated by a fixed threshold in the UV–green color space, offering an advantage over a purely UV-based separation that would require a dynamic threshold. Based on a numerical method, UV–green antagonism is shown to guarantee a more reliable discrimination than UV–blue antagonism.
© 2002 Elsevier Science Ltd

Introduction

Insects of a number of species, especially socially organized hymenoptera (bees, wasps, and ants), are known to rely on visual landmarks to get back to important locations in their environment, mostly food sources and nest sites (reviews: Wehner, 1992; Collett & Zeil, 1997, 1998; Judd et al., 1999). A wide range of mechanisms has been suggested as explanation for the landmark navigation abilities of these insects, including models based on the "Gestalt" concept (Tinbergen, 1932; Anderson, 1977), template models (Wehner & Räber, 1979; Cartwright & Collett, 1983; Wehner et al., 1996), and parameter models (Lambrinos et al., 2000; Möller, 2001, 2002). With few exceptions, one being the image warping method developed by Franz et al. (1998), all mechanisms require as a first processing step the detection of distinctive landmark features, or at least the separation of foreground from background.
An example of a feature-based mechanism is the "snapshot model" of bee navigation (Cartwright & Collett, 1983), where the image of a landmark panorama is assumed to be segmented into dark and bright regions, corresponding to the landmarks and the gaps between them. This segmentation is necessary for a subsequent matching mechanism that establishes links between sectors of the same type in two images: a "snapshot" of the panorama taken at the target location, and the panorama which is currently perceived. From the discrepancies in size and bearing within each landmark pair, a direction of movement is derived that reduces the discrepancies and thus brings the animal closer to the target location. Parameter models (like the "contour model"; Möller, 2001, 2002) do not necessarily have to deal with separate landmark entities, but may consider the entire "skyline" of the panorama; a small set of parameters (in the example, contour length and contour eccentricity) can be extracted from the shape of the skyline and used to approach a goal by means of a gradient descent in the parameter space.
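As a rough illustration of the parameter-model idea (not the published contour model itself), the following Python sketch extracts two skyline parameters from a one-dimensional elevation profile and defines the discrepancy that a homing agent would reduce by gradient descent. The concrete parameter definitions and function names are assumptions made for this sketch only.

```python
import numpy as np

def skyline_parameters(skyline):
    """Two illustrative parameters of a panoramic skyline.

    `skyline` holds one elevation value per azimuth bin, e.g. the height of
    the highest foreground pixel in each column of a binary panoramic image.
    These definitions are simplified stand-ins, not the exact parameters of
    the contour model.
    """
    # "contour length": accumulated elevation change along the wrap-around skyline
    length = np.sum(np.abs(skyline - np.roll(skyline, 1)))
    # "eccentricity": magnitude of the first circular Fourier coefficient,
    # i.e. how unevenly the skyline mass is distributed over azimuth
    angles = np.linspace(0.0, 2.0 * np.pi, skyline.size, endpoint=False)
    ecc = np.abs(np.sum(skyline * np.exp(1j * angles))) / max(np.sum(skyline), 1e-9)
    return np.array([length, ecc])

def parameter_discrepancy(current_skyline, target_parameters):
    """Scalar error between the current and the stored parameter vector; a
    homing agent would try small displacements and keep the one that lowers
    this value (gradient descent in parameter space)."""
    diff = skyline_parameters(current_skyline) - target_parameters
    return float(np.dot(diff, diff))
```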
So far, navigation models have taken the ability of landmark or skyline detection for granted, and only the subsequent processing stages have been investigated. However, a reliable detection of landmarks or a segmentation of foreground from background may prove a difficult problem under varying conditions of illumination. Caused by changes in the cloud cover or by the movement of the sun, terrestrial objects vary in their brightness over time. A scene may appear completely different to an insect when it leaves the nest (and memorizes, for example, landmarks in the shadow of a cloud) and when it later returns to the vicinity after a longer foraging excursion (and some of the landmarks are now found in the bright sun). In this modified scene, the insect's ability to find the precise location of the nest could be severely impaired.

A more robust method may be not to consider the brightness of terrestrial objects, but to separate any type of terrestrial object (vegetation, ground, rocks) as foreground from the blue sky and any type of clouds as background. This paper investigates which mechanisms could reliably accomplish such a separation under varying conditions of illumination and changing state of the sky. It is suggested that a color-contrast mechanism between the ultraviolet (UV) and the green channel of insects may enable a fully local decision as to which of the two classes a small region of an image belongs. A sensor was constructed which in its spectral properties corresponds to the UV and green receptors found in insect eyes; with this sensor, samples of light from natural objects and from small patches of sky were collected. When these data points are visualized in a UV–green diagram, it becomes apparent that a mechanism with a fixed threshold line or curve would be sufficient to separate the two classes. Using numerical simulations it is shown that contrast between the UV and the green channel ensures a more reliable separation than contrast between UV and the third, blue-sensitive channel of the insects' visual system. Advantages and limits of the contrast mechanism will be discussed, specifically in comparison with alternative mechanisms that would require an adjustable global threshold.

Skyline Detection in Insects


It has been suggested before that insects may
perceive landmarks as a silhouette or skyline
against the sky as background (Wehner, 1981,
1982, 1992; Collett & Zeil, 1997). This view receives support from experiments by Tinbergen
& Kruyt (1938) on digger wasps, and by Hoefer
& Lindauer (1975) on bees: the animals actually
preferred three-dimensional objects protruding
from the ground as landmarks instead of flat
objects lying on the ground, and one critical
parameter for the selection of an object as landmark was its height and therewith the chance
that it contributes features to the skyline (see
the discussion of these results by Wehner, 1981;
Collett & Zeil, 1997).
The putative role of contrasts between sky and natural objects in the UV range for the skyline detection was discussed by Wehner (1982). Figure 1 shows the spectra of a patch of blue sky, and of vegetation (green grass) illuminated by direct and diffuse skylight; the curves were obtained from the numerical tool SBDART (Ricchiazzi et al., 1998), using reflectance measurements from the USGS database (Clark et al., 1993). It is obvious that skylight contains a high portion of UV radiation in relation to the small amount of UV light reflected from most natural objects (specifically vegetation, see the spectra collected by Chittka et al., 1994).

FIG. 1. Irradiance of the blue sky and of light reflected from vegetation (green grass), in quantum units, and sensitivity curves (normalized) of the UV and green sensor, computed from the spectral sensitivity of the photodiodes and the filter transmittance curves (abscissa: wavelength in μm, 0.3 to 0.8; ordinate: irradiance, reflection and sensitivity). Curves shown: blue sky; green grass; UV sensor; green sensor.
Insects are equipped with highly sensitive UV receptors (Menzel, 1975) and could exploit the difference in the UV portion to separate landmarks from background: they would see a dark "skyline" of objects (reflecting a small amount of UV) in front of the blue or clouded sky, which appears bright in the UV range (Wehner, 1982). For all data samples collected with the sensor system at one time, such a separation was actually possible by defining a threshold and assigning all samples with higher UV signal to the "background" class and all samples with lower UV signal to the "foreground" class. However, depending on the overall level of illumination, this threshold would have to be adapted over almost two orders of magnitude. As will be discussed below, the selection of an appropriate threshold is complicated by the fact that both sky and object samples strongly vary in their UV brightness even when the overall level of illumination is approximately constant.

In addition to the UV receptors (with sensitivity peaks around 350 nm), insects have receptors responding to green light (around 520 nm), and in most species a third channel with blue receptors (maxima between 420 and 460 nm) can be found (Briscoe & Chittka, 2001; Chittka, 1996b; Lythgoe, 1979; Menzel, 1975). The study explores whether the combination of a UV receptor and a receptor in the visible range enables alternative mechanisms for a separation of objects and sky. The sensor used in this study comprises only a UV and a green channel; the blue channel is ignored. While trichromacy is not considered here, a numerical method is used to compare UV–green and UV–blue dichromatic mechanisms. Desert ants, which heavily rely on landmark navigation, manage to solve the landmark navigation task with only UV and green receptors (Mote & Wehner, 1980; Labhart, 1986); another argument for the focus on UV–green antagonism is provided by Menzel & Blakers (1976), who found that in a bee's eye, UV and green receptors occur within the same ommatidium and appear to be stable elements of all ommatidia over the eye. Note that the UV channel is crucial for the separation task, while visible light alone does not guarantee a reliable discrimination between objects and sky, as will be shown below.
Materials and Methods

The study is based on two methods: the collection of real-world data by a sensor with two spectral channels, UV and green, and a numerical method. The motivation for this dual approach was that the value of a purely numerical method will be limited if not supported by real-world data. Numerical methods always rely on a number of simplifications (in this case, for example, the assumption that all surfaces exhibit Lambertian reflection, or the exclusion of multiple reflection between objects), the effect of which is difficult to estimate. Moreover, the study at hand would require a large spectral reflectance database of common natural objects, reaching from the UV to the visible range; such a database is currently not publicly available to the knowledge of the author (the USGS database used in this study contains only a small number of vegetation samples). The effort for building a specialized sensor and directly measuring the brightness in the two channels was judged to be smaller than the effort for collecting reflectance data and using them in a numerical study. The results obtained from the sensor measurements and from the limited numerical investigation are in correspondence, thus providing confirmation for both the sensor construction and the assumptions underlying the numerical method.
UV–GREEN SENSOR

Figure 2 provides a schematic diagram of the two-channel sensor system. Light falling through two interference filters (Andover 350FS40-25 and 500FS80-25) is projected by two plano-convex fused silica lenses (UV grade, OptoSigma) onto two photodiodes with high UV sensitivity (Hamamatsu S4112). Sensitivity maxima and half-width of the interference filters were selected according to the data available for the photoreceptors of desert ants of the species Cataglyphis bicolor (Mote & Wehner, 1980; Labhart, 1986): the UV filter has its maximum transmittance at 350 nm and a half-width of 40 nm, and the green (G) filter peaks at 500 nm with a half-width of 80 nm. The transmittance curves obtained by combining the spectral transmittance of the interference filters and the spectral sensitivity of the sensor are shown in Fig. 1. To reduce the total amount of light it proved necessary to mount two neutral density filters in front of the interference filters (a glass filter with optical density 2.0 in front of the green filter, and a fused silica metallic filter with optical density 1.0 in front of the UV filter).

FIG. 2. Schematic diagram of the UV–green sensor (neutral density filters, interference filters, fused silica plano-convex lenses, photodiodes, charge storage amplifiers, logarithmic amplifiers, 16-bit analog/digital conversion, computer).
The distance between lenses and photodiodes was adjusted such that the focus was close to infinity, so that distant objects give a sharp image on the sensor plane. Moreover, light falling on the photodiode passes in parallel rays through the interference filter, which is important since interference filters shift the central bandpass wavelength with the incident angle. Both channels have parallel axes with a horizontal distance of only 11 cm, so that the receptive fields of the photodiodes (2° in the vertical, 10° in the horizontal direction) are almost completely overlapping at larger distances. Most object samples were taken at distances of more than 10 m (horizontal overlap >93%); for closer distances (small objects) care was taken that the object was within the visual field of both sensors. The sensor readings were amplified by integrating the charge collected by the photodiode in the charge storage mode (Hamamatsu C2334 amplifier). The same integration time (approximately 30 ms) was used for the charge amplifiers in all measurements, with the exception of a few extremely bright and extremely dark situations, where the integration time was decreased to avoid saturation or increased to amplify the small signal, respectively. Later, the analog values were logarithmized (Burr-Brown LOG100) to mimic the roughly logarithmic response of insect photoreceptors, which is a result of a nonlinear summation of the voltage response due to self-shunting of the receptor potential (Laughlin, 1989, 1981; Autrum, 1981). The logarithmized signals were digitized using two 16-bit A/D converters and read into a computer.
One difference between an insect photoreceptor and the technical sensor is that the former responds to the number of photons arriving (Laughlin, 1989), while the response of the latter is proportional to the energy of the light. For a system with bandpass filters, however, this difference can be neglected. The factor transforming quantum-based intensity into energy-based intensity depends on the wavelength (see e.g. Anderson & Laughlin, 2000) and varies only slightly within the bandpass range. Between the two bands, the difference only becomes noticeable as a constant factor, which just shifts all data in the logarithmic UV–green space.
NUMERICAL METHOD

The numerical method is based on the numerical tool SBDART (Ricchiazzi et al., 1998) and uses spectral reflectance measurements from the USGS database (Clark et al., 1993) (vegetation samples) as well as data digitized from a diagram presented by Wyszecki & Stiles (1982) (sandstone). SBDART can be configured to provide the spectral direct downward flux and the spectral radiance values for a set of azimuth and zenith angles, as seen from the ground. These data were computed for solar zenith angles (angle from the zenith to the sun) varying between 15° and 75° in steps of 15°, varying visibility (10 km, 20 km), and different values for the optical thickness of clouds, assuming a single cloud layer extending from 5 to 8 km height and using optical thickness values from the set {0, 1, 2, 3, 6, 10, 20, 30, 40, 50}. A "rural" aerosol model was chosen, as well as a "vegetation" model of the surface albedo. The resolution of the radiance grid was 6° for the zenith angle and 12° for the azimuth angle. The spectral range reached from 0.3 to 0.8 μm.

The data obtained from SBDART were converted from energy to quantum units (see e.g. Anderson & Laughlin, 2000) and then processed to compute spectral reflection data; standard radiometry methods were employed (see e.g. Wolfe, 1998; Boyd, 1983). Direct and diffuse solar radiation received by a surface standing perpendicularly on the ground were computed. Diffuse radiation was obtained by an angular integration of the radiance values over the half of the hemisphere the surface is facing; direct radiation was determined from the downward flux and the incidence angle on the surface. Lambertian reflection of the surface was assumed and the spectral reflectance data were multiplied with the spectral intensity values accordingly. The observer looks perpendicularly at the surface, and at the sky behind the surface with a viewer zenith angle of 80° (elevation 10°, close to the horizon). The surface was rotated around the perpendicular axis by angles between 0° and 180° with respect to the sun (azimuth) in steps of 45°.

To both spectral curves (reflected light and skylight) the two filters (UV and green or blue) were applied; the integrated intensity was logarithmized, which finally gives the two values that correspond to the output values of the sensor device. For the numerical study, Gaussian filter characteristics with given center and half-width were used.
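To make this last step concrete, the following Python sketch (not the original implementation) shows how a spectral curve in quantum units would be filtered, integrated and logarithmized to obtain the channel values and the contrast measure. The Gaussian centers and half-widths are those used in the study; the example spectrum is a placeholder, the function names are assumptions, and the slope corresponds to the value reported below for the grass data.

```python
import numpy as np

def gaussian_filter(wavelength_um, center_um, half_width_um):
    """Gaussian spectral sensitivity with a given center and half-width
    (full width at half maximum), as assumed for the numerical filters."""
    sigma = half_width_um / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wavelength_um - center_um) / sigma) ** 2)

def channel_log_signal(wavelength_um, spectral_intensity, center_um, half_width_um):
    """Weight a spectrum (skylight or reflected light, in quantum units) with
    one filter, integrate over wavelength, and take the logarithm; this
    corresponds to one output value of the sensor device."""
    weighted = spectral_intensity * gaussian_filter(wavelength_um, center_um, half_width_um)
    return np.log10(np.trapz(weighted, wavelength_um))

# Example with a placeholder spectrum; real input would be SBDART radiances
# multiplied by spectral reflectance data.
wl = np.linspace(0.3, 0.8, 501)                  # wavelength in micrometres
spectrum = np.exp(-((wl - 0.55) / 0.15) ** 2)    # dummy spectral curve
log_uv = channel_log_signal(wl, spectrum, 0.35, 0.04)
log_g = channel_log_signal(wl, spectrum, 0.51, 0.08)
contrast = log_uv - 0.93 * log_g                 # contrast measure log UV - w log G
```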


Results

SENSOR MEASUREMENTS

In total, 260 samples of sky patches (including blue sky and different types of clouds) and 368 samples of different, mostly natural objects (grass, leaves, rocks, sand, soil, etc.) were collected, partly in November in Zurich and partly in August in Munich, under varying conditions of illumination and at different times of the day, by aiming the sensor towards the selected region and registering the data for both channels. Except for a few samples taken from the zenith, the sensor was tilted between approximately +30° and −30° above and below the horizon. The data of each channel were averaged over 30–50 subsequent readings (ca. 5 s) and the standard deviations were determined.
Figure 3(a) visualizes the samples of both classes in a log G vs. log UV diagram. It can be seen that both the intensity of skylight and of light reflected from objects varies over nearly three orders of magnitude, which corresponds to the maximum intensity range that was previously determined for outdoor scenes (Laughlin, 1981). In both the green channel and the UV channel, sky and object points are widely overlapping, so that a separation of the two classes with a fixed, global threshold that would apply to all conditions of illumination would not be possible in one of the channels alone. Specifically, UV reflected from bright rocks in sunlight reaches the same or an even higher intensity than UV from a gray or dark cloud; moreover, a rock in bright sunlight appears brighter in the green range than blue sky. However, the two sets can almost perfectly be separated by a threshold line with a slope w = 0.73 and an appropriate offset value; thus a contrast measure allowing the separation between objects and sky by means of a linear threshold would be log UV − w log G. The histogram of the contrast measure is shown in Fig. 3(b). The two classes are almost perfectly separable, but it is also visible that there is no pronounced gap between them. Note that the reflection of blue sky in water is clearly lying on the "sky" side of the threshold, so water would probably always appear as sky for the contrast mechanism.
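As a hedged illustration of how such a fixed linear threshold would classify sensor readings, the following Python sketch applies the contrast measure log UV − w log G with the slope found for the sensor data (w = 0.73); the offset value and the function names are placeholders, since the fitted offset is not reported as a number in the text.

```python
import numpy as np

W_SLOPE = 0.73     # slope of the threshold line fitted to the sensor data
OFFSET = 0.0       # placeholder: the fitted offset is not reported numerically

def contrast(log_uv, log_g, w=W_SLOPE):
    """Scalar contrast measure log UV - w log G (inputs already logarithmized)."""
    return log_uv - w * log_g

def classify(log_uv, log_g, offset=OFFSET):
    """Fixed-threshold decision: samples above the threshold line count as sky,
    samples below as object, independent of the overall illumination level."""
    return np.where(contrast(log_uv, log_g) > offset, "sky", "object")

# usage with two hypothetical (log G, log UV) sensor readings
print(classify(np.array([1.9, 1.2]), np.array([1.4, 1.7])))
```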
FIG. 3. (a) Samples (log G, log UV) obtained from the sensor by pointing towards the sky and towards different objects. The labeled object samples were taken under direct sunlight or, if labeled accordingly, in the shadow on a clear day. The object point labeled "building" stems from a building with a stone cladding. The threshold line (slope w = 0.73) was determined so that a minimum of points is mis-classified (22 of 628). The rectangle in the lower right corner visualizes the maxima over all samples of the standard deviations in the UV and green channel in each sample sequence (maximal sensor noise). Labeled samples include blue sky, gray and dark clouds (among them samples taken 20 min and 1 hr before sunset), the reflection of blue sky in water, bright rock, wet sand, coarse gravel, a building, a gravel path in shadow, green lawn, lawn in shadow, and a chestnut tree. Symbols distinguish sky and object samples. (b) Histogram of the contrast measure log UV − w log G. The position of the centerline corresponds to the offset of the threshold line in (a).

As a basis for the discussion of different mechanisms for the separation of foreground and background, Fig. 4(a) presents data taken under approximately constant conditions of illumination on a cloudy day (between 2.05 and 2.30 p.m. on August 30). It is obvious that both sky and object points extend over a large range of brightness values, while the gap between the two classes is comparatively small. Figure 4(b) pairs the object and the sky point that are closest to each other in a given data collection, and visualizes the pairs for different data collections, each taken under approximately constant conditions of illumination; in all cases, the two samples were taken within <8 min. It can be seen that, firstly, two of the collections have object points which are brighter in the green range than sky points, and that, secondly, although each collection would be separable in the UV range, a different threshold would be required for each collection. This threshold varies over almost two orders of magnitude.
NUMERICAL RESULTS

The numerical results were obtained by varying solar zenith angle, visibility, cloud density, and azimuth of the surface with respect to the sun, and computing the UV and green (or blue) portion of skylight and of light reflected from objects; selected data points are labeled accordingly, see Figs 5 and 6. The threshold lines were determined by linear regression within each class and averaging intercept and slope of the two classes.
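A minimal sketch of this threshold construction, assuming the class data are already available as (log G, log UV) arrays; the example values below are made up:

```python
import numpy as np

def fit_threshold(log_g_sky, log_uv_sky, log_g_obj, log_uv_obj):
    """Fit log UV = w * log G + b separately to the sky and the object points
    and average slope and intercept; the result is the threshold line used to
    separate the two classes."""
    w_sky, b_sky = np.polyfit(log_g_sky, log_uv_sky, 1)
    w_obj, b_obj = np.polyfit(log_g_obj, log_uv_obj, 1)
    return 0.5 * (w_sky + w_obj), 0.5 * (b_sky + b_obj)

# made-up example data in log units
rng = np.random.default_rng(0)
g_sky = rng.uniform(0.5, 2.0, 50)
uv_sky = 0.9 * g_sky + 0.6 + rng.normal(0.0, 0.05, 50)
g_obj = rng.uniform(0.5, 2.0, 50)
uv_obj = 0.9 * g_obj + 0.1 + rng.normal(0.0, 0.05, 50)
w, b = fit_threshold(g_sky, uv_sky, g_obj, uv_obj)
# a point is assigned to the sky class if log UV > w * log G + b
```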
The correspondence between real-world data and numerical results is visible by comparing Fig. 3 with Figs 5(a, c) and Fig. 6. As in the real-world data, data points of the same class are strongly correlated between the UV and the green channel, with the cloud of the object points lying underneath the cloud of the sky points. While vegetation (in the numerical method: green grass) is only overlapping on the green axis with sky points [Fig. 5(a)], minerals (in the numerical method: sandstone) result in data points that are closer to the cloud of the sky points and are overlapping on both the UV and the green axis [Fig. 6(a)]. This can be explained by the fact that mineral objects like rocks or sand reflect a relatively high portion of UV: at 350 nm, sandstone reflects around 14% of the light (Wyszecki & Stiles, 1982), compared to only 2% measured for green grass (Clark et al., 1993).

FIG. 4. (a) Samples (log G, log UV) taken on a cloudy day (August 30) between 2.05 and 2.30 p.m. (b) Pairs of closest object and sky points, each pair from data collected under constant conditions of illumination (points within a pair were taken within <8 min). The dashed lines depict optimal thresholds in the UV range. Symbols distinguish sky and object samples.

FIG. 5. UV–green vs. UV–blue contrast. The numerical results obtained using spectral reflection data for grass are shown as log G (log B) vs. log UV diagrams (a, b) and as histograms of the corresponding contrast measure (c, d). The UV filter is a Gaussian centered at 0.35 μm with a half-width of 0.04 μm. The visible channel uses a Gaussian green filter in (a) and (c) (center 0.51 μm, half-width 0.08 μm) and a Gaussian blue filter in (b) and (d) (center 0.42 μm, half-width 0.08 μm). The slopes of the threshold lines are w = 0.93 and 0.97, respectively, and their offsets differ by 0.26 log units. In both contrast histograms, the bins have the same range and size. The labels specify the solar zenith angle (SZA), the visibility (V), the optical thickness of the clouds (C), and the azimuth of the perpendicular surface with respect to the sun. Symbols distinguish sky points, object points, and the threshold line.

FIG. 6. Numerical results obtained using spectral reflectance data for sandstone; shown as log G vs. log UV diagram (a) and as a histogram of the contrast measure (b). The Gaussian filters are centered at 0.35 μm (UV, half-width 0.04 μm) and at 0.51 μm (green, half-width 0.08 μm), respectively. The slope of the threshold line is w = 0.92. The labels are explained in Fig. 5. Symbols distinguish sky points, object points, and the threshold line.

It is visible in both measurement and simulation that there is an overlap of mineral samples taken in the shadow under a blue sky [see data points "gravel path in shadow" in Fig. 3(a) and "SZA 30°, V 20 km, C 0, 135°" in Fig. 6(a)] with dark clouds (data points "dark clouds, 20 min to sunset" and "SZA 75°, V 20 km, C 20, 135°"). Diffuse illumination by light from a blue sky is bright in the UV range, but contains a relatively low portion of green light. This can be seen in Fig. 3(a) ("blue sky") and in Fig. 5(a) ("SZA 30°, V 20 km, C 0, 45°"). Conversely, since direct solar radiation adds relatively more green than UV to the intensity of an object under diffuse radiation from a blue sky, objects lying in the sun have a large distance from the threshold line ("bright rock" and "SZA 75°, V 20 km, C 0, 0°"). The sky points lying on the right side of the threshold in Figs 5(a) and 6(a) represent situations where the observer almost looks into the sun (solar zenith angle 75°, viewer zenith angle 80°, azimuth 180°) with blue sky or thin clouds (see data point labeled "SZA 75°, V 20 km, C 1, 180°"). Such situations were avoided with the sensor in order to prevent the photodiode array from being destroyed.

The numerical method was applied to compare a UV–green antagonism with a UV–blue antagonism, and to explore the possible influence of a secondary peak of the green channel around 0.37 μm. Figure 5 compares UV–green and UV–blue antagonism for light reflected from green grass. It can be seen both in the log G (log B) vs. log UV diagrams and in the histograms of the contrast measure that a UV–green antagonism provides a more robust separation of the two classes than a UV–blue antagonism, with a larger gap between the two classes and less overlap. Note that the distributions of sky and object points also become more strongly correlated with a blue sensor; if the blue filter were shifted even further into the UV range, the two distributions would increasingly look like lines and further approach each other. Separation quality deteriorated slightly for UV/blue-green antagonism, where the visible channel extends over the blue and green range (not shown). It therefore seems justified to assume that if such a mechanism were used by insects, it would combine the sensory channels with the largest distance in wavelength, which are UV and green.
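One simple way to quantify such a comparison is sketched below: it measures the margin between the two contrast distributions and the minimum number of misclassified points over all thresholds. The metrics and names are illustrative assumptions, not the criteria used in the paper.

```python
import numpy as np

def separation_quality(contrast_sky, contrast_obj):
    """Crude separation metrics for two contrast distributions
    (contrast = log UV - w log visible): the margin between the classes
    (positive means a clean gap) and the smallest number of points that
    any single threshold must misclassify."""
    margin = float(np.min(contrast_sky) - np.max(contrast_obj))
    thresholds = np.concatenate([contrast_sky, contrast_obj, [-np.inf]])
    errors = [np.sum(contrast_sky <= t) + np.sum(contrast_obj > t) for t in thresholds]
    return margin, int(min(errors))

# usage: compute the contrast values once with the green and once with the
# blue filter, then compare, e.g.
# margin_g, err_g = separation_quality(c_sky_green, c_obj_green)
# margin_b, err_b = separation_quality(c_sky_blue, c_obj_blue)
```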
In the same way, two systems with UV–green contrast were compared, where one system had only a single peak in the green range (at 0.51 μm), while the other one had a secondary peak at 0.37 μm with a sensitivity of 20%, as found for bees (Menzel & Backhaus, 1989) and desert ants (Mote & Wehner, 1980). These two systems did not noticeably differ in the quality of the separation (not shown), so the secondary peak seems to have no function with respect to skyline separation.

Discussion

POSSIBLE MECHANISMS OF SKYLINE DETECTION

The question raised by this study is how insects could accomplish an illumination-invariant perception of a landmark panorama, a capability that would be a prerequisite for all landmark navigation mechanisms proposed so far. A candidate mechanism was suggested that produces a binary image in which terrestrial objects appear as a dark skyline in front of the sky as bright background, independent of changes in the position of the sun or the state of the sky.
As is clear from Fig. 4(b), skyline separation could not rely on receptors in the green range alone, since sky samples and object samples taken at the same time overlap in the green channel. It could, however, be accomplished by just using the UV channel. For a given overall illumination level, a threshold in the UV signal could be defined that would reliably assign all brighter points to the class "sky" and all darker points to the class "object". The threshold itself would be a global threshold that applies to all image points at the same time. Yet, with changing overall illumination (e.g. appearance of the sun behind clouds), the threshold would have to be adjusted. While the gap between the brightest object points and the darkest sky points remains approximately constant, the optimal threshold (lying between them) varies over a range of almost two orders of magnitude.
Such a method brings about the necessity of a mechanism that determines the threshold from the currently perceived image. It would probably not be sufficient to just take the average of all receptor signals as the threshold, since this measure would depend on the current location of the animal (and differ, for example, when the insect is on a free field or within a narrow valley). One could speculate that this problem could be overcome by taking the zenith UV brightness as reference for the threshold definition, since in the direction of the zenith the animals will almost certainly see sky. However, it remains an open question whether such a mechanism would work under inhomogeneous brightness of the sky (due to partial covering or inhomogeneous clouds). A similar argument holds for taking ground points as reference. Even under approximately constant conditions of illumination, both sky and objects vary considerably in their brightness, as can be seen in Fig. 4(a). Since the gap between the brightest object point and the darkest sky point is small with respect to the overall distribution of points, it also seems unlikely that information on the position of the gap could be exploited to determine the threshold.
It is therefore worthwhile to consider alternative solutions that would not require an adjustable threshold. From the distribution of the object and sky points in Fig. 3(a), it can be concluded that a rigid threshold could be applied to a vectorial measure that not only includes a UV signal, but also a green signal from the same receptive field. In the simplest case, such a threshold would be a linear function, but it would also be possible to have nonlinear, curved thresholds separating the two regions. A linear threshold as used throughout this study leads to a scalar contrast measure log UV − w log G (with given slope w) that guarantees a reliable separation with only small overlap between the two classes [see Fig. 3(b)]. The neural equivalent of this simple linear mechanism would be a neuron receiving antagonistic (excitatory and inhibitory) input from the two channels. Mis-classifications only occur for extremely dark clouds (see data point "dark clouds, 1 hr to sunset") and for some mineral objects (stones, sand, gravel) that lie in the shadow on a day with mostly blue sky or only thin clouds (data point "gravel path in shadow"). These object points are indistinguishable from dark clouds [in the numerical method: clouds with an optical thickness larger than 20, see Fig. 6(a)], which, however, would not occur at the same time. Nevertheless, a reliable separation with a fixed threshold would call for special measures to cover this case, or would fail in one or the other situation; a simple solution would be to slightly reduce the slope of the threshold line on a cloudless day. In contrast, minerals lying in the bright sun are no problem, since they usually have a large distance to the threshold (data point "bright rock"), and neither are objects perceived under a cloudy sky with no direct sunlight.
Water surfaces reflecting the sky cannot be distinguished from sky. A large lake or sea extending to the horizon would result in a false, lower horizon in the perception of an insect. How this would affect the navigation abilities will depend on the specific navigation method employed. A small lake should not pose serious problems, since pixels of the object class will obviously appear above the false sky pixels, information that could be exploited by the navigation method.

As was shown above, clouds would not disturb the suggested contrast mechanism. In contrast, changes in the shape or position of terrestrial objects do of course change the skyline and are therefore likely to confuse the insects, depending on the navigation method employed and the extent of the changes. This would, however, also affect achromatic mechanisms.
COLOR-CONTRAST MECHANISMS IN INSECTS

It is not known whether a UV–green contrast mechanism is actually used by insects for the illumination-invariant detection of landmarks or, more generally, of skyline features. So far, color vision in insects has mostly been studied in the context of flower recognition, stimulated by the discovery that certain flowers reflect a high portion of UV as a signal for food-collecting insects like bees. Chittka et al. (1994) and Chittka (1996a) found that the spectral properties of the three types of bee photoreceptors allow a nearly optimal detection of flowers against the background of green foliage (and other background materials) and at the same time the discrimination of different flower colors.
Relative measures of UV and green intensity are supposed to be exploited by desert ants and bees to derive compass information from spectral skylight gradients, in addition or as an alternative to E-vector (polarization) gradients, and to discriminate between the solar and anti-solar halves of the sky in order to resolve ambiguities of the polarized-light compass (Rossel & Wehner, 1984; Wehner, 1989, 1991, 1997); similar mechanisms are discussed for pigeons (Coemans et al., 1993). The spectral channel for skylight navigation of ants receives input from ommatidia in the dorsal half of the ant's compound eye, an anatomically defined region looking at and above the horizon (Wehner, 1997). The same region would be used for the mechanism of skyline detection proposed here; thus the first processing stages of the ant's spectral compass and its skyline navigation system may coincide. Spectral cues (and supposedly color-contrast mechanisms) are also used by bees to discriminate between sun and sky: independent of the intensity, a light source emitting longer wavelengths is interpreted as the sun (Rossel & Wehner, 1984).
It is also interesting to note the correspondences between the polarization vision system of insects and the hypothetical color-contrast mechanism. An antagonistic neural interaction between receptors with orthogonal axes of E-vector absorption makes the response to polarized light widely invariant against changes in light intensity (Labhart & Petzold, 1993; Labhart, 2000). Input to this opponent mechanism comes from the "dorsal rim area" of the ant's eye, which contains UV receptors with high polarization sensitivity. The same type of receptor that is UV-sensitive in the dorsal rim area (and provides one of the inputs to the opponent mechanism) is green-sensitive in the rest of the dorsal half, while the subsequent neural apparatus appears to be identical in both cases (same axon terminations; Wehner, 1991, 1994). It may thus be the case that only the sensory periphery differs between the two regions, but the same neural contrast mechanism is also used to accomplish an illumination-invariant detection of landmarks.
There is evidence for color-contrast mechanisms of the suggested type in insect visual systems. Kien & Menzel (1977) found color-opponent cells in the bee medulla which responded with sustained excitation to UV illumination but with inhibition to blue and/or green light, some also with the opposite polarity (see also Hertel, 1980; Menzel & Backhaus, 1989). Such an antagonism corresponds to the contrast function log UV − w log G that was derived from the data in Fig. 3. Cells with a UV/blue-green antagonism in most cases had "homogeneous" receptive fields, i.e. the excitatory and inhibitory portions completely overlapped. However, these receptive fields usually had a large size and would thus not provide the visual acuity required for a navigation task.
The UV–green contrast mechanism does of course only concern navigation under the open sky. While this will be the typical situation an insect is facing, landmark navigation has also been demonstrated in indoor setups where UV radiation is almost completely missing. In this case, the insect would have to switch to an intensity-based contrast mechanism, using, for example, only the green channel. It is likely that the UV–green mechanism is only one of the mechanisms that can be exploited for landmark navigation by insects. At least bees, with their trichromatic vision system, are also able to memorize scenes with colored landmarks (see e.g. Gould, 1987), and digger wasps also heed flat landmarks lying on the ground (rather than having the sky as background) if no other prominent features are available (Tinbergen, 1932; Tinbergen & Kruyt, 1938). Moreover, it has been shown that motion parallax cues are used to estimate the distance to landmarks, and thus motion parallax cues may become part of the memory template (see review by Heisenberg, 1995). Since motion-dependent behavior is mediated only by the green-receptor system (Heisenberg, 1995), this insight would provide support for a visual spatial memory relying on achromatic images rather than color contrast (Wehner, 1981).
ADAPTATION

Neither the technical sensor nor the numerical method considers any form of adaptation processes so far. In the insect visual system, the photoreceptors themselves as well as the photoreceptor-interneuron synapses adapt to the background intensity (Juusola et al., 1995; Laughlin, 1989, 1981; Autrum, 1981; Menzel & Blakers, 1976). While adaptation processes in the photoreceptors of flies do not entirely remove the background intensity, adaptation in the first visual synapse leads to a signal that is widely independent of the background level for the daylight range (bandpass). To retain the specificity of the contrast measure suggested here, the adapted signals in the UV and the green channel would still have to depend strictly monotonically on the background level (as is the case for photoreceptor adaptation). This would just compress and distort the log G vs. log UV diagram, but not destroy the contrast measure; however, a nonlinear threshold may be required in this case instead of a linear one. Another hypothetical mechanism would be a coupled adaptation of corresponding UV and green receptors, but whether such a mechanism is employed in insect visual systems is not known. Complete removal of the background signal in one of the channels (as after the first visual synapse) would render a differentiation between the two classes impossible. This concept does of course presume that a relatively static image is perceived for the purpose of landmark navigation. Alternatively, only the borderline between objects and sky might be perceived dynamically by constantly changing the direction of view. UV and green channel could in this case completely adapt, and sky and objects would be distinguished by a contrast measure based on the relation of the transient responses in the two channels.

Conclusions

Most of the mechanisms that have been suggested as explanations for the landmark navigation capabilities of insects are extremely parsimonious with respect to the complexity of the underlying neural models (see e.g. Möller et al., 1999; Möller, 2000). However, since most of these models took the capability of the landmark/background separation for granted, it remains an open question whether their parsimony may not be a result of underestimating the effort that has to be invested into this prior stage. The color-contrast mechanism studied in this work demonstrates that a robust skyline detection is possible under varying conditions of illumination without the necessity of adjustable thresholds, and therefore provides support for the parsimonious navigation mechanisms that rely on this capability.

Financial support by a personal grant from the "Kommission zur Förderung des akademischen Nachwuchses" of the University of Zurich is kindly acknowledged. Many thanks to Paul Ricchiazzi (UCSB) for help with the SBDART tool, to Mark Spaeth (MIT) who provided the driver software for the sensor interface, and to the workshop of the Physics Department (University of Zurich) for support with the mechanical construction of the sensor. I'm grateful to Thomas Labhart and Rüdiger Wehner (University of Zurich) for fruitful discussions and valuable contributions to the sensor design as well as comments on the manuscript.
REFERENCES
ANDERSON, A. M. (1977). A model for landmark learning in the honey-bee. J. Comp. Physiol. A, 114, 335–355.
ANDERSON, J. C. & LAUGHLIN, S. B. (2000). Photoreceptor performance and the co-ordination of achromatic and chromatic inputs in the fly visual system. Vision Res. 40, 13–31.
AUTRUM, H. (1981). Light and dark adaptation in invertebrates. In: Comparative Physiology and Evolution of Vision in Invertebrates (Autrum, H., ed.), Chapter 1. Berlin: Springer.
BOYD, R. W. (1983). Radiometry and the Detection of Optical Radiation. New York: Wiley.
BRISCOE, A. D. & CHITTKA, L. (2001). The evolution of colour vision in insects. Ann. Rev. Entomol. 46, 471–501.
CARTWRIGHT, B. A. & COLLETT, T. S. (1983). Landmark learning in bees. J. Comp. Physiol. A, 151, 521–543.
CHITTKA, L. (1996a). Optimal sets of color receptors and color opponent systems for coding of natural objects in insect vision. J. theor. Biol. 181, 179–196.
CHITTKA, L. (1996b). Does bee color vision predate the evolution of flower color? Naturwissenschaften 83, 136–138.
CHITTKA, L., SHMIDA, A., TROJE, N. & MENZEL, R. (1994). Ultraviolet as a component of flower reflections, and the color perception of hymenoptera. Vision Res. 34, 1489–1508.
CLARK, R. N., SWAYZE, G. A., GALLAHER, A. J., KING, T. V. V. & CALVIN, W. M. (1993). The U.S. Geological Survey digital spectral library: version 1 (0.2 to 3.0 μm). Open File Report 93-592, U.S. Geological Survey, http://speclab.cr.usgs.gov/spectral-lib.html.
COEMANS, M. A. J. M., HZN, J. J. V. & NUBOER, J. F. W. (1993). The relation between celestial colour gradients and the position of the sun, with regard to the sun compass. Vision Res. 34, 1461–1470.
COLLETT, T. S. & ZEIL, J. (1997). The selection and use of landmarks by insects. In: Orientation and Communication in Arthropods (Lehrer, M., ed.), pp. 41–65. Basel, Switzerland: Birkhäuser Verlag.
COLLETT, T. S. & ZEIL, J. (1998). Places and landmarks: an arthropod perspective. In: Spatial Representations in Animals (Healy, S., ed.), Chapter 2, pp. 18–53. Oxford: Oxford University Press.
FRANZ, M. O., SCHÖLKOPF, B., MALLOT, H. A. & BÜLTHOFF, H. H. (1998). Where did I take that snapshot? Scene-based homing by image matching. Biol. Cybernet. 79, 191–202.
GOULD, J. L. (1987). Landmark learning by honey bees. Anim. Behav. 35, 26–34.
HEISENBERG, M. (1995). Pattern recognition in insects. Curr. Opin. Neurobiol. 5, 475–481.
HERTEL, H. (1980). Chromatic properties of identified interneurons in the optic lobes of bees. J. Comp. Physiol. 137, 215–231.
HOEFER, I. & LINDAUER, M. (1975). Das Lernverhalten zweier Bienenrassen unter veränderten Orientierungsbedingungen. J. Comp. Physiol. 99, 119–138.
JUDD, S. P. D., DALE, K. & COLLETT, T. S. (1999). On the fine structure of view-based navigation in insects. In: Wayfinding Behavior: Cognitive Mapping and Other Spatial Processes (Golledge, R. G., ed.), Chapter 9, pp. 229–258. Baltimore, Maryland: The Johns Hopkins University Press.
JUUSOLA, M., UUSITALO, R. O. & WECKSTRÖM, M. (1995). Transfer of graded potentials at the photoreceptor-interneuron synapse. J. Gen. Physiol. 105, 117–148.
KIEN, J. & MENZEL, R. (1977). Chromatic properties of interneurons in the optic lobes of the bee. II. Narrow band and colour opponent neurons. J. Comp. Physiol. 113, 35–53.
LABHART, T. (1986). The electrophysiology of photoreceptors in different eye regions of the desert ant, Cataglyphis bicolor. J. Comp. Physiol. A, 158, 1–7.
LABHART, T. (2000). Polarization-sensitive interneurons in the optic lobe of the desert ant Cataglyphis bicolor. Naturwissenschaften 87, 133–136.
LABHART, T. & PETZOLD, J. (1993). Processing of polarized light information in the visual system of crickets. In: Sensory Systems of Arthropods (Wiese, K., Gribakin, F. G., Popov, A. V. & Renninger, G., eds), pp. 158–169. Basel, Switzerland: Birkhäuser Verlag.
LAMBRINOS, D., MÖLLER, R., LABHART, T., PFEIFER, R. & WEHNER, R. (2000). A mobile robot employing insect strategies for navigation. Robotics Autonomous Syst. 30, 39–64.
LAUGHLIN, S. (1981). Neural principles in the peripheral visual systems of invertebrates. In: Handbook of Sensory Physiology VII/6B (Autrum, H., ed.), pp. 133–280. Berlin: Springer.
LAUGHLIN, S. B. (1989). The role of sensory adaptation in the retina. J. Exp. Biol. 146, 39–62.
LYTHGOE, J. N. (1979). The Ecology of Vision. Oxford: Clarendon Press.
MENZEL, R. (1975). Colour receptors in insects. In: The Compound Eye and Vision of Insects (Horridge, G. A., ed.), Chapter 7, pp. 121–153. Oxford: Clarendon Press.
MENZEL, R. & BACKHAUS, W. (1989). Color vision in honey bees: phenomena and physiological mechanisms. In: Facets of Vision (Stavenga, D. G. & Hardie, R., eds), Chapter 14, pp. 281–297. Berlin, Heidelberg: Springer.
MENZEL, R. & BLAKERS, M. (1976). Colour receptors in the bee eye – morphology and spectral sensitivity. J. Comp. Physiol. A, 108, 11–33.
MÖLLER, R. (2000). Insect visual homing strategies in a robot with analog processing. Biol. Cybernet. 83, 231–243.
MÖLLER, R. (2001). Do insects use templates or parameters for landmark navigation? J. theor. Biol. 210, 33–45.
MÖLLER, R. (2002). Visual homing without image matching. In: Neurotechnology for Biomimetic Robots (Ayers, J., Davis, J. & Rudolph, A., eds). Boston, MA: MIT Press. To appear.
MÖLLER, R., MARIS, M. & LAMBRINOS, D. (1999). A neural model of landmark navigation in insects. Neurocomputing 26–27, 801–808.
MOTE, M. E. & WEHNER, R. (1980). Functional characteristics of photoreceptors in the compound eye and ocellus of the desert ant, Cataglyphis bicolor. J. Comp. Physiol. A, 137, 63–71.
RICCHIAZZI, P., YANG, S. R., GAUTIER, C. & SOWLE, D. (1998). SBDART: a research and teaching software tool for plane-parallel radiative transfer in the Earth's atmosphere. Bull. Am. Meteorol. Soc. 79, 2101–2114. http://www.icess.ucsb.edu/esrg/pauls_dir/.
ROSSEL, S. & WEHNER, R. (1984). Celestial orientation in bees: the use of spectral cues. J. Comp. Physiol. A, 155, 605–613.
TINBERGEN, N. (1932). Über die Orientierung des Bienenwolfes (Philanthus triangulum Fabr.). Z. vgl. Physiol. 16, 305–335.
TINBERGEN, N. & KRUYT, W. (1938). Über die Orientierung des Bienenwolfes (Philanthus triangulum Fabr.). III. Die Bevorzugung bestimmter Wegmarken. Z. vgl. Physiol. 25, 292–334.
WEHNER, R. (1981). Spatial vision in arthropods. In: Handbook of Sensory Physiology VII/6C: Comparative Physiology and Evolution of Vision in Invertebrates (Autrum, H., ed.), pp. 287–616. Berlin, Heidelberg, New York: Springer.
WEHNER, R. (1982). Himmelsnavigation bei Insekten. Neurophysiologie und Verhalten. Neujahrsbl. der Naturforsch. Ges. Zürich 184, 1–132.
WEHNER, R. (1989). The hymenopteran skylight compass: matched filtering and parallel coding. J. Exp. Biol. 146, 63–85.
WEHNER, R. (1991). Highly parallel processing in the visual system of an insect's brain. In: Proceedings of the 19th Neurobiology Conference Göttingen (Elsner, N. & Penzlin, H., eds), p. 270. Stuttgart: Georg Thieme Verlag.
WEHNER, R. (1992). Arthropods. In: Animal Homing (Papi, F., ed.), Animal Behaviour Series, Chapter 3, pp. 45–144. London: Chapman & Hall.
WEHNER, R. (1994). The polarization-vision project: championing organismic biology. In: Neural Basis of Adaptive Behaviour (Schildberger, K. & Elsner, N., eds), Fortschritte der Zoologie, Vol. 39, pp. 103–143. Stuttgart: G. Fischer.
WEHNER, R. (1997). The ant's celestial compass system: spectral and polarization channels. In: Orientation and Communication in Arthropods (Lehrer, M., ed.), pp. 145–185. Basel: Birkhäuser.
WEHNER, R., MICHEL, B. & ANTONSEN, P. (1996). Visual navigation in insects: coupling of egocentric and geocentric information. J. Exp. Biol. 199, 129–140.
WEHNER, R. & RÄBER, F. (1979). Visual spatial memory in desert ants, Cataglyphis bicolor (Hymenoptera: Formicidae). Experientia 35, 1569–1571.
WOLFE, W. L. (1998). Introduction to Radiometry. Bellingham, Washington: SPIE Optical Engineering Press.
WYSZECKI, G. & STILES, W. S. (1982). Color Science: Concepts and Methods, Quantitative Data and Formulae. New York: Wiley.