
What is CIR Imagery and what is it used for?

August 2016

To understand CIR imagery you first must understand what we can and cannot see with our
eyes. The human eye can see high-frequency electromagnetic radiation (a.k.a. light) from
only a very small portion of the electromagnetic spectrum. To ‘see’ beyond this range, we
need instruments and cameras that can detect and then translate invisible radiation into the
familiar colors of the rainbow. Color-infrared (CIR) imagery uses a portion of the
electromagnetic spectrum known as near infrared (NIR) that lies just beyond the visible
wavelengths for the color red.

CIR imagery combines colors from the visible spectrum with NIR light, which is represented by another, distinct color within the visible spectrum.

What is CIR Imagery used for?


CIR imagery is good at penetrating atmospheric haze and at determining the health of
vegetation. Chlorophyll, the pigment in plant leaves, strongly absorbs visible light, while
the cell structure of the leaves strongly reflects near-infrared light. Therefore, the
stronger the near-infrared light sensed by the camera, the healthier the plant.

The most common use of CIR imagery is in index calculations such as NDVI (the Normalized
Difference Vegetation Index), a simple graphical indicator that can be used to analyze
these NIR measurements. It is computed by applying a simple calculation to the NIR and
visible-red values collected in the imagery:

NDVI = (NIR - Red) / (NIR + Red)

NDVI is especially useful when comparing the health of crops, whereby the relative NDVI
values are represented by a color ramp, typically red-to-green: red being unhealthy and
green being healthy. Values with no NIR reflectance (i.e. the ground) or areas not relevant
to the area of interest are rendered transparent.
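As a sketch, the per-pixel NDVI calculation can be written in a few lines of NumPy (the function name and the sample reflectance values are illustrative, not from any particular sensor):

```python
import numpy as np

def ndvi(nir, red):
    """Compute NDVI = (NIR - Red) / (NIR + Red) per pixel.

    nir, red: arrays of reflectance values in [0, 1].
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    # Avoid division by zero where both bands are 0 (e.g. no-data pixels).
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Healthy vegetation reflects strongly in NIR and absorbs red, so the
# first pixel scores high while the second scores near zero.
print(ndvi([0.5, 0.1], [0.08, 0.1]))
```

In practice the two input arrays would be full image bands of identical shape, and the result is typically mapped through a red-to-green color ramp as described above.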

How do we create CIR Imagery?


In order to understand how we generate CIR imagery, it is important to first understand how
traditional visible-spectrum cameras create images.

Inside each digital camera is a sensor covered in an array of cavities called
‘photosites’. Each photosite is responsible for ‘seeing’ only one color: red, green, or blue.
A red photosite, for example, cannot ‘see’ blue or green light. Red light, however, can
penetrate the filter of a red photosite, and the intensity (or brightness) of the light that
passes through is recorded. The camera’s processor knows that any light which penetrated
the red photosite can only be red.
To create a regular RGB (Red/Green/Blue) image like we are used to seeing, digital cameras
take the brightness values from each photosite and create what is called a color channel (or
‘band’). A color channel is essentially a copy of the image which only contains the values for
a single color – in other words, the image is composed of three images (one for each color),
whereby ‘stacking’ the channels in the correct order creates a normal RGB image.
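The channel-stacking idea can be sketched with NumPy (the toy 2x2 brightness values are made up for illustration):

```python
import numpy as np

# Each color channel is a 2-D array of brightness values recorded by
# the photosites of one color (toy 2x2 example values).
red   = np.array([[200,  50], [ 30, 120]], dtype=np.uint8)
green = np.array([[ 40, 180], [ 60,  90]], dtype=np.uint8)
blue  = np.array([[ 10,  20], [220,  70]], dtype=np.uint8)

# 'Stacking' the channels along a third axis yields an RGB image of
# shape (height, width, 3), the layout most image libraries expect.
rgb = np.dstack([red, green, blue])
print(rgb.shape)  # (2, 2, 3)
```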

Digital camera sensors are capable of collecting infrared light, but since the cameras are
designed for visible light only, they feature an internal infrared-blocking filter intended
to keep all infrared light out and allow only visible light through to the sensor.
To make use of the sensor’s ability to collect infrared light, the camera is modified: the
infrared-blocking filter is replaced with a specialized infrared-passing filter, allowing
the sensor’s photosites to ‘see’ infrared light. Once this modification is complete,
photosites from all three color channels are sensitive to infrared light. Then, to create
a channel dedicated to infrared light alone, a yellow filter is attached to the camera’s
lens.

Yellow, being the complement of blue, blocks blue light from entering through the lens to
the sensor. Since the infrared-passing filter admits infrared light on all color channels,
blocking blue light essentially replaces the blue channel with a channel consisting only of
pure infrared light. With the blue channel replaced by infrared, the camera’s channel order
becomes R/G/NIR. This combination of color channels is referred to as CIR.
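In software, nothing about the file layout changes after the modification: the image still has three bands, but the third now holds NIR rather than blue. A minimal sketch, assuming a (height, width, 3) array captured by the modified camera:

```python
import numpy as np

# A frame from the modified camera still has three bands, but after the
# filter swap the 'blue' position actually records near-infrared: R/G/NIR.
frame = np.random.rand(4, 4, 3)  # hypothetical capture, values in [0, 1]

red   = frame[..., 0]
green = frame[..., 1]
nir   = frame[..., 2]  # formerly the blue channel
```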

These images can then be processed with a wide variety of software solutions that analyze
the NIR values to produce accurate measurements for identifying plant species, estimating
vegetation biomass, assessing soil moisture, and evaluating water clarity (i.e. turbidity).
The result is actionable information gained faster and more cost-effectively than with
traditional satellite or manned-aircraft data.

What ‘should’ CIR Imagery look like?


There is no definitive standard for what CIR imagery should look like to the human eye – in
its truest definition, CIR imagery is ‘false-color’ imagery, wherein visible-spectrum colors are
used to represent and exaggerate the reflectance values of NIR light. The choice of color is
irrelevant to the ultimate usefulness of the data.

At Altavian, we try to adhere to the aesthetic of traditional film CIR imagery by swapping the
Red and NIR color channels, producing a similar appearance to Kodak Aerochrome which
many traditional surveyors and analysts have used.
By then applying a standard-deviation histogram stretch in programs such as ArcGIS, the final
CIR imagery becomes immediately familiar to data analysts.
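A rough sketch of the Red/NIR channel swap followed by a standard-deviation stretch (the function names and the 2-sigma default are assumptions for illustration; ArcGIS’s own stretch implementation may differ in detail):

```python
import numpy as np

def stddev_stretch(band, n=2.0):
    """Linear stretch clipping at mean +/- n standard deviations,
    similar in spirit to a GIS display stretch. Returns values in [0, 1]."""
    band = band.astype(float)
    lo = band.mean() - n * band.std()
    hi = band.mean() + n * band.std()
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

def cir_display(rgnir):
    """Swap the Red and NIR channels of an R/G/NIR image (giving NIR/G/R)
    and stretch each band for an Aerochrome-like false-color display."""
    nir_g_r = rgnir[..., [2, 1, 0]]
    return np.dstack([stddev_stretch(nir_g_r[..., i]) for i in range(3)])
```

In this display convention, strongly NIR-reflective (healthy) vegetation renders in the red channel, matching the look of traditional film CIR products.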

Note: It is important to keep track of channel order when using analysis programs, especially
when applying index calculations such as NDVI, which require the correct bands in order to
calculate properly. This can become confusing in programs that identify channels by number
(1/2/3) rather than by name (R/G/NIR). When the Red and NIR channels are swapped, the order
becomes NIR/G/R, so the NDVI calculation is (1-3)/(1+3) instead of (3-1)/(3+1).
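The note above can be checked in a few lines: either band order yields the same NDVI, provided the band numbers are assigned to match (1-based band numbers, as GIS programs typically label them; the helper function is hypothetical):

```python
import numpy as np

def ndvi_by_band(image, nir_band, red_band):
    """NDVI from a multi-band image using 1-based band numbers,
    as GIS programs typically label them."""
    nir = image[..., nir_band - 1].astype(float)
    red = image[..., red_band - 1].astype(float)
    return (nir - red) / (nir + red)

img = np.random.rand(2, 2, 3) + 0.1  # toy R/G/NIR image, nonzero bands

# R/G/NIR order: NDVI = (band3 - band1) / (band3 + band1)
a = ndvi_by_band(img, nir_band=3, red_band=1)

# NIR/G/R order (Red and NIR swapped): (band1 - band3) / (band1 + band3)
swapped = img[..., [2, 1, 0]]
b = ndvi_by_band(swapped, nir_band=1, red_band=3)

assert np.allclose(a, b)  # same result once the band order is respected
```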
