Roberto Bartali
ABSTRACT
The aim of this project is to describe Charge Coupled Devices (CCDs) and their application as light detectors at different wavelengths, from the UV to the near-infrared part of the EM spectrum. The reader is first introduced to the technologies involved; then follows a detailed description of how these sensors work and of their advantages and drawbacks, sometimes in comparison with other technologies. With such a detailed description, the reader will be able to understand and select the right sensor for a specific observing purpose, and moreover to operate it correctly.
This work is directed to everyone who wants to use state-of-the-art image-detection technology and also wishes to know what lies behind and inside the black box, called an imaging device, placed at the prime focus of a telescope.
1 - INTRODUCTION
Astronomy is a science based on light detection. With this in mind we can recognize five “Periods in the History of Astronomy”, each characterized by the kind of light observed or the sensor used. Not all of these periods are as well delimited as, for example, geologic eras; sometimes they overlap each other.
• Period 1. From prehistory to the second half of the XIX century. The only sensor
available is the human eye. All data collected by the naked eye are recorded on
cavern walls, monolithic monuments, stones, paper, etc. During this period the
knowledge of the electromagnetic (EM) spectrum is reduced to visible light.
• Period 2. From the application of photography (near 1870) to radio wave
detection. The main sensor is the photographic plate and the EM spectrum is a
little wider, including near-UV light. The greater collecting power of telescopes
and the capability to integrate photons for hours expand the known Universe,
which now includes several millions of stars and galaxies.
• Period 3. From radio wave detection (1932) to gamma- and X-ray detection (near
1964). The EM spectrum now includes radio waves. It becomes clear that celestial
objects emit light at many different wavelengths, not just visible ones.
• Period 4. From high-energy photon detection to the application of the CCD to
Astronomy (1974). The most energetic radiations of the EM spectrum become
available to astronomers. Gamma rays and X rays emitted by stars are now
observable and measurable.
• Period 5. From the CCD to now. An enormous increase in the sensitivity and
efficiency of the sensors expands the limit of the known Universe almost to its
full size. The full EM spectrum is observable thanks to space telescopes capable of
detecting photons from extremely energetic gamma rays to the UV, bands impossible
to study from the surface of our planet due to their absorption by the
atmosphere. Infrared light is also almost fully blocked by the atmosphere;
infrared telescopes must be kept very cool and placed at high altitude or in very
cold places like Antarctica, so space telescopes orbiting far from the planet are
a better solution.
Our current knowledge of the Universe is based on the development and
enhancement of photon sensing and measuring technologies. The more sensitive and the larger the sensors are, the more distant the objects, in space and time, that can be observed and studied. As the spectral response of detectors increases, more detailed observations of particular phenomena can be performed. Silicon-based sensors are now available for almost all wavelengths, from gamma rays to the far IR. In this work we will describe the technology of light detection from the UV to the near-IR part of the EM spectrum.
some material whose atoms have more than 4 electrons in the valence band (called donors); normally we can use the group 15 atoms of the Periodic Table of Elements [PTE]. These atoms all contain 5 electrons in their valence band: Nitrogen, Phosphorus, Arsenic, Antimony and Bismuth. The extra (fifth) electron can be moved around easily in the presence of an external electric field. Semiconductors with impurities like those listed are called n-type materials.
Conversely, we can dope a semiconductor with atoms having fewer than 4 electrons in their valence band. This leaves an excess of positive charge (a hole), an empty energy level that nearby electrons can easily fill. These dopants are called acceptors and are listed in the periodic table of elements as group 13 atoms. All of them have 3 electrons in the valence band; five atoms share this property: Boron, Aluminium, Gallium, Indium and Thallium. Semiconductors doped with some of the above materials are called p-type materials. The right combination of both (P and N types) in the right place (the semiconductor geometry) gives us a working electronic device (figure 2).
After the above introduction to semiconductors, we can now see the structure of a CCD and how it works. Basically, the structure of a pixel in a CCD is, as we can see in figures 2, 3 and 3b, formed by a bulk P-type silicon substrate with a thin N-type layer above it. Another thin insulating oxide layer, separating the electrodes from the N-type silicon, prevents the trapping of electrons by the electrodes. This structure is really a small capacitor. A positive voltage on one electrode induces an electric field which creates electron-hole pairs immediately below it; holes are pushed deeper into the P-type silicon, and this way a depletion zone is generated. When photons arrive and penetrate the surface of the CCD, they can produce so-called photo-electrons when they are absorbed by silicon atoms. These photo-electrons are confined in the depletion zone below the positive electrode; this area is called the pixel well. The electrodes on each side of the well are negatively biased, or held at a much less positive voltage, so they repel electrons (both photo-generated and thermal) and prevent their diffusion and recombination into the bulk P-type silicon. The pixel well is thus a storage area and, while the chip is exposed to light, it fills with photo-generated electrons. Each pixel is isolated from its neighbours by a thin insulating region called the channel stop (figure 3B); this prevents the overflow of electrons from one pixel to the next, which would otherwise cost us spatial resolution and leave the final reconstructed image overexposed (the blooming effect).
Photons coming from the object of interest strike the surface of the sensor and penetrate the silicon to a certain depth, depending on their wavelength (figure 5). High-energy photons (shorter wavelengths) are absorbed near the surface, while lower-energy ones (longer wavelengths) can travel further and are absorbed deeper in the silicon (figure 2). When a photon is absorbed by a semiconductor atom, the latter frees an electron. The freed electron, generated somewhere in the p-type silicon, is moved by the electric field of the most positive electrode and stored in the well. This process continues for as long as the device is exposed to the incoming light, but there is only a limited amount of room, depending on the thickness of the p-type silicon, the voltage applied to the electrode and the size of the pixel. Larger pixels can accumulate more photoelectrons than small ones (figure 4). The maximum number of electrons that can be stored is called the well capacity.
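Well capacity, together with the read noise of the output stage, sets the dynamic range of the sensor. As a minimal sketch (all figures below are illustrative assumptions, not values from any specific device):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in decibels: 20*log10(full well / read noise)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

# Hypothetical figures: a large pixel holds far more charge than a small one,
# so, at equal read noise, it spans a wider range of star brightnesses.
large_pixel_well = 350_000   # electrons, illustrative
small_pixel_well = 50_000    # electrons, illustrative
read_noise = 10.0            # electrons RMS, illustrative

print(round(dynamic_range_db(large_pixel_well, read_noise), 1))  # 90.9 dB
print(round(dynamic_range_db(small_pixel_well, read_noise), 1))  # 74.0 dB
```

This is one concrete reason larger pixels are preferred for photometry of fields containing both bright and faint objects.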
As we have seen, only photons within some range of wavelengths are absorbed by the device (figure 5), but the range depends on the kind of semiconductors used and on the physical structure of the CCD, so that graph is only representative. We can see (figure 2) that short-wavelength photons, below about 400 nm, are reflected by the surface (not absorbed), and long-wavelength ones, above about 1000 nm, simply pass through the semiconductor (in other words, it is transparent to that radiation).
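The red limit follows from the band gap of silicon: a photon must carry at least the band-gap energy to free an electron. A short worked sketch (the 1.12 eV figure is the usual room-temperature value; the exact cutoff varies between devices):

```python
# E[eV] = 1239.84 / lambda[nm]; below the band-gap energy, silicon is transparent.
SI_BAND_GAP_EV = 1.12  # approximate band gap of silicon at room temperature

def photon_energy_ev(wavelength_nm):
    return 1239.84 / wavelength_nm

def is_absorbed(wavelength_nm):
    """True if the photon carries enough energy to create a photo-electron."""
    return photon_energy_ev(wavelength_nm) >= SI_BAND_GAP_EV

cutoff_nm = 1239.84 / SI_BAND_GAP_EV
print(round(cutoff_nm))   # ~1107 nm: consistent with the ~1000 nm limit above
print(is_absorbed(650))   # True  (red light)
print(is_absorbed(1200))  # False (near-IR passes through)
```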
When the exposure time ends, by applying the right voltages to the electrodes in a very precise sequence, we can move all the electrons stored in each pixel well to the adjacent one, in a process called shifting. This process must be repeated as many times as there are rows and columns in the CCD. Electrons are transported this way from the upper rows downward (vertical shifting) until they reach the lowest row; there they are discharged into another shift register and moved horizontally (horizontal shifting) until they reach the charge node, where they are measured and converted to a voltage, which is then sent to the output amplifier. This voltage is available on the output pin of the CCD for subsequent processing.
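The readout sequence just described can be sketched as a toy model (an illustration of the shifting order only, not a device driver): each vertical shift drops the bottom row into the serial register, which is then clocked pixel by pixel into the charge node.

```python
def read_out(wells):
    """Destructively read a 2-D grid of charge packets, returning pixel
    values in the order the output amplifier would see them."""
    rows = [row[:] for row in wells]   # work on a copy: readout empties the array
    measured = []
    while rows:
        serial_register = rows.pop()   # vertical shift: bottom row drops out
        while serial_register:
            # horizontal shift: one charge packet reaches the charge node
            measured.append(serial_register.pop())
        # the remaining rows have all moved one step closer to the register
    return measured

image = [[1, 2],
         [3, 4]]
print(read_out(image))  # [4, 3, 2, 1]
```

Note the order: the bottom row is read first, and within each row the pixel nearest the charge node comes out first.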
This analogue signal (a voltage) is proportional to the number of electrons and must be transformed into a digital number by an analogue-to-digital converter (ADC); sometimes an interface, like an emitter follower or an amplifier, is connected between the CCD output and the ADC. Now the information from each pixel can be fed into a computer, stored and processed in order to reconstruct the image of the object. This image, thanks to the Internet, can be shared world-wide with the scientific community and the general public for subsequent analysis.
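The digitization step can be sketched as follows; the gain (electrons per ADU) and the converter width are illustrative assumptions, not standard values:

```python
def electrons_to_adu(n_electrons, gain_e_per_adu=2.0, adc_bits=16):
    """Map an electron count to a digital number (ADU), clipped to ADC range."""
    max_adu = 2 ** adc_bits - 1
    adu = int(n_electrons / gain_e_per_adu)
    return min(adu, max_adu)  # values above the range saturate the converter

print(electrons_to_adu(10_000))     # 5000 ADU
print(electrons_to_adu(1_000_000))  # 65535 ADU: the converter saturates
```

A practical consequence: even if the pixel well holds more electrons than `gain * max_adu`, the extra charge is lost in digitization, so gain and ADC width are chosen to match the well capacity.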
There are two kinds of CCD, depending on the pixel arrangement: a single line or a matrix. A linear CCD has only one row of pixels (figure 6A), while a matrix CCD has an n-by-m array of sensitive picture elements (figure 6B). Linear CCDs are used only sporadically in Astronomy; an example is the camera on board the Viking Mars Lander. Some imaging techniques, like Time Delay Integration and Drift Scan, can use linear arrays. For most imaging tasks in Astronomy, a matrix CCD is better.
To better understand how a CCD works, we can make a rain-bucket analogy (figure 7). Rain drops are photons, buckets are storage wells and conveyor belts represent shift registers. During the exposure time, each bucket is filled with rain drops. When the exposure ends, each bucket is emptied into the adjacent one; conveyor belts move the buckets vertically and horizontally toward the last one, which acts as the charge node and is where the water (the electrons) is weighed and made available to the external circuit.
2.2 - DEVELOPMENT OF THE CCD
The silicon technology era began with the invention of the transistor in 1948
[Massey 2005]. Soon the property of silicon to detect light, converting photons to electrons, was discovered; but the very high fabrication cost of semiconductors prevented the further development of silicon light sensors. During the 1960s, some devices were fabricated for military and industrial purposes, but their low efficiency and the need for very expensive and complex electronics left them mainly as “technological curiosities”. The invention of integrated circuits increased the interest in silicon-based imaging devices, because the complexity and size of the supporting electronic circuits were dramatically reduced. In the late 1960s and early 1970s, many photosensitive transistors were integrated into linear image arrays; a short time later, a matrix image sensor was built. 1974 was the year when the first CCD was placed at the focal plane of a telescope: a 100 by 100 pixel sensor made by Fairchild [Fairchild Imaging]. In figure 8 we can see the first astronomical CCD image ever taken (the full Moon). That year a new revolution in Astronomy began, like the one started when Galileo Galilei first observed through the eyepiece of a telescope in 1610.
The resolution and sensitivity of the first CCD sensors were poor; if we compare the images in figures 8 and 9, the difference is clear: the image of the M51 galaxy taken recently by the Hubble Space Telescope is orders of magnitude better. To reach this kind of quality, a little more than 30 years of development was needed.
Astronomy is the science of collecting light from distant objects, and many physical processes can be identified and investigated because each one emits light at a different wavelength. Until the advent of solid-state imaging, the only way to take pictures of the Universe was with films and photographic plates. The physical and chemical properties of the silver halide grains used in photography make them very sensitive to the blue and near-UV part of the electromagnetic spectrum [Ferreri 1977]. Photographic plates are much more efficient than the eye because they can integrate the image during a long exposure time (figure 10). Silicon, however, is much more sensitive than silver in the red and near-IR region of the electromagnetic spectrum; this, and a better quantum efficiency (the capacity to convert photons to electrons), gives us the possibility of imaging, for example, hydrogen emission lines with a much shorter exposure time (10 to 100 times less).
Fairchild among others. Professional CCDs are manufactured by Marconi, SITe and EEV,
among others.
response, two techniques were developed: deep depletion (figure 13a) and virtual phase (figure 13b).
A deep depletion device is essentially a back-side illuminated CCD, but the sensitive silicon is thicker than that of a thinned CCD, though thinner than that of a front-side illuminated CCD. This gives the incoming red photons a better chance of being absorbed, and the pixel well is also wider. To avoid recombination along their path toward the storage well, a higher voltage is applied to the electrodes. Blue photons can be directed easily to the storage well by the enhanced electric field produced by the electrode.
A virtual phase CCD is a front-illuminated CCD which, instead of having three electrodes, has just one. This reduces the blocking area encountered by photons. Blue light can then produce photo-electrons which can be directed to the well, because they are not absorbed by the electrode structures as in a normal front-side CCD. This is the approach developed by Texas Instruments. Figures 13a and 13b represent the spectral response of a deep depletion and a virtual phase CCD respectively. The horizontal axes are approximately on the same scale. We can see a nearly flat response from 350 to 800 nm in the Texas Instruments technology, but the deep depletion CCD has higher quantum efficiency in the visible region of the spectrum.
Table 1: Comparison of CCD and photographic plate properties.
As we can see in figure 14, there is a great difference between a photographic plate image (figures 14-a, 14-b), a CCD image taken from the Earth's surface (figure 14-c) and a CCD image taken from space (figure 14-d) of the same object.
Imaging with a CCD is not a straightforward task. Before the final image is ready to be analysed or printed for the public, many intermediate images must be acquired.
First of all, we have to take a Dark Frame: an image taken with the shutter closed, with the same exposure time as the science image and at the same temperature, so that only the thermally generated signal is recorded. To get a better figure for the dark (thermal) signal we have to take several such images and average them; the result is called a Master Dark. A new Master Dark is not necessarily needed each night if the telescope and CCD conditions do not change. If the CCD is cooled to very low temperatures (100 °C below zero), the dark current is so low as to be negligible and there is no need for a dark frame; but at higher temperatures we must have one, because the dark current level is significant (figure 15).
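Building a master dark as described above is a per-pixel average of several equally sized dark frames. A self-contained sketch using plain nested lists (real pipelines use array libraries, but the arithmetic is the same):

```python
def master_dark(dark_frames):
    """Pixel-by-pixel mean of a list of equally sized dark frames."""
    n = len(dark_frames)
    rows = len(dark_frames[0])
    cols = len(dark_frames[0][0])
    return [[sum(f[r][c] for f in dark_frames) / n for c in range(cols)]
            for r in range(rows)]

# Three tiny 2x2 dark frames with slightly different thermal noise:
darks = [[[10, 12], [11, 9]],
         [[12, 10], [9, 11]],
         [[11, 11], [10, 10]]]
print(master_dark(darks))  # [[11.0, 11.0], [10.0, 10.0]]
```

Averaging suppresses the random part of the thermal signal while keeping the fixed pattern that must be subtracted from the science frames.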
A zero-exposure-time image, called a Bias Frame, is then taken. In this image we have all the electrons generated by the internal electronics: electroluminescence (figure 24a), pixel defects like hot spots (figure 21) and black pixels (figure 22), bad columns and cosmic rays (figures 16, 25). A good Bias is the average of several images (a Master Bias). A professional-grade CCD shows an almost uniform bias frame (figure 16a), while a lower-quality CCD shows many defects (figure 16b).
The Flat Frame is an exposure of a very uniformly illuminated source. This can be done by taking a picture of an evenly illuminated screen inside the dome, a light box placed over the telescope, the twilight sky or a starless patch of the sky [Bartali 2005]. The exposure time for the flat frame must be short enough to optimize the very expensive telescope time (we want to take as many science images as possible, not spend all the time taking calibration frames), but long enough to reach about 30 to 50% of the full well capacity (half saturation). The temperature of the CCD during flat-frame imaging must be the same as that of the raw image. Flats must also be corrected with dark and bias frames. Averaging a set of Flats is the normal technique to obtain a Master Flat. Flat frames should be taken each observing session. Flat frames show, basically, the differences in sensitivity between pixels, and all the defects due to dust in the optics (CCD and telescope), vignetting and fringes (figures 17a and 17b). Finally, the raw image is the exposure of the object of interest (figure 18). It must be taken with the telescope and the CCD under the same conditions of temperature and exposure time as the auxiliary frames: dark, flat and bias (remembering that the bias is a zero-exposure frame). Sometimes, to reduce the noise, several exposures (with much shorter exposure times) of the same object are averaged
together in a technique called Stacking. Now all the images are stored in the memory of the computer and are ready to be processed. As we can see in figure 20, we first have to correct the raw image and the flat frame. To do this, we subtract the dark frames (or Master Dark) and the bias (or Master Bias) from both the Flat (or Master Flat) and the Raw frames. Then we divide the resulting corrected Raw by the corrected Flat. The resulting image (figure 19) is called a Science Image and is theoretically free of defects and ready to be further analysed. A great difference can be seen between the raw and science images.
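The reduction steps just described can be sketched per pixel (frames reduced to flat lists of pixels for brevity; the flat is normalized to a mean of 1 so the science image keeps its counts, a common convention, though details vary between pipelines):

```python
def calibrate(raw, dark, bias, flat):
    """science = (raw - dark - bias) / flat_norm, pixel by pixel, where
    flat_norm is the dark/bias-corrected flat scaled to a mean of 1."""
    corr_flat = [f - d - b for f, d, b in zip(flat, dark, bias)]
    mean_flat = sum(corr_flat) / len(corr_flat)
    return [(r - d - b) / (cf / mean_flat)
            for r, d, b, cf in zip(raw, dark, bias, corr_flat)]

# A 3-pixel example: the middle pixel is 20% less sensitive, which shows up
# in both the raw frame and the flat, so the division removes it.
raw  = [1000, 830, 1000]
dark = [50, 50, 50]
bias = [100, 100, 100]
flat = [10150, 8150, 10150]   # the flat exposure shows the same 20% dip

print([round(v, 1) for v in calibrate(raw, dark, bias, flat)])  # uniform result
```

The corrected image comes out uniform: the sensitivity variation recorded by the flat has cancelled out of the science pixels.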
Even when, at first sight, it seems a very difficult task, taking an image with a CCD is relatively easy thanks to the increased (and increasing) computer power and automation technologies. Today images can be taken and retrieved just by sending a command to a remote control computer and the telescope camera (remotely operated and robotic telescopes).
Problem: Hot spots
Some pixels can sit near their saturation value, or at some specific value, even when they are not exposed to light; in this case we call them hot spots, and they appear in the raw image as bright, star-like dots (figure 21-a). They appear in all calibration frames and, of course, in the raw image. If a star's light falls exactly on a hot spot, no usable information is available. Possibly a certain number of photo-electrons raise the level of that pixel, and in this case it is possible to rescue some information, but it is not reliable. Bright columns can also appear (figure 21-b), produced by the leakage of charges during vertical shifting: some electrons are left behind and increase the charge accumulated in the pixel wells of the rows below the defective pixel.
Solution: subtract, or set to zero, the value of that pixel. If we know that a star falls on that pixel, a second image with a slight position shift is taken and subtracted from the other.
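A common variant of this repair (an alternative to zeroing the pixel) is to replace the known hot pixel with the median of its neighbours; the positions of hot pixels are read off the dark or bias frames. A sketch:

```python
import statistics

def repair_hot_pixel(img, r, c):
    """Return a copy of img with pixel (r, c) replaced by the median of its
    (up to 8) neighbours. Assumes (r, c) is a known hot-pixel location."""
    out = [row[:] for row in img]
    neighbours = [img[rr][cc]
                  for rr in range(max(0, r - 1), min(len(img), r + 2))
                  for cc in range(max(0, c - 1), min(len(img[0]), c + 2))
                  if (rr, cc) != (r, c)]
    out[r][c] = statistics.median(neighbours)
    return out

frame = [[10, 11, 12],
         [12, 9999, 11],   # a hot spot near saturation
         [10, 12, 11]]
print(repair_hot_pixel(frame, 1, 1)[1][1])  # 11.0
```

As the text notes, the repaired value is cosmetic: any real starlight that fell on the hot pixel is not recoverable this way.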
Problem: Blooming
If the number of photo-electrons is near the full well capacity, there is a risk that some of them overflow into adjacent pixels, because the insulating regions (the separations between pixels) are very small; this effect is known as blooming. The raw image then contains vertical white streaks. The length and width of a streak depend on the exposure time and on the brightness of the star; both can produce too many photo-electrons, and an overflow occurs (figure 23). This overflow can extend over many pixels.
Solution: reduce the exposure time to the minimum needed for a good signal-to-noise ratio. Alternatively, take more images of the same field with some position shift and then subtract one from the other; this way, objects hidden by the overexposed or bloomed pixels become visible again. CCDs with a large well capacity are less affected by this problem. If, for example, a very faint object lies too near a much brighter one, like a satellite close to its parent planet, two sets of images must be taken: one over-exposing the planet in order to enhance the visibility of the satellite, and a second, with a much shorter exposure, to pick up planet details (the satellites will surely not be visible in this image); stacking both images then makes all the objects visible.
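The two-exposure trick can be sketched as a per-pixel merge: keep the long exposure where it is valid, and substitute the short exposure, scaled by the exposure-time ratio, where the long one saturated or bloomed. The threshold and values are illustrative assumptions:

```python
def combine_exposures(long_exp, short_exp, exposure_ratio, saturation=65535):
    """Per-pixel merge: where the long exposure saturated (or bloomed),
    substitute the short exposure scaled up by the exposure-time ratio."""
    return [l if l < saturation else s * exposure_ratio
            for l, s in zip(long_exp, short_exp)]

long_exp  = [120, 65535, 65535, 300]   # planet pixels saturated/bloomed
short_exp = [2, 900, 1000, 5]          # 60x shorter exposure
print(combine_exposures(long_exp, short_exp, 60))  # [120, 54000, 60000, 300]
```

The faint satellite survives from the long exposure while the planet's surface detail is recovered from the short one, all in consistent units.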
Problem: Glowing
There are two main causes of glowing in a CCD: radioactivity and the heat produced by the output amplifier. The former arises because some material used to manufacture the CCD or its glass window may be weakly radioactive, and this radioactivity can induce the release of photoelectrons, producing a glow in some section of the image (figure 24-a). The position of the internal output amplifier is very important, because it is one of the parts working at the highest temperature due to its complexity. If it is placed under, or too close to, the imaging array, it can produce extra dark current in the nearby pixels (figure 24-b).
Solution: Calibration frames can eliminate this problem: the dark frame shows the same glowing as the raw images, so when they are subtracted the glowing disappears. Unfortunately, if the glow arises from radioactivity, it is not always visible in the calibration frames. Another solution may be to turn off the CCD, wait a few seconds, then turn it on again and re-expose the image. For this reason, manufacturers place the output amplifier as far as they can from the image area. Turning off the output amplifier during the exposure time (a procedure available for most CCDs) eliminates the problem if it is thermal; but if it is produced by material radioactivity, there is no way to anticipate when and where it will appear.
Problem: Reflection
CCD electrodes are metallic and metals are very reflective, even when treated to be semitransparent so that photons are able to pass through; some fraction of the photons therefore never reaches the sensitive area, because they are reflected off the sensor. The protective glass window over the sensitive area is also reflective.
Solution: antireflective coatings over the glass window and the electrodes help to reduce the number of reflected photons; this is done during the manufacturing process.
spectral response, a compromise (to obtain both responses, in the blue and in the red) is to use a deep depletion CCD or a virtual phase device.
Solution: Analysis of the Point Spread Function (PSF) of the image can show whether a feature is a real star or a cosmic-ray hit. The PSF of a star tends to be Gaussian, while a cosmic ray has a much sharper profile. The probability of cosmic rays landing at the same position in all frames (calibration and raw) is nearly zero, so during the processing phase they can easily be eliminated. If a real star falls under a cosmic ray, the only way to see it is to re-expose the image.
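The PSF argument can be reduced to a minimal sharpness test: a star's Gaussian-like profile spreads its flux over several pixels, while a cosmic-ray hit deposits nearly all of its charge in one. The 0.7 threshold below is an illustrative assumption, not a standard value:

```python
def looks_like_cosmic_ray(profile, threshold=0.7):
    """True if a single pixel holds more than `threshold` of the total flux."""
    return max(profile) / sum(profile) > threshold

star       = [5, 40, 100, 40, 5]   # smooth, Gaussian-like cross-section
cosmic_ray = [1, 2, 180, 3, 1]     # almost all charge in one pixel
print(looks_like_cosmic_ray(star))        # False
print(looks_like_cosmic_ray(cosmic_ray))  # True
```

Real pipelines use more robust statistics (and the multi-frame coincidence test described above), but the discriminating idea is the same.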
Problem: Focusing
Focusing an image on a CCD is not an easy task; any small deviation can produce poor-quality images. The full resolution of the optical system cannot be achieved, and precise observations like those needed for photometry and astrometry cannot be done. If we are exposing the CCD through filters, the correct focus is not necessarily the same for each one, due to the transmission properties of the filters and their mechanical construction (filter thickness) and mounting.
Solution: Take test images and process them until the best focus is obtained. We have to re-focus the system each time we use a different filter; in addition, all calibration frames must be taken for each filter. Several methods are used for focusing, but for all of them the process is the same: take a short exposure, process the image and move the focuser until the sharpest image is obtained. If we know how much the system is out of focus, we can apply a mathematical algorithm to correct for it.
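The focusing loop just described can be sketched as "step the focuser, score the test image, keep the sharpest position". The `take_exposure` function is a hypothetical stand-in for the camera/focuser interface, simulated here with synthetic data; the gradient-energy metric is one common choice among many:

```python
def sharpness(image):
    """Gradient-energy metric: sharper images have stronger pixel gradients."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def autofocus(take_exposure, positions):
    """Return the focuser position whose test image scores sharpest."""
    return max(positions, key=lambda pos: sharpness(take_exposure(pos)))

# Simulated optics: best focus at position 3 gives the strongest gradients.
def take_exposure(pos):
    blur = abs(pos - 3) + 1
    return [[0, 100 // blur, 0]]

print(autofocus(take_exposure, range(7)))  # 3
```

In a real system each call to `take_exposure` would command the focuser, wait for it to settle and read a short test frame, but the search logic is the same.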
Problem: Mounting
CCDs are very fragile devices and must be handled with extreme care, especially large-format thinned ones. During mounting, a small bend can change their optical and electronic properties; in the worst case, prolonged fatigue can produce a rupture. If they are not mounted correctly, the telescope focal plane may not remain parallel to the CCD surface, so the image can be out of focus and a point-like star will be seen as a comet.
Solution: mount and handle with extreme care, and only by skilled people.
Problem: Dust
Dust particles can be present on all optical surfaces, from the primary mirror to the CCD window. It is impossible to avoid them. In the flat-frame image they appear as unfocused donuts of different diameters, depending on their size (figures 17-a, 17-b).
Solution: Flat-field frames show the dust particles, and they disappear during image processing.
Problem: Vignetting
This is a very common aberration in astronomical images; it consists of a progressive darkening from the centre to the corners (figure 30). It is caused by the obstruction of the image cone, formed by the objective lens or the primary mirror, by some mechanical part (filter, focuser, mirror support, CCD support, etc.), so that the full image is not formed on the focal plane. It can also be caused by poor alignment of the CCD with respect to the focal plane. If the CCD size does not match the dimension of the image on the focal plane, vignetting appears.
Solution: first of all we have to know, or calculate, the image size on the focal plane of the telescope; short focal lengths produce smaller images than long-focal-length telescopes. Second, we have to know the exact shape of the image cone, but this is difficult in large telescopes due to the great number of secondary and auxiliary mirrors present. Each of these mirrors, and their supports, must match the size of the image cone. The filter dimensions and the filter support can also play a significant role in vignetting, so they have to be placed as close as possible to the CCD in order to avoid any light obstruction.
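The first step of the solution can be worked out directly: the linear size of an object's image on the focal plane is the focal length times the object's angular size in radians (equivalently, the plate scale is 206265 / f arcseconds per millimetre). A short sketch:

```python
import math

def image_size_mm(focal_length_mm, angular_size_arcsec):
    """Linear image size on the focal plane, small-angle approximation."""
    return focal_length_mm * math.radians(angular_size_arcsec / 3600.0)

def plate_scale_arcsec_per_mm(focal_length_mm):
    return 206265.0 / focal_length_mm

# The full Moon (~1800 arcsec) on a short vs a long focal-length telescope:
print(round(image_size_mm(1000, 1800), 2))   # ~8.73 mm
print(round(image_size_mm(10000, 1800), 1))  # ~87.3 mm
```

Comparing this size to the CCD's physical dimensions shows immediately whether the sensor is under- or over-matched to the image, before any vignetting from obstructions is even considered.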
3 - COOLING SYSTEMS
The electronic interface must be able to hold the desired temperature as constant as possible (to 0.1 °C or better) during the exposure time.
This cooling method is simple to operate and install: we only need a programmable current-regulated power supply, a temperature sensor placed as near as possible to the CCD, and a control circuit. Power consumption and overall dimensions are small. Due to the relatively low cost of such a system, it is mainly used by amateur astronomers.
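The control loop implied here can be sketched as a proportional controller driving the cooler current from the error between set point and sensor reading. The gains and the toy thermal model below are illustrative assumptions, not values from any real camera:

```python
def control_step(temp, setpoint, gain=0.5, max_current=3.0):
    """Cooling current in amps for one control iteration (0..max_current)."""
    error = temp - setpoint          # positive when the CCD is too warm
    return min(max(gain * error, 0.0), max_current)

# Toy thermal model: each amp of cooler current removes 2 degC per step,
# while the environment leaks 0.5 degC per step back in.
temp = 20.0
for _ in range(40):
    temp += 0.5 - 2.0 * control_step(temp, setpoint=-10.0)

print(abs(temp - (-10.0)) < 1.0)  # True: held near the set point
```

A purely proportional loop settles slightly off the set point (the residual error supplies the holding current); real controllers add an integral term to remove that offset.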
refill and exhaust. The heat generated by the sensor is enough to boil the nitrogen, which is then expelled through this tube. A glass window, not in contact with the CCD, protects the sensor and prevents the formation of frost. A separate chamber between the glass window and the CCD is held at very low pressure, near a perfect vacuum. To maintain the temperature at a constant level during the exposure time there is also an electrical heater (normally a resistor); this way, temperature variations are limited to +/- 0.1 °C or less.
The process of lowering the temperature takes some time, no more than a few degrees per minute; otherwise the mechanical stress due to the different expansion coefficients of the materials could destroy the sensor. The CCD must be kept at low temperature as long as possible, precisely to avoid such stresses, but this implies constantly refilling the dewar with liquid nitrogen, normally every 24 hours.
4 – CCD APPLICATIONS
In this section we will describe the main features of CCDs specifically designed for the detection of three wavelength ranges: visible (400 to 700 nm), infrared (800 nm to 24 micron) and ultraviolet (10 nm to 380 nm).
galaxies and to search for asteroids, mid- or large-size pixels can be used. Large telescopes have an image area, measured at the focal plane, several centimetres wide. To take full advantage of that, for wide-field imaging, we have to form a mosaic of several devices (figure 38); but there are often restrictions in their fabrication, because of the position of the internal control circuitry, so at most we can have a CCD buttable on three sides. In other words, a mosaic camera offers the possibility of having almost any number of CCDs, and billions of pixels, available, but the resulting image is not continuous: it shows gaps (figure 39). To avoid these gaps it is necessary to take a second image of the same field, but with the camera rotated 90 degrees or the telescope pointing slightly shifted from the original one. This implies doubling the time needed to obtain the image, and a lot more processing time. Computer memory, processing software and power needs are also increased.
in the visible (the blackbody peak emission depends on the object's temperature: the lower the temperature, the longer the wavelength emitted).
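The parenthetical above is Wien's displacement law: the blackbody peak wavelength is b / T, with b ≈ 2898 µm·K. A quick worked sketch of why cool objects are missed by visible-light CCDs:

```python
WIEN_B_UM_K = 2898.0  # Wien displacement constant, micron-kelvin

def peak_wavelength_um(temperature_k):
    return WIEN_B_UM_K / temperature_k

print(round(peak_wavelength_um(5800), 2))  # ~0.5 um: the Sun, visible light
print(round(peak_wavelength_um(300), 1))   # ~9.7 um: room-temperature objects, mid-IR
```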
CCDs for IR observation are more difficult to build because materials other than silicon are needed in order to have the required spectral sensitivity. Another difficulty is the need to trap photons with much less energy than those of visible light. For this reason, until a few years ago IR CCDs were built with a reduced number of relatively large pixels. Today we have IR CCDs large enough (though smaller than those for the visible spectrum) that we can build mosaic cameras like the one depicted in figure 38.
For visible-light observation, the CCD temperature must be kept as low as possible to avoid dark-current generation, independently of the wavelength observed; for IR detection, however, the working temperature also depends on the wavelength and on the sensitive materials used.
We will briefly describe the most common materials used, and their performance, for IR CCD arrays [Hoffman, 2004]. IR sensors are made in hybrid form: they have some electronic circuitry inside, normally the read-out interface, to avoid an increase in noise. This approach is similar to CMOS imaging sensor technology, widely used in consumer and industrial applications but not used in professional Astronomy (with some exceptions).
4.2.2 – “HgCdTe” DETECTORS
Some of the best, and most used, materials for IR detection are alloys of Mercury (Hg), Cadmium (Cd) and Tellurium (Te). Depending on their cut-off wavelength, these detectors are classified as short-wave or mid-wave.
• Short wave: their spectral response is very uniform from 0.85 to 3.2 micron, with
sharp cut-on and cut-off curves; they reach 70% to 80% quantum efficiency, but
with an antireflective coating they can reach up to 95%. By modifying the quantity
of Cadmium, Zinc and Tellurium in the substrate, the cut-on wavelength can be
extended into the visible spectrum. The operating temperature is about 100 K.
Figure 42 shows a graph of their spectral response and an image of the devices.
• Mid wave: their spectral response is very uniform up to 5.2 micron, with a sharp
cut-off curve; they reach 80% quantum efficiency, but with an antireflective
coating they can reach up to 95%. The operating temperature is about 70 K.
Figure 43 shows a graph of the spectral response of a CCD made by Raytheon.
4.2.4 – “Si:As IBC” DETECTORS
Detection of the longest IR wavelengths, up to 25 micron (figure 45-a), needs a very
different combination of materials: silicon (Si), arsenic (As), boron (B) and carbon (C).
Noise and operating temperature must be kept extremely low; the working temperature is
about 10 K. Sensor arrays of 2k x 2k pixels are the current state of the art (figure 45-b),
but the largest Si:As array tested (by Raytheon) has one million pixels. Their spectral
response is very linear and reaches 80% quantum efficiency with one layer of
antireflective coating (solid line in figure 45-a); the dashed line represents the spectral
response without the coating. Sensors of this kind have been selected for the James Webb
Space Telescope and are operating in the Spitzer space telescope.
To avoid interference from the heat generated by the telescope, control electronics,
motors, etc., an IR telescope must be kept as cool as possible. The air inside the dome and
the telescope structure must be kept at the same temperature as the air outside the
observatory; to achieve this, a series of ventilation windows on the dome is opened and
closed by a control computer, starting several hours before the observing session.
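Why every warm surface matters can be seen from Planck's law: a dome at roughly room temperature radiates most strongly at the very mid-IR wavelengths these detectors observe. A minimal sketch, with rounded physical constants:

```python
import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return 2.0 * H * C**2 / wavelength_m**5 / math.expm1(x)

# Scan 1-50 micron: a ~280 K dome peaks near 10 microns (Wien's law),
# squarely inside the mid/long-wave IR observing bands.
peak_wavelength = max(
    (planck_radiance(w * 1e-7, 280.0), w * 1e-7) for w in range(10, 500)
)[1]
```

The 280 K dome temperature is an illustrative assumption; any temperature difference between dome and outside air also degrades the seeing, which is why the ventilation described above matters twice over.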
used. The best technology available today is the so-called microchannel detector. These
detectors are a hybrid between a CCD and a photomultiplier tube, taking the best from
each one. As we saw in the section above (about IR detectors), not all wavelengths can be
detected with the same material, so we cannot have a universal sensor for the full UV
spectrum, but need several, made of different semiconductor materials.
UV photons strike the sensitive material of the photocathode, which in turn releases a
photo-electron (figure 47). This photo-electron enters a narrow, curved semiconductive
channel. On its path toward the anode it impacts the inner curved walls of the channel
several times, every time a bend is encountered. After each impact more electrons are
released, due to a near total internal reflection effect, and each of them hits the wall at
the subsequent bend, and so on. This process continues until all electrons are trapped by
the anode electrode, which is held at a high positive voltage. The electron avalanche has a
gain of up to 100 million, depending on the number of bends in the channel. The quantum
efficiency is low (< 30%), but the sensitivity is very high. The channel can be made very
thin, so if we place several channels side by side we can detect not only photons coming
from a single specific point, but recreate the image of the emitting object. A device with
these characteristics is called a microchannel array (figure 48).
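The gain quoted above grows geometrically with the number of wall impacts. A toy model (the yield of two secondary electrons per strike is an illustrative assumption, not a value from the text):

```python
def mcp_gain(secondaries_per_strike, n_strikes):
    """Total multiplication inside one microchannel: each wall strike
    multiplies the electron count by the secondary-emission yield,
    so the gain is the yield raised to the number of strikes."""
    return secondaries_per_strike ** n_strikes

# With ~2 secondary electrons per strike, about 27 strikes along the
# bent channel already exceed the 10^8 gain mentioned in the text.
```

This is why the gain depends so strongly on the number of bends: each extra bend adds one more multiplication stage.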
A semiconductor microchannel array consists of a thin plate pierced with thin holes
(channels); each channel is, for all practical purposes, a miniature photomultiplier tube
with a diameter of about 25 microns. The top surface is kept negatively charged with
respect to the bottom surface by the application of a high voltage, and is coated with a
photoemitting material designed to have its peak sensitivity at the wavelength of interest.
The flux of accelerated electrons spreads out from the bottom, where it is directed to a
more conventional detector. The bottom of the plate is an array of positively charged
anode electrodes. Electrons generated inside each channel are shifted out serially (as in a
normal CCD) and are available at the output of an amplifier. In this way we know the
exact position of each detected photon.
The amplification (multiplication) factor can be increased by facing the anode-array
outputs of one device to the cathode (channel entrance) of a second one.
The advantages of microchannel arrays are many, and they are used on all UV space
telescopes; the largest ones are the detectors of the GALEX space telescope. Some of
their best features are:
• Very high gain.
• Compact size.
• Fast time response.
• Two-dimensional detection.
• High spatial resolution (25 microns).
• Stable operation even in the presence of magnetic fields.
• High sensitivity to energetic particles and photons, which makes them a suitable
choice for gamma rays, X rays, UV and neutrons.
• Low power consumption.
• High sensitivity (depending on the cathode material).
• Low dark current.
Their quantum efficiency is low, about 20 to 30%.
Figure 49-a shows a picture of a microchannel array made by Hamamatsu, while figures
49-b and 49-c show the two UV detectors onboard the GALEX space telescope.
and gas (figure 51-a) let through only a small fraction of the light generated by the bright
star clusters that lie behind and inside those clouds. If we observe in infrared light (figure
51-b), thousands of hot stars and many young star clusters clearly appear: energetic UV
light from hot stars heats the dust particles and, as a consequence, they glow, emitting IR
light. Only the brightest star clusters are visible in figure 51-a. Complementing these
images with others at different wavelengths, for example far UV and X-ray, we can infer
the age of those star clusters and their evolutionary stage: if their brightness is greater in
UV and X-ray than in IR they are young, whereas if they glow more in red and IR they
are old.
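The age diagnostic just described is, at heart, a flux comparison. As a purely illustrative sketch (the function and its threshold are not from the text):

```python
def cluster_age_class(uv_xray_flux, red_ir_flux):
    """Crude diagnostic from the text: clusters dominated by UV/X-ray
    light are young (hot massive stars still shining), while clusters
    glowing mostly in red and IR light are old."""
    return "young" if uv_xray_flux > red_ir_flux else "old"
```

In practice astronomers fit full spectral energy distributions rather than comparing two bands, but the underlying idea is this one.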
The advantage of space observation is the possibility of taking diffraction-limited
pictures, thanks to the absence of the blurring atmosphere. For example, the light-gathering
power of the Spitzer telescope is less than that of the Mt. Palomar Schmidt telescope, yet
the resolution of figure 52-a (from the Spitzer space telescope) is much higher than that of
figures 52-b and 52-c, which are digitized images from the Palomar Digital Sky Survey
(DSS) made using red-sensitive photographic plates. These images also show the
enormous difference in sensitivity and quantum efficiency between CCDs and
photographic plates: hydrogen gas ejected from the galaxy nucleus is not visible in figure
52-b and is barely visible in figure 52-c. The two sets of images (figures 52 and 53) tell us
the story of M82: we can infer that it interacted with a nearby galaxy (M81, not shown in
the pictures) and that, as a result, a huge burst of star formation began. The high
brightness of these stars tells us they are young (we can also calculate their age), so the
encounter did not happen long ago. It is thought that the interaction with M81 occurred
about 600 million years ago, because most of the stars observed are young and massive,
as shown by their strong UV emission.
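The diffraction-limited resolution mentioned above follows from the Rayleigh criterion, theta = 1.22 * lambda / D. A quick sketch (the 0.85 m aperture used for Spitzer is an assumed figure, not stated in the text):

```python
import math

ARCSEC_PER_RAD = math.degrees(1.0) * 3600.0  # ~206265

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion for a circular aperture: the smallest
    resolvable angle, 1.22 * lambda / D, converted to arcseconds."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

# An assumed 0.85 m mirror at 8 microns resolves ~2.4 arcsec; in space
# the telescope actually reaches this limit, whereas a ground-based
# photographic survey is limited by seeing and emulsion grain instead.
```

This is why a small space telescope can out-resolve a larger ground-based one despite its smaller light-gathering power.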
In figure 53-a we can see two superimposed images of the galaxy M82 in near-UV
(yellow) and far-UV (blue) light from the GALEX space telescope. Figure 53-b shows the
same galaxy, taken with a UV telescope onboard the Astro-1 mission. Figure 53-c is a
scanned image from a blue-sensitive photographic plate belonging to the Palomar Digital
Sky Survey; the hot gas flowing from the nucleus is barely visible.
CONCLUSIONS
Our knowledge of the Universe is strictly tied to photon detectors for every band of the
electromagnetic spectrum. When Galileo pointed his telescope at the heavens, the
Universe suddenly grew in size, but astronomical observations were still subjective,
depending on the eye of the observer, no matter the size of the telescope.
Photographic plates let astronomers permanently record observations in an objective
way; only instrumental defects and processing errors can affect the images.
Precise measurements of the colours, brightness, size and position of stars and galaxies
became possible with CCDs and photomultiplier tubes (PMTs).
From the advent of astrophotography, in the middle of the XIX century, until today,
astronomers and engineers have worked together to develop devices capable of detecting
visible light, UV, IR, etc. Thanks to this cooperation, we can now observe the Universe at
every wavelength of the electromagnetic spectrum.
Astronomy is today inconceivable without electronics. Silicon-based sensors, computers,
satellites and robotic systems are all tools used by astronomers to increase their
knowledge of the Universe.
We are able to take pictures of the very early Universe, of objects 13 billion light years
away, thanks to the precise understanding and application of the photoelectric effect in
semiconductor materials. The invention of the transistor and the integration of millions of
them into a tiny silicon chip favoured the development of the charge-coupled device
(CCD). Today the CCD is the light sensor par excellence for a wide range of wavelengths
of the electromagnetic spectrum (from gamma rays to far infrared), capable of detecting
almost every incoming photon and converting it into a measurable electric current.
Very-large-scale integration technologies helped the development of larger sensors with
many millions of pixels, increasing the resolution and the field of view of astronomical
images and taking full advantage of the optics of modern large telescopes.
Excellent linearity and wide spectral response let astronomers measure the photometric
properties of stars and galaxies, giving clues to understand how they are born, evolve and
die.
Bulky, power-hungry vacuum-tube technologies are in the course of being replaced by
tiny, low-power silicon sensors with the same, and often much better, performance.
Even though the current state of the art already gives us a near-perfect sensor, many
features will improve in the future: flatter spectral response, greater quantum efficiency,
faster read-out, lower noise, larger pixel counts and selective read-out are characteristics
we will surely see in the next generation of photon detectors.
REFERENCES
CCD section
Atomic structure and semiconductors technologies
[1] Energy Bands:
http://www.tpub.com/neets/book7/24c.htm
[2] Solid State band theory:
http://www.chemistry.adelaide.edu.au/external/soc-rel/content/bands.htm
[3] Photoelectric effect:
http://zebu.uoregon.edu/text/photoe.txt
[4] Physics, Charles Sturt University:
http://hsc.csu.edu.au/physics/core/implementation/9_4_3/943net.html#net2
[5] Drakos N., 1999, Physics 1501 Modern Technology:
http://theory.uwinnipeg.ca/mod_tech/node1.html
[6] Bordes N., 1999, Photonic devices, Australian Photonics CRC,
http://oldsite.vislab.usyd.edu.au/photonics/devices/index.html
[7] Wikipedia 2006, Semiconductors: http://en.wikipedia.org/wiki/Semiconductors
PTE, Periodic Table of Elements:
http://www.dayah.com/periodic/Images/periodic%20table.png
[8] Hepburn C.J., Britney’s guide to semiconductor physics, the basics of
semiconductors:
http://britneyspears.ac/lasers.htm
CCD fundamentals and history
[9] Aikens R., 1991, Charge Coupled devices for quantitative electronic imaging,
IAPPP communication No 44, Jun-Aug 1991.
[10] Richmond M, Introduction to CCDs:
http://spiff.rit.edu/classes/phys445/lectures/ccd1/ccd1.html
[11] Tulloch S., 2006-1, Introduction to CCDs:
http://www.ing.iac.es/~smt/CCD_Primer/Activity_1.ppt
[12] Fairchild Imaging, Fairchild History:
http://www.fairchildimaging.com/main/history.htm
[13] Evolving towards the perfect CCD:
http://zebu.uoregon.edu/ccd.html
[14] Peterson C., 2001, How it works: the charge-coupled device or CCD:
http://www.jyi.org/volumes/volume3/issue1/features/peterson.html
[15] Massey D., 2005, Bell System Memorials - the transistor:
http://www.bellsystemmemorial.com/belllabs_transistor.html
[16] Ferreri W., Fotografia Astronomica, Il Castello, 1977
[32] Kodak Image Sensor Solutions:
http://www.kodak.com/US/en/dpq/site/SENSORS/name/ISSHome
[33] Buil C., 1991, CCD Astronomy, Willmann-Bell Inc., ISBN 0943396298.
[34] Kitchin C.R., 1998, Astrophysical Techniques, IOP Publishing Ltd, ISBN
0750304987.
[35] Howell S., Handbook of CCD Astronomy, Cambridge, 2000.
[36] [DSS] Palomar Digital Sky Atlas: http://archive.stsci.edu/dss/index.html
[37] Bartali R., 2003, Do photographic plates still have a place in professional
Astronomy?
[38] Kodak technical literature (CCD, photographic films and filters); www.kodak.com
[39] Ilford technical literature (film): www.ilford.com
[40] Agfa technical literature (film): www.agfa.com
[41] Texas Instruments technical literature (CCD): www.ti.com
[42] http://www.pinnipedia.org/optics/vignetting.html
[43] http://www.astrocruise.com/geg.htm
[44] http://www.chartchambers.com/whyln2.html
[45] Atmel technical literature (CCD): www.atmel.com
[46] Pfanhauser W., Application notes Roper Scientific gmbh, 2006:
http://www.roperscientific.de/theory.html
[47] Hoffman A., Megapixel detector arrays: visible to 28 micron, Proceedings SPIE
vol. 5167, 2004.
[48] II-VI Inc, Optics manufacturing, 2006: http://www.iiviinfrared.com/opticsfab.html
[49] Chaisson, AT405, 2004: http://138.238.143.191/astronomy/Chaisson/AT405/HTML/
[50] Acreo AB, Infrared Detector Arrays for Thermal Imaging
Tutorial "Infrared Detectors", 2004:
http://www.acreo.se/upload/Publications/Tutorials/TUTORIALS-INFRARED-2.pdf
[51] Teledyne Scientific and Imaging, Infrared and visible FPA, 2006:
http://www.teledyne-si.com/infrared_visible_fpas/index.html
[52] Carruthers G, Electronic Imaging: http://138.238.143.191/astronomy/topics.htm
[53] Clampin M, UV-Optical CCD, STSI, 2001
[54] Bonanno G., New development in CCD technology for the UV-EUV spectral
range, Catania Astrophysical Observatory, 1995.
[55] Galaxy Evolution Explorer, Home page: http://www.galex.caltech.edu/
[56] Spitzer space telescope, home page: http://www.spitzer.caltech.edu/
[57] Hubble Space Telescope, home page: http://hubblesite.org/
[58] UV Astronomy, Wikipedia, 2006: http://en.wikipedia.org/wiki/UV_astronomy
[59] Electro optical component Inc, Silicon Carbide detectors, 2006:
http://www.eoc-inc.com/UV_detectors_silicon_carbide_photodiodes.htm
[60] Timothy J.G., Optical detectors for spectroscopy, 1983, 1983PASP..95..810T:
http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1983PASP...95..810T&db_key=AST
[61] O'Connell R.W., Introduction to Ultraviolet Astronomy, 2006:
http://www.astro.virginia.edu/class/oconnell/astr511/UV-astron-f01.html
[62] Sheppard S.T., Cooper J.A., Melloch M.R., Silicon Carbide Charge Coupled Devices,:
http://www.ecn.purdue.edu/WBG/Device_Research/CCDs/Index.html
[63] Cree Research Inc., Silicon Carbide Semiconductors, 2003:
http://www.mdatechnology.net/techsearch.asp?articleid=174
[75] Optical Society of America, Optics Infobase, 2006:
http://www.opticsinfobase.org/ocisdirectory/040_5250.cfm
[76] Sakaki N, et al., Development of multianode photomultipliers for the EUSO focal
surface detector, International Cosmic Ray conference, 2003:
http://euso.riken.go.jp/publication/icrc28_233.pdf
[77] Breskin A., Ion-induced effects in GEM & GEM/MHSP gaseous
photomultipliers for the UV and the visible spectral range, 2004
http://arxiv.org/ftp/physics/papers/0502/0502132.pdf
[78] Casolino M., Space applications of Silicon photomultipliers: ground
characterizations and measurements on board the
International Space Station with the Lazio experiment, 2006:
http://www.cosis.net/abstracts/COSPAR2006/03209/COSPAR2006-A-03209-1.pdf
[79] Barral J., Study of silicon photomultipliers, 2004:
http://www.stanford.edu/~jbarral/Downloads/StageOption-Rapport.pdf
[82] University of Pisa, Physics Department, Silicon Photomultiplier, 1995:
http://www.df.unipi.it/~fiig/research_sipm.htm
[83] Piemonte C., SiPM: status of the development, 2006:
http://sipm.itc.it/intro/device.html
[84] Ninkovic J., The avalanche drift diode: A back illuminated silicon
photomultiplier, 2006:
http://www.hll.mpg.de/twiki/bin/view/Avalanche/AvalancheDriftDiode
IMAGE CREDITS
Figure 1a, 1b
Atomic energy bands: http://www.tpub.com/neets/book7/24c.htm
Figure 2
CCD geometry (adapted from): http://www.ing.iac.es/~smt/CCD_Primer/Activity_2.ppt
Figure 3
Pixel structure: Kitchin C.R.,1998, Astrophysical Techniques, IOP Publishing Ltd, 1998,
ISBN 0750304987.
Figure 3b
(adapted from): http://www.ing.iac.es/~smt/CCD_Primer/Activity_2.ppt
Figure 4
Pixel size and well capacity relationship: Bartali R., 2006
Figure 5
Silicon absorption depth graphic (adapted from): Howell S., Handbook of CCD Astronomy,
Cambridge, 2000.
Figure 6A
Linear CCD: http://www.fairchildimaging.com/products/fpa/ccd/linear/ccd_191.htm
Figure 6B
Matrix CCD: http://www.fairchildimaging.com/products/fpa/ccd/area/ccd_3041.htm
Figure 7
CCD rain-buckets analogy:
http://www.microscopyu.com/articles/digitalimaging/ccdintro.html
Figure 8
First astronomical CCD image: http://zebu.uoregon.edu/ccd.html
Figure 9
Galaxy M51: http://hubblesite.org/gallery/wallpaper/pr2005012a/800_wallpaper
Figure 10
Image sensors quantum efficiency: Howell S., Handbook of CCD Astronomy, Cambridge,
2000.
Figure 11a, 11b, 11c
Front and back side CCD: Bartali R., 2006
Figure 12
Back and front illuminated CCD comparison: http://www.site-inc.com
Figure 13a
Optoelectronics Databook, 1984, Texas Instruments:
http://www.ti.com
Figure 13b
Deep Depletion CCD (pixel structure):
http://www.ing.iac.es/~smt/redsense/deep_depletion.PDF
CCD42-90 CCD Datasheet, Marconi Applied Technology (QE graph):
http://www.marconitech.com
Figure 14a
Crab Nebula in red light: http://archive.stsci.edu/cgi-bin/dss_form
Figure 14b
Crab Nebula in blue light: http://archive.stsci.edu/cgi-bin/dss_form
Figure 14c
Crab Nebula VLT:
http://www.eso.org/outreach/press-rel/pr-1999/phot-40f-99-normal.jpg
Figure 14d
Crab Nebula HST: http://hubblesite.org/gallery/wallpaper/pr2005037a/800_wallpaper
Figure 15
Dark frames examples:
http://www.frazmtn.com/~bwallis/drk_tmp.htm
Figure 16
Bias frames examples:
http://www.eso.org/projects/odt/Fors1/images/bias.jpg
http://www.carleton.edu/departments/PHAS/astro/pages/knowledgebase/biasdark.html
Figure 17a, 17b
Flat field frame example:
http://www.highenergyastro.com/CVFUN.html
http://www.mso.anu.edu.au/observing/detectors/imager.php
Figure 18
Example of raw image: Bartali 2003
Figure 19
Example of science image: Bartali R., Rosner A., 2003
Figure 20
CCD imaging, basic steps: Bartali R., 2006
Figure 21a
Hot pixel: Bartali R., 2003
Figure 21b
Bright column: Tulloch S., Use of a CCD camera.
Figure 22
Dark streaks: HET609 CDRom, 2006
Figure 23
Blooming example: Bartali R., 2003
Figure 24a
Example of glowing: Buil C., CCD Astronomy
Figure 24b
CCD output amplifier:
http://spiff.rit.edu/classes/phys445/lectures/ccd1/structure_1.gif
Figure 25
Cosmic rays: http://spider.ipac.caltech.edu/staff/kaspar/obs_mishaps/images/cr.html
Figure 26
Frame transfer CCD: http://www.sinogold.com/images/tc237face.gif
Figure 27
Example of an overexposed image: Bartali R., 2003
Figure 28
Oversampled stellar image:
http://sctscopes.net/Photo_Basics/CCD_Camera/Choosing_a_CCD_Camera/CCD_Parameters/ccd_parameters.html
Figure 29
Undersampled stellar image:
http://sctscopes.net/Photo_Basics/CCD_Camera/Choosing_a_CCD_Camera/CCD_Parameters/ccd_parameters.html
Figure 30
Vignetting: http://www.astrocruise.com/geg.htm
Figure 31
Peltier module: An Introduction to Thermoelectrics, Tellurex Corporation, 2006.
Figure 32
Cooling Peltier module: An Introduction to Thermoelectrics, Tellurex Corporation, 2006.
Figure 33
Multistage module: Melcor Thermal Solutions, Melcor Corporation.
Figure 34
Liquid Nitrogen cooler: top – Tellurex Corporation.
Bottom – Buil C., CCD Astronomy, Willmann Bell, 1991.
Figure 35
UBVRI filters:
http://outreach.atnf.csiro.au/education/senior/astrophysics/photometry_colour.html
Figure 36
Eye spectral response:
http://www.marine.maine.edu/~eboss/classes/SMS_491_2003/sound/em-spectrum_human-eye_asu_380x300.gif
Figure 37
100 million pixel CCD:
http://www.dalsasemi.com/news/news.asp?itemID=252
Figure 38
Mosaic camera:
http://www.cfht.hawaii.edu/Instruments/Imaging/CFH12K/images/CFH12K-FP_lr.jpg
Figure 39
Mosaic image: http://www.astro.indiana.edu/~vanzee/SMUDGES/field.html
Figure 40-a
Orion nebula in visible band from Harvard Observatory:
http://138.238.143.191/astronomy/Chaisson/AT405/HTML/AT40506.htm
Figure 40-b
Orion nebula in the IR band from NASA:
http://138.238.143.191/astronomy/Chaisson/AT405/HTML/AT40506.htm
Figure 41-a, 41-b
Si PIN sensor: Hoffman A, Mega pixel detector arrays from visible to 25 micron, 2004
Figure 42-a, 42-b
HgCdTe sensors (SW): Hoffman A, Mega pixel detector arrays from visible to 25 micron,
2004
Figure 43
HgCdTe sensors (MW): Hoffman A, Mega pixel detector arrays from visible to 25 micron,
2004
Figure 44-a, 44-b
InSb sensors: Hoffman A, Mega pixel detector arrays from visible to 25 micron, 2004
Figure 45-a, 45-b
Si:As IBC sensors: Hoffman A, Mega pixel detector arrays from visible to 25 micron, 2004
Figure 46
SiC response: http://www.eoc-inc.com/UV_detectors_silicon_carbide_photodiodes.htm
Figure 47
Channel multiplier, adapted from:
http://www.olympusmicro.com/primer/digitalimaging/concepts/photomultipliers.html
Figure 48
Microchannel structure: Hamamatsu photomultiplier tubes, Hamamatsu Corp., 2006.
Figure 49-a
Microchannel detector: Hamamatsu photomultiplier tubes, Hamamatsu Corp., 2006.
Figure 49-b, 49-c
GALEX UV detectors: http://www.galex.caltech.edu/
Figure 50
M82 HST:
http://hubblesite.org/newscenter/archive/releases/2006/14/image/a/format/large_web
Figure 51-a
M82 visible light (HST):
http://www.seds.org/messier/m/m082.html
Figure 51-b
M82 IR light (HST):
http://www.seds.org/messier/m/m082.html
Figure 52-a
M82 Spitzer:
http://hubblesite.org/newscenter/archive/releases/2006/14/image/i/format/large_web
Figure 52-b
M82 IR DSS: http://archive.stsci.edu/cgi-bin/dss_search?v=poss2ukstu_ir&r=09+55+52.19&d=%2B69+40+48.8&e=J2000&h=15.0&w=15.0&f=gif&c=none&fov=NONE&v3
Figure 52-c
M82 red DSS: http://archive.stsci.edu/cgi-bin/dss_search?v=poss2ukstu_red&r=09+55+52.19&d=%2B69+40+48.8&e=J2000&h=15.0&w=15.0&f=gif&c=none&fov=NONE&v3
Figure 53-a
M82 GALEX: http://www.galex.caltech.edu/GALLERY/GALEX-M82.jpg
Figure 53-b
M82 ASTRO1:
http://www.seds.org/messier/Pics/More/m82a1uv.jpg
Figure 53-c
M82 blue DSS: http://archive.stsci.edu/cgi-bin/dss_search?v=poss2ukstu_blue&r=09+55+52.19&d=%2B69+40+48.8&e=J2000&h=15.0&w=15.0&f=gif&c=none&fov=NONE&v3=