
CHARGE COUPLED DEVICES

TECHNOLOGY
Roberto Bartali

ABSTRACT
The aim of this project is the description and the application of Charge Coupled
Devices as light detectors for different wavelengths, from the UV to the near-infrared part
of the EM spectrum. The reader is first introduced to the technologies involved; then
follows a detailed description of how these sensors work and of their advantages and
drawbacks, sometimes in comparison with other technologies. With such a detailed
description, the reader will be able to understand and select the right sensor for a specific
observing purpose and, moreover, to operate it correctly.
This work is directed at everyone who wants to use state-of-the-art technology in
image detection and also wishes to know what lies behind and inside the black box, called
an imaging device, that he or she is placing at the prime focus of a telescope.

Key words: CCD, Image Processing, Instrumentation.

1 - INTRODUCTION
Astronomy is a science based on light detection. With this in mind, we can recognize
five “Periods in the History of Astronomy”, each one characterized by the kind of light
observed or the sensor used. Not all these periods are as well delimited as, for example,
geologic eras; sometimes they overlap each other.
• Period 1. From prehistory to the second half of the XIX century. The only sensor
available is the human eye. All data collected by the naked eye are recorded on
cavern walls, monolithic monuments, stones, paper, etc. During this period the
knowledge of the electromagnetic (EM) spectrum is restricted to visible light.
• Period 2. From the application of photography (near 1870) to radio wave
detection. The main sensor is the photographic plate and the EM spectrum is a
little wider, including near-UV light. The greater collecting power of telescopes
and the capability to integrate photons for hours expand the known Universe,
which now includes several millions of stars and galaxies.
• Period 3. From radio wave detection (1932) to gamma- and X-ray detection (near
1964). The EM spectrum now includes radio waves. It is clear that celestial
objects emit light at many different wavelengths, not just in the visible.
• Period 4. From high-energy photon detection to the application of the CCD to
Astronomy (1974). The most energetic radiations of the EM spectrum become
available to astronomers. Gamma rays and X rays emitted by stars are now also
observable and measurable.
• Period 5. From the CCD to now. An enormous increase in the sensitivity and
efficiency of the sensors expands the limit of the known Universe almost to its
full size. The full EM spectrum is observable thanks to space telescopes capable
of detecting photons from extremely energetic gamma rays to the UV, bands
impossible to study from the surface of our planet due to their absorption by the
atmosphere. Infrared light is also almost fully blocked by the atmosphere;
telescopes must be kept very cool and placed at high altitude or in very cold
places like Antarctica, so space telescopes orbiting far from the planet are a
better solution.
Our current knowledge of the Universe is based on the development and
enhancement of photon-sensing and measuring technologies. The more sensitive and the
larger the sensors are, the more distant the objects, in space and time, that can be observed
and studied. As the spectral response of detectors increases, more detailed observations of
particular phenomena can be performed. Silicon-based sensors are now available for almost
all wavelengths, from gamma rays to the far IR. In this work we will describe the
technology of light detection from the UV to the near-IR part of the EM spectrum.

2 – CHARGE COUPLED DEVICES (CCD)

2.1 - CCD BASICS


Charge Coupled Devices (CCD) are electronic imaging sensors based on the
Photoelectric Effect (PE). Like any other electronic device, they are mainly fabricated from
a semiconductor material, such as Silicon (Si) or Germanium (Ge).

A semiconductor atom has four electrons in the valence band [1][2]; if we supply
enough energy, one or more electrons are forced to jump to the conduction band, turning
the semiconductor atom into a conducting one, because electrons in the conduction band
can move freely. This is possible only while there is enough energy; otherwise, after a few
nanoseconds, the electron returns to its normal energy state (back to the valence band). The
excitation energy required depends on the size of the band gap between the valence and the
conduction band (figure 1a). The valence band is the outermost filled energy band of the
atom, while the conduction band is the innermost energy level that can be occupied by
energized electrons, so the difference between a conductor, a semiconductor and an
insulator is the width of the valence-conduction band separation (figure 1b).

To turn the semiconductor material into a conductor or an insulator we have to dope
it; doping is the process of introducing impurities (atoms of a different material) into it
[Wikipedia, 2006]. To convert a semiconductor into a conductor, we dope it with a material
whose atoms have more than 4 electrons in the valence band (called donors); normally we
can use the group 15 atoms of the Periodic Table of Elements [PTE]. These atoms all
contain 5 electrons in their valence band: Nitrogen, Phosphorus, Arsenic, Antimony and
Bismuth. The extra (fifth) electron can be moved around easily in the presence of an
external electric field. Semiconductors with impurities like these are called n-type materials.
Conversely, to reduce the conductivity of a semiconductor, making it behave like an
insulating material, we have to dope it with atoms having fewer than 4 electrons in their
valence band. This way there is an excess of positive charge (a hole), and electrons can
easily fill that energy level, so no electric current flow is possible. These materials are
called acceptors and they are listed in the Periodic Table of Elements as group 13 atoms.
They all have 3 electrons in the valence band; five atoms share this property: Boron,
Aluminium, Gallium, Indium and Thallium. Semiconductors doped with one of the above
materials are called p-type materials. The right combination of both (p- and n-types) in the
right place (semiconductor geometry) gives us a working electronic device (figure 2).
After the above introduction to semiconductors, we can now see the structure of a
CCD and how it works. Basically, the structure of a pixel in the CCD is, as we can see in
figures 2, 3 and 3b, formed by a bulk p-type silicon substrate and a thin n-type layer above
it. Another thin insulating oxide layer, separating the electrodes from the n-type silicon,
prevents the trapping of electrons by the electrodes. This structure is really a small
capacitor. A positive voltage on one electrode induces an electric field which creates
electron-hole pairs immediately below it; holes are moved down, deeper into the p-type
silicon, and in this way a depletion zone is generated. When photons arrive and penetrate
the surface of the CCD, they can produce so-called photo-electrons when they are absorbed
by silicon atoms. These photo-electrons are confined in the depletion zone below the
positive electrode; this area is called the pixel well. Electrodes on each side of the well are
negatively biased, or held at a much less positive voltage, so they repel electrons (both
photo-generated and thermal electrons), preventing their diffusion and recombination in the
bulk p-type silicon. The pixel well is thus a storage area and, while the chip is exposed to
light, it fills with photo-generated electrons. Each pixel is isolated from its neighbours by a
thin insulating region called the channel stop (figure 3B); this prevents the overflow of
electrons from one pixel to the next, otherwise we would lose spatial resolution and the
final reconstructed image would become overexposed (the blooming effect).
Photons coming from the object of interest strike the surface of the sensor and are
able to penetrate the silicon up to a certain depth, depending on their wavelength (figure 5).
High-energy photons (shorter wavelengths) are absorbed near the surface, while lower-
energy ones (longer wavelengths) can travel further and are absorbed deeper in the silicon
(figure 2). When one photon (or more) is absorbed by a semiconductor atom, the latter frees
an electron. The freed electron, generated somewhere in the p-type silicon, is moved (by the
electric field of the most positive electrode) and stored in the well. This process continues
as long as the device is exposed to the incoming light, but there is only a limited amount of
room, depending on the thickness of the p-type silicon, the voltage applied to the electrode
and the size of the pixel. Larger pixels can accumulate more photo-electrons than small
ones (figure 4). The maximum number of electrons that can be stored is called the well
capacity.
As we have seen, only photons within some range of wavelengths are absorbed by
the device (figure 5), but this depends on the kind of semiconductor used and on the
physical structure of the CCD, so that graph is only representative. We can see (figure 2)
that short-wavelength photons, below about 400 nm, are reflected by the surface (not
absorbed), and long-wavelength ones, above about 1000 nm, simply pass through the
semiconductor (in other words, it is transparent to that radiation).
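
The ~1000 nm red limit quoted above follows directly from the band gap of silicon: a
photon carrying less energy than the gap cannot lift an electron into the conduction band. A
minimal sketch of the calculation in Python, assuming the commonly quoted room-
temperature band gap of 1.12 eV (a value not given in this text):

# Longest wavelength a detector material can absorb: lambda_max = h*c / E_gap.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # one electron-volt in joules

def cutoff_wavelength_nm(band_gap_ev):
    """Photons with wavelengths longer than this pass through the material."""
    return H * C / (band_gap_ev * EV) * 1e9

print(cutoff_wavelength_nm(1.12))  # silicon: ~1107 nm, matching the ~1000 nm limit above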

When the exposure time ends, by applying the right voltages to the electrodes in a
very precise sequence, we can move all the electrons stored in each pixel well to the
adjacent one, in a process called shifting. This process must be repeated as many times as
there are rows and columns in the CCD. Electrons are transported this way from the upper
rows downward (vertical shifting) until they reach the lowest row; here they are discharged
into another shift register and moved horizontally (horizontal shifting) until they reach the
charge node, where they are measured and converted to a voltage, then sent to the output
amplifier. This voltage is available on the output pin of the CCD for subsequent processing.
This analogue signal (voltage) is proportional to the number of electrons and must be
transformed into a digital number by an analogue-to-digital converter circuit (ADC);
sometimes an interface, like an emitter follower or an amplifier, is connected between the
CCD output and the ADC. Now the information from each pixel can be fed into the
computer, stored and processed in order to reconstruct the image of the object. This image,
thanks to the Internet, can be shared worldwide with the scientific community and the
general public for subsequent analysis.
There are two kinds of CCD, depending on the pixel arrangement: a single line or a
matrix. A linear CCD has only one row of pixels (figure 6A), while a matrix CCD has an n
by m array of sensitive picture elements (figure 6B). Linear CCDs are used only
sporadically in Astronomy; an example is the camera on board the Viking Mars lander.
Some imaging techniques, like Time Delay Integration and Drift Scan, can use linear
arrays. For most imaging tasks in Astronomy, a matrix CCD is better.
To better understand how a CCD works, we can make a rain-bucket analogy
(figure 7). Raindrops are photons, buckets are storage wells and conveyor belts represent
shift registers. During the exposure time, each bucket is filled with raindrops. When the
exposure ends, each bucket is emptied into the adjacent one; conveyor belts move buckets
vertically and horizontally toward the last one, which acts as the charge node and is where
the water (the electrons) is weighed and made available to the external circuit.
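
The analogy maps directly onto arrays. Below is a toy Python sketch of the readout order
only (array size and values are invented; real devices move analogue charge, not numbers):

import numpy as np

def read_out(frame):
    """Toy CCD readout: drop the bottom row into a serial register,
    then shift that register out pixel by pixel to the charge node."""
    frame = frame.copy()
    rows, cols = frame.shape
    measured = []
    for _ in range(rows):
        serial_register = frame[-1].copy()  # bottom row enters the horizontal register
        frame = np.roll(frame, 1, axis=0)   # vertical shift: every row moves down one step
        frame[0] = 0                        # an empty row appears at the top
        for _ in range(cols):
            measured.append(serial_register[-1])           # charge node reads one pixel
            serial_register = np.roll(serial_register, 1)  # horizontal shift
            serial_register[0] = 0
    return measured

print(read_out(np.arange(12).reshape(3, 4)))  # a tiny 3x4 "exposure"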

2.2 - DEVELOPMENT OF THE CCD
The silicon technology era began with the invention of the transistor in 1948
[Massey 2005]. The property of silicon of detecting light, converting photons into
electrons, was soon discovered, but the very high fabrication cost of semiconductors
prevented the further development of silicon light sensors. During the 1960s some devices
were fabricated for military and industrial purposes but, because of their low efficiency and
the need for very expensive and complex electronics, these devices remained mainly
“technological curiosities”. The invention of integrated circuits increased the interest in
silicon-based imaging devices, because the complexity and size of the supporting electronic
circuits were dramatically reduced. In the late 1960s and early 1970s, many photosensitive
transistors were integrated into linear image arrays and, a short time later, a matrix image
sensor was built. 1974 was the year in which the first CCD was placed at the focal plane of
a telescope: a 100 by 100 pixel sensor made by Fairchild [Fairchild Imaging]. In figure 8 we
can see the first astronomical CCD image ever taken (the full Moon). That year a new
revolution in Astronomy began, like the one started when Galileo Galilei first observed
through the eyepiece of a telescope in 1610.
The resolution and sensitivity of the first CCD sensors were poor. If we compare the
images in figures 8 and 9, the difference is clear: the image of the M51 galaxy taken
recently by the Hubble Space Telescope is orders of magnitude better. To reach this kind of
quality, a little more than 30 years of development were needed.
Astronomy is the science of collecting light from distant objects, and many physical
processes can be identified and investigated because each one emits light at a different
wavelength. Until the advent of solid-state imaging, the only way to take pictures of the
Universe was using films and photographic plates. The physical and chemical properties of
the silver halide grains used in photography make them very sensitive to the blue and
near-UV part of the electromagnetic spectrum [Ferreri 1977]. Photographic plates are much
more efficient than the eye because they can integrate the image during a long exposure
time (figure 10). Silicon is much more sensitive than silver in the red and near-IR region of
the electromagnetic spectrum; this, together with a better quantum efficiency (the capacity
to convert photons into electrons), gives us the possibility of imaging, for example,
hydrogen emission lines with a much shorter exposure time (10 to 100 times less), as the
sketch below illustrates.
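
The exposure-time gain follows from the ratio of quantum efficiencies: for the same
number of collected photo-electrons, the exposure time scales inversely with QE. A short
sketch, using typical figures like those in Table 1 rather than measured values:

def exposure_ratio(qe_plate, qe_ccd):
    """How many times longer a plate must expose to match a CCD,
    assuming detected signal = QE x photon flux x exposure time."""
    return qe_ccd / qe_plate

print(exposure_ratio(qe_plate=0.02, qe_ccd=0.80))  # 40x, within the 10-100x range above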

2.3 - FRONT AND BACK ILLUMINATED CCD


Looking at figure 5, we can see that photons of different energies penetrate into the
silicon, until they are absorbed, down to different depths. It is clear that we can expose the
semiconductor device to light on either side (figures 11a, 11b, 11c). If we let the light
(represented by black arrows) strike the sensitive area from the electrode side, as depicted
in figures 2, 3 and 11a, we have a Front Illuminated CCD; obviously the electrodes must
then be transparent to the wavelength of interest and very thin. If the light reaches the
silicon from the opposite side (figures 11b and 11c), the electrodes do not interfere; in this
case we have a Back Illuminated CCD. Each type has advantages and drawbacks with
respect to the other; we will now describe both in some detail.

2.3.1 - Front Illuminated CCD


These were the first devices developed; they are also extensively used in consumer
electronics (digital still and video cameras) and in industrial control, because they are less
complex and cheaper. A typical pixel structure is depicted in figures 2 and 3. With this in
mind, and comparing that structure with the data in figure 5, we can see that only photons
with relatively long wavelengths are able to reach the photo-electron collecting area,
because they have to pass through the electrodes and the insulating oxide layer. UV and
most blue photons are absorbed near the surface of the pixel or are even reflected back.
This gives the CCD a great sensitivity in the red, because the thickness of the sensitive area
is typically 600 or more microns. The overall quantum efficiency is good, but not as high as
that of a back-illuminated CCD (figure 12). Most of the collected photo-electrons are
formed below the positively biased electrode. Low-end CCDs of this type are manufactured
by Texas Instruments, Kodak, Atmel, Thomson, Sony and Fairchild, among others.
Professional CCDs are manufactured by Marconi, SITe and EEV, among others.

2.3.2 - Back Illuminated CCD


These sensors are specially designed to improve the sensitivity in the blue part of the
spectrum and to have a much higher quantum efficiency (figure 12) than front-illuminated
ones. Photons enter the CCD from the back, so they do not encounter any obstacle but, to
reach the storage well, they have to travel a great distance, so the probability of
recombination is very high. To avoid this, the silicon substrate must be reduced by a
process called “thinning” (figure 11c). This is a very expensive and difficult task, because
we have to reduce the depth of the silicon to the minimum possible, and it must be as
uniform as possible; typically that thickness is less than 15 microns. A small difference in
the dimensions means that the sensitivity and the efficiency may vary. Most short-
wavelength photons are collected and stored easily, but the CCD becomes semi-transparent
or fully transparent to long-wavelength light, because those photons can pass through the
sensitive area without interacting with silicon atoms. The area available for photon
interaction is greater than in a front-illuminated CCD, so many more photons are collected,
resulting in a very high quantum efficiency; current technology can reach values of 90% or
more (at the centre of the visible spectrum). This means that almost every incoming photon
releases a photo-electron. These sensors are, however, very expensive.
There are two other problems with back-illuminated CCDs: fringes and fragility.
Fringes are due to multiple internal reflections of photons and depend on the wavelength
and on the depth of the silicon; they are very difficult to avoid and make the CCD almost
unusable for spectroscopy. Due to the reduced thickness, handling and mounting a back-
illuminated CCD is difficult and must be done with extreme care, otherwise the chip can be
broken or bent.
Even taking into account all their intrinsic problems, CCDs of this kind are the most
used in professional Astronomy; amateur astronomers and reduced-budget observatories
and institutions use front-illuminated CCDs due to their lower cost. Marconi, Loral, SITe
and EEV are some of the manufacturers of back-illuminated CCDs for professional use.
In order to avoid or reduce the problems generated by thinning the CCD, while
maintaining blue sensitivity and improving red and near-infrared response, two techniques
were developed: deep depletion (figure 13a) and virtual phase (figure 13b).
A deep-depletion CCD is essentially a back-illuminated CCD, but its sensitive
silicon is thicker than that of a thinned CCD, although thinner than that of a front-
illuminated one. This gives the incoming red photons a better chance of being absorbed,
and the pixel well is also wider. To avoid recombination along the path toward the storage
well, a higher voltage is applied to the electrodes. Blue photons can be directed easily to the
storage well by the enhanced electric field produced by the electrode.
A virtual-phase CCD is a front-illuminated CCD but, instead of having three
electrodes, it has just one. This reduces the blocking area encountered by photons. Blue
light can then produce photo-electrons which can be directed to the well, because they are
not absorbed by the electrode structures as in a normal front-illuminated CCD. This is the
approach developed by Texas Instruments. Figures 13a and 13b represent the spectral
responses of a deep-depletion and a virtual-phase CCD respectively; the horizontal axes are
approximately on the same scale. We can see a nearly flat response from 350 to 800 nm in
the Texas Instruments technology, but the deep-depletion CCD has a higher quantum
efficiency in the visible region of the spectrum.

2.4 - CCD AS IMAGING DEVICES


Astronomers used photographic plates as the main imaging system for about 150
years, and a great effort to enhance their properties was made by manufacturers like Kodak,
Agfa and Ilford. The increasing need for higher resolving power, longer exposure times for
capturing fainter objects and wider spectral response was met by products like the Kodak
103a, IIa, IIIa and Technical Pan series of films and plates, the FP and Delta series by
Ilford and the APX series by Agfa. The results obtained were almost spectacular, and plates
were used until a few years ago. As an example we have the Palomar Digital Sky Survey,
an almost full-sky atlas that contains thousands of plates [DSS]. In the Palomar Sky Survey,
two different plates were used: Kodak IIIaF for images in red light and Kodak IIaJ for blue
light. These images are still in use today and, because they were digitised, they are
available on the Web to everyone. Many new discoveries and a lot of science can still come
from the analysis of the Palomar images.
There is a lot of discussion about the obsolescence of photographic plates and films
compared with electronic detectors. Each technology has many advantages and
disadvantages with respect to the other, but it seems that in Astronomy CCDs are the main
imaging sensors and, within a short period of time, will be the only ones.
Table 1 below shows the main differences between the two types of imaging
system: photographic plates and CCDs [Bartali 2003].

Table 1: CCD and photographic plate properties compared.

Feature to be compared | Film | CCD
Type of reaction | Chemical, physical | Physical
Quantum efficiency | <10% | >80%
Resolution | 10 to 25 microns | 6 to 24 microns
Pixel matrix size | 1200x1800 (24x36 format) | 512x512 to 8192x8192
Spectral response | 350 to 650 nm (can be extended to 250-950 nm) | 400 to 900 nm (can be extended to 100-1100 nm)
Linearity | Poor | Excellent
Time from end of exposure to image | >30 minutes (minutes to hours) | 10 seconds to 10 minutes (for larger-format CCDs)
Dynamic range | <16 bits | >=16 bits (65536 gray levels)
Equipment cost | Low | High
Auto-guiding during exposure | Not possible | Yes
Direct image processing | Only after scanning the image | Yes
Remote image acquisition | Not possible | Yes
Automatic image capture (no operator needed) | Not possible | Yes
Need for cooling system | No | Yes
Special chemical and physical processes for sensitivity increase | Yes | Not needed
Special environment for developing | Yes | No
Interferometric telescope connection capability | Not possible | Yes
Automatic correction of images with adaptive and active optics | Not possible | Yes
Automatic protection against saturation | Not possible | Yes
Antiblooming capability | Not possible | Yes
Reciprocity failure | Yes | No
Loss of sensitivity if not cooled | Yes, but low | Yes, but high
Binning capability | Only after scanning the image | Yes
Resolution increase | Not possible | Yes
Special chemical treatments before exposure | Yes | Not needed
Cooling of the image support before and after the exposure | Yes | Not applicable
Cooling during the exposure | Yes, but moderate (around -20ºC) | Yes, but strong (-10 to -110ºC)
Duration of the image until degradation or unusability | Decades | Almost infinite (disregarding storage technology obsolescence)
Cost of sharing or duplicating images | Very high | Very low (near zero)
Image acquisition | Easy (few steps) | Complex (many steps)
Image processing requirements | Moderate | Very complex
Exposure time to reach the same limiting magnitude | Very long (many hours) | Short (minutes to a few hours)

As we can see in figure 14, there is a great difference between a photographic plate
image (figures 14-a, 14-b), a CCD image taken from the Earth's surface (figure 14-c) and a
CCD image taken from space (figure 14-d) of the same object.

Imaging with a CCD is not a straightforward task: before the final image is ready to
be analysed or printed for the public, many intermediate images must be acquired.
First of all, we have to take a Dark Frame; this is an image taken with the shutter
closed, with the same exposure time as the science image and at the same temperature, so
that only the thermally generated signal is recorded. To get a better figure for the dark
(thermal) signal we have to take several images and average them; the result is called a
Master Dark. A good Master Dark does not necessarily have to be made each night, if the
telescope and CCD conditions do not change. If the CCD is cooled to very low
temperatures (100ºC below zero), the dark current is so low that it is negligible, so there is
no need for a dark frame; but at higher temperatures we must have it, because the dark
current level is significant (figure 15).
A zero-exposure-time image, called a Bias Frame, is then taken. In this image we
have all the electrons generated by the internal electronics, electroluminescence
(figure 24a), pixel defects like hot spots (figure 21) and black pixels (figure 22), bad
columns and cosmic rays (figures 16, 25). A good Bias is the average of several images (a
Master Bias). A professional-grade CCD shows an almost uniform bias frame (figure 16a),
while a lower-quality CCD shows many defects (figure 16b).
The Flat Frame is an exposure of a very uniformly illuminated source. This can be
done by taking a picture of an evenly illuminated screen inside the dome, a light box placed
over the telescope, the twilight sky or a starless patch of the sky [Bartali 2005]. The
exposure time for the flat frame must be short enough to optimize the very expensive
telescope time (we want to take as many science images as possible, not spend all the time
taking calibration frames), but long enough to reach about 30 to 50% of the full well
capacity (half saturation). The temperature of the CCD during flat-frame imaging must be
the same as that of the raw image. Flats must also be corrected with dark and bias frames;
averaging a set of Flats is the normal technique to obtain a Master Flat. Flat frames should
be taken each observing session. Flat frames basically show the differences in sensitivity
between pixels and all the defects due to dust in the optics (CCD and telescope), vignetting
and fringes (figures 17a and 17b).
Finally, the raw image is the exposure of the object of interest (figure 18). It must be
taken with the telescope and the CCD in the same conditions of temperature and exposure
time as the auxiliary frames: dark, flat and bias (remembering that the bias is a zero-
exposure frame). Sometimes, to reduce the noise, several exposures of the same object
(each with a much shorter exposure time) are averaged together in a technique called
Stacking. Now all the images are stored in the memory of the computer and are ready to be
processed. As we can see in figure 20, we first have to correct the raw image and the flat
frame: we subtract the dark frame (or Master Dark) and the bias (or Master Bias) from both
the Flat (or Master Flat) and the Raw frames, and then divide the resulting corrected Raw
by the corrected Flat. The resulting image (figure 19) is called a Science Image and is
theoretically free of defects and ready to be further analysed. A great difference can be seen
between raw and science images.
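
The whole reduction chain fits in a few lines. A minimal NumPy sketch following the
recipe just described, assuming the frames are already loaded as 2-D arrays (FITS
input/output and frame alignment are omitted):

import numpy as np

def master(frames):
    """Average a stack of calibration frames into a master frame."""
    return np.mean(np.stack(frames), axis=0)

def reduce_to_science(raw, darks, biases, flats):
    """Science image = (raw - dark - bias) / normalized corrected flat."""
    master_dark = master(darks)   # thermal signal at the science exposure time
    master_bias = master(biases)  # zero-exposure electronic offset
    flat = master(flats) - master_dark - master_bias  # corrected flat, as in the text
    flat = flat / np.mean(flat)   # normalize so the division preserves the flux scale
    return (raw - master_dark - master_bias) / flat

Note that this follows the text's recipe literally; in practice, dark frames are matched to the
exposure time of each frame they correct.
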
Even when, at first sight, it seems to be a very difficult task, taking an image with a
CCD is relatively easy, thanks to the increased (and increasing) computing power and
automation technologies. Today images can be taken and retrieved just by sending a
command to a remote control computer and to the telescope camera (remotely operated and
robotic telescopes).

2.5 - SOURCES OF CCD ERRORS


Until now we have talked about how good a CCD is, but we also have to talk about
the many problems that arise when we want to use one. Thanks to the digital form of the
image, most errors can be corrected using specifically developed software routines.
There are many sources of problems when using a CCD: some are intrinsic, derived from
the fabrication process, and others are due to external factors during operation. We will
now explain both cases and also give the best, or a possible, solution.

2.5.1 - MANUFACTURING PROCESS INDUCED ERRORS


It is nearly impossible to manufacture a perfect electronic component, due to
impurities in the material or in the atmosphere where the crystal is manipulated, machine
imperfections and many other factors. This is especially true for CCDs. In the following list
we explain these possible problems and the ways we can try to eliminate them or reduce
them to a minimum.

Problem: Sensitivity difference from pixel to pixel.


Pixels are not identical in their spectral response or in their sensitivity (figures 17a,
17b); this is a consequence of the many procedures in the manufacturing process. Doping
each pixel with exactly the same number of atoms and growing the crystal uniformly are
not as easy as they may appear. During fabrication, many chemical etching phases are
needed, so if a very small error or imperfection arises, it is transferred, and perhaps
amplified, at each step.
Solution: If the Flat Field exposure is done properly, we can virtually eliminate the non-
uniformity, but this is true only in theory, because it is very difficult to obtain a perfect Flat.
In practice we can reach a satisfactory unification of spectral response and sensitivity.

Problem: Hot spots
Some pixels can sit near their saturation value, or at some fixed value, even when
they are not exposed to light; in this case we call them hot spots. They appear in the raw
image as bright, star-like dots (figure 21-a), and they appear in all calibration frames as
well. If the light of a star falls exactly on a hot spot, no usable information is available.
Possibly a certain number of photo-electrons raise the level of that pixel, in which case it is
possible to rescue some information, but it is not reliable. Bright columns can also appear
(figure 21-b), produced by the leakage of charge during vertical shifting: some electrons are
left behind and increase the charge accumulated in the pixel wells below the defective
pixel.
Solution: subtract, or set to zero, the value of that pixel. If we know that a star falls on that
pixel, a second image with a slight position shift is taken and subtracted from the other.
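
One way to implement this in software is to flag pixels that are anomalously bright in the
master dark (where no light reaches them) and patch them from their neighbours. A
minimal sketch, assuming NumPy arrays; the 5-sigma threshold is an arbitrary illustrative
choice:

import numpy as np
from scipy.ndimage import median_filter

def repair_hot_pixels(image, master_dark, nsigma=5.0):
    """Replace pixels that are hot in the master dark with the local median."""
    threshold = master_dark.mean() + nsigma * master_dark.std()
    hot = master_dark > threshold                     # hot spots appear in every dark
    patched = image.copy()
    patched[hot] = median_filter(image, size=3)[hot]  # 3x3 neighbourhood median
    return patched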

Problem: Bad, buried or black spots
Some individual pixels, or groups of adjacent ones, called bad pixels, can never be
used because they have no sensitivity or no capacity to collect photo-electrons. There is
also the possibility that the well is not formed, because the electrode is not connected to the
rest of the circuit or its resistivity is higher than normal, so the positive voltage applied to
the electrode is lower than required. If a star falls on a black spot, we will never see it in the
image. If we have a black pixel, the final image shows a black streak (dark column) starting
at the defective pixel position (figure 22), because of the (downward) vertical shifting. As
in the previous case, this defect is visible in every image (raw and calibration), so it is easy
to locate and cancel.
Solution: taking a second image of the same field, but with some position shift, helps to
cancel the effect after image subtraction.

Problem: Blooming
If the number of photo-electrons is near the full well capacity, there is a risk that
some of them overflow into adjacent pixels, because the insulating regions (the separations
between pixels) are very small; this effect is known as blooming. The raw image contains
vertical white streaks. The length and width of a streak depend on the exposure time and on
the brightness of the star; both can produce too many photo-electrons, so that an overflow
occurs (figure 23). This overflow can extend over many pixels.
Solution: reduce the exposure time to the minimum needed for a good signal-to-noise ratio.
Take more images of the same field with some position shift and then subtract one from the
other; this way, objects hidden by the overexposed or bloomed pixels become visible again.
CCDs with a large well capacity are less affected by this problem. If, for example, a very
faint object lies too close to a much brighter one, like a satellite close to its parent planet,
two sets of images must be taken: one over-exposing the planet in order to enhance the
visibility of the satellite, and a second, with a much shorter exposure, to pick up planet
details (the satellites will surely not be visible in this image); stacking both images then
makes all the objects visible.

Problem: Transport inefficiency
After the exposure, the content of each pixel well must be shifted (transported)
vertically downward until it reaches the horizontal shift register, from where it is shifted
horizontally toward the charge node. But imperfections in the geometry of the CCD,
impurities in the semiconductor materials and non-uniformity of the electrode resistivity,
among other factors, make the displacement of photo-electrons from one pixel to the next
less than perfect: some electrons are left behind at each transfer stage. Some photo-
electrons can also be captured by atoms, and recombination occurs. This is more visible if
the well is near its full capacity or is too large, because lateral photo-electrons can easily
recombine. A professional CCD has a transport efficiency of 99.9999%.
Solution: unfortunately there is no solution for this problem short of buying a better device.
With a small CCD the effect is almost negligible, but in a large CCD it can be a serious
problem. To reduce the effect, and also to shorten the readout time, some manufacturers
opted to divide the CCD into two or four symmetrical areas, each with its own horizontal
shift register; this way each charge packet travels only 25% of the full path.
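
The cost of imperfect transfer compounds over every shift, which is why the quoted
99.9999% matters. A short sketch of the surviving charge fraction after N transfers (the
2048x2048 chip size is just an illustrative example):

def retained_fraction(cte, transfers):
    """Charge surviving a chain of shifts with per-transfer efficiency cte."""
    return cte ** transfers

# A pixel in the far corner of a 2048x2048 CCD undergoes about 4096 transfers:
print(retained_fraction(0.999999, 4096))  # ~0.996: only ~0.4% of the charge lost
print(retained_fraction(0.9999, 4096))    # ~0.66: why large CCDs need excellent CTE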

Problem: Glowing
There are two main causes of glowing in a CCD: radioactivity and heat produced by
the output amplifier. The former arises because some materials used to manufacture the
CCD or the glass window may be weakly radioactive; this radioactivity can induce the
release of some photo-electrons, producing a glow in some section of the image
(figure 24-a). The position of the internal output amplifier is very important, because it is
one of the parts that works at a higher temperature, due to its complexity. If it is placed
under, or too close to, the imaging array, it produces more dark current in the nearby pixels
(figure 24-b).
Solution: Calibration frames can eliminate this problem: the dark frame shows the same
glowing as the raw images, so when they are subtracted the glowing disappears.
Unfortunately, if it arises from radioactivity, it is not always visible. Another solution may
be to turn off the CCD, wait a few seconds, then turn it on again and re-expose the image.
For this reason, manufacturers place the output amplifier as far as they can from the image
area. Turning off the output amplifier during the exposure time (a procedure available for
most CCDs) eliminates the problem if it is thermal but, if it is produced by material
radioactivity, there is no way to anticipate when and where it will appear.

Problem: Reflection
CCD electrodes are metallic and metals are very reflective, even when they are
treated to be semi-transparent so that photons are able to pass through; some photons
therefore never reach the sensitive area, because they are reflected off the sensor. The
protective glass window over the sensitive area is also reflective.
Solution: anti-reflective coatings over the glass window and the electrodes help to reduce
the number of reflected photons; these coatings are applied during the manufacturing
process.

Problem: Poor blue and UV sensitivity
High-energy photons, like UV and blue ones, are absorbed near the surface of the
CCD; in other words, they cannot travel very deep into the silicon. In a front-illuminated
CCD the well is placed under the electrodes, so photons, in order to interact with the silicon
and release photo-electrons, must pass through the electrodes. Most of them are absorbed
before they can reach the usable sensitive area.
Solution: If the application needs a high blue or near-UV response, the solution is to select
a thinned, a deep-depletion or a virtual-phase CCD. Another solution is to coat the device
with a material that absorbs short-wavelength light and re-emits photons at longer
wavelengths. Normally UV is converted to yellow-green light, so these photons can be
collected by the CCD, because they fall in its most sensitive region. Some materials used
are Coronene and Lumigen [Kitchin 1998, Howell 2000].

Problem: Transparency to IR radiation
Long-wavelength photons, like IR ones, are absorbed very deep inside the CCD.
These lower-energy photons can also pass undisturbed through the CCD, making it
completely transparent above some wavelength, as in thinned CCDs (figures 2, 11). The
sensitive area must be thick but, if the photon is absorbed too far from the well, the
probability that the newly released photo-electron will be captured by another atom is very
high, and then it can never reach the well, so it is not stored.
Solution: Thick or front-illuminated CCDs are best for imaging red objects but, as we saw
above, they have a poor response to blue light. If the application asks for a flatter spectral
response, a compromise (to have both responses, in the blue and in the red) is to use a
deep-depletion CCD or a virtual-phase device.

Problem: Internal reflections (fringes)
Thinned devices suffer from multiple internal reflections at some wavelengths. This
happens because the thickness of the silicon substrate is comparable to the wavelength of
the incident light (figure 17-b).
Solution: Flat-field frames can help to reduce this effect, and most of the time they are able
to cancel it. If the CCD is used for spectroscopy, however, the resulting image is modulated
and this interference pattern hides spectral lines, so a thinned device is not useful for that
application; it is better to select a front-illuminated CCD for spectroscopy.

2.5.2 - OPERATION INDUCED ERRORS


During operation (imaging exposure, the readout process, camera assembly,
mounting and maintenance) we can encounter the series of problems described in the
following list. A possible set of solutions is also given.

Problem: Dark current
Atoms in the CCD, as in any other device, vibrate because they are not at absolute
zero (-273.15ºC). During collisions caused by this thermal agitation, some electrons are
released and can be attracted by the positive potential of the well; in this way they are
stored and mixed with the photo-electrons generated by the radiation coming from the
object of interest. There is no way to identify and separate one kind from the other, so the
raw image is the sum of internally generated electrons and those generated by incoming
photons. The number of thermally generated electrons (called dark electrons) depends on
the CCD temperature, the environment temperature and the exposure time. The raw image
shows a (not necessarily uniform) background level that masks fainter objects (figure 15).
Solution: The only way to reduce the dark current to a minimum is to work at a lower
temperature. This implies adding a cooling system to the CCD, either thermoelectric or
based on liquid nitrogen. But there is a lower limit for cooling (approximately 110ºC below
zero); at lower temperatures electron mobility is greatly reduced, so the accumulation and
transport processes are affected.
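
To put numbers on this, a widely used rule of thumb (not given in this text) is that the dark
current of a silicon CCD roughly doubles for every 6 to 7 ºC of temperature increase. A
small sketch of what cooling buys, with an assumed reference rate purely for illustration:

def dark_current(rate_at_ref, t_celsius, t_ref=20.0, doubling_step=6.5):
    """Approximate dark current (e-/pixel/s), doubling every doubling_step ºC."""
    return rate_at_ref * 2.0 ** ((t_celsius - t_ref) / doubling_step)

# Assuming 100 e-/pixel/s at +20 ºC (illustrative value only):
for t in (20, 0, -40, -100):
    print(t, dark_current(100.0, t))
# Cooling from +20 ºC to -100 ºC cuts the rate by a factor of roughly 360,000.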

Problem: Cosmic rays
Cosmic rays are very energetic particles (mainly protons) generated by stars in
enormous quantities. Because of their high energy, they can pass through almost any
material, losing just part of their energy. When they reach the sensitive area of the CCD,
they can produce an avalanche of electrons that is stored in the well. These electrons are
added to those generated by incoming photons, and there is no way to distinguish one
population from the other. The raw image shows white, star-like spots (figure 25). Where
and when cosmic rays strike the CCD is absolutely unpredictable; the number of collected
cosmic-ray hits is proportional to the exposure time and to the CCD area.
Solution: Analysis of the Point Spread Function (PSF) of the image can show whether a
spot is a real star or a cosmic-ray hit: the PSF of a star tends to be Gaussian, while a cosmic
ray has a sharper profile. The probability of having cosmic rays at the same position in all
frames (calibration and raw) is nearly zero, so during the processing phase they can easily
be eliminated, as sketched below. If a real star falls under a cosmic ray, the only way to see
it is to re-expose the image.
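
Because hits practically never repeat at the same position, a median across a stack of
aligned exposures of the same field rejects them automatically. A minimal NumPy sketch:

import numpy as np

def reject_cosmic_rays(frames):
    """Median-combine aligned exposures: a pixel hit in one frame is outvoted
    by the clean values of the same pixel in the other frames."""
    return np.median(np.stack(frames), axis=0)

clean = np.zeros((4, 4))
hit = clean.copy()
hit[2, 2] = 50000.0                                   # simulated cosmic-ray spike
print(reject_cosmic_rays([clean, clean, hit])[2, 2])  # 0.0: the spike is gone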

Problem: Smear or blurring
If the CCD is not covered by a shutter when the exposure time ends, then during the
time needed for the transport of the charges, incident photons continue to strike the
sensitive area, so more photo-electrons are added to each pixel. But the previously
accumulated photo-electrons are no longer in the same place, because they are in the middle
of the shifting process, so the image is distorted, smeared or blurred. This effect grows with
the size of the CCD, because the larger the pixel count, the longer the readout time.
Solution: Closing the shutter immediately after the exposure time has elapsed eliminates
this problem. Some manufacturers divide the CCD into two separate regions: one is used to
collect light and the other (masked by an opaque layer of aluminium) is used as a temporary
storage area. When the exposure ends, the image is shifted at high speed from the collecting
area to the protected area and then, at a slower speed, it can be shifted out of the CCD;
during this time, a new image can be integrated in the collecting area. This configuration is
called a frame-transfer CCD (figure 26). There are two other solutions: shifting at a higher
speed, although this increases the noise, or using devices with more than one horizontal
shift register. In the latter case the CCD is divided into (for example) four regions, each
with its own shift register; each quarter of the image is read simultaneously, so the time
needed for the readout process is reduced by a factor of four.

Problem: Over- and under-exposure
If we over-expose, we can easily reach the full well capacity of the pixels
(saturation), so that the image is just a white spot; an overflow into the nearest pixels is also
possible. Fine details of an extended object, like a planetary surface, are hidden (figure 27).
Under-exposing means that the image does not have enough dynamic range, so low-
contrast details are not visible. The worst case is when we want to take a picture of a faint
object near a much brighter one; this is a very difficult task, and we cannot capture both at
the same time in the same frame.
Solution: In this case we can expose the same object twice, first with an extended exposure
time, to capture the fainter object, and then with a much shorter exposure time, to capture
the brighter object; later, during the image-processing phase, both images are added
(stacked) together. To avoid over- or under-exposure, we can take some test images.

Problem: Focusing
Focusing an image on a CCD is not an easy task; any small deviation can produce
poor-quality images. The full resolution of the optical system cannot be achieved, and
precise observations like those needed for photometry and astrometry cannot be done. If we
are exposing the CCD through filters, the correct focus is not necessarily the same for each
one, due to the transmission properties of the filters and their mechanical construction
(filter thickness) and mounting.
Solution: Take test images and process them until the best focus is obtained. We have to
re-focus the system each time we use a different filter; in addition, all calibration frames
must be taken for each filter. Several methods are used for focusing but, for all of them, the
process is the same: take a short exposure, process the image and move the focuser until the
sharpest image is obtained. If we know how much the system is out of focus, we can apply
a mathematical algorithm to correct for it.

Problem: Over-sampling and/or under-sampling
If the star image is spread over too many pixels (figure 28), its measured brightness
is lower than the real one and it is difficult to determine the exact position of the centroid,
because too many pixels in the centre have the same value.
The opposite case is when the image occupies just one or two pixels: no useful
information is obtained from the image, since the bright point may be a star or just a
cosmic-ray hit. An under-sampled image is one in which fewer pixels than necessary are
covered by the star image (figure 29), so the star is not a circle but an irregular staircase.
Atmospheric blurring can also cause an over-sampled image.
Solution: Depending on the field of view, pixel size and exposure time, it is necessary to
obtain star images covering about nine pixels in order to obtain a Gaussian-like point
spread function. In a correctly sampled image, the star's width is 2 to 2.5 times the pixel
scale, in order to obey the Nyquist criterion.
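
Checking the sampling is a short calculation: the pixel scale in arcseconds is
206265 x pixel size / focal length (in consistent units). A sketch with an assumed telescope
and seeing, both values invented for illustration:

ARCSEC_PER_RADIAN = 206265.0

def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    """Angle on the sky covered by one pixel."""
    return ARCSEC_PER_RADIAN * (pixel_size_um * 1e-3) / focal_length_mm

# Hypothetical setup: 9 um pixels behind a 2000 mm focal-length telescope.
scale = pixel_scale_arcsec(9.0, 2000.0)  # ~0.93 arcsec/pixel
seeing_fwhm = 2.0                        # assumed seeing FWHM, arcsec
print(seeing_fwhm / scale)               # ~2.2 pixels per FWHM: close to Nyquist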

Problem: Mounting
CCDs are very fragile devices; they must be handled with extreme care, especially
large-format thinned ones. During mounting, a small bend can change their optical and
electronic properties; in the worst case, prolonged fatigue can produce a rupture. If they are
not mounted correctly, the telescope focal plane may not remain parallel to the CCD
surface, so the image can be out of focus and a point-like star will be seen as a comet.
Solution: mount and handle the devices with extreme care, and only by skilled people.

Problem: Dust
Dust particles can be present on all optical surfaces, from the primary mirror to the
CCD window; it is impossible to avoid them. In the flat-frame image they look like
unfocused donuts of different diameters, depending on their size (figures 17-a, 17-b).
Solution: Flat-field frames record the dust particles, which then disappear during image
processing.

Problem: Vignetting
This is a very common aberration in astronomical images; it consists of a
progressive darkening from the centre toward the corners (figure 30). It is caused by the
obstruction of the image cone, formed by the objective lens or the primary mirror, by some
mechanical part (filter, focuser, mirror support, CCD support, etc.), so that the full image is
not formed on the focal plane. It can also be caused by poor alignment of the CCD with
respect to the focal plane. If the CCD size does not match the dimensions of the image on
the focal plane, vignetting appears.
Solution: first of all, we have to know or calculate the image size on the focal plane of the
telescope; a short focal length produces smaller images than a long-focal-length telescope.
Second, we have to know the exact shape of the image cone, but this is difficult in large
telescopes due to the great number of secondary and auxiliary mirrors present; each of these
mirrors, and their supports, must match the size of the image cone. Filter dimensions and
filter supports can play a significant role in vignetting, so they have to be placed as close as
possible to the CCD in order to avoid any light obstruction.

3 - COOLING SYSTEMS

Dark current (electrons generated by thermal excitation) in a CCD is a limiting
factor in reaching very faint magnitudes, because it represents a bias level added to the
normal sky background. As we saw earlier, the only way to reduce or eliminate as much of
the dark level as possible is to reduce the temperature of the sensor. There are two main
techniques for achieving this: thermoelectric and liquid-nitrogen cooling. The former is
much less expensive, but it is able to reduce the temperature only by 30 to 50 degrees below
ambient; the latter can reach temperatures of 100ºC to 110ºC below zero.

3.1 - THERMOELECTRIC COOLING


This technique is based on the Peltier effect, discovered more than a century ago
(figure 31). An electric current applied to a Peltier junction increases the temperature on
one side and reduces it on the other. The cold side is attached to the CCD, so the sensor can
be cooled. To avoid the destruction of the Peltier device, we must extract as much heat as
possible from the hot side. This is done, in the simplest way, with a cooling fan, but a high-
efficiency heat sink with forced ventilation works much better (figure 32). If the
temperature differential is pushed near the limit of the device's capacity, a liquid heat
exchanger can also be applied, so that flowing water helps to take the heat away from the
sensor and the Peltier device. Normally, with a Peltier cooler we can reach 30ºC below
zero; to obtain more cooling, a technique called stacking is used: the cold side of one
module is placed in contact with the hot side of a second Peltier junction, and in this way it
is possible to lower the temperature to some 50 degrees below zero (figure 33).
The electronic interface must be able to maintain the desired temperature as
constant as possible (to 0.1ºC or better) during the exposure time.
This cooling method is simple to operate and install; we only need a programmable
current-regulated power supply, a temperature sensor placed as near as possible to the CCD
and a control circuit. Power consumption and overall dimensions are small. Due to the
relatively low cost of such a system, it is mainly used by amateur astronomers.
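
Conceptually the control electronics is a feedback loop: compare the temperature reading
with the set point and adjust the Peltier current accordingly. A toy proportional controller in
Python, purely illustrative (the gain and current limit are invented, and real controllers
typically add integral and derivative terms):

def peltier_current(t_measured, t_setpoint, gain=0.5, i_max=3.0):
    """Proportional control: cooling drive grows with the temperature error."""
    error = t_measured - t_setpoint            # positive error: sensor too warm
    return min(max(gain * error, 0.0), i_max)  # clamp to the supply's range

print(peltier_current(-18.7, -20.0))  # 0.65 A of drive to pull the CCD down to -20 ºC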

3.2 - LIQUID NITROGEN COOLING


Modern giant telescopes require very sophisticated ancillary equipment to get
results as good as those expected from such expensive instruments. CCD sensors for
professional Astronomy are very sensitive and need to be cooled to the lowest possible
temperature to avoid dark current. But, due to semiconductor properties and the behaviour
of the atoms, the lowest temperature to which we can cool a CCD is 110ºC below zero; at
lower temperatures the mobility of electrons is greatly reduced, so they cannot be
transported efficiently from one pixel to another by the shift registers.
Nitrogen is gaseous at ambient temperature, but at very low temperatures it becomes
liquid. A very expensive and sophisticated electrical and hydraulic system is needed to
operate a liquid-nitrogen cooler (figure 34). The nitrogen is placed in a reservoir thermally
insulated from the outside environment and housed inside a chamber maintained under
vacuum; this chamber is made of glass or aluminium. Cold is conducted to the sensor by a
copper wick. Sometimes the external circuit is also contained, and cooled, inside the dewar,
to prevent noise generation. A tube with a two-way valve is used for both refilling and
exhaust: the heat generated by the sensor is enough to boil the nitrogen, which is then
expelled through this tube. A glass window, not in contact with the CCD, protects the
sensor and prevents the formation of frost; a separate chamber between the glass window
and the CCD is held at very low pressure, near a perfect vacuum. To maintain the
temperature at a constant level during the exposure time, there is also an electrical heater
(normally a resistor); this way, temperature variations are limited to +/- 0.1ºC or less.
The process of lowering the temperature takes some time, no more than a few
degrees per minute, otherwise the mechanical stress due to the different expansion
coefficients of the materials can destroy the sensor. The CCD must be kept at low
temperature for as long as possible, precisely to avoid such stresses, but this implies
constantly refilling the dewar with liquid nitrogen, normally every 24 hours.
The following table (table 2) summarizes the characteristics of both cooling systems.

Table 2: Cooling system comparison.

Characteristic to be evaluated | Thermoelectric cooling | Liquid nitrogen cooling
Temperature reached | 10ºC to 50ºC below ambient | 100ºC to 130ºC below ambient
Temperature resolution and stability | 0.1ºC to 0.01ºC, depending on the control electronics | 0.1ºC to 0.01ºC, depending on the control electronics
Time to reach lowest operating temperature | Less than 15 minutes | More than 1 hour, approximately
Main application | Visible-spectrum observations | UV, visible, near-IR and IR observations
Cost | Low, less than 300 US$ | High, thousands of US$
Implementation | Easy | Complex
Portability | Yes | No
Main use | Amateur and small observatories | Professional
Overall dimensions | Small, less than 2 cubic decimetres | Bulky, tens of cubic decimetres
Maintenance | Virtually free | Periodic
Weight | Low, less than 1 kg | High, tens of kg
Auxiliary equipment | Fan, heat sink, power supply, control electronics, sensors | Vacuum chamber, pump, sensors, control electronics, hydraulic and pneumatic circuits
Operation | Simple | Complex
Power consumption | Low, less than 300 W | High, of the order of kW
Installation | Easy | Needs a skilled technician
Sensors used | Current, temperature | Temperature, vacuum, flow, current
Control electronics | Microcontroller-based (one or two cards) | Several microcontroller cards
Remote-controlled operation | Possible | Possible, with some restrictions
Failure probability | Very low | Possible

4 – CCD APPLICATIONS
In this section we will describe the main features of CCDs specifically designed for the
detection of three wavelength ranges: visible (400 to 700 nm), infrared (800 nm to
24 microns) and ultraviolet (10 nm to 380 nm).

4.1 - CCD APPLICATION FOR VISIBLE SPECTRUM


CCD spectral sensitivity in the visible and near-IR spectrum is very high, which
makes these devices ideal for normal photography and obviously for astronomy. The
atomic emission of hydrogen falls in the red, so this is a great advantage over photographic
plates and films. Their application at the focal plane of a telescope is straightforward,
because no special filters or coatings over the optics are needed. But if we have to produce
near-true-colour images of celestial objects, a set of filters that matches the eye's perception
is needed. In Astronomy, special sets of filters called UBV and UBVRI are used; their
spectral responses are depicted in figure 35. With these filters it is possible to do precise
photometry and create colour-magnitude diagrams, fundamental for calculating the
temperature and evolution of stars. Unfortunately, not all manufacturers offer filters with
identical responses. In figure 36 we can see the spectral response of the eye, which can
easily be compared with a typical CCD response (figures 10, 12, 13a, 13b).
From the first 100x100 pixel array, some 30 years ago, we have now arrived at
devices with 100 million photosites on a single chip (figure 37); such a CCD offers a
sensitive surface of 95 by 95 mm. These kinds of devices try to solve one of the most
restrictive problems of their application in modern large telescopes: covering an extremely
large field of view.
CCD pixel sizes, ranging from 4 to 24 microns, let the user obtain images of
different resolutions. Small pixels are useful where very fine detail is needed, as for
planetary surfaces or double-star measurements; for large, wide-field surveys of galaxies,
or for asteroid searches, mid- or large-size pixels can be used. Large telescopes have an
image area, measured at the focal plane, several centimetres wide. To take full advantage of
this for wide-field imaging, we have to form a mosaic of several devices (figure 38), but
there are often restrictions in their fabrication, because of the position of the internal control
circuitry, so at most we can have a CCD buttable on three sides. In other words, a mosaic
camera offers the possibility of having almost any number of CCDs and billions of pixels,
but the resulting image is not continuous: it shows gaps (figure 39). To avoid these gaps it
is necessary to take a second image of the same field, with the camera rotated 90 degrees or
the telescope position slightly shifted from the original one. This implies doubling the time
needed to obtain the image, and a lot more processing time; computer memory, processing
software and power needs also increase.

4.2 - CCD FOR IR SPECTRUM APPLICATION


IR Astronomy is a relatively new science because infrared light is almost fully
blocked by the atmosphere, just a few windows in the spectrum are available. But, to
observe these bands, a special detector and telescope technology shuld to be implemented.
Heat emitted by the environment and instrumentation can be higher than IR radiation we
receive from stars, so we have to cool, as much as possible, every part of the telescope
(including detectors and electronics) and to observe from high altitude places or very cold
ones, like the Antarctic continent. The better place to do IR astronomy is obviously from
space, where low temperature and no blocking atmosphere are present.
Telescope optics for IR observations are different from those used for the visible. For
visible wavelengths, mirrors are coated with a thin layer of aluminium and, to prevent
oxidation, a layer of quartz is deposited over the aluminium. Optics for the IR are coated
with different materials, depending on the wavelength of interest, cost, performance and
application. Some of these materials are [II-VI Inc, 2006]:
• Zinc Selenide (ZnSe)
• Zinc Sulfide (ZnS)
• Germanium (Ge)
• Gallium Arsenide (GaAs)
• Silicon (Si)
• Cadmium Telluride (CdTe)
• Copper (Cu)
• Aluminum (Al)
• Molybdenum (Mo)
• Gold (Au)
Gold and copper are the most used because of their high reflectivity in the IR.
Observing in the IR is very important because we can see different phenomena, like
protostars in the development phase, protoplanetary disks and, in general, all objects
hidden inside and behind thick dust nebulae. Shorter wavelength photons cannot pass
through dust, but they heat the dust particles when they strike them, and the particles then
reemit photons in the IR spectrum (figure 40): the dust acts like a screen that lets us see the
scene behind it in an indirect form. Cool and low mass objects like red dwarf stars,
methane dwarfs, brown dwarfs, hot Jupiter exoplanets and, in general, all objects with
temperatures lower than 2,000ºC emit photons with too little energy to be seen in the
visible (blackbody peak emission depends on the object temperature: the lower the
temperature, the longer the wavelength emitted).
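
This relation is Wien's displacement law; a quick worked example with round,
illustrative temperatures:

    \lambda_{\max} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K}

For a Sun-like star at T ≈ 5800 K the peak falls at λ ≈ 0.50 micron, in the visible; for a
brown dwarf at T ≈ 1000 K it falls at λ ≈ 2.9 micron, well into the IR, which is why such
objects are targets for IR detectors.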
CCDs for IR observation are more difficult to build because materials other than
silicon are needed in order to have the required spectral sensitivity. Another difficulty is
their need to trap photons with much less energy than those of visible light. For this
reason, until a few years ago, IR CCDs were built with a reduced number of relatively
large pixels. Today we have IR CCDs large enough (though smaller than those for the
visible spectrum) to build mosaic cameras like the one depicted in figure 38.
For visible light observation, CCD temperature must be maintained as low as
possible to avoid dark current generation, and this requirement is independent of the
wavelength observed; for IR detection, instead, the working temperature depends also on
the wavelength and on the sensitive materials used.
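
As a rough illustration of the visible-light case, dark current in silicon roughly doubles
for every ~6 ºC of temperature increase (a common rule of thumb, not a figure from this
text; the starting value below is hypothetical):

    def dark_current(d0_e_per_s, t0_c, t_c, doubling_c=6.0):
        # Rule-of-thumb scaling: dark current doubles every ~6 deg C.
        return d0_e_per_s * 2.0 ** ((t_c - t0_c) / doubling_c)

    # Hypothetical sensor producing 10 e-/pixel/s at +20 C:
    print(dark_current(10.0, 20.0, -40.0))  # cooled to -40 C -> ~0.01 e-/pixel/s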
We will briefly describe the most common materials used for IR CCD arrays and their
performance [Hoffman, 2004]. IR sensors are made in hybrid form: they include some
electronic circuitry, normally the read out interface, to avoid an increase of noise. This
approach is similar to CMOS imaging sensor technology, widely used in consumer and
industrial applications but not in professional Astronomy (with some exceptions).

4.2.1 – “Si PIN” DETECTORS


These are broad band detectors operating from the visible to the near IR, with the
highest quantum efficiency in the IR. Even though they are hybrid sensors, the fill factor is
nearly 100%, because the electronic circuits sit outside the sensitive area. They are more
radiation tolerant than normal CCDs. The largest available format is 1024x1024 pixels. In
figure 41 we can see the spectral response (figure 41-a) and an example of these sensors
(figure 41-b). These detectors can work at relatively high temperatures (100 to 300ºK).

4.2.2 – “HgCdTe” DETECTORS
Some of the best, and most used, materials for IR detection are Mercury (Hg),
Cadmium (Cd) and Tellurium (Te). Depending on their cut-off wavelength, these detectors
are classified as short wave and mid wave.
• Short wave: their spectral response is very uniform from 0.85 to 3.2 micron, with
sharp cut-on and cut-off curves; they reach 70% to 80% quantum efficiency, but with an
antireflective coating they can reach up to 95%. By modifying the quantity of Cadmium,
Zinc and Tellurium in the substrate, the cut-on wavelength can be extended to the visible
spectrum. The operating temperature is about 100ºK. Figure 42 shows a graph of their
spectral response and an image of these sensors.
• Mid wave: their spectral response is very uniform up to 5.2 micron, with a sharp
cut-off curve; they reach 80% quantum efficiency, but with an antireflective coating they
can reach up to 95%. The operating temperature is about 70ºK. Figure 43 shows a graph of
the spectral response of a device made by Raytheon.

4.2.3 – “InSb” DETECTORS


Indium (In) and Antimony (Sb) based sensors are widely used for IR light detection.
They work at very low temperature, about 30ºK. The advantage of this material is that it is
sensitive over the full visible range and the IR up to 5 microns (figure 44-a). If coated with
one or several layers of antireflective material, their quantum efficiency can be very high,
near 100% at 1 micron. In figure 44-b there is an example of a 2k x 2k pixel sensor. To
maintain low dark current and noise, the module is mounted on a metallic pedestal and
several capacitors and resistors are bonded on the board itself. This kind of sensor was
developed for NASA's Spitzer IR space telescope.
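
The benefit of the coating can be estimated from the Fresnel reflectance at normal
incidence (the refractive index below is a typical literature value for InSb in the mid IR,
quoted only for illustration):

    R = \left( \frac{n - 1}{n + 1} \right)^{2}

With n ≈ 4, R ≈ (3/5)² = 0.36: a bare surface reflects about 36% of the incoming photons,
capping the quantum efficiency near 64%, while a quarter-wave layer with index close to
√n nearly cancels the reflection, consistent with the coated figures quoted above.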

4.2.4 – “Si:As IBC” DETECTORS
Detection of the longest IR wavelengths, up to 25 micron (figure 45-a), needs a very
different combination of materials, like Silicon (Si), Arsenic (As), Boron (B) and Carbon
(C). Noise and operating temperature must be kept extremely low; the working temperature
is about 10ºK. Current state of the art sensor arrays are made of 2k x 2k pixels (figure
45-b), but the largest array tested is of one million pixels (Raytheon). Their spectral
response is very linear and reaches 80% with one layer of antireflective coating (solid line
in figure 45-a); the dashed line represents the spectral response without the antireflective
coating. This kind of sensor has been selected for the James Webb Space Telescope and is
operating in the Spitzer space telescope.
To avoid interference from heat generated by the telescope, control electronics,
motors, etc., a telescope for the IR must be kept as cool as possible; the air inside the dome
and the telescope structure must be maintained at the same temperature as the air outside
the observatory. To achieve this, several hours before the observing session a series of
ventilation windows on the dome are opened and closed by a control computer.

4.3 - CCD FOR UV SPECTRUM APPLICATION


Fortunately for our bodies, but not so much for astronomical observation, UV light is
almost completely absorbed by the atmosphere (in the ozone layer), so the only way to
observe physical phenomena that produce short wavelength light is from space. But,
observing from space, there are many factors that reduce the life of the detector and related
instrumentation, degrading their performance with time; the most important of these is
radiation.
Exploring in the UV, we can understand how stars and galaxies form and evolve. The
temperature, chemical composition and density of the interstellar medium, and the study of
hot young stars, are also fields for UV Astronomy. The combination of IR and visible
spectrum observations with UV gives us a full vision of the evolution of the Universe.
Only radiation around 300 to 400 nm can reach the surface of our planet, but UV
Astronomy needs the detection of the full UV range, from 10 nm (extreme UV) to 380 nm
(near UV).
UV spectrum is divided into 4 bands:
• Near: 320nm to 400nm.
• Mid: 200nm to 320nm.
• Far: 91.2nm to 200nm.
• Extreme: 10nm to 91.2nm.
Detection of short UV photons is very difficult because they are absorbed by silicon
near its surface, so front illuminated devices are impossible to use, and back illuminated
CCDs have several problems, like fringing; a different approach, based on hybrid
technology, is therefore used. The best technology available today is the so called
microchannel detector. These detectors are a hybrid between a CCD and a photomultiplier
tube, taking the best from each one. As we saw in the section above about IR detectors, not
all wavelengths can be detected with the same material, so we cannot have a universal
sensor for the full UV spectrum, but several ones made of different semiconductor
materials.

4.3.1 - SENSORS FOR NEAR UV


Photon penetration in silicon based CCDs depends on the wavelength (energy). Near
UV photons can be detected with back illuminated CCDs, deep depletion CCDs and virtual
phase CCDs, but for all of these technologies the quantum efficiency is low at shorter
wavelengths. The addition of coating substances like Coronene and Lumigen can improve
the probability of detection, because they act as wavelength down converters: these
substances absorb UV light and emit photons of much lower energy, normally in the range
of 500 to 550 nm, where the quantum efficiency of a CCD is very high. But such a sensor
remains highly sensitive to visible light too, so UV photoelectrons are mixed with
yellow-green ones produced by the visible component of the incoming light. This is the
major drawback, because we want to sense UV radiation alone.
A better solution is to use sensors made of Silicon Carbide (SiC), which are
completely blind, in other words transparent, to all wavelengths greater than 380 nm,
making them more suitable for UV work (figure 46). This property comes from the fact
that the band gap of SiC is wider than that of Silicon (Si) or Gallium Arsenide (GaAs)
semiconductors. Because of this, much more energy is needed to excite an electron and
send it to the conduction band: visible and IR photons cannot excite electrons in the
valence band of SiC because they do not have enough energy, while UV photons carry the
right quantity of energy. SiC semiconductors can also operate at higher voltage,
temperature and radiation levels; these last two qualities are very useful for space based
telescopes. The sensitivity of SiC sensors in the near UV is about ten thousand times that
of GaAs or Si sensors.
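
The 380 nm blindness follows directly from the band gap: a photon can only promote an
electron if its energy exceeds E_g, which sets a cut-off wavelength (the band gap values
below are typical literature figures, quoted for illustration):

    \lambda_c = \frac{hc}{E_g} \approx \frac{1240\ \mathrm{eV\,nm}}{E_g}

For Silicon (E_g ≈ 1.1 eV) this gives λ_c ≈ 1100 nm, so it responds to the whole visible
range and the near IR; for SiC (E_g ≈ 3.26 eV for the 4H polytype) λ_c ≈ 380 nm, so only
UV photons can create photoelectrons.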

4.3.2 - MICROCHANNEL TECHNOLOGY SENSORS FOR MID AND FAR UV


These are sensors based on the electron multiplication effect; they can be fabricated
using vacuum tube or semiconductor technologies. For Earth based observations both are
used, but for UV space based Astronomy the latter is the best choice, due to its lower
power consumption and reduced size.
To understand how they work, it is better to explain first the channel vacuum tube
multiplier. A more detailed description of photomultiplier tubes is given in the section
devoted to them, below (section 5).

UV photons strike the sensitive material of the photocathode, which in turn releases a
photoelectron (figure 47). This photoelectron enters a narrow and curved semiconductive
channel. On its path toward the anode it impacts the inner curved walls of the channel
several times, every time a bend is encountered. At each impact more electrons are
released by secondary emission from the channel wall, and each of these electrons hits the
wall at the subsequent bend, and so on. This process continues until all the electrons are
collected by the anode electrode, which is held at a high positive voltage. This electron
avalanche has a gain of up to 100 million, depending on the number of bends in the
channel. The quantum efficiency is low (< 30%), but the sensitivity is very high.
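
The avalanche gain can be modelled as a chain of multiplications; a worked example
with illustrative numbers (the secondary emission yield below is a plausible round figure,
not a value from this text):

    G = \delta^{\,n}

With δ = 3 electrons released per impact and n = 17 impacts along the channel,
G = 3¹⁷ ≈ 1.3 × 10⁸, of the order of the 100 million gain quoted above.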
The channel can be made very thin, so if we place several channels side by side we
can detect not only photons coming from a unique specific point, but we are able to
recreate the image of the emitting object. A device with these characteristics is called a
microchannel array (figure 48).
A semiconductor microchannel array consists of a thin plate pierced with thin holes
(channels), and each channel is, for all practical purposes, a miniature photomultiplier tube
with a diameter of about 25 microns. The top surface is maintained negatively charged
with respect to the bottom surface by the application of a high voltage. This surface is also
coated with a photoemitting material especially designed to have its peak sensitivity at the
wavelength of interest. The flux of accelerated electrons spreads out from the bottom,
where it is directed to a more conventional detector: the bottom of the plate is an array of
positively charged anode electrodes. Electrons generated inside each channel are shifted
out serially (like in a normal CCD) and are available at the output of an amplifier. This
way we can know the exact position of each detected photon.
The amplification (multiplication) factor can be increased by facing the anode array
outputs of one device to the cathode (channel entrance) of a second one.
The advantages of a microchannel array are many, and they are used on all UV space
telescopes; the largest ones are the detectors of the GALEX space telescope. Some of the
best features of microchannel arrays are:
• Very high gain.
• Compact size.
• Fast time response.
• Two dimensional detection.
• High spatial resolution (25 microns).
• Stable operation even in the presence of magnetic fields.
• High sensitivity to energetic particles and photons, which makes them a suitable
choice for gamma rays, X rays, UV and neutrons.
• Low power consumption.
• High sensitivity (depending on the cathode material).
• Low dark current.
Their quantum efficiency is low, about 20 to 30%.
In figure 49-a there is a picture of a microchannel array made by Hamamatsu, while
figures 49-b and 49-c show both UV detectors onboard the GALEX space telescope.

4.4 – COMPARING CCD OBSERVATIONS


To fully understand how the Universe works, it is necessary to observe each
phenomenon in as many wavelengths as possible. This is done by comparing images taken
by different instruments on Earth and from space. Each instrument can show us different
details, depending on the combination of the image detector and its wavelength range. As
an example, we have selected the starburst galaxy M82 in Ursa Major.
Figure 50 is a full size image of that galaxy, covering the full visible spectral range.
This image, taken by the Hubble Space Telescope, is a false colour image, where each
colour represents the light of some specific wavelength: it is a combination of four images
taken with filters centred at 435 nm (blue), 555 nm (green), 658 nm (red) and 814 nm
(red-orange) respectively. To better understand the difference in images through filters, we
will enlarge a section of figure 50, on the left side of the central region. This way we can
get a better idea of the difference between a visible light image and a picture of the same
area in the infrared. Very dense clouds of dust and gas (figure 51-a) let pass only a small
quantity of the light generated by the bright star clusters that lie behind and inside those
clouds; only the brightest star clusters are visible in figure 51-a. If we observe in infrared
light (figure 51-b), thousands of hot stars and many young star clusters clearly appear.
Energetic UV light from hot stars heats the dust particles and, as a consequence, they glow,
emitting IR light. Complementing these images with others in different wavelengths, for
example far UV and X ray, we can infer the age of those star clusters and their evolutionary
stage: if their brightness is greater in UV and X ray than in IR, they are young; instead, if
they glow more in the red and IR, they are old.
The advantage of space observation is the possibility of taking diffraction limited
pictures, because of the absence of the blurring atmosphere. For example, the light
gathering power of the Spitzer telescope is less than that of the Mt. Palomar Schmidt
telescope, but the resolution of figure 52-a (from the Spitzer space telescope) is much
higher than that of figures 52-b and 52-c, which are digitized images from the Palomar
Digital Sky Survey (DSS), made using red sensitive photographic plates. These images
also show the enormous difference in sensitivity and quantum efficiency of CCDs over
photographic plates: hydrogen gas ejected from the galaxy nucleus is not visible in figure
52-b and is barely visible in figure 52-c. The two sets of images (figures 52 and 53) can tell
us the story of M82: we can infer that it interacted with a nearby galaxy (M81, not shown
in the pictures) and, as a result, a huge episode of star formation began. Due to the high
brightness of these stars, we know that they are young (we can also calculate their age), so
the interaction was not long ago. It is thought that the interaction with M81 took place
about 600 million years ago, because most of the stars observed are young and massive, as
shown by their strong UV light emission.
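
The diffraction limit invoked above follows from the Rayleigh criterion (Spitzer's 0.85 m
aperture is a published figure; the wavelength is just an illustrative observing band):

    \theta \approx 1.22 \, \frac{\lambda}{D}

For Spitzer (D = 0.85 m) at λ = 3.6 micron this gives θ ≈ 5.2 × 10⁻⁶ rad ≈ 1.1 arcsec,
while ground based images, whatever the aperture, are typically smeared to 1 to 3 arcsec
by atmospheric seeing.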
In figure 53-a we can see two superimposed images of the galaxy M82 in near
(yellow) and far (blue) UV light (GALEX space telescope). Figure 53-b is the same galaxy,
but the source is a UV telescope onboard the Astro-1 satellite. Figure 53-c is a scanned
image from a blue sensitive photographic plate belonging to the Palomar Digital Sky
Survey; the hot gas flowing from the nucleus is barely visible.

CONCLUSIONS
Our knowledge of the Universe is strictly related to photon detectors for every band of the
electromagnetic spectrum. When Galileo pointed his telescope at the heavens, the Universe
suddenly increased in size, but astronomical observations were still subjective, depending
on the eye of the observer, no matter the size of the telescope.
Photographic plates let astronomers permanently record observations in an objective
way; only instrumental defects and operational processes can affect the images.
Precise measurements of the colours, brightness, sizes and positions of stars and
galaxies are possible with CCDs and PMTs.
From the advent of astrophotography, in the middle of the XIX century, until today,
astronomers and engineers have worked together to develop devices capable of detecting
visible light, UV, IR, etc. Thanks to this cooperation, we can now observe the Universe in
every wavelength of the electromagnetic spectrum.

Astronomy is today inconceivable without electronics. Silicon based sensors,
computers, satellites and robotic systems are all tools used by astronomers to increase their
knowledge of the Universe.
We are able to take pictures of the very early Universe, of objects 13 billion light
years away, thanks to the precise understanding and application of the photoelectric effect
in semiconductor materials. The invention of the transistor and the integration of millions
of them into a tiny silicon chip favoured the development of the charge coupled device
(CCD). Today CCDs are the light sensor par excellence for a wide range of wavelengths of
the electromagnetic spectrum (from gamma rays to far infrared). They are now capable of
detecting almost every incoming photon and converting it to a measurable electric current.
Very large scale integration technologies help the development of larger sensors,
with many millions of pixels, increasing the resolution and the field of view of
astronomical images and taking full advantage of the optics of modern large telescopes.
Excellent linearity and wide spectral response are features that let astronomers
measure the photometric properties of stars and galaxies, giving clues to understand how
they are born, evolve and die.
Bulky and power hungry vacuum tube technologies are in the process of being
replaced by tiny, low power silicon sensors having the same, and many times a much
better, performance.
Even though the current state of the art gives us a near perfect sensor, many features
will be improved in the future: flatness of spectral response, greater quantum efficiency,
faster read out speed, lower noise, larger pixel count and selective read out are
characteristics that we will surely see in the next generation of photon detectors.

REFERENCES
CCD section
Atomic structure and semiconductors technologies
[1] Energy Bands:
http://www.tpub.com/neets/book7/24c.htm
[2] Solid State band theory:
http://www.chemistry.adelaide.edu.au/external/soc-rel/content/bands.htm
[3] Photoelectric effect:
http://zebu.uoregon.edu/text/photoe.txt
[4] Physics, Charles Sturt University:
http://hsc.csu.edu.au/physics/core/implementation/9_4_3/943net.html#net2
[5] Drakos N., 1999, Physics 1501 Modern Technology:
http://theory.uwinnipeg.ca/mod_tech/node1.html
[6] Bordes N., 1999, Photonic devices, Australian Photonics CRC,
http://oldsite.vislab.usyd.edu.au/photonics/devices/index.html
[7] Wikipedia 2006, Semiconductors: http://en.wikipedia.org/wiki/Semiconductors
PTE, Periodic Table of Elements:
http://www.dayah.com/periodic/Images/periodic%20table.png
[8] Hepburn C.J., Britney’s guide to semiconductor physics, the basics of
semiconductors:
http://britneyspears.ac/lasers.htm
CCD fundamentals and history

[9] Aikens R., 1991, Charge Coupled devices for quantitative electronic imaging,
IAPPP communication No 44, Jun-Aug 1991.
[10] Richmond M, Introduction to CCDs:
http://spiff.rit.edu/classes/phys445/lectures/ccd1/ccd1.html
[11] Tulloch S., 2006-1, Introduction to CCDs:
http://www.ing.iac.es/~smt/CCD_Primer/Activity_1.ppt
[12] Fairchild Imaging, Fairchild History:
http://www.fairchildimaging.com/main/history.htm
[13] Evolving towards the perfect CCD:
http://zebu.uoregon.edu/ccd.html
[14] Peterson C., 2001, How it works: the charge-coupled device or CCD:
http://www.jyi.org/volumes/volume3/issue1/features/peterson.html
[15] Massey D., 2005, Bell System Memorials - the transistor:
http://www.bellsystemmemorial.com/belllabs_transistor.html
[16] Ferreri W., Fotografia Astronomica, Il Castello, 1977

CCD fabrication and operation


[17] Lesser M., 2006, CCD Glossary, The University of Arizona Imaging Technology
Laboratory:
http://www.itl.arizona.edu/Education/glossary.html
[18] Brock K., 2001, Photodiodes, SPIE's OE Magazine, August 2001.
[19] CCD University, 2006, Apogee Instruments Inc.:
http://www.ccd.com/ccdu.html
[20] Pavesi, 2003, A primer on photodiode technology:
http://science.unitn.it/~semicon/pavesi/tech2.pdf
[21] Tulloch S. (2006-2), Use of CCD cameras:
http://www.ing.iac.es/~smt/CCD_Primer/Activity_2.ppt
[22] Tulloch S. (2006-3), Advanced CCD techniques:
http://www.ing.iac.es/~smt/CCD_Primer/Activity_3.ppt
[23] Tulloch S. (2006-4), Low light level CCD:
http://www.ing.iac.es/~smt/CCD_Primer/LLLCCD.ppt
[24] Tulloch S. (2005), Latest CCD developments:
http://www.ing.iac.es/~smt/CCD_Primer/CCDlectureNov2005.ppt
[25] CCD fundamentals, Princeton Instruments Acton, 2005
http://www.piacton.com/support/library.aspx
[26] Abramowitz M., Davidson M.W., 2004, Concepts in digital imaging technology:
http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/concepts.html
[27] Davenhall A.C., Privett G.J., Taylor M.B., 2001, The 2-D CCD data reduction
cookbook:
http://star-www.rl.ac.uk/star/dvi/sc5.htx/sc5.html#stardoccontents
[28] Wong H.S. et al., TDI charge coupled devices: design and applications, IBM
Journal of Research and Development, 1992.
[29] Rabinowitz D., Drift scanning, Michelson Summer Workshop, Caltech, 2005
[30] Gehrels T., CCD Scanning, 1986acm, 1986.
Technical aspects of Drift Scanning, ESO Imaging Survey, 1997.
[31] Gibson B., Hickson P., Time delay integration CCD readout technique: image
deformation, 1992MNRAS..258..543G

[32] Kodak Image Sensor Solutions:
http://www.kodak.com/US/en/dpq/site/SENSORS/name/ISSHome
[33] Buil C., 1991, CCD Astronomy, Willmann-Bell Inc., 1991, ISBN 0943396298.
[34] Kitchin C.R.,1998, Astrophysical Techniques, IOP Publishing Ltd, 1998, ISBN
0750304987.
[35] Howell S., Handbook of CCD Astronomy, Cambridge, 2000.
[36] [DSS] Palomar Digital Sky Atlas: http://archive.stsci.edu/dss/index.html
[37] Bartali R., 2003, Do photographic plates still have a place in professional
Astronomy?.
[38] Kodak technical literature (CCD, photographic films and filters); www.kodak.com
[39] Ilford technical literature (film): www.ilford.com
[40] Agfa technical literature (film): www.agfa.com
[41] Texas Instruments technical literature (CCD): www.ti.com
[42] http://www.pinnipedia.org/optics/vignetting.html
[43] http://www.astrocruise.com/geg.htm
[44] http://www.chartchambers.com/whyln2.html
[45] Atmel technical literature (CCD): www.atmel.com
[46] Pfanhauser W., Application notes Roper Scientific gmbh, 2006:
http://www.roperscientific.de/theory.html
[47] Hoffman A., Mega pixel detector arrays: visible to 28 micron, Proceedings SPIE
vol. 5167, 2004.
[48] II-VI Inc, Optics manufacturing, 2006: http://www.iiviinfrared.com/opticsfab.html
[49] Chaisson, AT405, 2004: http://138.238.143.191/astronomy/Chaisson/AT405/HTML/
[50] Acreo AB, Infrared Detector Arrays for Thermal Imaging
Tutorial "Infrared Detectors", 2004:
http://www.acreo.se/upload/Publications/Tutorials/TUTORIALS-INFRARED-2.pdf
[51] Teledyne Scientific and Imaging, Infrared and visible FPA, 2006:
http://www.teledyne-si.com/infrared_visible_fpas/index.html
[52] Carruthers G, Electronic Imaging: http://138.238.143.191/astronomy/topics.htm
[53] Clampin M., UV-Optical CCD, STScI, 2001
[54] Bonanno G., New development in CCD technology for the UV-EUV spectral
range, Catania Astrophysical Observatory, 1995.
[55] Galaxy Evolution Explorer, Home page: http://www.galex.caltech.edu/
[56] Spitzer space telescope, home page: http://www.spitzer.caltech.edu/
[57] Hubble Space Telescope, home page: http://hubblesite.org/
[58] UV Astronomy, Wikipedia, 2006: http://en.wikipedia.org/wiki/UV_astronomy
[59] Electro optical component Inc, Silicon Carbide detectors, 2006:
http://www.eoc-inc.com/UV_detectors_silicon_carbide_photodiodes.htm
[60] Timothy J.G., Optical detectors for spectroscopy, 1983, 1983PASP..95..810T:
http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1983PASP...95..810T&db_key=AST
[61] O Connell R.W., Introduction to Ultraviolet Astronomy, 2006:
http://www.astro.virginia.edu/class/oconnell/astr511/UV-astron-f01.html
[62] Sheppard S.T., Cooper J.A., Melloch M.R., Silicon Carbide Charge Coupled Devices:
http://www.ecn.purdue.edu/WBG/Device_Research/CCDs/Index.html
[63] Cree Research Inc., Silicon Carbide Semiconductors, 2003:
http://www.mdatechnology.net/techsearch.asp?articleid=174

[75] Optical Society of America, Optics Infobase, 2006:
http://www.opticsinfobase.org/ocisdirectory/040_5250.cfm
[76] Sakaki N. et al., Development of multianode photomultipliers for the EUSO focal
surface detector, International Cosmic Ray Conference, 2003:
http://euso.riken.go.jp/publication/icrc28_233.pdf#search=%22PHOTOMULTIPLIERS%22
[77] Breskin A., Ion-induced effects in GEM & GEM/MHSP gaseous
photomultipliers for the UV and the visible spectral range, 2004
http://arxiv.org/ftp/physics/papers/0502/0502132.pdf
[78] Casolino M., Space applications of Silicon photomultipliers: ground
characterizations and measurements on board the International Space Station with the
Lazio experiment, 2006:
http://www.cosis.net/abstracts/COSPAR2006/03209/COSPAR2006-A-03209-1.pdf?PHPSESSID=41d280d7162dda45323d561244363f44#search=%22PHOTOMULTIPLIERS%22
[79] Barral J., Study of silicon photomultipliers, 2004:
http://www.stanford.edu/~jbarral/Downloads/StageOption-Rapport.pdf#search=%22PHOTOMULTIPLIERS%22
[82] University of Pisa, Physics Department, Silicon Photomultiplier, 1995:
http://www.df.unipi.it/~fiig/research_sipm.htm
[83] Piemonte C., SiPM: status of the development, 2006:
http://sipm.itc.it/intro/device.html
[84] Ninkovic J., The avalanche drift diode: A back illuminated silicon
photomultiplier, 2006:
http://www.hll.mpg.de/twiki/bin/view/Avalanche/AvalancheDriftDiode

IMAGE CREDITS
Figure 1a, 1b
Atomic energy bands: http://www.tpub.com/neets/book7/24c.htm
Figure 2
CCD geometry (adapted from): http://www.ing.iac.es/~smt/CCD_Primer/Activity_2.ppt
Figure 3
Pixel structure: Kitchin C.R.,1998, Astrophysical Techniques, IOP Publishing Ltd, 1998,
ISBN 0750304987.
Figure 3b
(adapted from): http://www.ing.iac.es/~smt/CCD_Primer/Activity_2.ppt
Figure 4
Pixel size and well capacity relationship: Bartali R., 2006
Figure 5
Silicon absorption depth graphic (adapted from): Howell S., Handbook of CCD Astronomy,
Cambridge, 2000.
Figure 6A
Linear CCD: http://www.fairchildimaging.com/products/fpa/ccd/linear/ccd_191.htm
Figure 6B
Matrix CCD: http://www.fairchildimaging.com/products/fpa/ccd/area/ccd_3041.htm
Figure 7

CCD rain-buckets analogy:
http://www.microscopyu.com/articles/digitalimaging/ccdintro.html
Figure 8
First astronomical CCD image: http://zebu.uoregon.edu/ccd.html
Figure 9
Galaxy M51: http://hubblesite.org/gallery/wallpaper/pr2005012a/800_wallpaper
Figure 10
Image sensors quantum efficiency: Howell S., Handbook of CCD Astronomy, Cambridge,
2000.
Figure 11a, 11b, 11c
Front and back side CCD: Bartali R., 2006
Figure 12
Back and front illuminated CCD comparison: http://www.site-inc.com
Figure 13a
Optoelectronics Databook, 1984, Texas Instruments:
http://www.ti.com
Figure 13b
Deep Depletion CCD (pixel structure):
http://www.ing.iac.es/~smt/redsense/deep_depletion.PDF
CCD42-90 CCD Datasheet, Marconi Applied Technology (QE graph):
http://www.marconitech.com
Figure 14a
Crab Nebula in red light: http://archive.stsci.edu/cgi-bin/dss_form
Figure 14b
Crab Nebula in blue light: http://archive.stsci.edu/cgi-bin/dss_form
Figure 14c
Crab Nebula VLT:
http://www.eso.org/outreach/press-rel/pr-1999/phot-40f-99-normal.jpg
Figure 14d
Crab Nebula HST: http://hubblesite.org/gallery/wallpaper/pr2005037a/800_wallpaper
Figure 15
Dark frames examples:
http://www.frazmtn.com/~bwallis/drk_tmp.htm
Figure 16
Bias frames examples:
http://www.eso.org/projects/odt/Fors1/images/bias.jpg
http://www.carleton.edu/departments/PHAS/astro/pages/knowledgebase/biasdark.html
Figure 17a, 17b
Flat field frame example:
http://www.highenergyastro.com/CVFUN.html
http://www.mso.anu.edu.au/observing/detectors/imager.php
Figure 18
Example of raw image: Bartali 2003
Figure 19
Example of science image: Bartali R., Rosner A., 2003
Figure 20
CCD imaging, basic steps: Bartali R., 2006

Figure 21a
Hot pixel: Bartali R., 2003
Figure 21b
Bright column: Tulloch S., Use of a CCD camera.
Figure 22
Dark Streaks: HET609 CDRom, 2006
Figure 23
Blooming example: Bartali R., 2003
Figure 24a
Example of glowing: Buil C., CCD Astronomy
Figure 24b
CCD output amplifier:
http://spiff.rit.edu/classes/phys445/lectures/ccd1/structure_1.gif
Figure 25
Cosmic rays: http://spider.ipac.caltech.edu/staff/kaspar/obs_mishaps/images/cr.html
Figure 26
Frame transfer CCD: http://www.sinogold.com/images/tc237face.gif
Figure 27
Example of an overexposed image: Bartali R., 2003
Figure 28
Oversampled stellar image:
http://sctscopes.net/Photo_Basics/CCD_Camera/Choosing_a_CCD_Camera/CCD_Parameters/ccd_parameters.html
Figure 29
Undersampled stellar image:
http://sctscopes.net/Photo_Basics/CCD_Camera/Choosing_a_CCD_Camera/CCD_Parameters/ccd_parameters.html
Figure 30
Vignetting: http://www.astrocruise.com/geg.htm
Figure 31
Peltier module: An Introduction to Thermoelectrics, Tellurex Corporation, 2006.
Figure 32
Cooling Peltier module: An Introduction to Thermoelectrics, Tellurex Corporation, 2006.
Figure 33
Multistage module: Melcor Thermal Solutions, Melcor Corporation.
Figure 34
Liquid Nitrogen cooler: top – Tellurex Corporation.
Bottom – Buil C., CCD Astronomy, Willmann Bell, 1991.
Figure 35
UBVRI filters:
http://outreach.atnf.csiro.au/education/senior/astrophysics/photometry_colour.html
Figure 36
Eye spectral response:
http://www.marine.maine.edu/~eboss/classes/SMS_491_2003/sound/em-spectrum_human-eye_asu_380x300.gif
Figure 37
100 million pixel CCD:

http://www.dalsasemi.com/news/news.asp?itemID=252
Figure 38
Mosaic camera:
http://www.cfht.hawaii.edu/Instruments/Imaging/CFH12K/images/CFH12K-FP_lr.jpg
Figure 39
Mosaic image: http://www.astro.indiana.edu/~vanzee/SMUDGES/field.html
Figure 40-a
Orion nebula in visible band from Harvard Observatory:
http://138.238.143.191/astronomy/Chaisson/AT405/HTML/AT40506.htm
Figure 40-b
Orion nebula in the IR band from NASA:
http://138.238.143.191/astronomy/Chaisson/AT405/HTML/AT40506.htm
Figure 41-a, 41-b
Si PIN sensor: Hoffman A, Mega pixel detector arrays from visible to 25 micron, 2004
Figure 42-a, 42-b
HgCdTe sensors (SW): Hoffman A, Mega pixel detector arrays from visible to 25 micron,
2004
Figure 43
HgCdTe sensors (MW): Hoffman A, Mega pixel detector arrays from visible to 25 micron,
2004
Figure 44-a, 44-b
InSb sensors: Hoffman A, Mega pixel detector arrays from visible to 25 micron, 2004
Figure 45-a, 45-b
Si:As IBC sensors: Hoffman A., Mega pixel detector arrays from visible to 25 micron, 2004
Figure 46
SiC response: http://www.eoc-inc.com/UV_detectors_silicon_carbide_photodiodes.htm
Figure 47
Channel multiplier, adapted from:
http://www.olympusmicro.com/primer/digitalimaging/concepts/photomultipliers.html
Figure 48
Microchannel structure: Hamamatsu photomultiplier tubes, Hamamatsu Corp., 2006.
Figure 49-a
Microchannel detector: Hamamatsu photomultiplier tubes, Hamamatsu Corp., 2006.
Figure 49-b, 49-c
GALEX UV detectors: http://www.galex.caltech.edu/
Figure 50
M82 HST:
http://hubblesite.org/newscenter/archive/releases/2006/14/image/a/format/large_web
Figure 51-a
M82 visible light (HST):
http://www.seds.org/messier/m/m082.html
Figure 51-b
M82 IR light (HST):
http://www.seds.org/messier/m/m082.html
Figure 52-a
M82 Spitzer:
http://hubblesite.org/newscenter/archive/releases/2006/14/image/i/format/large_web

Figure 52-b
M82 IR DSS: http://archive.stsci.edu/cgi-bin/dss_search?v=poss2ukstu_ir&r=09+55+52.19&d=%2B69+40+48.8&e=J2000&h=15.0&w=15.0&f=gif&c=none&fov=NONE&v3
Figure 52-c
M82 red DSS: http://archive.stsci.edu/cgi-bin/dss_search?v=poss2ukstu_red&r=09+55+52.19&d=%2B69+40+48.8&e=J2000&h=15.0&w=15.0&f=gif&c=none&fov=NONE&v3
Figure 53-a
M82 GALEX: http://www.galex.caltech.edu/GALLERY/GALEX-M82.jpg
Figure 53-b
M82 ASTRO1:
http://www.seds.org/messier/Pics/More/m82a1uv.jpg
Figure 53-c
M82 blue DSS: http://archive.stsci.edu/cgi-bin/dss_search?v=poss2ukstu_blue&r=09+55+52.19&d=%2B69+40+48.8&e=J2000&h=15.0&w=15.0&f=gif&c=none&fov=NONE&v3=
