
How Cameras Work

Photography is undoubtedly one of the most important inventions in history -- it has truly transformed how people conceive of the world. Now we can "see" all sorts of things that are actually many miles -- and years -- away from us. Photography lets us capture moments in time and preserve them for years to come.
The basic technology that makes all of this possible is fairly simple. A still
film camera is made of three basic elements: an optical element (the lens),
a chemical element (the film) and a mechanical element (the camera body
itself). As we'll see, the only trick to photography is calibrating and
combining these elements in such a way that they record a crisp,
recognizable image.
There are many different ways of bringing everything together. In this article, we'll
look at a manual single-lens-reflex (SLR) camera. This is a camera where the
photographer sees exactly the same image that is exposed to the film and can
adjust everything by turning dials and clicking buttons. Since it doesn't need any
electricity to take a picture, a manual SLR camera provides an excellent illustration
of the fundamental processes of photography.
The optical component of the camera is the lens. At its simplest, a lens is just a curved piece of glass or plastic. Its job is to take the beams of light bouncing off of an object and redirect them so they come together to form a real image -- an image that looks just like the scene in front of the lens.
But how can a piece of glass do this? The process is actually very simple.
As light travels from one medium to another, it changes speed. Light travels more
quickly through air than it does through glass, so a lens slows it down.
When light waves enter a piece of glass at an angle, one part of the wave will reach
the glass before another and so will start slowing down first. This is something like
pushing a shopping cart from pavement to grass, at an angle. The right wheel hits
the grass first and so slows down while the left wheel is still on the pavement.
Because the left wheel is briefly moving more quickly than the right wheel, the
shopping cart turns to the right as it moves onto the grass.

The effect on light is the same -- as it enters the glass at an angle, it bends in one
direction. It bends again when it exits the glass because parts of the light wave
enter the air and speed up before other parts of the wave. In a standard converging, or convex, lens, one or both sides of the glass curve outward. This means rays of light passing through will bend toward the center of the lens on entry. In a double convex lens, such as a magnifying glass, the light will bend when it exits as well as when it enters.

Cameras: Focus

We've seen that a real image is formed by light moving through a convex lens. The nature
of this real image varies depending on how the light travels through the lens. This light path
depends on two major factors:

The angle of the light beam's entry into the lens

The structure of the lens

The angle of light entry changes when you move the object closer or farther away from
the lens. You can see this in the diagram below. The light beams from the pencil point enter
the lens at a sharper angle when the pencil is closer to the lens and a more obtuse angle
when the pencil is farther away. But overall, the lens only bends the light beam to a certain
total degree, no matter how it enters. Consequently, light beams that enter at a sharper
angle will exit at a more obtuse angle, and vice versa. The total "bending angle" at any
particular point on the lens remains constant.

As you can see, light beams from a closer point converge farther away from the lens than light beams from a point that's farther away. In other words, the real image of a closer object forms farther away from the lens than the real image of a more distant object.
You can observe this phenomenon with a simple experiment. Light a candle in the dark, and hold a magnifying glass between it and the wall. You will see an upside-down image of the candle on the wall. If the real image of the candle does not fall directly on the wall, it will appear somewhat blurry: the light beams from each point on the candle haven't quite converged by the time they reach the wall. To focus the image, move the magnifying glass closer or farther away from the candle.

This is what you're doing when you turn the lens of a camera to focus it -- you're moving it closer or farther away from the film surface. As you move the lens, you can line up the focused real image of an object so it falls directly on the film surface.
You now know that at any one point, a lens bends light beams to a certain
total degree, no matter the light beam's angle of entry. This total "bending
angle" is determined by the structure of the lens.

This effectively reverses the path of light from an object. A light source -- say a candle -- emits light in all directions. The rays of light all start at the same point -- the candle's flame -- and then constantly diverge. A converging lens takes those rays and redirects them so they are all converging back to one point. At the point where the rays converge, you get a real image of the candle. In the next couple of sections, we'll look at some of the variables that determine how this real image is formed.
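If you want to put numbers on this, the standard thin-lens equation captures the behavior described above: 1/f = 1/do + 1/di, where f is the lens's focal length, do is the object distance and di is where the real image forms. Here is a minimal Python sketch (the function name and sample figures are ours, not from the article) showing that a closer object focuses farther behind the lens:

```python
# Thin-lens equation: 1/f = 1/d_o + 1/d_i, where f is the focal length,
# d_o the object distance, and d_i the distance at which the focused
# real image forms. Names and numbers here are illustrative.

def image_distance(focal_length_mm: float, object_distance_mm: float) -> float:
    """Return the distance behind the lens where a focused real image forms."""
    if object_distance_mm <= focal_length_mm:
        raise ValueError("No real image: object is at or inside the focal length")
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A closer object focuses farther from the lens, just as the candle
# experiment shows:
print(image_distance(50, 10_000))  # distant object -> ~50.3 mm
print(image_distance(50, 500))     # nearby object  -> ~55.6 mm
```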

Camera Lenses

A standard 50 mm lens doesn't significantly shrink or magnify the image.

In the last section, we saw that at any one point, a lens bends light beams to a certain total
degree, no matter the light beam's angle of entry. This total "bending angle" is determined
by the structure of the lens.
A lens with a rounder shape (a center that extends out farther) will have a more acute
bending angle. Basically, curving the lens out increases the distance between different
points on the lens. This increases the amount of time that one part of the light wave is
moving faster than another part, so the light makes a sharper turn.

Increasing the bending angle has an obvious effect. Light beams from a
particular point will converge at a point closer to the lens. In a lens with a
flatter shape, light beams will not turn as sharply. Consequently, the light
beams will converge farther away from the lens. To put it another way, the
focused real image forms farther away from the lens when the lens has a
flatter surface.
Increasing the distance between the lens and the real image actually increases the total size of the real image. If you think about it, this makes perfect sense. Think of a projector: As you move the projector farther away from the screen, the image becomes larger. To put it simply, the light beams keep spreading apart as they travel toward the screen.
The same basic thing happens in a camera. As the distance between the
lens and the real image increases, the light beams spread out more,
forming a larger real image. But the size of the film stays constant. When
you attach a very flat lens, it projects a large real image but the film is only
exposed to the middle part of it. Basically, the lens zeroes in on the middle
of the frame, magnifying a small section of the scene in front of you. A
rounder lens produces a smaller real image, so the film surface sees a
much wider area of the scene (at reduced magnification).
Professional cameras let you attach different lenses so you can see the
scene at various magnifications. The magnification power of a lens is
described by its focal length. In cameras, the focal length is defined as the
distance between the lens and the real image of an object in the far
distance (the moon for example). A higher focal length number indicates a
greater image magnification.
Different lenses are suited to different situations. If you're taking a picture of
a mountain range, you might want to use a telephoto lens, a lens with an
especially long focal length. This lens lets you zero in on specific elements
in the distance, so you can create tighter compositions. If you're taking a
close-up portrait, you might use a wide-angle lens. This lens has a much
shorter focal length, so it shrinks the scene in front of you. The entire face
is exposed to the film even if the subject is only a foot away from the
camera. A standard 50 mm camera lens doesn't significantly magnify or
shrink the image, making it ideal for shooting objects that aren't especially
close or far away.
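As a rough illustration of these trade-offs, the angle of view a lens takes in can be estimated from its focal length and the width of the film frame. A hedged Python sketch, assuming a lens focused at infinity on a standard 36 mm-wide 35-mm frame (the numbers are ours, not the article's):

```python
import math

# Horizontal angle of view for a lens focused at infinity on 35-mm film:
# fov = 2 * atan(frame_width / (2 * focal_length)). Illustrative sketch.

FRAME_WIDTH_MM = 36.0

def angle_of_view_deg(focal_length_mm: float) -> float:
    return math.degrees(2 * math.atan(FRAME_WIDTH_MM / (2 * focal_length_mm)))

for f in (28, 50, 200):  # wide-angle, standard, telephoto
    print(f"{f:>3} mm lens -> {angle_of_view_deg(f):5.1f} degrees")
# 28 mm  -> ~65.5 deg (wide view, reduced magnification)
# 50 mm  -> ~39.6 deg (roughly "normal")
# 200 mm -> ~10.3 deg (narrow view, high magnification)
```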

Lenses in the Lens


A camera lens is actually several lenses combined into one unit. A single converging lens
could form a real image on the film, but it would be warped by a number of aberrations.
One of the most significant warping factors is that different colors of light bend differently
when moving through a lens. This chromatic aberration essentially produces an image
where the colors are not lined up correctly.

Cameras compensate for this using several lenses made of different materials. The lenses
each handle colors differently, and when you combine them in a certain way, the colors are
realigned.
In a zoom lens, you can move different lens elements back and forth. By changing the
distance between particular lenses, you can adjust the magnification power -- the focal
length -- of the lens as a whole.

Cameras: Recording Light



The chemical component in a traditional camera is film. Essentially, when you expose film
to a real image, it makes a chemical record of the pattern of light.
It does this with a collection of tiny light-sensitive grains, spread out in a chemical
suspension on a strip of plastic. When exposed to light, the grains undergo a chemical
reaction.

Once the roll is finished, the film is developed -- it is exposed to other chemicals, which react with the light-sensitive grains. In black and white film, the developer chemicals darken the grains that were exposed to light. This produces a negative, where lighter areas appear darker and darker areas appear lighter, which is then converted into a positive image in printing.
Color film has three different layers of light-sensitive materials, which
respond, in turn, to red, green and blue. When the film is developed, these
layers are exposed to chemicals that dye the layers of film. When you
overlay the color information from all three layers, you get a full-color
negative.
For an in-depth description of this entire process, check out How
Photographic Film Works.
So far, we've looked at the basic idea of photography -- you create a real
image with a converging lens, and you record the light pattern of this real
image on a layer of light-sensitive material. Conceptually, this is all that's
involved in taking a picture. But to capture a clear image, you have to
carefully control how everything comes together.

Obviously, if you were to lay a piece of film on the ground and focus a real
image onto it with a converging lens, you wouldn't get any kind of usable
picture. Out in the open, every grain in the film would be completely
exposed to light. And without any contrasting unexposed areas, there's no
picture.
To capture an image, you have to keep the film in complete darkness until it's time to take the picture. Then, when you want to record an image, you let some light in. At its most basic level, this is all the body of a camera is -- a sealed box with a shutter that opens and closes between the lens and film. In fact, the term camera is shortened from camera obscura, literally "dark room" in Latin.
For the picture to come out right, you have to precisely control how much
light hits the film. If you let too much light in, too many grains will react, and
the picture will appear washed out. If you don't let enough light hit the film,
too few grains will react, and the picture will be too dark. In the next
section, we'll look at the different camera mechanisms that let you adjust
the exposure.

What's in a Name?
As it turns out, the term photography describes the photographic process quite accurately.
Sir John Herschel, a 19th century astronomer and one of the first photographers, came up
with the term in 1839. The term is a combination of two Greek words -- photos meaning light
and graphein meaning writing (or drawing). The term camera comes from camera obscura,
Latin for "dark room." The camera obscura was actually invented hundreds of years before
photography. A traditional camera obscura was a dark room with light shining through a lens
or tiny hole in the wall. Light passed through the hole, forming an upside-down real image
on the opposite wall. This effect was very popular with artists, scientists and curious
spectators.

Cameras: The Right Light



The plates in the iris diaphragm fold in on each other to shrink the aperture and expand out to make it wider.

In the last section, we saw that you need to carefully control the film's exposure to light, or
your picture will come out too dark or too bright. So how do you adjust this exposure level?
You have to consider two major factors:

How much light is passing through the lens

How long the film is exposed

To increase or decrease the amount of light passing through the lens, you have to change the size of the aperture -- the lens opening. This is the job of the iris diaphragm, a series of overlapping metal plates that can fold in on each other or expand out. Essentially, this mechanism works the same way as the iris in your eye -- it opens or closes in a circle, to shrink or expand the diameter of the lens opening. When the opening is smaller, it lets in less light, and when it is larger, it lets in more light.

The length of exposure is determined by the shutter speed. Most SLR cameras use
a focal plane shutter. This mechanism is very simple -- it basically consists of two
"curtains" between the lens and the film. Before you take a picture, the first curtain
is closed, so the film won't be exposed to light. When you take the picture, this
curtain slides open. After a certain amount of time, the second curtain slides in
from the other side, to stop the exposure.

When you click the camera's shutter release, the first curtain slides open, exposing the film. After a certain amount of time, the second curtain slides closed, ending the exposure. The time delay is controlled by the camera's shutter speed knob.
This simple action is controlled by a complex mass of gears, switches and springs,
like you might find inside a watch. When you hit the shutter button, it releases a
lever, which sets several gears in motion. You can tighten or loosen some of the
springs by turning the shutter speed knob. This adjusts the gear mechanism,
increasing or decreasing the delay between the first curtain opening and the second
curtain closing. When you set the knob to a very slow shutter speed, the shutter is
open for a very long time. When you set the knob to a very high speed, the second
curtain follows directly behind the first curtain, so only a tiny slit of the film frame
is exposed at any one time.
The ideal exposure depends on the size of the light-sensitive grains in the film. A
larger grain is more likely to absorb light photons than a smaller grain. The size of
the grains is indicated by a film's speed, which is printed on the canister. Different
film speeds are suited to different types of photography -- 100 ISO film, for
example, is optimal for shots in bright sunlight, while 1600 film should only be
used in relatively low light.

Inside a manual SLR camera, you'll find an intricate puzzle of gears and springs.

As you can see, there's a lot involved in getting the exposure right -- you have to balance film speed, aperture size and shutter speed to fit the light level in your shot. Manual SLR cameras have a built-in light meter to help you do this. The main component of the light meter is a panel of semiconductor light sensors. These sensors convert light energy into electrical energy, which the light meter system interprets based on the film speed and shutter speed settings.
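To see how these settings trade off against one another, photographers often use the standard exposure-value convention, EV = log2(N^2 / t), where N is the f-number and t is the shutter time. A small Python sketch, with sample settings of our own choosing (this convention is standard practice, not something specific to this article):

```python
import math

# Exposure value: EV = log2(N^2 / t), where N is the f-number and t is the
# shutter time in seconds. Combinations with the same EV admit the same
# total light to the film.

def exposure_value(f_number: float, shutter_s: float) -> float:
    return math.log2(f_number ** 2 / shutter_s)

# Three equivalent ways to meter the same scene (all EV ~11):
print(exposure_value(8, 1 / 30))    # ~10.9: small aperture, slow shutter
print(exposure_value(5.6, 1 / 60))  # ~10.9: one stop wider, one stop faster
print(exposure_value(4, 1 / 125))   # ~11.0: wider still, faster still
```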
Now, let's see how an SLR camera body directs the real image to the viewfinder
before you take the shot, and then directs it to the film when you press the shutter
button.

SLR Cameras vs. Point-and-Shoot



There are two types of consumer film cameras on the market -- SLR cameras and "point-and-shoot" cameras. The main difference is how the photographer sees the scene. In a point-and-shoot camera, the viewfinder is a simple window through the body of the camera. You don't see the real image formed by the camera lens, but you get a rough idea of what is in view.
In an SLR camera, you see the actual real image that the film will see. If you take the lens off of an SLR camera and look inside, you'll see how this works. The camera has a slanted mirror positioned between the shutter and the lens, with a piece of translucent glass and a prism positioned above it. This configuration works like a periscope -- the real image bounces off the lower mirror onto the translucent glass, which serves as a projection screen. The prism's job is to flip the image on the screen, so it appears right side up again, and redirect it onto the viewfinder window.

When you click the shutter button, the camera quickly flips the mirror out of the way, so the image is directed at the exposed film. The mirror is linked to the shutter timer system, so it stays up as long as the shutter is open. This is why the viewfinder is suddenly blacked out when you take a picture.

The mirror in an SLR camera directs the real image to the viewfinder. When you hit the shutter button,
the mirror flips up so the real image is projected onto the film.

In this sort of camera, the mirror and the translucent screen are set up so they
present the real image exactly as it will appear on the film. The advantage of this
design is that you can adjust the focus and compose the scene so you get exactly
the picture you want. For this reason, professional photographers typically use SLR
cameras.
These days, most SLR cameras are built with both manual and automatic controls,
and most point-and-shoot cameras are fully automatic. Conceptually, automatic
cameras are pretty much the same as fully manual models, but everything is
controlled by a central microprocessor instead of the user. The central
microprocessor receives information from the autofocus system and the light
meter. Then it activates several small motors, which adjust the lens and open and
close the aperture. In modern cameras, this is a pretty advanced computer system.

Automatic point-and-shoot cameras use circuit boards and electric motors, instead of gears and springs.

In the next section, we'll look at the other end of the spectrum -- a camera design
with no complex machinery, no lens and barely any moving parts.

Homemade Cameras

As we've seen in this article, even the most basic, completely manual SLR is a complex,
intricate machine. But cameras are not inherently complex -- in fact, the basic elements are
so simple you can make one yourself with only a few inexpensive supplies.
The simplest sort of homemade camera doesn't use a lens to create a real image -- it
gathers light with a tiny hole. These pinhole cameras are easy to make and a lot of fun to
use -- the only hard part is that you have to develop the film yourself.

A pinhole camera is simply a box with a tiny hole in one side and some film or photographic paper on the opposite side. If the box is otherwise "light-tight," the light coming through the pinhole will form a real image on the film. The scientific principle behind this is very simple.
If you were to shine a flashlight in a dark room, through a tiny hole in a wide
piece of cardboard, the light would form a dot on the opposite wall. If you
moved the flashlight, the light dot would also move -- light beams from the
flashlight move through the hole in a straight line.
In a larger visual scene, every particular visible point acts like this flashlight.
Light reflects off each point of an object and travels out in all directions. A

small pinhole lets in a narrow beam from each point in a scene. The beams
travel in a straight line, so light beams from the bottom of the scene hit the
top of the piece of film, and vice-versa. In this way, an upside down image
of the scene forms on the opposite side of the box. Since the hole is so
small, you need a fairly long exposure time to let enough light in.
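How small should the hole be? A common rule of thumb, often credited to Lord Rayleigh, balances geometric blur against diffraction blur: d is approximately 1.9 * sqrt(f * wavelength), where f is the hole-to-film distance. A hedged Python sketch, with box depth and wavelength values we've assumed for illustration:

```python
import math

# Rule-of-thumb optimal pinhole diameter: d = 1.9 * sqrt(f * wavelength),
# where f is the hole-to-film distance. A common approximation often
# credited to Lord Rayleigh; the names and numbers here are ours.

GREEN_LIGHT_M = 550e-9  # mid-visible wavelength, in meters

def optimal_pinhole_mm(hole_to_film_mm: float) -> float:
    f_m = hole_to_film_mm / 1000.0
    return 1.9 * math.sqrt(f_m * GREEN_LIGHT_M) * 1000.0

# For an oatmeal-box camera roughly 120 mm deep:
print(f"{optimal_pinhole_mm(120):.2f} mm")  # ~0.49 mm, in the same ballpark
                                            # as the No. 10 needle used below
```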
There are a number of ways to build this sort of camera -- some
enthusiasts have even used old refrigerators and cars as light-tight boxes.
One of the most popular designs uses an ordinary cylindrical oatmeal box, coffee can or similar container. It's easiest to use a cardboard container with a removable plastic lid.
You can build this camera in a few simple steps:
1. The first thing to do is paint the lid black, inside and out. This helps light-proof
the box. Be sure to use flat black paint, rather than glossy paint that will reflect
more light.
2. Cut a small hole (about the size of a matchbox) in the center of the canister
bottom (the nonremovable side).
3. Cut out a piece of heavy-duty aluminum foil, or heavy black paper, about
twice the size of the hole in the bottom of the canister.
4. Take a No. 10 sewing needle and carefully make a hole in the center of the
foil. You should only insert the needle halfway, or the hole will be too big. For
best results, position the foil between two index cards and rotate the needle as
you push it through.
5. Tape the foil over the hole in the bottom of the canister, so the pinhole is
centered. Attach the foil securely, with black tape, so light only shines through
the pinhole.
6. All you need for the shutter is a piece of heavy black paper large enough to cover most of the canister bottom. Tape one side of the paper securely to the side of the canister bottom, so it makes a flap over the pinhole in the middle. Tape the other side of the flap closed on the other side of the pinhole. Keep the flap closed until you are ready to take a picture.

7. To load the camera, attach any sort of film or photographic paper to the
inside of the canister lid. Of course, for the film to work, you must load it and
develop it in complete darkness. With this camera design, you won't be able to
simply drop the film off at the drug store -- you'll have to develop it yourself or get
someone to help you.

Choosing a good camera design, film type and exposure time is largely a
matter of trial and error. But, as any pinhole enthusiast will tell you, this
experimentation is the most interesting thing about making your own
camera. To find out more about pinhole photography and see some great
camera designs, check out some of the sites listed on the next page.
Throughout the history of photography, there have been hundreds of different camera systems. But amazingly, all these designs -- from the simplest homemade box camera to the newest digital camera -- combine the same basic elements: a lens system to create the real image, a light-sensitive sensor to record the real image, and a mechanical system to control how the real image is exposed to the sensor. And when you get down to it, that's all there is to photography!
For more information on cameras, light, film and related topics, check out
the links on the next page.


Why do people have red eyes in flash photographs?
We've all seen photographs where the people in the picture have spooky red eyes. These are photos taken at night with a flash. Where do the red eyes come from?
The red color comes from light that reflects off of the retinas in our eyes. In many animals, including dogs, cats and deer, the retina has a special reflective layer called the tapetum lucidum that acts almost like a mirror at the backs of their eyes. If you shine a flashlight or headlights into their eyes at night, their eyes shine back with bright, white light.
Humans don't have this tapetum lucidum layer in their retinas. If you shine
a flashlight in a person's eyes at night, you don't see any sort of reflection.
The flash on a camera is bright enough, however, to cause a reflection off
of the retina -- what you see is the red color from the blood vessels
nourishing the eye.
Many cameras have a "red eye reduction" feature. In these cameras, the
flash goes off twice -- once right before the picture is taken, and then again
to actually take the picture. The first flash causes people's pupils to
contract, reducing "red eye" significantly. Another trick is to turn on all the
lights in the room, which also contracts the pupil.
Another way to reduce or eliminate "red eye" in pictures is to move the
flash away from the lens. On most small cameras, the flash is only an inch
or two away from the lens, so the reflection comes right back into the lens
and shows up on the film. If you can detach the flash and hold it several
feet away from the lens, that helps a lot. You can also try bouncing the flash
off the ceiling if that is an option.

How Camera Flashes Work


If you've read How Cameras Work, you know that it takes a lot of light to
expose a vivid image onto film. For most indoor photography, where there
is relatively little ambient light, you either need to expose the film for a
longer period of time or momentarily increase the light level to get a clear
picture. Increasing the exposure time doesn't work well for most subjects,
because any quick motion, including the movement of the camera itself,
makes for a blurry picture.
Electronic flashes are a simple, cheap solution to this inherent problem in
photography. Their sole purpose is to emit a short burst of bright light when

you release the shutter. This illuminates the room for the fraction of a
second the film is exposed.
In this article, we'll find out exactly how these devices carry out this important task. As we'll
see, a standard camera flash is a great demonstration of how basic electronic components
can work together in a simple circuit.

Making a Flash

A typical camera flash tube, removed from its housing, looks like a miniature neon light.

A basic camera flash system, like you would find in a point-and-shoot camera, has three major parts.

A small battery, which serves as the power supply

A gas discharge tube, which actually produces the flash

A circuit (made up of a number of electrical components), which connects the power supply to the discharge tube

The two components on the ends of the system are very simple. When you hook up a battery's two terminals to a circuit, the battery forces electrons to flow through the circuit from one terminal to the other. This flow of electrons, or current, provides energy to the various things connected to the circuit (see How Batteries Work for more information).

The discharge tube is a lot like a neon light or fluorescent lamp. It consists of a
tube filled with xenon gas, with electrodes on either end and a metal trigger
plate at the middle of the tube.

The tube sits in front of the trigger plate.

The trigger plate is hidden by reflective material, which directs the flash light forward.

The basic idea is to conduct electrical current -- to move free electrons -- through
the gas in the tube, from one electrode to the other. As the free electrons move,
they energize xenon atoms, causing the atoms to emit visible light photons
(see How Light Works for details on how atoms generate photons).
You can't do this with the gas in its normal state, because it has very few free
electrons -- that is, nearly all the electrons are bonded to atoms, so there are almost
no charged particles in the gas. To make the gas conductive, you have to introduce
free electrons into the mix.

Another camera flash tube design: In this curved tube, the trigger plate is attached directly to the glass
on the tube.

This is the metal trigger plate's job. If you briefly apply a high
positive voltage (electromotive force) to this plate, it will exert a strong attraction
on the negatively charged electrons in the atoms. If this attraction is strong enough,
it will pull the electrons free from the atoms. The process of removing an atom's
electrons is called ionization.
The free electrons have a negative charge, so once they are free, they will move
toward the positively charged terminal and away from the negatively charged
terminal. As the electrons move, they collide with other atoms, causing these atoms
to lose electrons as well, further ionizing the gas. The speeding electrons collide
with xenon atoms, which become energized and generate light (see How
Fluorescent Lamps Work for more information).
To accomplish this, you need relatively high voltage (electrical "pressure"). It takes
a couple hundred volts to move electrons between the two electrodes, and you need
a few thousand volts to introduce enough free electrons to make the gas
conductive.
A typical camera battery only offers 1.5 volts, so the flash circuit needs to boost the
voltage substantially. In the next section, we'll find out how it does this.

The Boost

In the last section, we saw that a flash circuit needs to turn a battery's low voltage into a
high voltage in order to light up a xenon tube. There are dozens of ways to arrange this sort
of step-up circuit, but most configurations contain the same basic elements. All of these
components are explained in other HowStuffWorks articles:

Capacitors - Devices that store energy by collecting charge on plates (see How Capacitors Work)

Inductors - Coiled lengths of wire that store up energy by generating magnetic fields (see How Inductors Work)

Diodes - Semiconductor devices that let current flow freely in only one direction (see How Semiconductors Work)

Transistors - Semiconductor devices that can act as electrically controlled switches or amplifiers (see How Amplifiers Work)

The diagram below shows how all of these elements come together in a basic flash circuit.

Taken in its entirety, this diagram may seem a little overwhelming, but if we
break it down into its component parts, it isn't that complicated.

Let's start with the heart of the circuit, the main transformer, the device that actually boosts the voltage. The transformer consists of two inductors in close proximity to each other (for example, one might be wound around the other, or both might be wound around an iron core).

If you've read How Electromagnets Work, you know that passing current
through a coiled length of wire will generate a magnetic field. If you've
read How Inductors Work, you know that a fluctuating magnetic field,
generated by fluctuating electric current, will cause a voltage change in a
conductor. The basic idea of a transformer is to run current through one
inductor (the primary coil) to magnetize another conductor (the secondary
coil), causing a change in voltage in the second coil.
If you vary the size of the two inductors -- the number of loops in each coil -- you can boost (or reduce) voltage from the primary to the secondary. In a step-up transformer like the one in the flash circuit, the secondary coil has many more loops than the primary coil. As a result, the induced voltage is greater in the secondary coil than in the primary coil. The trade-off is that the secondary coil carries a weaker current than the primary coil.
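In idealized form, the relationship is just the turns ratio: Vs / Vp = Ns / Np, with current reduced by the same factor. A Python sketch with illustrative numbers (not measurements from a real flash unit):

```python
# Ideal transformer: V_secondary / V_primary = N_secondary / N_primary,
# and (ignoring losses) power in equals power out, so current drops by
# the same ratio. All values below are illustrative assumptions.

def step_up(v_primary: float, i_primary: float,
            n_primary: int, n_secondary: int) -> tuple[float, float]:
    ratio = n_secondary / n_primary
    return v_primary * ratio, i_primary / ratio

v_out, i_out = step_up(v_primary=1.5, i_primary=2.0,
                       n_primary=10, n_secondary=2000)
print(v_out, i_out)  # 300.0 V, 0.01 A: much higher voltage, much weaker current
```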
To boost voltage in this way, you need a fluctuating current, like the AC (alternating current) in your house. But a battery puts out constant DC (direct current), which does not fluctuate. The inductor's magnetic field only changes when DC current initially passes through it. In the next section, we'll find out how the flash circuit handles this problem.

Master and Slave


Professional photographers often set up flashes all around a subject to achieve better
lighting effects. In this arrangement, one master flash may be triggered by the camera
shutter, while other flashes are triggered by the master. Some slave flash designs use the
master flash's light itself as a trigger. The slave flash has a small light sensor that triggers
the flash circuit when it detects a sudden pulse of light.

Oscillator and Capacitor



In the last section, we saw that transformers need fluctuating current to work properly. The
flash circuit provides this fluctuation by continually interrupting the DC current flow -- it
passes rapid, short pulses of DC current to continually fluctuate the magnetic field.
The circuit does this with a simple oscillator. The oscillator's main elements are the primary
and secondary coils of the transformer, another inductor (the feedback coil), and
a transistor, which acts as an electrically controlled switch.

When you press the charging button, it closes the charging switch so that a short burst of current flows from the battery through the feedback coil to the base of the transistor. Applying current to the base of the transistor allows current to flow from the transistor collector to the emitter -- it makes the transistor briefly conductive (see How Amplifiers Work for details).
When the transistor is "switched on" in this way, a burst of current can flow from
the battery to the primary coil of the transformer. The burst in current causes a
change in voltage in the secondary coil, which in turn causes a change in voltage in
the feedback coil. This voltage in the feedback coil conducts current to the
transistor base, making the transistor conductive again, and the process repeats.
The circuit keeps interrupting itself in this way, gradually boosting voltage through the transformer. This oscillating action produces the high-pitched whine you hear when a flash is charging up.

Flash capacitor from a regular point-and-shoot camera

The high-voltage current then passes through a diode, which acts as a rectifier -- it only lets current flow one way, so it changes the fluctuating current from the transformer back into steady direct current.
The flash circuit stores this high-voltage charge in a large capacitor. Like a battery,
the capacitor holds the charge until it's hooked up to a closed circuit.
The capacitor is connected to the two electrodes on the flash tube at all times, but
unless the xenon gas is ionized, the tube can't conduct the current, so the capacitor
can't discharge.
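The standard formula for the energy stored in a capacitor is E = 1/2 * C * V^2. A quick Python sketch using plausible (assumed) flash-circuit values, together with the 200 volts this article mentions for the capacitor:

```python
# Energy stored in a capacitor: E = 0.5 * C * V^2.
# The 120 uF capacitance is an assumed, plausible value for a
# point-and-shoot flash capacitor; 200 V comes from this article.

def capacitor_energy_joules(capacitance_f: float, voltage_v: float) -> float:
    return 0.5 * capacitance_f * voltage_v ** 2

print(capacitor_energy_joules(120e-6, 200))  # 2.4 J, released in a brief burst
```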
The capacitor circuit is also connected to a smaller gas discharge tube by way of a
resistor. When the voltage in the capacitor is high enough, current can flow through
the resistor to light up the small tube. This acts as an indicator light, telling you
when the flash is ready to go.

The capacitor in a typical camera flash circuit can store a lot of juice. We charged this one up and then discharged it by connecting the two terminals. (Kids, don't try this at home!)

The flash trigger is wired to the shutter mechanism. When you take a picture, the
trigger closes briefly, connecting the capacitor to a second transformer. This
transformer boosts the 200-volt current from the capacitor up to between 1,000 and
4,000 volts, and passes the high-voltage current onto the metal plate next to the
flash tube. The momentary high voltage on the metal plate provides the necessary
energy to ionize the xenon gas, making the gas conductive. The flash lights up in sync with the shutter opening.

Different electronic flashes may have more complex circuitry than this, but most
work in the same basic way. It's simply a matter of boosting battery voltage to
trigger a small gas discharge lamp.
For much more information on camera flashes, including flashes that "read" the
subject in front of them, check out the links on the next page.

How Photographic Film Works


People have been using cameras and film for more than 100 years, both for still photography and movies. There is something magical about the process -- humans are visual creatures, and a picture really does paint a thousand words for us!
Despite its long history, film remains the best way to capture still and
moving pictures because of its incredible ability to record detail in a very
stable form. In this article, you'll learn all about how film works, both inside
your camera and when it is developed, so you can understand exactly what
is going on!

The Basics

What does it really mean when you "take" a picture with a camera? When you click the shutter, you have frozen a moment in time by recording the visible light reflected from the objects in the camera's field of view. In order to do that, the reflected light causes a chemical change to the photographic film inside the camera. The chemical record is very stable, and can be subsequently developed, amplified and modified to produce a representation (a print) of that moment that you can put in your photo album or your wallet, or that can be reproduced millions of times in magazines, books and newspapers. You can even scan the photograph and put it on a Web site.
To understand the whole process, you'll learn some of the science behind photography -- exposing the image, processing the image, and producing a print of the image. It all starts with an understanding of the portion of the electromagnetic spectrum that human eyes are sensitive to: light.

Light and Energy


Energy from the sun comes to the Earth in visible and invisible portions of
the electromagnetic spectrum. Human eyes are sensitive to a small portion of that
spectrum that includes the visible colors -- from the longest visible wavelengths of light
(red) to the shortest wavelengths (blue).
Microwaves, radio waves, infrared, and ultraviolet waves are portions of the invisible electromagnetic spectrum. We cannot see these portions of the spectrum with our eyes, but we have invented devices (radios, infrared detectors, ultraviolet dyes, etc.) that let us detect these portions as well.

Light is neither purely a wave nor purely a particle, but has properties of both. Light can be focused like a wave, but its energy is distributed in discrete packets called photons. The energy of each photon is inversely related to the wavelength of the light -- blue light is the most energetic visible light, while red light has the least energy per photon. Ultraviolet light (UV) is more energetic still, but invisible to human eyes. Infrared light is also invisible, but if it is strong enough our skin detects it as heat.
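The relationship is E = hc / wavelength: photon energy equals Planck's constant times the speed of light, divided by the wavelength. A quick Python check using standard physical constants (the wavelength labels are our own round numbers):

```python
# Photon energy: E = h * c / wavelength. Shorter wavelength -> more energy.

PLANCK_H = 6.626e-34  # joule-seconds
LIGHT_C = 2.998e8     # meters per second

def photon_energy_ev(wavelength_nm: float) -> float:
    joules = PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert joules to electron-volts

for name, wl in (("ultraviolet", 300), ("blue", 450),
                 ("red", 650), ("infrared", 900)):
    print(f"{name:>11}: {photon_energy_ev(wl):.2f} eV")
# ultraviolet ~4.1 eV > blue ~2.8 eV > red ~1.9 eV > infrared ~1.4 eV
```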
It is the energy in each photon of light that causes a chemical change to the
photographic detectors that are coated on the film. The process whereby
electromagnetic energy causes chemical changes to matter is known
as photochemistry. Materials can be carefully engineered so that they remain chemically stable until they are exposed to radiation (light). Photochemistry comes in many different forms. For example, specially formulated plastics can be hardened (cured) by exposure to ultraviolet light, while exposure to visible light has no effect. When you get a sun tan, a photochemical reaction has caused the pigments in your skin to darken. Ultraviolet rays are particularly harmful to your skin because they are so energetic.

Inside a Roll of Film



If you were to open a 35-mm cartridge of color print film, you would find a long strip of plastic that has coatings on each side. The heart of the film is called the base, and it starts as a transparent plastic material (celluloid) that is 4 thousandths to 7 thousandths of an inch (about 0.1 to 0.18 mm) thick. The back side of the film (usually shiny) has various coatings that are important to the physical handling of the film in manufacture and in processing.
It is the other side of the film that we are most interested in, because this is where the photochemistry happens. There may be 20 or more individual layers coated here that are collectively less than one thousandth of an inch thick. The majority of this thickness is taken up by a very special binder that holds the imaging components together. It is a marvelous and ubiquitous material called gelatin. A specially purified version of edible gelatin is used for photography -- yes, the same thing that makes Jell-O jiggly holds film together, and has done so for more than 100 years! Gelatin comes from animal hides and bones. Thus, there is an important link between a cow, a hamburger and a roll of film that you might not have appreciated.

Some of the layers coated on the transparent film do not form images. They are there to filter light, or to control the chemical reactions in the processing steps. The imaging layers contain sub-micron sized grains of silver-halide crystals that act as the photon detectors. These crystals are the heart of photographic film. They undergo a photochemical reaction when they are exposed to various forms of electromagnetic radiation -- light. In addition to visible light, the silver-halide grains can be sensitized to infrared radiation.
Silver-halide grains are manufactured by combining silver-nitrate and halide
salts (chloride, bromide and iodide) in complex ways that result in a range
of crystal sizes, shapes and compositions. These primitive grains are then
chemically modified on their surface to increase their light sensitivity.
The unmodified grains are only sensitive to the blue portion of the
spectrum, and they are not very useful in camera film. Organic molecules
known as spectral sensitizers are added to the surface of the grains to
make them more sensitive to blue, green and red light. These molecules

must adsorb (attach) to the grain surface and transfer the energy from a
red, green, or blue photon to the silver-halide crystal as a photo-electron.
Other chemicals are added internally to the grain during its growth process,
or on the surface of the grain. These chemicals affect the light sensitivity of
the grain, also known as its photographic speed (ISO or ASA rating).

Film Options

When you purchase a roll of film for your camera, you have a lot of choices. Those products
that have the word "color" in their name are generally used to produce color prints that you
can hold in your hand and view by reflected light. The negatives that are returned with your
prints are the exposures that were made in your camera. Those products that have the
word "chrome" in their name produce a color transparency (slides) that requires some form
of projector for viewing. In this case, the returned slides are the actual film that was exposed
in your camera.
Once you decide on prints or slides, the next major decision is the film speed. Generally,
the relative speed rating of the film is part of its name (MYColor Film 200, for example). ISO
and ASA speed ratings are also generally printed somewhere on the box. The higher the
number, the "faster" the film. "Faster" means increased light sensitivity. You want a faster
film when you're photographing quickly moving objects and you want them to be in focus, or
when you want to take a picture in dimly lit surroundings without the benefit of additional
illumination (such as a flash).

When you make film faster, the trade-off is that the increased light
sensitivity comes from the use of larger silver-halide grains. These larger
grains can result in a blotchy or "grainy" appearance to the picture,
especially if you plan to make enlargements from a 35-mm negative.
Professional photographers may use a larger-format negative to reduce the
degree of enlargement and the appearance of grain in their prints. The
trade-off between photographic speed and graininess is an inherent part of
conventional photography. Photographic-film manufacturers are constantly
making improvements that result in faster films with less grain.
A slow-speed film is desirable for portrait photography, where you can
control the lighting of the subject, the subject is stationary, and you are

likely to want a large print from the negative. The finer silver-halide grains
in such film produce the best results.
The advanced amateur photographer might encounter additional film
designations such as tungsten balanced or daylight balanced. A
tungsten-balanced film is meant to be used indoors where the primary
source of light is from tungsten filament light bulbs. Since the visible
illumination coming from a light bulb is different than from the sun
(daylight), the spectral sensitivity of the film must be modified to produce a
pleasing picture. This is most important when using a transparency film.

Film Speed
Film comes with an ASA (American Standards Association) or ISO (International Organization for Standardization) rating that tells you its speed. The ISO and ASA scales are identical. Here are some of the most common film speeds:

ISO 100 - good for outdoor photography in bright sunlight

ISO 200 - good for outdoor photography or brightly lit indoor photography

ISO 400 - good for indoor photography

ISO 1000 or 1600 - good for indoor photography where you want to avoid using a
flash
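One classic way to connect these speeds to real settings is the "sunny 16" rule of thumb: in bright sun at f/16, use a shutter time of roughly 1/ISO seconds. A tiny Python sketch of that folk rule (it comes from general photographic practice, not from this article):

```python
# "Sunny 16" rule of thumb: in bright sunlight at f/16, a good shutter
# time is roughly 1 / (film speed) seconds. Illustrative sketch only.

def sunny16_shutter(iso: int) -> str:
    return f"f/16 at about 1/{iso} s in bright sunlight"

for iso in (100, 200, 400, 1600):
    print(f"ISO {iso:>4}: {sunny16_shutter(iso)}")
# Faster film reaches a usable exposure with less light or less time.
```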

Taking a Picture: Film Speed


The first step after loading the film is to focus the image on the surface of the film. This is done by adjusting glass or plastic lenses that bend the reflected light from the objects onto the film. Older cameras required manual adjustment, but today's modern cameras use solid-state detectors to automatically focus the image, or else they are fixed-focus (no adjustment possible).


Next, the proper exposure must be set. The film speed is the first factor, and most of today's cameras automatically sense which speed film is being used from the markings on the outside of a 35-mm cartridge. The next two factors are interdependent, since the exposure of the film is the product of light intensity and exposure time. The light intensity is determined by how much reflected light is reaching the film plane. You used to have to carry a light meter to set the camera exposure, but most of today's cameras have built-in exposure meters. In addition to the brightness of the scene, the larger the diameter of the camera lens, the more light will be gathered. Obviously, the trade-off here is the cost of the camera and the resulting size and weight. If there is too much light reaching the film plane for the exposure-time setting, the lens can be "stopped down" (its opening reduced in diameter) using the f-stop adjustment. This is just like the iris in your eye reacting to bright sunlight.

Photographic film has a limited exposure latitude. If it is underexposed, it will not detect all the reflected light from a scene. The resulting print appears muddy black and lacks detail. If it is overexposed, all of the silver-halide grains are exposed, so there is no discrimination between lighter and darker portions of the scene. The print appears washed out, with little color intensity.
There is an advantage to having a faster film in your camera. It allows you to have a smaller aperture setting for the same exposure time. This smaller aperture diameter produces a larger depth of field. Depth of field determines how much of the subject matter in your print is in focus. Sometimes, you may want to have a limited depth of field, so only the primary object is in focus and the background is out of focus.
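Here's a hedged Python sketch of that trade-off, using the standard f-stop series (each full stop halves the light, so doubling the film speed lets you close the aperture by one stop at the same shutter time; the sample values are ours, not the article's):

```python
import math

# The f-number is focal length / aperture diameter, so "stopping down" to
# a larger f-number shrinks the opening. Standard full stops step by
# sqrt(2), and each stop halves the light. Doubling the ISO buys one stop.

STOPS = [2.8, 4, 5.6, 8, 11, 16, 22]

def stop_down_for_faster_film(f_number: float, iso_old: int, iso_new: int) -> float:
    stops_gained = round(math.log2(iso_new / iso_old))
    new_index = min(STOPS.index(f_number) + stops_gained, len(STOPS) - 1)
    return STOPS[new_index]

# Swapping ISO 100 film for ISO 400 gains two stops: f/5.6 becomes f/11
# at the same shutter speed, with noticeably more depth of field.
print(stop_down_for_faster_film(5.6, 100, 400))  # 11
```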

Taking a Picture: Exposure Chemistry


So, either manually or automatically, you now have an image that is focused on the film surface, and the proper exposure has been set through a combination of film speed, aperture setting (f-stop) and exposure time (usually fractions of a second, from one thirtieth to one one-thousandth of a second). Say cheese and push the button. What happened? While outwardly unexciting, the moment of exposure is when a lot of photochemistry happens.


By opening the camera's shutter for a fraction of a second, you formed a latent
image of the visible energy reflected off the objects in your viewfinder. The brightest
portion of your picture exposed the majority of the silver-halide grains in that
particular part of the film. In other parts of the image, less light energy reached the
film, and fewer grains were exposed.

When a photon of light is absorbed by the spectral sensitizer sitting on the surface of a silver-halide grain, an electron is raised from the valence band to the conduction band, where it can be transferred to the conduction band of the silver-halide grain's electronic structure. A conduction-band electron can then go on to combine with a positive hole in the silver-halide lattice and form a single atom of silver. This single atom of silver is unstable. However, if enough photoelectrons are present at the same time in the crystal lattice, they may combine with enough positive holes to form a stable latent-image site. It is generally thought that a stable latent-image site is at least two to four silver atoms per grain. A silver-halide grain contains billions of silver-halide molecules, and it only takes two to four atoms of uncombined silver to form the latent-image site.
In color film, this process happens separately for exposure to the red, green and blue portions of the reflected light. There is a separate layer in the film for each color: Red light forms a latent image in the red-sensitive layer of the film; green light forms a latent image in the green-sensitive layer; blue light forms a latent image in the blue-sensitive layer. The image is called "latent" because you can't detect its presence until the film is processed. The true photoefficiency of a film is measured by its performance as a photon detector. Any photon that reaches the film but does not form a latent image is lost information. Modern color films generally take from 20 to 60 photons per grain to produce a developable latent image.

Developing Film: Black & White


When you deliver a roll of exposed film to the photo processor, it contains the latent images of the exposures that you made. These latent images must be amplified and stabilized in order to make a color negative that can then be printed and viewed by reflected light.
Before we cover the development of a color negative film, it might be best to step back and process a black-and-white negative. If you used black-and-white film in your camera, the same latent-image formation process would have occurred, except the silver-halide grains would have been sensitized to all wavelengths of visible light rather than to just red, green or blue light. In black-and-white film, the silver-halide grains are coated in just one or two layers, so the development process is easier to understand. Here is what happens:

In the first step of processing, the film is placed in a developing agent that is actually a reducing agent. Given the chance, the reducing agent will convert all the silver ions into silver metal. Those grains that have latent-image sites will develop more rapidly. With the proper control of temperature, time and agitation, grains with latent images will become pure silver. The unexposed grains will remain as silver-halide crystals.

The next step is to complete the developing process by rinsing the film with
water, or by using a "stop" bath that arrests the development process.

The unexposed silver-halide crystals are removed in what is called the fixing
bath. The fixer dissolves only silver-halide crystals, leaving the silver metal
behind.

In the final step, the film is washed with water to remove all the processing
chemicals. The film strip is dried, and the individual exposures are cut into
negatives.

When you are finished, you have a negative image of the original scene. It
is a negative in the sense that it is darkest (has the highest density of
opaque silver atoms) in the area that received the most light exposure. In
places that received no light, the negative has no silver atoms and is clear.
In order to make it a positive image that looks normal to the human eye, it
must be printed onto another light-sensitive material (usually photographic
paper).
In this development process, the magic binder gelatin played an important
part. It swelled to allow the processing chemicals to get to the silver-halide
grains, but kept the grains in place. This swelling process is vital for the
movement of chemicals and reaction products through the layers of a
photographic film. So far, no one has found a suitable substitute for gelatin
in photographic products.

Developing Film: Color


This figure shows a magnified cross-section of a color negative film exposed to yellow light and then
processed. In the additive system, yellow is red plus green. On the film, therefore, the red-sensitive and
green-sensitive layers have formed cyan and magenta dyes, respectively.

If your film were a color negative type (that gives you a print when returned from the photo
processor), the processing chemistry is different in several major ways.
The development step uses reducing chemicals, and the exposed silver-halide grains
develop to pure silver. Oxidized developer is produced in this reaction, and the oxidized
developer reacts with chemicals called couplers in each of the image-forming layers. This
reaction causes the couplers to form a color, and this color varies depending on how the
silver-halide grains were spectrally sensitized. A different color-forming coupler is used in
the red-, green- and blue-sensitive layers. The latent image in the different layers forms a
different colored dye when the film is developed.

Red-sensitive layers form a cyan-colored dye.

Green-sensitive layers form a magenta-colored dye.

Blue-sensitive layers form a yellow-colored dye.

The development process is stopped either by washing or with a stop bath.

The unexposed silver-halide grains are removed using a fixing solution. The silver that was developed in the first step is removed by bleaching chemicals.

The negative image is then washed to remove as much of the chemicals and reaction products as possible. The film strips are then dried.
The resultant color negatives look very bizarre. First, unlike your black-and-white negative, they contain no silver. In addition to being a color opposite (negative), they have a strange orange-yellow hue. They are a color negative in the sense that the more red exposure, the more cyan dye is formed. Cyan is a mix of blue and green (or white minus red). The overall orange hue is the result of masking dyes that help to correct imperfections in the overall color reproduction process. The green-sensitive image layers contain magenta dye, and the blue-sensitive image layers contain yellow dye.
The colors formed in the color negative film are based on the subtractive
color formation system. The subtractive system uses one color (cyan,
magenta or yellow) to control each primary color. The additive color system
uses a combination of red, green, and blue to produce a color.
Your television is an additive system. It uses small dots of red, green, and
blue phosphor to reproduce a color. In a photograph, the colors are layered
on top of each other, so a subtractive color reproduction system is required.

Red is controlled by cyan dye

Green is controlled by magenta dye

Blue is controlled by yellow dye
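In the idealized subtractive model, each dye amount is simply "white minus" its primary: cyan = 1 - red, magenta = 1 - green, yellow = 1 - blue. A small Python sketch of that toy model (our own illustration, not the article's chemistry), showing why a positive print of a pure yellow object needs only yellow dye:

```python
# Idealized subtractive color: each dye absorbs one primary from white
# light, so to reproduce a color on a positive print,
# cyan = 1 - red, magenta = 1 - green, yellow = 1 - blue (values in 0..1).
# Note that the negative inverts this, as described above.

def dye_amounts(r: float, g: float, b: float) -> dict[str, float]:
    return {"cyan": 1 - r, "magenta": 1 - g, "yellow": 1 - b}

# A pure yellow object (red + green light, no blue) needs no cyan or
# magenta dye -- only yellow, which subtracts blue from white:
print(dye_amounts(1.0, 1.0, 0.0))  # {'cyan': 0.0, 'magenta': 0.0, 'yellow': 1.0}
```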

Making the Prints: Black & White


Color negatives are not very satisfying to look at. They are small, and the colors are strange to say the least. In order to make a color print, the negatives must be used to expose the color print paper.
Color print paper is a high-quality paper that is specially made for this application. It is made waterproof by extruding plastic layers on both sides. The face side is then coated with light-sensitive silver-halide grains that are spectrally sensitized to red, green and blue light. Since the exposure conditions for a color print paper are carefully controlled, the paper's layer structure is much simpler than that of the color negative film. Once again, gelatin plays a key part as the primary binder that holds the image-forming grains and the color-forming components (couplers) together in very thin, individual layers on the paper surface.

Let's start with a black-and-white negative and make a print. You have the
choice of an enlargement or a direct-contact print. If you want a larger size
print than the original negative, you will need an enlarger, which is basically
a projector with a lens for focusing the image and a controlled light source.
The negative is placed in the enlarger, and it is projected onto a flat surface
that holds the paper. The image is carefully examined to ensure that it is in focus. If not, adjustments can be made to the lens and the projection distance. Once the size of the image and its focus are satisfactory, all the lights are shut off,
and the black-and-white paper is placed onto the flat surface. The paper is
exposed for a specified amount of time using the light from the enlarger. A
latent image is formed in the exposed silver grains. This time, the densest
areas of the negative receive the least amount of light, and therefore become
the brightest and most reflective parts of the prints. The development
process is much the same as for the black-and-white negative film, except
the paper is much larger than the film, and agitation of the processing
chemicals becomes more critical and more difficult. The final image is
actually developed silver, and by carefully washing the prints to remove all
the unwanted materials, these prints can last a very long time.

Making the Prints: Color


This figure shows a magnified cross-section of a color negative film exposed to white light and then processed. White light passes through the film to form blue light, which activates the blue-sensitive layer on the color print paper to create yellow dye.

Prints from color negatives are usually done by a large central lab that handles
printing and processing for many local drug stores and supermarkets, or they may be
done in-house using a mini-lab. The mini-lab is set up to do one roll of film at a time,
whereas the production houses splice many rolls together and handle a high volume of pictures on a semi-continuous basis. In either case, the steps are the same as already discussed for generating a black-and-white negative image. The major difference comes in the printing process, where long rolls of color paper are preloaded into a printer. The roll of negatives is loaded, and the printer operator works in normal light to preview each negative and make adjustments to the color balance.
The color balance is adjusted by adding subtractive color filters to make the print
more pleasing, particularly when it has been exposed incorrectly. There is only so
much correction that can be done, so don't expect miracles. Once a full roll of paper
is exposed, or a single roll of film has been printed (in the case of a mini-lab), the
paper is processed.

Here are the steps in developing the color print paper after it is exposed:

1. The latent-image sites are developed, and oxidized developer molecules combine with the color-forming couplers to create a silver image and a dye image. The reaction is stopped by a washing step.
2. The silver image and any remaining unexposed silver halide is removed in a combined bleach-plus-fix solution (called the BLIX).
3. The print is then carefully washed to remove any residual chemicals.
4. The print is dried.
Once again, the gelatin binder swells to allow the processing chemicals access to
the silver-halide grains, and allows fresh water to rinse out the by-products. The
colored image should contain no residual silver.
As a final example of the color printing process, let's take a look at our negative that was exposed to a pure yellow object. When the resultant negative is placed in the printer and white light is shone through the negative onto the color paper, here is what happens. The white-light exposure is equivalent to a normal color print exposure. Only blue light gets through the color negative and exposes the color paper. The exposed color paper then forms yellow dye in the blue-sensitive layer, and the original color is reproduced.
If you've made it this far, you are to be congratulated! Photography isn't as easy as
it seems, but then again, that is what makes it so remarkable. The ability to capture
and record individual photons of light and turn them into a lasting memory requires
many steps. If any one of them goes wrong, the entire result may be lost. On the
other hand, when all the stuff works, the results are truly astounding.
For more information on photographic film and related topics, check out the links
on the next page.

How Instant Film Works


In 1947, an inventor named Edwin Land introduced a remarkable
innovation to the world -- a film that developed itself in a matter of minutes.

This new instant camera technology was a huge success for Land's
company, the Polaroid Corporation. In 1949, Polaroid made more than $5
million in camera sales alone! Over the following 50 years, the company
carved out its own special niche, selling millions of instant cameras and
more than a billion rolls of instant film.
In this article, we'll find out what's actually happening inside instant film
while you're waiting for the image to appear. While it may seem like magic,
the process is really very simple.
Instant camera film is pretty much the same thing as regular camera film,
with a few extra elements. Before we get to those crucial additions, let's
briefly examine film photography in general.
The basic idea of film is to capture patterns of light using special chemicals.
The camera briefly exposes the film to the light coming from a scene
(typically for a small fraction of a second), and where the light hits the film,
it starts off a chemical reaction.
Normal film consists of a plastic base that is coated with particles of
a silver compound. When this compound is exposed to a large number
of light photons, it forms silver atoms. Black-and-white film has one layer
of silver compound, while color film has three layers. In color film, the top
layer is sensitive to blue light, the next layer is sensitive to green and the
bottom layer is sensitive to red. When you expose the film, the sensitive
grains at each layer react to light of that color, creating a chemical record of
the light and color pattern.
To turn this into a picture, you have to develop the film using more
chemicals. One chemical developer turns the exposed particles into
metallic silver. The film is then treated with three different dye developers containing dye couplers. The three dye colors are:

Cyan (a combination of green and blue light)

Magenta (a combination of red and blue light)

Yellow (a combination of green and red light)

Each of these dye-coupler types reacts with one of the color layers in the film. In ordinary print film, the dye couplers attach to particles that have been exposed. In color slide film, the dye couplers attach to the non-exposed areas.
Developed color film has a negative image -- the colors appear opposite of
the colors in the original scene. In slide film, the two dyes that attach to the
unexposed area combine to form the color captured at the exposed layer.
For example, if the green layer is exposed, yellow and cyan dye will attach
on either side of the green layer, but the magenta dye will not attach at the
green layer. The yellow and cyan combine to form green. (For more in-depth information on the entire process, see How Cameras Work and How Photographic Film Works.)
The instant-camera developing process combines colors in the same basic
way as slide film, but the developing chemicals are already present in the
film itself. In the next section, we'll see how the developers are combined
with the color layers to form the picture.

Pictures in an Instant

In the last section, we saw that instant camera film has three layers that are sensitive to different colors of light. Underneath each color layer, there is a developer layer containing dye couplers. All of these layers sit on top of a black base layer, and underneath the image layer, the timing layer and the acid layer. This arrangement is a chemical chain reaction waiting to be set in motion.

The component that gets the reaction going is the reagent (as in re-agent). The reagent is a mix of opacifiers (light-blockers), alkali (acid neutralizers), white pigment and other elements. It sits just above the light-sensitive layers and just below the image layer.

Before you take the picture, the reagent material is all collected in a blob at
the border of the plastic film sheet, away from the light-sensitive material.
This keeps the film from developing before it has been exposed. After you
snap the picture, the film sheet passes out of the camera, through a pair of
rollers. (In another configuration, often used by professional photographers,
the reagent and developer are coated on a separate sheet which is
pressed up against the film sheet for a set amount of time.)
The rollers spread the reagent material out into the middle of the film sheet,
just like a rolling pin spreading out dough. When the reagent is spread in
between the image layer and the light-sensitive layers, it reacts with the
other chemical layers in the film. The opacifier material stops light from
filtering onto the layers below, so the film isn't fully exposed before it is
developed.
The reagent chemicals move downward through the layers, changing the
exposed particles in each layer into metallic silver. The chemicals then
dissolve the developer dye so it begins to diffuse up toward the image
layer. The metallic silver areas at each layer -- the grains that were
exposed to light -- grab the dyes so they stop moving up.

Only the dyes from the unexposed layers will move up to the image layer.
For example, if the green layer is exposed, no magenta dye will make it to
the image layer, but cyan and yellow will. These colors combine to create a
translucent green film on the image surface. Light reflecting off the white
pigment in the reagent shines through these color layers, the same way
light from a bulb shines through a slide.
At the same time that these reagent chemicals are working down through
the light-sensitive layers, other reagent chemicals are working through the
film layers above. The acid layer in the film reacts with the alkali and
opacifiers in the reagent, making the opacifiers become clear. This is what
finally makes the image visible. The timing layer slows the reagent down on
its path to the acid layer, giving the film time to develop before it is exposed
to light.
One of the coolest things about instant photography, watching the image
slowly come together, is caused by this final chemical reaction. The image
is already fully developed underneath, but the opacifiers clearing up
creates the illusion that it is forming right before your eyes.
For more information about instant film and photography in general, check
out the links on the next page.

Art Film
When the image finally forms on an instant photo, the developer dye hasn't dried completely
-- it's the same basic consistency as wet ink. You can make some really cool pictures by
spreading the dye around with a pencil or Q-tip. Make a self-portrait that's half photo, half
painting!
Another option is to press the photo onto a sheet of paper to make a print. Or you can
press it against your skin to make a photo-realistic temporary tattoo. Check out this site for
more information.

How Digital Cameras Work


In the past twenty years, most of the major technological breakthroughs in
consumer electronics have really been part of one larger breakthrough.
When you get down to it, CDs, DVDs, HDTV, MP3s and DVRs are all built
around the same basic process: converting conventional analog
information (represented by a fluctuating wave) into digital information
(represented by ones and zeros, or bits). This fundamental shift in
technology totally changed how we handle visual and audio information -- it
completely redefined what is possible.
The digital camera is one of the most remarkable instances of this shift
because it is so truly different from its predecessor. Conventional
cameras depend entirely on chemical and mechanical processes -- you
don't even need electricity to operate them. On the other hand, all digital
cameras have a built-in computer, and all of them record images
electronically.
The new approach has been enormously successful. Since film still
provides better picture quality, digital cameras have not completely
replaced conventional cameras. But, as digital imaging technology has
improved, digital cameras have rapidly become more popular.

In this article, we'll find out exactly what's going on inside these amazing
digital-age devices.

Digital Camera Basics



Let's say you want to take a picture and e-mail it to a friend. To do this, you need the image
to be represented in the language that computers recognize -- bits and bytes. Essentially, a
digital image is just a long string of 1s and 0s that represent all the tiny colored dots -- or pixels -- that collectively make up the image. (For information on sampling and digital representations of data, see this explanation of the digitization of sound waves. Digitizing light waves works in a similar way.)
If you want to get a picture into this form, you have two options:

You can take a photograph using a conventional film camera, process the film chemically, print it onto photographic paper and then use a digital scanner to sample the print (record the pattern of light as a series of pixel values).

You can directly sample the original light that bounces off your subject,
immediately breaking that light pattern down into a series of pixel values -- in
other words, you can use a digital camera.

At its most basic level, this is all there is to a digital camera. Just like
a conventional camera, it has a series of lenses that focus light to create an
image of a scene. But instead of focusing this light onto a piece of film, it
focuses it onto a semiconductor device that records light electronically. A
computer then breaks this electronic information down into digital data. All
the fun and interesting features of digital cameras come as a direct result of
this process.
In the next few sections, we'll find out exactly how the camera does all this.

Cool Facts

With a 3-megapixel camera, you can take a higher-resolution picture than most
computer monitors can display.

You can use your Web browser to view digital pictures taken using the JPEG
format.

The first consumer-oriented digital cameras were sold by Kodak and Apple in
1994.

In 1998, Sony inadvertently sold more than 700,000 camcorders with a limited
ability to see through clothes.

CCD and CMOS: Filmless Cameras


A CMOS image sensor

Instead of film, a digital camera has a sensor that converts light into electrical charges.
The image sensor employed by most digital cameras is a charge coupled
device (CCD). Some cameras use complementary metal oxide
semiconductor (CMOS) technology instead. Both CCD and CMOS image sensors
convert light into electrons. If you've read How Solar Cells Work, you already
understand one of the pieces of technology used to perform the conversion. A
simplified way to think about these sensors is to think of a 2-D array of thousands or
millions of tiny solar cells.

Once the sensor converts the light into electrons, it reads the value
(accumulated charge) of each cell in the image. This is where the
differences between the two main sensor types kick in:

A CCD transports the charge across the chip and reads it at one corner of the
array. An analog-to-digital converter (ADC) then turns each pixel's value into a
digital value by measuring the amount of charge at each photosite and
converting that measurement to binary form.

CMOS devices use several transistors at each pixel to amplify and move the
charge using more traditional wires.
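As a rough sketch of what the ADC step does (the full-well value below is a made-up number, and real converters are more involved), each photosite's accumulated charge is mapped to an integer code of a fixed bit depth:

FULL_WELL = 40_000.0  # electrons at which a photosite saturates (assumed value)

def adc(charge_electrons, bits=12):
    # Clamp the reading to the sensor's range, then quantize it.
    levels = 2 ** bits
    fraction = min(max(charge_electrons / FULL_WELL, 0.0), 1.0)
    return min(int(fraction * levels), levels - 1)

print(adc(0))       # 0    -- a dark pixel
print(adc(20_000))  # 2048 -- half full
print(adc(60_000))  # 4095 -- saturated and clipped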

Differences between the two types of sensors lead to a number of pros and
cons:

A CCD sensor
PHOTO COURTESY DALSA

CCD sensors create high-quality, low-noise images. CMOS sensors are generally
more susceptible to noise.

Because each pixel on a CMOS sensor has several transistors located next to it,
the light sensitivity of a CMOS chip is lower. Many of the photons hit the
transistors instead of the photodiode.

CMOS sensors traditionally consume little power. CCDs, on the other hand, use
a process that consumes lots of power. CCDs consume as much as 100 times
more power than an equivalent CMOS sensor.

CCD sensors have been mass produced for a longer period of time, so they are
more mature. They tend to have higher quality pixels, and more of them.

Although numerous differences exist between the two sensors, they both play the same role in the camera -- they turn light into electricity. For the purpose of understanding how a digital camera works, you can think of them as nearly identical devices.

Digital Camera Resolution



The size of an image taken at different resolutions


PHOTO COURTESY MORGUEFILE

The amount of detail that the camera can capture is called the resolution, and it is
measured in pixels. The more pixels a camera has, the more detail it can capture and the
larger pictures can be without becoming blurry or "grainy."
Some typical resolutions include:

256x256 - Found on very cheap cameras, this resolution is so low that the picture quality is almost always unacceptable. This is 65,000 total pixels.

640x480 - This is the low end on most "real" cameras. This resolution is ideal for e-mailing pictures or posting pictures on a Web site.

1216x912 - This is a "megapixel" image size -- 1,109,000 total pixels -- good for printing pictures.

1600x1200 - With almost 2 million total pixels, this is "high resolution." You can print a 4x5 inch print taken at this resolution with the same quality that you would get from a photo lab.

2240x1680 - Found on 4 megapixel cameras -- the current standard -- this allows even larger printed photos, with good quality for prints up to 16x20 inches.

4064x2704 - A top-of-the-line digital camera with 11.1 megapixels takes pictures at this resolution. At this setting, you can create 13.5x9 inch prints with no loss of picture quality.
High-end consumer cameras can capture over 12 million pixels. Some professional
cameras support over 16 million pixels, or 20 million pixels for large-format
cameras. For comparison, Hewlett Packard estimates that the quality of 35mm film
is about 20 million pixels [ref].
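The arithmetic behind those print sizes is simple division. Here is a small sketch, assuming the common rule of thumb of about 300 pixels per inch for photo-quality prints:

def max_print_size(width_px, height_px, ppi=300):
    # Largest photo-quality print, in inches, at the assumed pixel density.
    return (width_px / ppi, height_px / ppi)

for w, h in [(640, 480), (1600, 1200), (4064, 2704)]:
    pw, ph = max_print_size(w, h)
    print(f"{w}x{h} -> about {pw:.1f} x {ph:.1f} inches")

Running this reproduces the figures above: 1600x1200 gives roughly a 5.3 x 4 inch print, and 4064x2704 gives roughly 13.5 x 9 inches.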
Next, we'll look at how the camera adds color to these images.

How Many Pixels?


You may have noticed that the number of pixels and the maximum resolution don't
quite compute. For example, a 2.1-megapixel camera can produce images with a
resolution of 1600x1200, or 1,920,000 pixels. But "2.1 megapixel" means there
should be at least 2,100,000 pixels.
This isn't an error from rounding off or binary mathematical trickery. There is a real discrepancy between these numbers because the CCD has to include circuitry for the ADC to measure the charge. This circuitry is masked in black so that light striking it isn't recorded and doesn't distort the image.

Capturing Color

How the original (left) image is split in a beam splitter

Unfortunately, each photosite is colorblind. It only keeps track of the total intensity of the
light that strikes its surface. In order to get a full color image, most sensors use filtering to
look at the light in its three primary colors. Once the camera records all three colors, it
combines them to create the full spectrum.
There are several ways of recording the three colors in a digital camera. The highest quality cameras use three separate sensors, each with a different filter. A beam splitter directs light to the different sensors. Think of the light entering the camera as water flowing through a pipe. Using a beam splitter would be like dividing an identical amount of water into three different pipes. Each sensor gets an identical look at the image; but because of the filters, each sensor only responds to one of the primary colors.

The advantage of this method is that the camera records each of the three
colors at each pixel location. Unfortunately, cameras that use this method
tend to be bulky and expensive.
Another method is to rotate a series of red, blue and green filters in front of
a single sensor. The sensor records three separate images in rapid
succession. This method also provides information on all three colors at
each pixel location; but since the three images aren't taken at precisely the
same moment, both the camera and the target of the photo must remain
stationary for all three readings. This isn't practical for candid photography
or handheld cameras.

Both of these methods work well for professional studio cameras, but
they're not necessarily practical for casual snapshots. Next, we'll look at
filtering methods that are more suited to small, efficient cameras.

Demosaicing Algorithms: Color Filtering



A more economical and practical way to record the primary colors is to permanently place a
filter called a color filter array over each individual photosite. By breaking up the sensor
into a variety of red, blue and green pixels, it is possible to get enough information in the
general vicinity of each sensor to make very accurate guesses about the true color at that
location. This process of looking at the other pixels in the neighborhood of a sensor and
making an educated guess is called interpolation.
The most common pattern of filters is the Bayer filter pattern. This pattern alternates a row
of red and green filters with a row of blue and green filters. The pixels are not evenly divided
-- there are as many green pixels as there are blue and red combined. This is because
the human eye is not equally sensitive to all three colors. It's necessary to include more
information from the green pixels in order to create an image that the eye will perceive as a
"true color."

The advantages of this method are that only one sensor is required, and all the color information (red, green and blue) is recorded at the same moment. That means the camera can be smaller, cheaper, and useful in a wider variety of situations. The raw output from a sensor with a Bayer filter is a mosaic of red, green and blue pixels of different intensity.
Digital cameras use specialized demosaicing algorithms to convert this mosaic
into an equally sized mosaic of true colors. The key is that each colored pixel can
be used more than once. The true color of a single pixel can be determined by
averaging the values from the closest surrounding pixels.
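Here is a minimal sketch of that averaging idea on a toy 4x4 Bayer mosaic (the intensity numbers are invented, and production demosaicing algorithms are considerably smarter about edges):

# Each cell records one color: rows alternate G/R and B/G, as described above.
mosaic = [
    [("G", 120), ("R", 200), ("G", 118), ("R", 198)],
    [("B",  60), ("G", 121), ("B",  62), ("G", 119)],
    [("G", 119), ("R", 201), ("G", 117), ("R", 197)],
    [("B",  61), ("G", 120), ("B",  63), ("G", 118)],
]

def interpolate(y, x, color):
    # Average the neighboring photosites that were filtered for `color`.
    values = [mosaic[ny][nx][1]
              for ny in range(y - 1, y + 2)
              for nx in range(x - 1, x + 2)
              if (ny, nx) != (y, x)
              and 0 <= ny < len(mosaic) and 0 <= nx < len(mosaic[0])
              and mosaic[ny][nx][0] == color]
    return sum(values) / len(values)

# Full color at the red photosite in row 0, column 1: red is measured
# directly; green and blue are educated guesses from the neighbors.
y, x = 0, 1
print((mosaic[y][x][1], interpolate(y, x, "G"), interpolate(y, x, "B")))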
Some single-sensor cameras use alternatives to the Bayer filter pattern. X3
technology, for example, embeds red, green and blue photodetectors in silicon.
Some of the more advanced cameras subtract values using the typesetting colors
cyan, yellow, green and magenta instead of blending red, green and blue. There is
even a method that uses two sensors. However, most consumer cameras on the
market today use a single sensor with alternating rows of green/red and green/blue
filters.

Digital Camera Exposure and Focus



Just as with film, a digital camera has to control the amount of light that reaches the sensor.
The two components it uses to do this, the aperture and shutter speed, are also present
on conventional cameras.

Aperture: The size of the opening in the camera. The aperture is automatic in
most digital cameras, but some allow manual adjustment to give professionals
and hobbyists more control over the final image.

Shutter speed: The amount of time that light can pass through the aperture.
Unlike film, the light sensor in a digital camera can be reset electronically, so
digital cameras have a digital shutter rather than a mechanical shutter.

These two aspects work together to capture the amount of light needed to make a good
image. In photographic terms, they set the exposure of the sensor. You can learn more
about a camera's aperture and shutter speed in How Cameras Work.
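A standard way to see how the two settings trade off against each other is the photographic exposure value, EV = log2(N^2 / t), where N is the f-number and t is the shutter time in seconds; settings with the same EV admit the same total light. A quick sketch:

import math

def exposure_value(f_number, shutter_seconds):
    # Standard exposure value: combinations with equal EV pass equal light.
    return math.log2(f_number ** 2 / shutter_seconds)

print(exposure_value(8, 1 / 125))    # ~13.0
print(exposure_value(5.6, 1 / 250))  # ~12.9 -- nearly the same exposure

Opening the aperture by one stop while halving the exposure time leaves the EV, and thus the captured light, essentially unchanged.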

In addition to controlling the amount of light, the camera has to adjust the lenses to control how the light is focused on the sensor. In general, the lenses on digital cameras are very similar to conventional camera lenses -- some digital cameras can even use conventional lenses. Most use automatic focusing techniques, which you can learn more about in the article How Autofocus Cameras Work.
The focal length, however, is one important difference between the lens of
a digital camera and the lens of a 35mm camera. The focal length is the
distance between the lens and the surface of the sensor. Sensors from
different manufacturers vary widely in size, but in general they're smaller
than a piece of 35mm film. In order to project the image onto a smaller
sensor, the focal length is shortened by the same proportion. For additional
information on sensor sizes and comparisons to 35mm film, you can visit
the Photo.net Web site.
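That proportional shortening is usually expressed as a "crop factor": the ratio of the 35mm film diagonal to the sensor diagonal. A small sketch, using a hypothetical compact-camera sensor size:

import math

FILM_DIAGONAL = math.hypot(36.0, 24.0)  # a 35mm frame is 36 x 24 mm

def equivalent_focal_length(actual_mm, sensor_w_mm, sensor_h_mm):
    # Scale the real focal length by the film-to-sensor diagonal ratio.
    crop_factor = FILM_DIAGONAL / math.hypot(sensor_w_mm, sensor_h_mm)
    return actual_mm * crop_factor

# An assumed 7.2 x 5.3 mm sensor: an 8mm lens frames the scene roughly
# the way a 39mm lens would on a 35mm camera.
print(equivalent_focal_length(8.0, 7.2, 5.3))  # ~38.7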
Focal length also determines the magnification, or zoom, when you look
through the camera. In 35mm cameras, a 50mm lens gives a natural view
of the subject. Increasing the focal length increases the magnification, and
objects appear to get closer. The reverse happens when decreasing the
focal length. A zoom lens is any lens that has an adjustable focal length,
and digital cameras can have optical or digital zoom -- some have both.
Some cameras also have macro focusing capability, meaning that the
camera can take pictures from very close to the subject.
Digital cameras have one of four types of lenses:

Fixed-focus, fixed-zoom lenses - These are the kinds of lenses on disposable and inexpensive film cameras -- inexpensive and great for snapshots, but fairly limited.

Optical-zoom lenses with automatic focus - Similar to the lens on a video camcorder, these have "wide" and "telephoto" options and automatic focus. The camera may or may not support manual focus. These actually change the focal length of the lens rather than just magnifying the information that hits the sensor.

Digital zoom - With digital zoom, the camera takes pixels from the center of the image sensor and interpolates them to make a full-sized image. Depending on the resolution of the image and the sensor, this approach may create a grainy or fuzzy image. You can manually do the same thing with image processing software -- simply snap a picture, cut out the center and magnify it (see the sketch after this list).

Replaceable lens systems - These are similar to the replaceable lenses on a 35mm camera. Some digital cameras can use 35mm camera lenses.
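Here is a minimal sketch of what digital zoom does, on a toy grid of pixel values: crop the center, then enlarge it by plain pixel duplication. No new detail is created, which is why the result can look grainy:

def digital_zoom(image, factor=2):
    h, w = len(image), len(image[0])
    ch, cw = h // factor, w // factor          # size of the central crop
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = [row[left:left + cw] for row in image[top:top + ch]]
    # Blow the crop back up to full size by duplicating pixels:
    return [[crop[y // factor][x // factor] for x in range(w)]
            for y in range(h)]

image = [[10 * y + x for x in range(4)] for y in range(4)]
for row in digital_zoom(image):
    print(row)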

Next, we'll learn about how the camera stores pictures and transfers them
to a computer.

Storing Digital Photos



A CompactFlash card
PHOTO COURTESY HSW SHOPPER

Most digital cameras have an LCD screen, so you can view your picture right away. This is
one of the great advantages of a digital camera -- you get immediate feedback on what you
capture. Of course, viewing the image on your camera would lose its charm if that's all you
could do. You want to be able to load the picture into your computer or send it directly to a
printer. There are several ways to do this.
Early generations of digital cameras had fixed storage inside the camera. You needed to
connect the camera directly to a computer with cables to transfer the images. Although most of today's cameras are capable of connecting through serial, parallel, SCSI, USB or FireWire connections, they usually also use some sort of removable storage device.

Digital cameras use a number of storage systems. These are like reusable, digital
film, and they use a caddy or card reader to transfer the data to a computer. Many
involve fixed or removable flash memory. Digital camera manufacturers often develop their own proprietary flash memory devices, including SmartMedia cards, CompactFlash cards and Memory Sticks. Some other removable storage devices include:

Floppy disks

Hard disks, or microdrives

Writeable CDs and DVDs
No matter what type of storage they use, all digital cameras need lots of room for
pictures. They usually store images in one of two formats -- TIFF, which is uncompressed, and JPEG, which is compressed -- though some also offer a RAW format. Most
cameras use the JPEG file format for storing pictures, and they sometimes offer
quality settings (such as medium or high). The following information will give you
an idea of the file sizes you might expect with different picture sizes.

Resolution    TIFF (uncompressed)    JPEG (high quality)    JPEG (medium quality)
640x480       1.0 MB                 300 KB                 90 KB
800x600       1.5 MB                 500 KB                 130 KB
1024x768      2.5 MB                 800 KB                 200 KB
1600x1200     6.0 MB                 1.7 MB                 420 KB
To make the most of their storage space, almost all digital cameras use some sort
of data compression to make the files smaller. Two features of digital images make
compression possible. One is repetition. The other is irrelevancy.
Imagine that throughout a given photo, certain patterns develop in the colors. For
example, if a blue sky takes up 30 percent of the photograph, you can be certain
that some shades of blue are going to be repeated over and over again. When
compression routines take advantage of patterns that repeat, there is no loss of
information and the image can be reconstructed exactly as it was recorded.
Unfortunately, this kind of lossless compression doesn't usually reduce file size by more than about 50 percent, and sometimes it doesn't even come close to that level.
Irrelevancy is a trickier issue. A digital camera records more information than
the human eye can easily detect. Some compression routines take advantage of this
fact to throw away some of the more meaningless data.
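Run-length encoding is the simplest example of the "repetition" idea (real cameras use the much more elaborate JPEG scheme, but the principle of exploiting repeats without losing information is the same):

def rle_encode(pixels):
    # Collapse runs of identical values into (value, count) pairs.
    runs, count = [], 1
    for prev, cur in zip(pixels, pixels[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((pixels[-1], count))
    return runs

def rle_decode(runs):
    # Expand the pairs back into the original pixel strip.
    return [value for value, count in runs for _ in range(count)]

sky = [180] * 30 + [175] * 10        # a strip of nearly uniform sky pixels
encoded = rle_encode(sky)
print(encoded)                       # [(180, 30), (175, 10)] -- far shorter
assert rle_decode(encoded) == sky    # lossless: reconstructed exactly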
Next, we'll tie it all together and see how a digital camera takes a picture.

CCD Camera Summary



A memory stick
PHOTO COURTESY HSW SHOPPER

It takes several steps for a digital camera to take a picture. Here's a review of what happens
in a CCD camera, from beginning to end:

You aim the camera at the subject and adjust the optical zoom to get closer or
farther away.

You press lightly on the shutter release.

The camera automatically focuses on the subject and takes a reading of the
available light.

The camera sets the aperture and shutter speed for optimal exposure.

You press the shutter release all the way.

The camera resets the CCD and exposes it to the light, building up an electrical
charge, until the shutter closes.

The ADC measures the charge and creates a digital signal that represents the
values of the charge at each pixel.

A processor interpolates the data from the different pixels to create natural color.
On many cameras, it is possible to see the output on the LCD at this stage.

A processor may perform a preset level of compression on the data.

The information is stored in some form of memory device (probably a Flash memory card).

For more information on digital cameras and related topics, check out the links on the
following page.

How Autofocus Cameras Work


Autofocus is that great time saver that is found in one form or another on
most cameras today. In most cases, it helps improve the quality of the
pictures we take.
In this article, you will learn about the two most common forms of
autofocus, and find out how to determine which type of autofocus your
camera uses. You will also learn some valuable tips about preventing the
main causes of blurred pictures when using an autofocus camera.

What is Autofocus?

Autofocus (AF) really could be called power-focus, as it often uses a computer to run a
miniature motor that focuses the lens for you. Focusing is the moving of the lens in and out
until the sharpest possible image of the subject is projected onto the film. Depending on the
distance of the subject from the camera, the lens has to be a certain distance from the film
to form a clear image.
In most modern cameras, autofocus is one of a suite of automatic features that work
together to make picture-taking as easy as possible. These features include:

Automatic film advance

Automatic flash

Automatic exposure

There are two types of autofocus systems: active and passive. Some
cameras may have a combination of both types, depending on the price of
the camera. In general, less expensive point-and-shoot cameras use an
active system, while more expensive SLR (single-lens reflex) cameras with
interchangeable lenses use the passive system.


Active Autofocus

In 1986, the Polaroid Corporation used a form of sound navigation ranging (SONAR), like a submarine uses underwater, to bounce a sound wave off the subject. The Polaroid camera used an ultra-high-frequency sound emitter and then listened for the echo (see How Radar Works for details). Polaroid's sonar-equipped models, such as the SX-70 Sonar and the Spectra, computed the amount of time it took for the reflected ultrasonic sound wave to reach the camera and then adjusted the lens position accordingly. This use of sound has its limitations -- for example, if you try taking a picture from inside a tour bus with the windows closed, the sound waves will bounce off of the window instead of the subject and so focus the lens incorrectly.
This Polaroid system is a classic active system. It is called "active" because the camera
emits something (in this case, sound waves) in order to detect the distance of the subject
from the camera.

Active autofocus on today's cameras uses an infrared signal instead of sound waves, and is great for subjects within 20 feet (6 m) or so of the camera. Infrared systems use a variety of techniques to judge the distance. Typical systems might use:

Triangulation

Amount of infrared light reflected from the subject

Time

For example, this patent describes a system that reflects an infrared pulse
of light off the subject and looks at the intensity of the reflected light to
judge the distance. Infrared is active because the autofocus system is
always sending out invisible infrared light energy in pulses when in focus
mode.
It is not hard to imagine a system in which the camera sends out pulses of
infrared light just like the Polaroid camera sends out pulses of sound. The
subject reflects the invisible infrared light back to the camera, and the
camera's microprocessor computes the time difference between the time
the outbound infrared light pulses are sent and the inbound infrared pulses
are received. Using this difference, the microprocessor circuit tells the
focus motor which way to move the lens and how far to move it. This focus
process repeats over and over while the camera user presses the shutter
release button down half-way. The only difference between this system and
the ultrasound system is the speed of the pulse. Ultrasound waves move at
hundreds of miles per hour, while infrared waves move at hundreds of
thousands of miles per second.
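Either way, the distance computation itself is the same: half the round-trip time multiplied by the speed of the pulse. A quick sketch shows why timing light is so much harder than timing sound (the time-based infrared system here is hypothetical, as the text above notes):

SPEED_OF_SOUND = 343.0        # meters per second in air
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance(round_trip_seconds, speed):
    # The pulse travels to the subject and back, so halve the path.
    return speed * round_trip_seconds / 2

# A subject 3 m away: a sound echo takes ~17.5 ms, easy to time...
print(distance(0.0175, SPEED_OF_SOUND))  # ~3.0
# ...while the light echo takes only about 20 nanoseconds, which is why
# practical infrared systems often use triangulation or reflected
# intensity instead of direct timing.
print(distance(20e-9, SPEED_OF_LIGHT))   # ~3.0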
Infrared sensing can have problems. For example:

A source of infrared light from an open flame (birthday cake candles, for
instance) can confuse the infrared sensor.

A black subject surface may absorb the outbound infrared beam.

The infrared beam can bounce off of something in front of the subject rather than
making it to the subject.

One advantage of an active autofocus system is that it works in the dark, making flash photography much easier.
On any camera using an infrared system, you can see both the infrared
emitter and the receiver on the front of the camera, normally near the
viewfinder.
To use infrared focusing effectively, be sure the emitter and the sensor
have a clear path to and from your subject, and are not blocked by a
nearby fence or bars at a zoo cage. If your subject is not exactly in the
middle, the beam can go right past the subject and bounce off an undesired
subject in the distance, so be sure the subject is centered. Very bright
subjects or bright lights can make it difficult for the camera to "see" the
reflected infrared beam -- avoid these subjects when possible.
This patent, this patent, and this patent each show a different form of
infrared sensing.

Passive Autofocus

Out-of-focus scene

Passive autofocus, commonly found on single-lens reflex (SLR) autofocus cameras, determines the distance to the subject by computer analysis of the image itself. The camera actually looks at the scene and drives the lens back and forth searching for the best focus.
A typical autofocus sensor is a charge-coupled device (CCD) that provides input to
algorithms that compute the contrast of the actual picture elements. The CCD is typically a
single strip of 100 or 200 pixels. Light from the scene hits this strip and the microprocessor
looks at the values from each pixel. The following images help you understand what the
camera sees:

The microprocessor in the camera looks at the strip of pixels and looks at
the difference in intensity among the adjacent pixels. If the scene is out of
focus, adjacent pixels have very similar intensities. The microprocessor
moves the lens, looks at the CCD's pixels again and sees if the difference
in intensity between adjacent pixels improved or got worse. The
microprocessor then searches for the point where there is maximum
intensity difference between adjacent pixels -- that's the point of best
focus. Look at the difference in the pixels in the two red boxes above: In the
upper box, the difference in intensity between adjacent pixels is very slight,
while in the bottom box it is much greater. That is what the microprocessor
is looking for as it drives the lens back and forth.
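In code, the idea boils down to scoring each lens position by the total difference between adjacent pixels and keeping the position with the best score. This sketch fakes the sensor with a toy "lens model" (an assumption for illustration) that weakens the strip's edges the farther the lens sits from true focus:

def contrast(strip):
    # The focus score: sum of differences between adjacent pixels.
    return sum(abs(a - b) for a, b in zip(strip, strip[1:]))

def read_sensor(lens_position, true_focus=7):
    # Hypothetical stand-in for the CCD strip: edges weaken with defocus.
    sharp = [0, 0, 255, 255, 0, 0, 255, 255, 0, 0]
    blur = abs(lens_position - true_focus)
    return [v / (1 + blur) for v in sharp]

# Hunt across lens positions and keep the one with maximum contrast:
best = max(range(15), key=lambda pos: contrast(read_sensor(pos)))
print(best)  # 7 -- adjacent-pixel differences peak at true focus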

Passive autofocus must have light and image contrast in order to do its
job. The image needs to have some detail in it that provides contrast. If you
try to take a picture of a blank wall or a large object of uniform color, the
camera cannot compare adjacent pixels so it cannot focus.
There is no distance-to-subject limitation with passive autofocus like there
is with the infrared beam of an active autofocus system. Passive autofocus
also works fine through a window, since the system "sees" the subject
through the window just like you do.
Passive autofocus systems usually react to vertical detail. When you hold
the camera in the horizontal position, the passive autofocus system will
have a hard time with a boat on the horizon but no problem with a flagpole
or any other vertical detail. If you are holding the camera in the usual horizontal mode, focus on a vertical detail, such as the edge of a face. If you are holding the camera in the vertical mode, focus on a horizontal detail.
Newer, more expensive camera designs have combinations of vertical and
horizontal sensors to solve this problem. But it's still the camera user's job
to keep the camera's sensors from being confused on objects of uniform
color.
You can see how much area your camera's autofocus sensors cover by
looking through the viewfinder at a small picture or a light switch on a blank
wall. Move the camera from left to right and see at which point the
autofocus system becomes confused.

How Do I Know Which Autofocus System My Camera Has?

Look at the type of camera you have:

If it is an under-$50 point-and-shoot camera or one of the single-use, disposable cameras, it is definitely a fixed-focus camera with no focusing system of any kind. This type of lens has its focus set at the factory, and it typically works best with a subject distance of about 8 feet. Four feet is about as close as you can get to the subject with a fixed-focus camera. When you look through a fixed-focus camera, you typically do not see the square brackets or circles found in an autofocus camera. However, you may see a "flash ready" indicator.

SLR cameras with interchangeable lenses typically use the passive autofocus system.

Cameras without interchangeable lenses typically use active infrared, and you can see the emitter and the sensor on the front of the camera.

Here's a quick test to tell which autofocus system is in use in your camera (some cameras
may have both systems):

Go outdoors and aim the viewfinder at an area of the sky with no clouds,
power lines or tree limbs. Press the shutter button halfway down.
If you get a "focus okay" indication, it's an active autofocus system.
If you get a "focus not okay" indication, it's a passive autofocus system.
The CCD cannot find any contrast in a blue sky, so it gives up.

Is Autofocus Always Accurate and Faster?



It is really up to the person using the camera to determine if the subject is in focus. The camera merely assists you in making this decision. The two main causes of blurred pictures taken with autofocus cameras are:

Mistakenly focusing on the background

Moving the camera while pressing the shutter button

Your eye has a fast autofocus! Try this simple experiment: Hold your hand up near your
face and focus on it, and then quickly look at something past your hand in the distance. The
distant item will be clear, and your hand will not be as clear. Look back at your hand. It will
be clear, while out of the corner of your eye the same distant item will not be as clear. Your
camera is not nearly this quick or this precise, so you often have to help it.

Focus Lock: The Key to Great Autofocus Pictures


The camera user can often fool the autofocus system. A pose of two people centered in the
picture may be unclear if the focus area (the area between the two square brackets) is in
the middle of the two people. Why? The camera's autofocus system actually focuses on the
landscape in the background, which is what it "sees" between the two people.
The solution is to move your subjects off-center and use the focus-lock feature of your
camera. Typically, focus lock works by depressing the shutter button part-way and holding it
while you compose the picture. The steps are:

1. Compose the picture so that the subject is either in the left third or the right third of the picture. (This makes for pleasing pictures.) You will come back to this position.

2. Move the camera right or left so the square brackets in the center of the viewfinder are over the actual subject.

3. Press and hold the shutter button halfway down so the camera focuses on the subject. Keep your finger on the button.

4. Slowly move your camera back to where you composed the picture in step 1. Press (squeeze) the shutter button all the way down. It may take some practice to do it right, but the results will be great!

You may also use the above procedure in the vertical direction, say when taking a
picture with mountains or the shore in the background.

When Should I Use Manual Focus?



Manual focus rings are still available on most SLR cameras. When taking a picture of an
animal behind bars in a zoo, the autofocus camera might focus on the cage bars instead of
the animal. On most consumer-grade autofocus cameras, use manual focus when:

You have a zoom lens on an active autofocus camera, and your subject is more
than 25 feet away.

You have a passive autofocus camera and the subject has little or no detail, like a
white shirt with no tie.

You have a passive autofocus camera and the subject is not well lit or very bright
and more than 25 feet away.

Autofocus Video Cameras



Autofocus in a video camera is a passive system that also uses the central portion of the
image. Though very convenient for fast shooting, autofocus has some problems:

It can be slow to respond.

It may search back and forth, vainly seeking a subject to focus on.

It has trouble in low light levels.

It mis-focuses when the intended subject is not in the center of the image.

It changes focus when something passes between the subject and the lens.

Autofocus video cameras work best in bright light. Switch to manual focus in low light.

How to "See" Infrared With Your Camcorder


You can sometimes "see" infrared via this simple experiment, using a camcorder
with a TV monitor attached. Point the camera toward a TV remote control. Push
some buttons on the TV remote control and the camera should "see" invisible
infrared light from the remote control. Camcorders typically use CCD imaging
chips. These chips are sensitive to infrared light. That's why your camera shows a
white spot where the remote's infrared source is located. A "spy" can take pictures
in complete darkness if they illuminate the scene with bright infrared light.

The Camera

For the pin hole camera (one that doesn't need a lens!) see here.
A Single Lens Reflex Camera
To record an image on film or on a digital camera's memory card we need to have a real image. We therefore need a convex lens in our camera, and we need to position the object further away than the focal length of the lens.

The further away the object is, the smaller the image will be on the film or array of sensors. The above diagram is a simple camera (the type you would get in an examination question at GCSE). Today's cameras are much more complex. You have to understand the basic lens function and the similarity between the camera and the human eye.
The viewfinder in SLR Cameras - Single Lens Reflex Cameras
In an SLR camera, you see the real image that the film or digital
sensor will record through a viewfinder. Have you ever wondered how
this is possible?
The camera has a slanted mirror positioned between the shutter and
the lens, with a piece of translucent glass and a prism positioned
above it. This arrangement works like a periscope and the image is
reflected off the lower mirror on to the translucent glass, which
works like a projection screen. The prism's job is to turn the image on the screen so it appears the right way up again, and to redirect it to the viewfinder window, allowing you to see what you are photographing.

When you click the shutter button, the camera quickly moves the mirror out of the way, so the image is directed at the film or digital sensor array. The mirror is connected to the shutter timer system, so it stays out of the way as long as the shutter is open. This is why the viewfinder is suddenly blacked out when you take a picture.
The camera and the human eye
The camera is similar to our eyes. It has a lot of functions that correspond to how our eyes work.
Human Eye -- Camera

Eyelid -- Shutter: An opaque covering that is able to prevent light from entering the viewing system.

Iris -- Iris: An adjustable circular aperture. The eye automatically adjusts the hole in the centre of this to allow the correct amount of light through to the system: in bright light it closes to give a pinpoint hole (the pupil), and in dim light it allows the pupil to widen to let in more light energy. In a camera this can be done manually (or, in some cameras, by an automatic electronic response).

Pupil -- Aperture: The hole in the iris that lets light enter the system.

Retina -- Film or array of photosite sensors: The back of the eye is covered with light-sensitive cells (see below for more detail). These allow us to record the image by changing the light energy to electrical energy so that a message can be sent to the brain. A photographic film records the image using light-sensitive chemicals, and a digital camera's array of photosites records the image using light-sensitive electronic components that change the light energy into electrical energy in a very similar way to our eye system.

Cone cells -- Filtered photosite sensors: see below.

Rod cells -- Photosite sensors: see below.

Cornea -- Lens: The cornea is the major refractor in the human eye. It refracts the light on entry to the image-processing system.

Lens -- Additional lenses and/or a system to move the lens towards or away from the recording medium: The lens of the human eye allows accommodation of the imaging system, making fine adjustments so that a sharp image is obtained on the retina. Muscles squeeze the eye lens to change its degree of curvature; making it more curved increases its power. In a camera this effect is obtained by moving the lens to the optimum position, by extending the lens housing or by selecting a lens of different power from several incorporated within the camera.

Optic nerve -- Connections to the memory card.

Brain's visual cortex (where visual information is processed) and visual memory storage area -- Memory card of a digital camera or computer: Visual memories can remain with us a long time or be lost in a few seconds. Camera 'memories' can be kept for a very long time if they are put onto a photograph or stored digitally.

Photographic film cameras capture their images on acetate coated with a light-sensitive chemical. When light energy falls on the chemical a reaction occurs and the chemical changes. The film is then 'fixed' when it is processed by another chemical process, making 'negatives' of the image.
Just as the chemical coating on the film absorbs the light that falls on it, the light sensitive cells (rods
and cones) on the retina absorb light photons within the eye.

Digital cameras capture their images by using a silicon semiconductor device called a digital sensor. This sensor is made up of an array of photosensitive diodes called photosites that capture light energy and convert it to electrical energy. This is like the way light-sensitive cells on the retina absorb light photons within the eye.

Unfortunately, each photosite is colourblind. It only detects the intensity of the light that strikes its surface. In order to get a full colour image, most sensors use filtering to look at the light in its three primary colours - red, green and blue. Neighbouring photosites monitor the same point on the object and are filtered to collect only light in one frequency band range. This is done to mimic the behaviour of the human eye: within the human eye there are special light-sensitive cells called cones that are only sensitive to one of three frequency ranges of the visible light spectrum, while the rods are cells that work like unfiltered photosites.

The size of the voltage buildup on each photosite is converted into digital data as a picture element, or pixel. These pixels are then relayed in consecutive order and stored as an image in the camera's memory as a file. This is similar to the way that the optic nerve transmits visual information to the brain.

These files can then be viewed on the camera's LCD screen, or uploaded to a computer where they can also be viewed or manipulated with imaging software. This corresponds to the way we observe images with our eyes, or remember what we have seen in our mind's eye.

See 'The Camera' in How Stuff Works and 'The Digital Camera' - that will give you more detail.

Camera obscura
From Wikipedia, the free encyclopedia


A drawing of a camera obscura

Camerae obscurae for Daguerreotype called "Grand Photographe" produced by Charles Chevalier (Musée des Arts et Métiers)

An image of the New Royal Palace at Prague Castle projected onto an attic wall by a hole in the tile roofing

A camera obscura (Latin for "dark room") is an optical device that led to photography and
the photographic camera. An Arab physicist, Ibn al-Haytham, published his Book of Optics in 1021
AD. He created the first pinhole camera after observing how light traveled through a window shutter.
Ibn al-Haytham realized that smaller holes would create sharper images. Ibn al-Haytham is also
credited with inventing the first camera obscura. The device consists of a box or room with a hole in
one side. Light from an external scene passes through the hole and strikes a surface inside, where it

is reproduced, inverted (thus upside-down), but with color and perspective preserved. The image
can be projected onto paper, and can then be traced to produce a highly accurate representation.
The largest camera obscura in the world is on Constitution Hill in Aberystwyth, Wales.

Using mirrors, as in an 18th-century overhead version, it is possible to project a right-side-up image.


Another more portable type is a box with an angled mirror projecting onto tracing paper placed on
the glass top, the image being upright as viewed from the back.
As the pinhole is made smaller, the image gets sharper, but the projected image becomes dimmer.
With too small a pinhole, however, the sharpness worsens, due to diffraction. In practice, most
camerae obscurae use a lens rather than a pinhole (as in a pinhole camera) because it allows a
larger aperture, giving a usable brightness while maintaining focus.
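The trade-off between geometric blur and diffraction has a commonly quoted sweet spot, attributed to Lord Rayleigh: the sharpest pinhole diameter is roughly d = 1.9 * sqrt(f * wavelength), where f is the distance from the hole to the screen. A quick calculation:

import math

def optimal_pinhole_diameter(distance_m, wavelength_m=550e-9):
    # Rayleigh's approximation, using green light by default.
    return 1.9 * math.sqrt(distance_m * wavelength_m)

# For a room-sized camera obscura with the wall 1 m from the hole:
print(optimal_pinhole_diameter(1.0) * 1000)  # ~1.4 mm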

Role in the modern age


While the technical principles of the camera obscura have been known since antiquity, the broad use
of the technical concept in producing images with a linear perspective in paintings, maps, theatre
setups and architectural and later photographic images and movies started in the Western
Renaissance and the scientific revolution. While Alhazen (Ibn al-Haytham), for example, had already observed an optical effect and developed a state-of-the-art theory of the refraction of light, he was less interested in producing images with it (compare Hans Belting 2005); the society he lived in was even hostile towards personal images (compare Aniconism in Islam). Western artists and philosophers used the Arab findings in new frameworks of epistemic relevance. Leonardo da Vinci, for example, used the camera obscura as a model of the eye, René Descartes for the eye and mind, and John Locke began to use the camera obscura as a metaphor of human understanding per se. The modern use of the camera obscura as an epistemic machine had important side effects for science.

History

Camera obscura in Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers. 18th century

The earliest extant written record of the camera obscura is to be found in the writings of Mozi (470 to
390 BC), a Chinese philosopher and the founder of Mohism. Mozi correctly asserted that the image
in a camera obscura is flipped upside down because light travels in straight lines from its source. His
disciples developed this into a minor theory of optics.

The Greek philosopher Aristotle (384 to 322 BC) was familiar with the principle of the camera
obscura. He viewed the crescent shape of a partly eclipsed sun projected on the ground through
the holes in a sieve and through the gaps between the leaves of a plane tree. In the 4th century BC, Aristotle noted that "sunlight travelling through small openings between the leaves of a tree, the holes of a sieve, the openings of wickerwork, and even interlaced fingers will create circular patches of
light on the ground." Euclid's Optics (c. 300 BC) mentioned the camera obscura as a demonstration
that light travels in straight lines. In the 4th century, Greek scholar Theon of Alexandria observed
that "candlelight passing through a pinhole will create an illuminated spot on a screen that is directly
in line with the aperture and the center of the candle."

In the 6th century, the Byzantine-Greek mathematician and architect Anthemius of Tralles (most
famous for designing the Hagia Sophia), used a type of camera obscura in his experiments.

In the 9th century, Al-Kindi (Alkindus) demonstrated that "light from the right side of the flame will
pass through the aperture and end up on the left side of the screen, while light from the left side of
the flame will pass through the aperture and end up on the right side of the screen."
Then came a breakthrough by Ibn al-Haytham (AD 965–1039), also known as Alhazen, often credited as the inventor of the camera obscura. He described a 'dark room' and experimented with images seen through the pinhole. He arranged three candles in a row and put a screen with a small hole between the candles and the wall. He noted that images were formed only by means of small holes and that the candle to the right made an image to the left on the wall.

Leonardo da Vinci (1452–1519), familiar with the work of Alhazen in Latin translation and after an extensive study of optics and human vision, published the first clear description of the camera obscura in Codex Atlanticus (1502):
If the facade of a building, or a place, or a landscape is illuminated by the sun and a small hole is
drilled in the wall of a room in a building facing this, which is not directly lighted by the sun, then all
objects illuminated by the sun will send their images through this aperture and will appear, upside
down, on the wall facing the hole.
You will catch these pictures on a piece of white paper, which placed vertically in the room not far
from that opening, and you will see all the above-mentioned objects on this paper in their natural
shapes or colors, but they will appear smaller and upside down, on account of crossing of the rays at
that aperture. If these pictures originate from a place which is illuminated by the sun, they will appear
colored on the paper exactly as they are. The paper should be very thin and must be viewed from
the back.
[13]
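
Leonardo's "smaller and upside down" is just the geometry of rays crossing at the aperture: similar triangles give the image a size of (screen distance / object distance) times the object's size, with the orientation flipped. A minimal sketch of that geometry in Python (the function name and the numbers are ours, chosen only for illustration):

def pinhole_projection(object_height, object_distance, screen_distance):
    """Similar-triangles model of a camera obscura.

    A ray from the top of the object crosses the axis at the aperture and
    lands below the axis on the screen, so the image is inverted; its size
    is scaled by screen_distance / object_distance.
    """
    magnification = screen_distance / object_distance
    return -object_height * magnification  # negative sign = upside down

# A 10 m facade, 50 m from the hole, projected into a room 2 m deep:
print(pinhole_projection(10.0, 50.0, 2.0))  # -0.4, i.e. 40 cm tall, inverted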

The Song Dynasty Chinese scientist Shen Kuo (1031–1095) experimented with a camera obscura, and was the first to apply geometrical and quantitative attributes to it in his book of 1088 AD, the Dream Pool Essays. However, Shen Kuo alluded to the fact that the Miscellaneous Morsels from Youyang, written in about 840 AD by Duan Chengshi (d. 863) during the Tang Dynasty (618–907), mentioned inverting the image of a Chinese pagoda tower beside a seashore. In fact, Shen makes no assertion that he was the first to experiment with such a device. Shen wrote of Duan's book: "[Miscellaneous Morsels from Youyang] said that the image of the pagoda is inverted because it is beside the sea, and that the sea has that effect. This is nonsense. It is a normal principle that the image is inverted after passing through the small hole."[14]
In 13th-century England, Roger Bacon described the use of a camera obscura for the safe observation of solar eclipses. At the end of the 13th century, Arnaldus de Villa Nova is credited with using a camera obscura to project live performances for entertainment.[15] Its potential as a drawing aid may have been familiar to artists by as early as the 15th century; Leonardo da Vinci (1452–1519) described the camera obscura in Codex Atlanticus. Johann Zahn's Oculus Artificialis Teledioptricus Sive Telescopium, published in 1685, contains many descriptions, diagrams, illustrations and sketches of both the camera obscura and the magic lantern.[16][17]

Camera obscura, from a manuscript of military designs. 17th century, possibly Italian

Giambattista della Porta improved the camera obscura by replacing the hole with an "old man's" lenticular (biconvex) lens in his Magia Naturalis (1558–1589), whose popularity helped spread knowledge of the device. He compared the shape of the human eye to the lens in his camera obscura, and provided a readily comprehensible example of how light forms images in the eye. One chapter in Conte Algarotti's Saggio sopra Pittura (1764) is dedicated to the use of a camera ottica ("optic chamber") in painting.[18]

The 17th-century Dutch Masters, such as Johannes Vermeer, were known for their magnificent attention to detail. It has been widely speculated that they made use of the camera obscura, but the extent of its use by artists in this period remains a matter of considerable controversy, recently revived by the Hockney–Falco thesis.

Four drawings by Canaletto, representing Campo San Giovanni e Paolo in Venice, obtained with a camera obscura (Venice, Gallerie dell'Accademia)

The German astronomer Johannes Kepler described the use of a camera obscura in his Paralipomena in 1604. The term is based on the Latin camera, "(vaulted) chamber or room", and obscura, "darkened" (plural: camerae obscurae). The English physician and author Sir Thomas Browne speculated upon the interrelated workings of optics and the camera obscura in his 1658 discourse The Garden of Cyrus thus:[19]

For at the eye the Pyramidal rayes from the object, receive a decussation, and so strike a second
base upon the Retina or hinder coat, the proper organ of Vision; wherein the pictures from objects
are represented, answerable to the paper, or wall in the dark chamber; after the decussation of the
rayes at the hole of the horny coat, and their refraction upon the Christalline humour, answering the
foramen of the window, and the convex or burning-glasses, which refract the rayes that enter it.
Early models were large, comprising either a whole darkened room or a tent (as employed
by Johannes Kepler). By the 18th century, following developments by Robert Boyle and Robert
Hooke, more easily portable models became available. These were extensively used by amateur
artists while on their travels, but they were also employed by professionals, including Paul
Sandby, Canaletto and Joshua Reynolds, whose camera (disguised as a book) is now in
the Science Museum in London. Such cameras were later adapted by Joseph Nicéphore Niépce, Louis Daguerre and William Fox Talbot for creating the first photographs.

Camera obscura

An artist using an 18th-century camera obscura to trace an image

Photographic cameras were a development of the camera obscura, a device possibly dating back to the ancient
Chinese[1] and ancient Greeks,[2][3] which uses a pinhole or lens to project an image of the scene outside upside-down
onto a viewing surface.
An Arab physicist, Ibn al-Haytham, published his Book of Optics in 1021 AD. He created the first pinhole camera after
observing how light traveled through a window shutter. Ibn al-Haytham realized that smaller holes would create
sharper images. Ibn al-Haytham is also credited with inventing the first camera obscura.[4]
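
Ibn al-Haytham's rule that smaller holes give sharper images holds only up to a point: once the hole is very small, diffraction spreads the light and the image blurs again. A rough sketch of the tradeoff, assuming green light of 550 nm and a chamber 25 cm deep (the function name and numbers are illustrative; the constant 1.9 is Rayleigh's often-quoted rule of thumb, not anything from the historical sources):

import math

def pinhole_blur(d, screen_distance, wavelength=550e-9):
    """Two competing blur contributions for a pinhole of diameter d (metres).

    Geometric blur: a distant point projects a spot about the size of the hole.
    Diffraction blur: the Airy-disk diameter, which grows as the hole shrinks.
    """
    geometric = d
    diffraction = 2.44 * wavelength * screen_distance / d
    return geometric, diffraction

# Rayleigh's rule of thumb for the sharpest image: d ~ 1.9 * sqrt(f * lambda)
f = 0.25  # a camera obscura 25 cm deep
d_opt = 1.9 * math.sqrt(f * 550e-9)
print(f"optimal pinhole diameter ~ {d_opt * 1e3:.2f} mm")  # ~ 0.70 mm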
On 24 January 1544 the mathematician and instrument maker Reinerus Gemma Frisius of Leuven University used one to watch a solar eclipse, publishing a diagram of his method in De Radio Astronomica et Geometrico the following year.[5] In 1558 Giambattista della Porta was the first to recommend the method as an aid to drawing.[6]
Before the invention of photographic processes there was no way to preserve the images produced by these cameras apart from manually tracing them. The earliest cameras were room-sized, with space for one or more people inside; these gradually evolved into more and more compact models, such that by Niépce's time portable handheld cameras suitable for photography were readily available. The first camera that was small and portable enough to be practical for photography was envisioned by Johann Zahn in 1685, though it would be almost 150 years before such an application was possible.

Early fixed images


The first partially successful photograph of a camera image was made in approximately 1816 by Nicéphore Niépce,[7][8] using a very small camera of his own making and a piece of paper coated with silver chloride, which darkened where it was exposed to light. No means of removing the remaining unaffected silver chloride was known to Niépce, so the photograph was not permanent, eventually becoming entirely darkened by the overall exposure to light necessary for viewing it. In the mid-1820s, Niépce used a sliding wooden box camera made by Parisian opticians Charles and Vincent Chevalier to experiment with photography on surfaces thinly coated with Bitumen of Judea.[9] The bitumen slowly hardened in the brightest areas of the image. The unhardened bitumen was then dissolved away. One of those photographs has survived.

Daguerreotype camera made by Maison Susse Frères in 1839, with a lens by Charles Chevalier

Daguerreotypes and calotypes


After Niépce's death in 1833, his partner Louis Daguerre continued to experiment and by 1837 had created the first practical photographic process, which he named the daguerreotype and publicly unveiled in 1839.[10] Daguerre treated a silver-plated sheet of copper with iodine vapor to give it a coating of light-sensitive silver iodide. After exposure in the camera, the image was developed by mercury vapor and fixed with a strong solution of ordinary salt (sodium chloride). Henry Fox Talbot perfected a different process, the calotype, in 1840. As commercialized, both processes used very simple cameras consisting of two nested boxes. The rear box had a removable ground glass screen and could slide in and out to adjust the focus. After focusing, the ground glass was replaced with a light-tight holder containing the sensitized plate or paper and the lens was capped. Then the photographer opened the front cover of the holder, uncapped the lens, and counted off as many seconds (or minutes) as the lighting conditions seemed to require before replacing the cap and closing the holder. Despite this mechanical simplicity, high-quality achromatic lenses were standard.[11]

Late 19th century studio camera

Dry plates
Collodion dry plates had been available since 1855, thanks to the work of Désiré van Monckhoven, but it was not until the invention of the gelatin dry plate in 1871 by Richard Leach Maddox that the wet plate process could be rivaled in
quality and speed. The 1878 discovery that heat-ripening a gelatin emulsion greatly increased its sensitivity finally
made so-called "instantaneous" snapshot exposures practical. For the first time, a tripod or other support was no
longer an absolute necessity. With daylight and a fast plate or film, a small camera could be hand-held while taking
the picture. The ranks of amateur photographers swelled and informal "candid" portraits became popular. There was
a proliferation of camera designs, from single- and twin-lens reflexes to large and bulky field cameras, simple box
cameras, and even "detective cameras" disguised as pocket watches, hats, or other objects.
The short exposure times that made candid photography possible also necessitated another innovation, the
mechanical shutter. The very first shutters were separate accessories, though built-in shutters were common by the
end of the 19th century.[11]

Kodak and the birth of film

Kodak No. 2 Brownie box camera, circa 1910

The use of photographic film was pioneered by George Eastman, who started manufacturing paper film in 1885 before switching to celluloid in 1888–1889. His first camera, which he called the "Kodak," was first offered for sale in 1888. It was a very simple box camera with a fixed-focus lens and a single shutter speed, which along with its relatively low price appealed to the average consumer. The Kodak came pre-loaded with enough film for 100 exposures and needed to be sent back to the factory for processing and reloading when the roll was finished. By the end of the 19th century Eastman had expanded his lineup to several models including both box and folding cameras.
In 1900, Eastman took mass-market photography one step further with the Brownie, a simple and very inexpensive
box camera that introduced the concept of the snapshot. The Brownie was extremely popular and various models
remained on sale until the 1960s.
Film also allowed the movie camera to develop from an expensive toy to a practical commercial tool.
Despite the advances in low-cost photography made possible by Eastman, plate cameras still offered higher-quality
prints and remained popular well into the 20th century. To compete with rollfilm cameras, which offered a larger
number of exposures per loading, many inexpensive plate cameras from this era were equipped with magazines to
hold several plates at once. Special backs for plate cameras allowing them to use film packs or rollfilm were also
available, as were backs that enabled rollfilm cameras to use plates.
Except for a few special types such as Schmidt cameras, most professional astrographs continued to use plates until
the end of the 20th century when electronic photography replaced them.

35 mm

Leica I, 1925

Argus C3, 1939

See also: History of 135 film

A number of manufacturers started to use 35 mm film for still photography between 1905 and 1913. The first 35 mm cameras available to the public, and the first to reach significant sales, were the Tourist Multiple in 1913 and the Simplex in 1914.[citation needed]
Oskar Barnack, who was in charge of research and development at Leitz, decided to investigate using 35 mm cine film for still cameras while attempting to build a compact camera capable of making high-quality enlargements. He built his prototype 35 mm camera (Ur-Leica) around 1913, though further development was delayed for several years by World War I. It wasn't until after the war that Leitz commercialized its first 35 mm camera: the design was test-marketed between 1923 and 1924, receiving enough positive feedback that it was put into production as the Leica I (for Leitz camera) in 1925. The Leica's immediate popularity spawned a number of competitors, most notably the Contax (introduced in 1932), and cemented the position of 35 mm as the format of choice for high-end compact cameras.
Kodak got into the market with the Retina I in 1934, which introduced the 135 cartridge used in all modern 35 mm
cameras. Although the Retina was comparatively inexpensive, 35 mm cameras were still out of reach for most people
and rollfilm remained the format of choice for mass-market cameras. This changed in 1936 with the introduction of
the inexpensive Argus A and to an even greater extent in 1939 with the arrival of the immensely popular Argus C3.
Although the cheapest cameras still used rollfilm, 35 mm film had come to dominate the market by the time the C3
was discontinued in 1966.
The fledgling Japanese camera industry began to take off in 1936 with the Canon 35 mm rangefinder, an improved
version of the 1933 Kwanon prototype. Japanese cameras would begin to become popular in the West after Korean
War veterans and soldiers stationed in Japan brought them back to the United States and elsewhere.

TLRs and SLRs


See also: History of the single-lens reflex camera

A historic camera: the Contax S of 1949, the first pentaprism SLR

Asahiflex IIb, 1954

Nikon F of 1959, the first Japanese system camera

The first practical reflex camera was the Franke & Heidecke Rolleiflex medium format TLR of 1928. Though both
single- and twin-lens reflex cameras had been available for decades, they were too bulky to achieve much popularity.
The Rolleiflex, however, was sufficiently compact to achieve widespread popularity and the medium-format TLR
design became popular for both high- and low-end cameras.
A similar revolution in SLR design began in 1933 with the introduction of the Ihagee Exakta, a compact SLR which used 127 rollfilm. This was followed three years later by the first Western SLR to use 135 film, the Kine Exakta (the world's first true 35 mm SLR was the Soviet "Sport" camera, marketed several months before the Kine Exakta, though the "Sport" used its own film cartridge). The 35 mm SLR design gained immediate popularity and there was an explosion of new models and innovative features after World War II. There were also a few 35 mm TLRs, the best-known of which was the Contaflex of 1935, but for the most part these met with little success.
The first major post-war SLR innovation was the eye-level viewfinder, which first appeared on the Hungarian Duflex in
1947 and was refined in 1948 with the Contax S, the first camera to use a pentaprism. Prior to this, all SLRs were
equipped with waist-level focusing screens. The Duflex was also the first SLR with an instant-return mirror, which
prevented the viewfinder from being blacked out after each exposure. This same time period also saw the
introduction of the Hasselblad 1600F, which set the standard for medium format SLRs for decades.
In 1952 the Asahi Optical Company (which later became well known for its Pentax cameras) introduced the first Japanese SLR using 135 film, the Asahiflex. Several other Japanese camera makers also entered the SLR market in the 1950s, including Canon, Yashica, and Nikon. Nikon's entry, the Nikon F, had a full line of interchangeable components and accessories and is generally regarded as the first Japanese system camera. It was the F, along with the earlier S series of rangefinder cameras, that helped establish Nikon's reputation as a maker of professional-quality equipment.

Instant cameras

Polaroid Model J66, 1961

While conventional cameras were becoming more refined and sophisticated, an entirely new type of camera
appeared on the market in 1948. This was the Polaroid Model 95, the world's first viable instant-picture camera.
Known as a Land Camera after its inventor, Edwin Land, the Model 95 used a patented chemical process to produce
finished positive prints from the exposed negatives in under a minute. The Land Camera caught on despite its
relatively high price and the Polaroid lineup had expanded to dozens of models by the 1960s. The first Polaroid
camera aimed at the popular market, the Model 20 Swinger of 1965, was a huge success and remains one of the top-selling cameras of all time.

Automation
The first camera to feature automatic exposure was the selenium light meter-equipped, fully automatic Super Kodak Six-20 of 1938, but its extremely high price for the time ($225, or $3,782 in present terms[12]) kept it from
achieving any degree of success. By the 1960s, however, low-cost electronic components were commonplace and
cameras equipped with light meters and automatic exposure systems became increasingly widespread.
The next technological advance came in 1960, when the German Mec 16 SB subminiature became the first camera
to place the light meter behind the lens for more accurate metering. However, through-the-lens metering ultimately
became a feature more commonly found on SLRs than other types of camera; the first SLR equipped with a TTL
system was the Topcon RE Super of 1962.

Digital cameras
See also: DSLR history
Digital cameras differ from their analog predecessors primarily in that they do not use film, but capture and save
photographs on digital memory cards or internal storage instead. Their low operating costs have relegated chemical

cameras to niche markets. Digital cameras now include wireless communication capabilities (for example Wi-Fi or Bluetooth) to transfer, print or share photos, and are commonly found on mobile phones.

Early development
The concept of digitizing images on scanners, and the concept of digitizing video signals, predate the concept of
making still pictures by digitizing signals from an array of discrete sensor elements. Early spy satellites used the
extremely complex and expensive method of de-orbit and airborne retrieval of film canisters. Technology was pushed
to skip these steps through the use of in-satellite developing and electronic scanning of the film for direct transmission
to the ground. The amount of film was still a major limitation, and this was overcome and greatly simplified by the
push to develop an electronic image capturing array that could be used instead of film. The first electronic imaging
satellite was the KH-11, launched by the NRO in late 1976. It had a charge-coupled device (CCD) array with a resolution of 800 × 800 pixels (0.64 megapixels).[13] At Philips Labs in New York, Edward Stupp, Pieter Cath and Zsolt
Szilagyi filed for a patent on "All Solid State Radiation Imagers" on 6 September 1968 and constructed a flat-screen
target for receiving and storing an optical image on a matrix composed of an array of photodiodes connected to a
capacitor to form an array of two terminal devices connected in rows and columns. Their US patent was granted on
10 November 1970.[14] Texas Instruments engineer Willis Adcock designed a filmless camera that was not digital and
applied for a patent in 1972, but it is not known whether it was ever built.[15] The Cromemco CYCLOPS, introduced as a hobbyist construction project in 1975,[16] was the first digital camera to be interfaced to a microcomputer.
The first recorded attempt at building a self-contained digital camera was in 1975 by Steven Sasson, an engineer at
Eastman Kodak.[17][18] It used the then-new solid-state CCD image sensor chips developed by Fairchild
Semiconductor in 1973.[19] The camera weighed 8 pounds (3.6 kg), recorded black and white images to a compact
cassette tape, had a resolution of 0.01 megapixels (10,000 pixels), and took 23 seconds to capture its first image in
December 1975. The prototype camera was a technical exercise, not intended for production.
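
The "megapixel" figures quoted for these early sensors are simply the product of the array's dimensions divided by one million. As a trivial check in Python (the 100 × 100 layout for Sasson's 10,000-pixel prototype is our assumption for the example):

# KH-11's 800 x 800 CCD vs. a 100 x 100 array like Sasson's prototype
for width, height in [(800, 800), (100, 100)]:
    print(f"{width} x {height} = {width * height / 1e6:.2f} megapixels")
# prints 0.64 and 0.01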

Analog electronic cameras

Sony Mavica, 1981

Main article: Still video camera

Handheld electronic cameras, in the sense of a device meant to be carried and used like a handheld film camera,
appeared in 1981 with the demonstration of the Sony Mavica (Magnetic Video Camera). This is not to be confused
with the later cameras by Sony that also bore the Mavica name. This was an analog camera, in that it recorded pixel
signals continuously, as videotape machines did, without converting them to discrete levels; it recorded television-like
signals to a 2 × 2 inch "video floppy".[20] In essence it was a video movie camera that recorded single frames, 50 per
disk in field mode and 25 per disk in frame mode. The image quality was considered equal to that of then-current
televisions.

Canon RC-701, 1986

Analog electronic cameras do not appear to have reached the market until 1986 with the Canon RC-701. Canon
demonstrated a prototype of this model at the 1984 Summer Olympics, printing the images in the Yomiuri Shinbun, a
Japanese newspaper. In the United States, the first publication to use these cameras for real reportage was USA
Today, in its coverage of World Series baseball. Several factors held back the widespread adoption of analog
cameras: the cost (upwards of $20,000), poor image quality compared to film, and the lack of quality affordable
printers. Capturing and printing an image originally required access to equipment such as a frame grabber, which was
beyond the reach of the average consumer. The "video floppy" disks later had several reader devices available for
viewing on a screen, but were never standardized as a computer drive.
The early adopters tended to be in the news media, where the cost was negated by the utility and the ability to
transmit images by telephone lines. The poor image quality was offset by the low resolution of newspaper graphics.
This capability to transmit images without a satellite link was useful during the Tiananmen Square protests of
1989 and the first Gulf War in 1991.
US government agencies also took a strong interest in the still video concept, notably the US Navy for use as a real
time air-to-sea surveillance system.
The first analog electronic camera marketed to consumers may have been the Casio VS-101 in 1987. A notable
analog camera produced the same year was the Nikon QV-1000C, designed as a press camera and not offered for
sale to general users, which sold only a few hundred units. It recorded images in greyscale, and the quality in
newspaper print was equal to film cameras. In appearance it closely resembled a modern digital single-lens reflex
camera. Images were stored on video floppy disks.
Silicon Film, a proposed digital sensor cartridge for film cameras that would allow 35 mm cameras to take digital photographs without modification, was announced in late 1998. Silicon Film was to work like a roll of 35 mm film, with a 1.3 megapixel sensor behind the lens and a battery and storage unit fitting in the film holder in the camera. The product, which was never released, became increasingly obsolete due to improvements in digital camera technology and affordability. Silicon Film's parent company filed for bankruptcy in 2001.[21]

Arrival of true digital cameras

The first portable digital SLR camera, introduced by Minolta in 1995.

Nikon D1, 1999

By the late 1980s, the technology required to produce truly commercial digital cameras existed. The first true portable
digital camera that recorded images as a computerized file was likely the Fuji DS-1P of 1988, which recorded to a
2 MB SRAM memory card that used a battery to keep the data in memory. This camera was never marketed to the
public.
The first digital camera of any kind ever sold commercially was possibly the MegaVision Tessera in 1987,[22] though there is not extensive documentation of its sale. The first portable digital camera that was actually marketed commercially was sold in December 1989 in Japan: the DS-X by Fuji.[23] The first commercially available portable digital camera in the United States was the Dycam Model 1, first shipped in November 1990.[24] It was originally a commercial failure because it was black and white, low in resolution, and cost nearly $1,000 (about $2,000 in 2014).[25] It later saw modest success when it was re-sold as the Logitech Fotoman in 1992. It used a CCD image sensor, stored pictures digitally, and connected directly to a computer for download.[26][27][28]


In 1991, Kodak brought to market the Kodak DCS (Kodak Digital Camera System), the beginning of a long line of
professional Kodak DCS SLR cameras that were based in part on film bodies, often Nikons. It used a 1.3 megapixel
sensor, had a bulky external digital storage system and was priced at $13,000. When the Kodak DCS-200 arrived, the original was retroactively dubbed the Kodak DCS-100.

The move to digital formats was helped by the formation of the first JPEG and MPEG standards in 1988, which
allowed image and video files to be compressed for storage. The first consumer camera with a liquid crystal display
on the back was the Casio QV-10 developed by a team led by Hiroyuki Suetaka in 1995. The first camera to
use CompactFlash was the Kodak DC-25 in 1996.[citation needed] The first camera that offered the ability to
record video clips may have been the Ricoh RDC-1 in 1995.
In 1995 Minolta introduced the RD-175, which was based on the Minolta 500si SLR with a splitter and three
independent CCDs. This combination delivered 1.75M pixels. The benefit of using an SLR base was the ability to use
any existing Minolta AF mount lens. 1999 saw the introduction of the Nikon D1, a 2.74 megapixel camera that was
the first digital SLR developed entirely from the ground up by a major manufacturer, and at a cost of under $6,000 at
introduction was affordable by professional photographers and high-end consumers. This camera also used Nikon F-mount lenses, which meant film photographers could use many of the same lenses they already owned.
Digital camera sales continued to flourish, driven by technology advances. The digital market segmented into
different categories: compact digital still cameras, bridge cameras, mirrorless compacts, and digital SLRs. One of
the major technology advances was the development of CMOS sensors, which helped drive sensor costs low enough
to enable the widespread adoption of camera phones.
Since 2003, digital cameras have outsold film cameras,[29] and Kodak announced in January 2004 that they would no longer sell Kodak-branded film cameras in the developed world;[30] in 2012 the company filed for bankruptcy after struggling to adapt to the changing industry.[31] Smartphones now routinely include high-resolution digital cameras.
