
PAPER 2001

Q2:
Cyclone, in strict meteorological terminology, an area of low atmospheric pressure surrounded
by a wind system blowing, in the northern hemisphere, in a counterclockwise direction. A
corresponding high-pressure area with clockwise winds is known as an anticyclone. In the
southern hemisphere these wind directions are reversed. Cyclones are commonly called lows and
anticyclones highs. The term cyclone has often been applied more loosely to the storms and disturbances attending such pressure systems, particularly the violent tropical hurricane and the typhoon, which center on areas of unusually low pressure.
Tornado, violently rotating column of air extending from within a thundercloud (see Cloud)
down to ground level. The strongest tornadoes may sweep houses from their foundations, destroy
brick buildings, toss cars and school buses through the air, and even lift railroad cars from their
tracks. Tornadoes vary in diameter from tens of meters to nearly 2 km (1.2 mi), with an average
diameter of about 50 m (160 ft). Most tornadoes in the northern hemisphere create winds that
blow counterclockwise around a center of extremely low atmospheric pressure. In the southern
hemisphere the winds generally blow clockwise. Peak wind speeds can range from near 120
km/h (75 mph) to almost 500 km/h (300 mph). The forward motion of a tornado can range from
a near standstill to almost 110 km/h (70 mph).
Hurricane, name given to violent storms that originate over the tropical or subtropical waters of
the Atlantic Ocean, Caribbean Sea, Gulf of Mexico, or North Pacific Ocean east of the
International Date Line. Such storms over the North Pacific west of the International Date Line
are called typhoons; those elsewhere are known as tropical cyclones, which is the general name
for all such storms including hurricanes and typhoons. These storms can cause great damage to
property and loss of human life due to high winds, flooding, and large waves crashing against
shorelines. The worst natural disaster in United States history was caused by a hurricane that
struck the coast of Texas in 1900. See also Tropical Storm; Cyclone.
Q3:
Energy
Energy, capacity of matter to perform work as the result of its motion or its position in relation to
forces acting on it. Energy associated with motion is known as kinetic energy, and energy related
to position is called potential energy. Thus, a swinging pendulum has maximum potential energy
at the terminal points; at all intermediate positions it has both kinetic and potential energy in
varying proportions. Energy exists in various forms, including mechanical (see Mechanics),
thermal (see Thermodynamics), chemical (see Chemical Reaction), electrical (see Electricity),
radiant (see Radiation), and atomic (see Nuclear Energy). All forms of energy are
interconvertible by appropriate processes. In the process of transformation either kinetic or
potential energy may be lost or gained, but the sum total of the two remains always the same.
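As a quick illustration (a sketch added here, not from the article; all numbers are illustrative), the trade-off between kinetic and potential energy in an ideal, frictionless pendulum can be tabulated in a few lines of Python:

    import math

    # Ideal pendulum: total mechanical energy E = KE + PE stays constant.
    g = 9.81                   # gravitational acceleration, m/s^2
    length = 2.0               # pendulum length, m
    mass = 1.0                 # bob mass, kg
    theta0 = math.radians(30)  # release angle from the vertical

    def height(theta):
        # Height of the bob above its lowest point at angle theta.
        return length * (1 - math.cos(theta))

    E_total = mass * g * height(theta0)  # all potential at the terminal point

    for deg in (30, 20, 10, 0):
        theta = math.radians(deg)
        pe = mass * g * height(theta)
        ke = E_total - pe                # what conservation demands
        v = math.sqrt(2 * ke / mass)
        print(f"theta={deg:2d} deg  PE={pe:5.2f} J  KE={ke:5.2f} J  v={v:4.2f} m/s")

At the terminal points the energy is all potential; at the lowest point it is all kinetic; in between the two forms trade off while their sum stays fixed.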
A weight suspended from a cord has potential energy due to its position, inasmuch as it can
perform work in the process of falling. An electric battery has potential energy in chemical form.
A piece of magnesium has potential energy stored in chemical form that is expended in the form
of heat and light if the magnesium is ignited. If a gun is fired, the potential energy of the
gunpowder is transformed into the kinetic energy of the moving projectile. The kinetic
mechanical energy of the moving rotor of a dynamo is changed into kinetic electrical energy by
electromagnetic induction. All forms of energy tend to be transformed into heat, which is the
most transient form of energy. In mechanical devices energy not expended in useful work is
dissipated in frictional heat, and losses in electrical circuits are largely heat losses.
Empirical observation in the 19th century led to the conclusion that although energy can be
transformed, it cannot be created or destroyed. This concept, known as the conservation of
energy, constitutes one of the basic principles of classical mechanics. The principle, along with
the parallel principle of conservation of matter, holds true only for phenomena involving
velocities that are small compared with the velocity of light. At higher velocities close to that of
light, as in nuclear reactions, energy and matter are interconvertible (see Relativity). In modern
physics the two concepts, the conservation of energy and of mass, are thus unified.
ENERGY CONVERSION
Transducer, device that converts an input energy into an output energy. Usually, the output
energy is a different kind of energy than the input energy. An example is a temperature gauge in
which a spiral metallic spring converts thermal energy into a mechanical deflection of the dial
needle. Because of the ease with which electrical energy may be transmitted and amplified, the
most useful transducers are those that convert other forms of energy, such as heat, light, or
sound, into electrical energy. Some examples are microphones, which convert sound energy into
electrical energy; photoelectric materials, which convert light energy into electrical energy; and
pyroelectric crystals, which convert heat energy into electrical energy.
Electric Motors and Generators, group of devices used to convert mechanical energy into
electrical energy, or electrical energy into mechanical energy, by electromagnetic means (see
Energy). A machine that converts mechanical energy into electrical energy is called a generator,
alternator, or dynamo, and a machine that converts electrical energy into mechanical energy is
called a motor.
Most electric cars use lead-acid batteries, but new types of batteries, including zinc-chlorine,
nickel metal hydride, and sodium-sulfur, are becoming more common. The motor of an electric
car harnesses the battery's electrical energy by converting it to kinetic energy. The driver simply
switches on the power, selects “Forward” or “Reverse” with another switch, and steps on the
accelerator pedal.
Photosynthesis, process by which green plants and certain other organisms use the energy of
light to convert carbon dioxide and water into the simple sugar glucose.
Turbine, rotary engine that converts the energy of a moving stream of water, steam, or gas into
mechanical energy. The basic element in a turbine is a wheel or rotor with paddles, propellers,
blades, or buckets arranged on its circumference in such a fashion that the moving fluid exerts a
tangential force that turns the wheel and imparts energy to it. This mechanical energy is then
transferred through a drive shaft to operate a machine, compressor, electric generator, or
propeller. Turbines are classified as hydraulic, or water, turbines, steam turbines, or gas turbines.
Today turbine-powered generators produce most of the world's electrical energy. Windmills that
generate electricity are known as wind turbines (see Windmill).
Wind Energy, energy contained in the force of the winds blowing across the earth’s surface.
When harnessed, wind energy can be converted into mechanical energy for performing work
such as pumping water, grinding grain, and milling lumber. By connecting a spinning rotor (an
assembly of blades attached to a hub) to an electric generator, modern wind turbines convert
wind energy, which turns the rotor, into electrical energy.
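The energy a rotor intercepts follows from the kinetic energy of the moving air. As a sketch (added here; the standard formula P = ½ρAv³ and the Betz extraction limit are well established, but the rotor size below is illustrative):

    import math

    # Power in wind crossing a rotor disk: P = 0.5 * rho * A * v**3.
    # A real turbine extracts at most ~59 percent of this (the Betz limit).
    rho = 1.225                    # air density at sea level, kg/m^3
    rotor_diameter = 40.0          # m (illustrative)
    area = math.pi * (rotor_diameter / 2) ** 2

    for v in (5.0, 10.0, 15.0):    # wind speed, m/s
        p_avail = 0.5 * rho * area * v ** 3
        p_betz = (16 / 27) * p_avail
        print(f"v={v:4.1f} m/s  available={p_avail/1e3:8.1f} kW  "
              f"Betz ceiling={p_betz/1e3:8.1f} kW")

The cubic dependence on wind speed is why siting matters so much: doubling the wind speed makes eight times the power available.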
Q4:
(I)
Polymer
I INTRODUCTION
Polymer, substance consisting of large molecules that are made of many small, repeating units
called monomers, or mers. The number of repeating units in one large molecule is called the
degree of polymerization. Materials with a very high degree of polymerization are called high
polymers. Polymers consisting of only one kind of repeating unit are called homopolymers.
Copolymers are formed from several different repeating units.
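The degree of polymerization is simply the chain's molar mass divided by the molar mass of the repeating unit. A minimal worked example (added here; polyethylene's repeat-unit mass is a standard value, the chain mass is illustrative):

    # Degree of polymerization n = M(chain) / M(repeat unit).
    # Polyethylene repeat unit -CH2-CH2- has molar mass 28.05 g/mol.
    repeat_unit_mass = 28.05    # g/mol
    chain_mass = 280_500.0      # g/mol, an illustrative high polymer

    n = chain_mass / repeat_unit_mass
    print(f"degree of polymerization ~ {n:.0f}")   # ~10000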
Most of the organic substances found in living matter, such as protein, wood, chitin, rubber, and
resins, are polymers. Many synthetic materials, such as plastics, fibers (see Rayon), adhesives,
glass, and porcelain, are also to a large extent polymeric substances.
II STRUCTURE OF POLYMERS
Polymers can be subdivided into three, or possibly four, structural groups. The molecules in
linear polymers consist of long chains of monomers joined by bonds that are rigid to a certain
degree—the monomers cannot rotate freely with respect to each other. Typical examples are
polyethylene, polyvinyl alcohol, and polyvinyl chloride (PVC).
Branched polymers have side chains that are attached to the chain molecule itself. Branching can
be caused by impurities or by the presence of monomers that have several reactive groups. Chain
polymers composed of monomers with side groups that are part of the monomers, such as
polystyrene or polypropylene, are not considered branched polymers.
In cross-linked polymers, two or more chains are joined together by side chains. With a small
degree of cross-linking, a loose network is obtained that is essentially two-dimensional. High
degrees of cross-linking result in a tight three-dimensional structure. Cross-linking is usually
caused by chemical reactions. An example of a two-dimensional cross-linked structure is
vulcanized rubber, in which cross-links are formed by sulfur atoms. Thermosetting plastics are
examples of highly cross-linked polymers; their structure is so rigid that when heated they
decompose or burn rather than melt.
III SYNTHESIS
Two general methods exist for forming large molecules from small monomers: addition
polymerization and condensation polymerization. In the chemical process called addition
polymerization, monomers join together without the loss of atoms from the molecules. Some
examples of addition polymers are polyethylene, polypropylene, polystyrene, polyvinyl acetate,
and polytetrafluoroethylene (Teflon).
In condensation polymerization, monomers join together with the simultaneous elimination of
atoms or groups of atoms. Typical condensation polymers are polyamides, polyesters, and
certain polyurethanes.
In 1983 a new method of addition polymerization called group transfer polymerization was
announced. An activating group within the molecule initiating the process transfers to the end of
the growing polymer chain as individual monomers insert themselves in the group. The method
has been used for acrylic plastics; it should prove applicable to other plastics as well.
Synthetic polymers include the plastics polystyrene, polyester, nylon (a polyamide), and
polyvinyl chloride. These polymers differ in their repeating monomer units. Scientists build
polymers from different monomer units to create plastics with different properties. For example,
polyvinyl chloride is tough and nylon is silklike. Synthetic polymers usually do not dissolve in
water or react with other chemicals. Strong synthetic polymers form fibers for clothing and other
materials. Synthetic fibers usually last longer than natural fibers do.

(II)
Laser
I INTRODUCTION
Laser, a device that produces and amplifies light. The word laser is an acronym for Light
Amplification by Stimulated Emission of Radiation. Laser light is very pure in color, can be
extremely intense, and can be directed with great accuracy. Lasers are used in many modern
technological devices including bar code readers, compact disc (CD) players, and laser printers.
Lasers can generate light beyond the range visible to the human eye, from the infrared through
the X-ray range. Masers are similar devices that produce and amplify microwaves.
II PRINCIPLES OF OPERATION
Lasers generate light by storing energy in particles called electrons inside atoms and then
inducing the electrons to emit the absorbed energy as light. Atoms are the building blocks of all
matter on Earth and are a thousand times smaller than viruses. Electrons are the underlying
source of almost all light.
Light is composed of tiny packets of energy called photons. Lasers produce coherent light: light
that is monochromatic (one color) and whose photons are “in step” with one another.
A Excited Atoms
At the heart of an atom is a tightly bound cluster of particles called the nucleus. This cluster is
made up of two types of particles: protons, which have a positive charge, and neutrons, which
have no charge. The nucleus makes up more than 99.9 percent of the atom’s mass but occupies
only a tiny part of the atom’s space. Enlarge an atom up to the size of Yankee Stadium and the
equally magnified nucleus is only the size of a baseball.
Electrons, tiny particles that have a negative charge, whirl through the rest of the space inside
atoms. Electrons travel in complex orbits and exist only in certain specific energy states or levels
(see Quantum Theory). Electrons can move from a low to a high energy level by absorbing
energy. An atom with at least one electron that occupies a higher energy level than it normally
would is said to be excited. An atom can become excited by absorbing a photon whose energy
equals the difference between the two energy levels. A photon’s energy, color, frequency, and
wavelength are directly related: All photons of a given energy are the same color and have the
same frequency and wavelength.
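Quantitatively, the relation is E = hf = hc/λ. A small sketch of the arithmetic (added here; the constants are standard, and the wavelengths chosen are simply common laser lines):

    # Photon energy from wavelength: E = h*c / wavelength (and E = h*f).
    h = 6.626e-34     # Planck constant, J*s
    c = 2.998e8       # speed of light, m/s
    eV = 1.602e-19    # joules per electron volt

    for name, wavelength in (("red", 633e-9), ("green", 532e-9), ("violet", 405e-9)):
        energy = h * c / wavelength
        frequency = c / wavelength
        print(f"{name:6s} {wavelength*1e9:5.0f} nm  f={frequency:.2e} Hz  E={energy/eV:4.2f} eV")

Shorter wavelength means higher frequency and higher photon energy, which is why the listed colors climb from about 2 eV to about 3 eV.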
Usually, electrons quickly jump back to the low energy level, giving off the extra energy as light
(see Photoelectric Effect). Neon signs and fluorescent lamps glow with this kind of light as many
electrons independently emit photons of different colors in all directions.
B Stimulated Emission
Lasers are different from more familiar sources of light. Excited atoms in lasers collectively emit
photons of a single color, all traveling in the same direction and all in step with one another.
When two photons are in step, the peaks and troughs of their waves line up. The electrons in the
atoms of a laser are first pumped, or energized, to an excited state by an energy source. An
excited atom can then be “stimulated” by a photon of exactly the same color (or, equivalently,
the same wavelength) as the photon this atom is about to emit spontaneously. If the photon
approaches closely enough, the photon can stimulate the excited atom to immediately emit light
that has the same wavelength and is in step with the photon that interacted with it. This
stimulated emission is the key to laser operation. The new light adds to the existing light, and the
two photons go on to stimulate other excited atoms to give up their extra energy, again in step.
The phenomenon snowballs into an amplified, coherent beam of light: laser light.
In a gas laser, for example, the photons usually zip back and forth in a gas-filled tube with highly
reflective mirrors facing inward at each end. As the photons bounce between the two parallel
mirrors, they trigger further stimulated emissions and the light gets brighter and brighter with
each pass through the excited atoms. One of the mirrors is only partially silvered, allowing a
small amount of light to pass through rather than reflecting it all. The intense, directional, and
single-colored laser light finally escapes through this slightly transparent mirror. The escaped
light forms the laser beam.
Albert Einstein first proposed stimulated emission, the underlying process for laser action, in
1917. Translating the idea of stimulated emission into a working model, however, required more
than four decades. The working principles of lasers were outlined by the American physicists
Charles Hard Townes and Arthur Leonard Schawlow in a 1958 patent application. (Both men
won Nobel Prizes in physics for their work, Townes in 1964 and Schawlow in 1981). The patent
for the laser was granted to Townes and Schawlow, but it was later challenged by the American
physicist and engineer Gordon Gould, who had written down some ideas and coined the word
laser in 1957. Gould eventually won a partial patent covering several types of laser. In 1960
American physicist Theodore Maiman of Hughes Aircraft Corporation constructed the first
working laser from a ruby rod.
III TYPES OF LASERS
Lasers are generally classified according to the material, called the medium, they use to produce
the laser light. Solid-state, gas, liquid, semiconductor, and free electron are all common types of
lasers.
A Solid-State Lasers
Solid-state lasers produce light by means of a solid medium. The most common solid laser media
are rods of ruby crystals and neodymium-doped glasses and crystals. The ends of the rods are
fashioned into two parallel surfaces coated with a highly reflecting nonmetallic film. Solid-state
lasers offer the highest power output. They are usually pulsed to generate a very brief burst of
light. Bursts as short as 12 × 10⁻¹⁵ sec have been achieved. These short bursts are useful for
studying physical phenomena of very brief duration.
One method of exciting the atoms in lasers is to illuminate the solid laser material with higher-
energy light than the laser produces. This procedure, called pumping, is achieved with brilliant
strobe light from xenon flash tubes, arc lamps, or metal-vapor lamps.
B Gas Lasers
The lasing medium of a gas laser can be a pure gas, a mixture of gases, or even metal vapor. The
medium is usually contained in a cylindrical glass or quartz tube. Two mirrors are located
outside the ends of the tube to form the laser cavity. Gas lasers can be pumped by ultraviolet
light, electron beams, electric current, or chemical reactions. The helium-neon laser is known for
its color purity and minimal beam spread. Carbon dioxide lasers are very efficient at turning the
energy used to excite their atoms into laser light. Consequently, they are the most powerful
continuous wave (CW) lasers—that is, lasers that emit light continuously rather than in pulses.
C Liquid Lasers
The most common liquid laser media are organic dyes contained in glass vessels. They are
pumped by intense flash lamps in a pulse mode or by a separate gas laser in the continuous wave
mode. Some dye lasers are tunable, meaning that the color of the laser light they emit can be
adjusted with the help of a prism located inside the laser cavity.
D Semiconductor Lasers
Semiconductor lasers are the most compact lasers. Gallium arsenide is the most common
semiconductor used. A typical semiconductor laser consists of a junction between two flat layers
of gallium arsenide. One layer is treated with an impurity whose atoms provide an extra electron,
and the other with an impurity whose atoms are one electron short. Semiconductor lasers are
pumped by the direct application of electric current across the junction. They can be operated in
the continuous wave mode with better than 50 percent efficiency. Only a small percentage of the
energy used to excite most other lasers is converted into light.
Scientists have developed extremely tiny semiconductor lasers, called quantum-dot vertical-
cavity surface-emitting lasers. These lasers are so tiny that more than a million of them can fit on
a chip the size of a fingernail.
Common uses for semiconductor lasers include compact disc (CD) players and laser printers.
Semiconductor lasers also form the heart of fiber-optics communication systems (see Fiber
Optics).
E Free Electron Lasers
Free electron lasers employ an array of magnets to excite free electrons (electrons not bound to
atoms). First developed in 1977, they are now becoming important research instruments. Free
electron lasers are tunable over a broader range of energies than dye lasers. The devices become
more difficult to operate at higher energies but generally work successfully from infrared
through ultraviolet wavelengths. Theoretically, free electron lasers can function even in the X-ray
range.
The free electron laser facility at the University of California at Santa Barbara uses intense far-
infrared light to investigate mutations in DNA molecules and to study the properties of
semiconductor materials. Free electron lasers should also eventually become capable of
producing very high-power radiation that is currently too expensive to produce. At high power,
near-infrared beams from a free electron laser could defend against a missile attack.
IV LASER APPLICATIONS
The use of lasers is restricted only by imagination. Lasers have become valuable tools in
industry, scientific research, communications, medicine, the military, and the arts.
A Industry
Powerful laser beams can be focused on a small spot to generate enormous temperatures.
Consequently, the focused beams can readily and precisely heat, melt, or vaporize material.
Lasers have been used, for example, to drill holes in diamonds, to shape machine tools, to trim
microelectronics, to cut fashion patterns, to synthesize new material, and to attempt to induce
controlled nuclear fusion (see Nuclear Energy).
Highly directional laser beams are used for alignment in construction. Perfectly straight and
uniformly sized tunnels, for example, may be dug using lasers for guidance. Powerful, short laser
pulses also make high-speed photography with exposure times of only several trillionths of a
second possible.
B Scientific Research
Because laser light is highly directional and monochromatic, extremely small amounts of light
scattering and small shifts in color caused by the interaction between laser light and matter can
easily be detected. By measuring the scattering and color shifts, scientists can study molecular
structures of matter. Chemical reactions can be selectively induced, and the existence of trace
substances in samples can be detected. Lasers are also the most effective detectors of certain
types of air pollution (see Chemical Analysis; Photochemistry).
Scientists use lasers to make extremely accurate measurements. Lasers are used in this way for
monitoring small movements associated with plate tectonics and for geographic surveys. Lasers
have been used for precise determination (to within one inch) of the distance between Earth and
the Moon, and in precise tests to confirm Einstein’s theory of relativity. Scientists also have used
lasers to determine the speed of light to an unprecedented accuracy.
Very fast laser-activated switches are being developed for use in particle accelerators. Scientists
also use lasers to trap single atoms and subatomic particles in order to study these tiny bits of
matter (see Particle Trap).
C Communications
Laser light can travel a large distance in outer space with little reduction in signal strength. In
addition, high-energy laser light can carry 1,000 times as many television channels as are today carried by microwave signals. Lasers are therefore ideal for space communications. Low-loss optical fibers
have been developed to transmit laser light for earthbound communication in telephone and
computer systems. Laser techniques have also been used for high-density information recording.
For instance, laser light simplifies the recording of a hologram, from which a three-dimensional
image can be reconstructed with a laser beam. Lasers are also used to play audio CDs and
videodiscs (see Sound Recording and Reproduction).
D Medicine
Lasers have a wide range of medical uses. Intense, narrow beams of laser light can cut and
cauterize certain body tissues in a small fraction of a second without damaging surrounding
healthy tissues. Lasers have been used to “weld” the retina, bore holes in the skull, vaporize
lesions, and cauterize blood vessels. Laser surgery has virtually replaced older surgical
procedures for eye disorders. Laser techniques have also been developed for lab tests of small
biological samples.
E Military Applications
Laser guidance systems for missiles, aircraft, and satellites have been constructed. Guns can be
fitted with laser sights and range finders. The use of laser beams to destroy hostile ballistic
missiles has been proposed, as in the Strategic Defense Initiative urged by U.S. president Ronald
Reagan and the Ballistic Missile Defense program supported by President George W. Bush. The
ability of tunable dye lasers to selectively excite an atom or molecule may open up more efficient
ways to separate isotopes for construction of nuclear weapons.
V LASER SAFETY
Because the eye focuses laser light just as it does other light, the chief danger in working with
lasers is eye damage. Therefore, laser light should not be viewed either directly or reflected.
Lasers sold and used commercially in the United States must comply with a strict set of laws
enforced by the Center for Devices and Radiological Health (CDRH), a department of the Food
and Drug Administration. The CDRH has divided lasers into six groups, depending on their
power output, their emission duration, and the energy of the photons they emit. The classification
is then attached to the laser as a sticker. The higher the laser’s energy, the higher its potential to
injure. High-powered lasers of the Class IV type (the highest classification) generate a beam of
energy that can start fires, burn flesh, and cause permanent eye damage whether the light is
direct, reflected, or diffused. Canada uses the same classification system, and laser use in Canada
is overseen by Health Canada’s Radiation Protection Bureau.
Goggles blocking the specific color of photons that a laser produces are mandatory for the safe
use of lasers. Even with goggles, direct exposure to laser light should be avoided.
(iii) Pesticides
The chemical agents called pesticides include herbicides (for weed control), insecticides, and
fungicides. More than half the pesticides used in the U.S. are herbicides that control weeds:
USDA estimates indicate that 86 percent of U.S. agricultural land areas are treated with
herbicides, 18 percent with insecticides, and 3 percent with fungicides. The amount of pesticide
used on different crops also varies. For example, in the U.S., about 67 percent of the insecticides
used in agriculture are applied to two crops, cotton and corn; about 70 percent of the herbicides
are applied to corn and soybeans, and most of the fungicides are applied to fruit and vegetable
crops.
Most of the insecticides now applied are long-lasting synthetic compounds that affect the
nervous system of insects on contact. Among the most effective are the chlorinated hydrocarbons
DDT, chlordane, and toxaphene, although agricultural use of DDT has been banned in the U.S.
since 1973. Others, the organophosphate insecticides, include malathion, parathion, and
dimethoate. Among the most effective herbicides are the compounds of 2,4-D (2,4-
dichlorophenoxyacetic acid), only a few kilograms of which are required per hectare to kill
broad-leaved weeds while leaving grains unaffected.
Agricultural pesticides prevent a monetary loss of about $9 billion each year in the U.S. For
every $1 invested in pesticides, the American farmer gets about $4 in return. These benefits,
however, must be weighed against the costs to society of using pesticides, as seen in the banning
of ethylene dibromide in the early 1980s. These costs include human poisonings, fish kills,
honey bee poisonings, and the contamination of livestock products. The environmental and
social costs of pesticide use in the U.S. have been estimated to be at least $1 billion each year.
Thus, although pesticides are valuable for agriculture, they also can cause serious harm. Indeed,
the question may be asked—what would crop losses be if insecticides were not used in the U.S.,
and readily available nonchemical controls were substituted? The best estimate is that only
another 5 percent of the nation's food would be lost.

(iv) Fission and Fusion

Nuclear energy can be released in two different ways: fission, the splitting of a large nucleus, and
fusion, the combining of two small nuclei. In both cases energy—measured in millions of
electron volts (MeV)—is released because the products are more stable (have a higher binding
energy) than the reactants. Fusion reactions are difficult to maintain because the nuclei repel
each other, but fusion creates much less radioactive waste than does fission.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
Q: How would a fusion reactor differ from the nuclear reactors we currently have?
A: The nuclear reactors we have now are fission reactors. This means that they obtain their
energy from nuclear reactions that split large nuclei such as uranium into smaller ones such as
rubidium and cesium. There is a binding energy that holds a nucleus together. If the sum of the binding energies of the smaller pieces is greater than the binding energy of the original large nucleus, you get the difference in energy as heat that can be used in a power station to generate electricity.
A fusion reaction works the other way. It takes small nuclei like deuterium (heavy hydrogen) and fuses them together to make larger ones such as helium. If the binding energy of the final larger helium nucleus is greater than that of the two deuterium nuclei, the difference can likewise be released and used to generate electricity.
There are two main differences between fission and fusion. The first is that the materials required
for fission are rarer and more expensive to produce than those for fusion. For example, uranium
has to be mined in special areas and then purified by difficult processes. By contrast, even
though deuterium makes up only 0.02 percent of naturally occurring hydrogen, we have a vast
supply of hydrogen in the water making up the oceans. The second difference is that the products
of fission are radioactive and so need to be treated carefully, as they are dangerous to health. The
products of fusion are not radioactive (although a realistic reactor will likely have some
relatively small amount of radioactive product).
The problem with building fusion reactors is that a steady, controlled fusion reaction is very hard
to achieve. It is still a subject of intense research. The main problem is that to achieve fusion we
need to keep the nuclei we wish to fuse at extremely high temperatures and close enough for
them to have a chance of fusing with one another. It is extremely difficult to find a way of holding
everything together, since the nuclei naturally repel each other and the temperatures involved are
high enough to melt any solid substance known. As technology improves, holding everything
together will become easier, but it seems that we are a long way off from having commercial
fusion reactors.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
(v) Paramagnetism and Diamagnetism
Paramagnetism
Liquid oxygen becomes trapped in an electromagnet’s magnetic field because oxygen (O2) is
paramagnetic. Oxygen has two unpaired electrons whose magnetic moments align with external
magnetic field lines. When this occurs, the O2 molecules themselves behave like tiny magnets,
and become trapped between the poles of the electromagnet.
Magnetism
I INTRODUCTION
Magnetism, an aspect of electromagnetism, one of the fundamental forces of nature. Magnetic
forces are produced by the motion of charged particles such as electrons, indicating the close
relationship between electricity and magnetism. The unifying frame for these two forces is called
electromagnetic theory (see Electromagnetic Radiation). The most familiar evidence of
magnetism is the attractive or repulsive force observed to act between magnetic materials such as
iron. More subtle effects of magnetism, however, are found in all matter. In recent times these
effects have provided important clues to the atomic structure of matter.
II HISTORY OF STUDY
The phenomenon of magnetism has been known since ancient times. The mineral lodestone
(see Magnetite), an oxide of iron that has the property of attracting iron objects, was known to
the Greeks, Romans, and Chinese. When a piece of iron is stroked with lodestone, the iron itself
acquires the same ability to attract other pieces of iron. The magnets thus produced are polarized
—that is, each has two sides or ends called north-seeking and south-seeking poles. Like poles
repel one another, and unlike poles attract.
The compass was first used for navigation in the West some time after AD 1200. In the 13th
century, important investigations of magnets were made by the French scholar Petrus Peregrinus.
His discoveries stood for nearly 300 years, until the English physicist and physician William
Gilbert published his book Of Magnets, Magnetic Bodies, and the Great Magnet of the Earth in
1600. Gilbert applied scientific methods to the study of electricity and magnetism. He pointed
out that the earth itself behaves like a giant magnet, and through a series of experiments, he
investigated and disproved several incorrect notions about magnetism that were accepted as
being true at the time. Subsequently, in 1750, the English geologist John Michell invented a
balance that he used in the study of magnetic forces. He showed that the attraction and repulsion
of magnets decrease as the squares of the distance from the respective poles increase. The French
physicist Charles Augustin de Coulomb, who had measured the forces between electric charges,
later verified Michell's observation with high precision.
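Michell's inverse-square result has the same mathematical form as Coulomb's law for electric charges. A toy sketch (added here; the constant and pole strengths are illustrative placeholders, not measured values):

    # Force between two magnetic poles: F = k * m1 * m2 / r**2.
    k = 1e-7              # proportionality constant (illustrative units)
    m1, m2 = 10.0, 10.0   # pole strengths (illustrative)

    for r in (0.1, 0.2, 0.4):   # doubling the separation each step
        force = k * m1 * m2 / r ** 2
        print(f"r={r:.1f} m  F={force:.2e}")   # force falls by a factor of 4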
III ELECTROMAGNETIC THEORY
In the late 18th and early 19th centuries, the theories of electricity and magnetism were
investigated simultaneously. In 1819 an important discovery was made by the Danish physicist
Hans Christian Oersted, who found that a magnetic needle could be deflected by an electric
current flowing through a wire. This discovery, which showed a connection between electricity
and magnetism, was followed up by the French scientist André Marie Ampère, who studied the
forces between wires carrying electric currents, and by the French physicist Dominique François
Jean Arago, who magnetized a piece of iron by placing it near a current-carrying wire. In 1831
the English scientist Michael Faraday discovered that moving a magnet near a wire induces an
electric current in that wire, the inverse effect to that found by Oersted: Oersted showed that an
electric current creates a magnetic field, while Faraday showed that a magnetic field can be used
to create an electric current. The full unification of the theories of electricity and magnetism was
achieved by the English physicist James Clerk Maxwell, who predicted the existence of
electromagnetic waves and identified light as an electromagnetic phenomenon.
Subsequent studies of magnetism were increasingly concerned with an understanding of the
atomic and molecular origins of the magnetic properties of matter. In 1905 the French physicist
Paul Langevin produced a theory regarding the temperature dependence of the magnetic
properties of paramagnets (discussed below), which was based on the atomic structure of matter.
This theory is an early example of the description of large-scale properties in terms of the
properties of electrons and atoms. Langevin's theory was subsequently expanded by the French
physicist Pierre-Ernest Weiss, who postulated the existence of an internal, “molecular” magnetic
field in materials such as iron. This concept, when combined with Langevin's theory, served to
explain the properties of strongly magnetic materials such as lodestone.
After Weiss's theory, magnetic properties were explored in greater and greater detail. The theory
of atomic structure of Danish physicist Niels Bohr, for example, provided an understanding of
the periodic table and showed why magnetism occurs in transition elements such as iron and the
rare earth elements, or in compounds containing these elements. The American physicists
Samuel Abraham Goudsmit and George Eugene Uhlenbeck showed in 1925 that the electron
itself has spin and behaves like a small bar magnet. (At the atomic level, magnetism is measured
in terms of magnetic moments—a magnetic moment is a vector quantity that depends on the
strength and orientation of the magnetic field, and the configuration of the object that produces
the magnetic field.) The German physicist Werner Heisenberg gave a detailed explanation for
Weiss's molecular field in 1927, on the basis of the newly-developed quantum mechanics (see
Quantum Theory). Other scientists then predicted many more complex atomic arrangements of
magnetic moments, with diverse magnetic properties.
IV THE MAGNETIC FIELD
Objects such as a bar magnet or a current-carrying wire can influence other magnetic materials
without physically contacting them, because magnetic objects produce a magnetic field.
Magnetic fields are usually represented by magnetic flux lines. At any point, the direction of the
magnetic field is the same as the direction of the flux lines, and the strength of the magnetic field
is inversely proportional to the space between the flux lines. For example, in a bar magnet, the flux lines
emerge at one end of the magnet, then curve around the other end; the flux lines can be thought
of as being closed loops, with part of the loop inside the magnet, and part of the loop outside. At
the ends of the magnet, where the flux lines are closest together, the magnetic field is strongest;
toward the side of the magnet, where the flux lines are farther apart, the magnetic field is weaker.
Depending on their shapes and magnetic strengths, different kinds of magnets produce different
patterns of flux lines. The pattern of flux lines created by magnets or any other object that creates
a magnetic field can be mapped by using a compass or small iron filings. Magnets tend to align
themselves along magnetic flux lines. Thus a compass, which is a small magnet that is free to
rotate, will tend to orient itself in the direction of the magnetic flux lines. By noting the direction
of the compass needle when the compass is placed at many locations around the source of the
magnetic field, the pattern of flux lines can be inferred. Alternatively, when iron filings are
placed around an object that creates a magnetic field, the filings will line up along the flux lines,
revealing the flux line pattern.
Magnetic fields influence magnetic materials, and also influence charged particles that move
through the magnetic field. Generally, when a charged particle moves through a magnetic field,
it feels a force that is at right angles both to the velocity of the charged particle and the magnetic
field. Since the force is always perpendicular to the velocity of the charged particle, a charged
particle in a magnetic field moves in a curved path. Magnetic fields are used to change the paths
of charged particles in devices such as particle accelerators and mass spectrometers.
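For motion perpendicular to a uniform field, setting the magnetic force equal to the centripetal force gives a circular orbit of radius r = mv/(qB). A small sketch of the numbers for an electron (added here; the speed and field strength are illustrative):

    import math

    # Charged particle moving perpendicular to a uniform field B:
    # q*v*B = m*v**2 / r  =>  r = m*v / (q*B)
    q = 1.602e-19    # electron charge magnitude, C
    m = 9.109e-31    # electron mass, kg
    v = 1.0e7        # speed, m/s (illustrative)
    B = 0.01         # field strength, tesla (illustrative)

    r = m * v / (q * B)
    period = 2 * math.pi * m / (q * B)   # orbital period, independent of speed
    print(f"radius = {r*1e3:.2f} mm, period = {period:.2e} s")

Accelerator and mass-spectrometer designs exploit exactly this relation: stronger fields or slower particles mean tighter curves.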
V KINDS OF MAGNETIC MATERIALS
The magnetic properties of materials are classified in a number of different ways.
One classification of magnetic materials—into diamagnetic, paramagnetic, and ferromagnetic—
is based on how the material reacts to a magnetic field. Diamagnetic materials, when placed in a
magnetic field, have a magnetic moment induced in them that opposes the direction of the
magnetic field. This property is now understood to be a result of electric currents that are
induced in individual atoms and molecules. These currents, according to Ampere's law, produce
magnetic moments in opposition to the applied field. Many materials are diamagnetic; the
strongest ones are metallic bismuth and organic molecules, such as benzene, that have a cyclic
structure, enabling the easy establishment of electric currents.
Paramagnetic behavior results when the applied magnetic field lines up all the existing magnetic
moments of the individual atoms or molecules that make up the material. This results in an
overall magnetic moment that adds to the magnetic field. Paramagnetic materials usually contain
transition metals or rare earth elements that possess unpaired electrons. Paramagnetism in
nonmetallic substances is usually characterized by temperature dependence; that is, the size of an
induced magnetic moment varies inversely to the temperature. This is a result of the increasing
difficulty of ordering the magnetic moments of the individual atoms along the direction of the
magnetic field as the temperature is raised.
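This inverse temperature dependence is Curie's law, often written M ≈ C·B/T. A minimal sketch (added here; the Curie constant and applied field are illustrative):

    # Curie's law for a paramagnet: induced magnetization M ~ C * B / T.
    C = 1.0    # Curie constant (material dependent; illustrative)
    B = 1.0    # applied field (illustrative units)

    for T in (50, 100, 200, 400):   # temperature, kelvin
        M = C * B / T
        print(f"T={T:3d} K  induced magnetization ~ {M:.4f}")

Doubling the temperature halves the induced moment, exactly the behavior Langevin's theory set out to explain.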
A ferromagnetic substance is one that, like iron, retains a magnetic moment even when the
external magnetic field is reduced to zero. This effect is a result of a strong interaction between
the magnetic moments of the individual atoms or electrons in the magnetic substance that causes
them to line up parallel to one another. In ordinary circumstances these ferromagnetic materials
are divided into regions called domains; in each domain, the atomic moments are aligned parallel
to one another. Separate domains have total moments that do not necessarily point in the same
direction. Thus, although an ordinary piece of iron might not have an overall magnetic moment,
magnetization can be induced in it by placing the iron in a magnetic field, thereby aligning the
moments of all the individual domains. The energy expended in reorienting the domains from the
magnetized back to the demagnetized state manifests itself in a lag in response, known as
hysteresis.
Ferromagnetic materials, when heated, eventually lose their magnetic properties. This loss
becomes complete above the Curie temperature, named after the French physicist Pierre Curie,
who discovered it in 1895. (The Curie temperature of metallic iron is about 770° C/1,420° F.)
VI OTHER MAGNETIC ORDERINGS
In recent years, a greater understanding of the atomic origins of magnetic properties has resulted
in the discovery of other types of magnetic ordering. Substances are known in which the
magnetic moments interact in such a way that it is energetically favorable for them to line up
antiparallel; such materials are called antiferromagnets. There is a temperature analogous to the
Curie temperature called the Néel temperature, above which antiferromagnetic order disappears.
Other, more complex atomic arrangements of magnetic moments have also been found.
Ferrimagnetic substances have at least two different kinds of atomic magnetic moments, which
are oriented antiparallel to one another. Because the moments are of different size, a net
magnetic moment remains, unlike the situation in an antiferromagnet where all the magnetic
moments cancel out. Interestingly, lodestone is a ferrimagnet rather than a ferromagnet; two
types of iron ions, each with a different magnetic moment, are in the material. Even more
complex arrangements have been found in which the magnetic moments are arranged in spirals.
Studies of these arrangements have provided much information on the interactions between
magnetic moments in solids.
VII APPLICATIONS
Numerous applications of magnetism and of magnetic materials have arisen in the past 100
years. The electromagnet, for example, is the basis of the electric motor and the transformer. In
more recent times, the development of new magnetic materials has also been important in the
computer revolution. Computer memories can be fabricated using bubble domains. These
domains are actually smaller regions of magnetization that are either parallel or antiparallel to the
overall magnetization of the material. Depending on this direction, the bubble indicates either a
one or a zero, thus serving as the units of the binary number system used in computers. Magnetic
materials are also important constituents of tapes and disks on which data are stored.
In addition to the atomic-sized magnetic units used in computers, large, powerful magnets are
crucial to a variety of modern technologies. Powerful magnetic fields are used in nuclear
magnetic resonance imaging, an important diagnostic tool used by doctors. Superconducting
magnets are used in today's most powerful particle accelerators to keep the accelerated particles
focused and moving in a curved path. Scientists are developing magnetic levitation trains that
use strong magnets to enable trains to float above the tracks, reducing friction.

Contributed By:
Martin Blume
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
Q5:
(i) Microcomputer and Minicomputer
Minicomputer, a mid-level computer built to perform complex computations while dealing
efficiently with a high level of input and output from users connected via terminals.
Minicomputers also frequently connect to other minicomputers on a network and distribute
processing among all the attached machines. Minicomputers are used heavily in transaction-
processing applications and as interfaces between mainframe computer systems and wide area
networks. See also Office Systems; Time-Sharing.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
Microcomputer, desktop- or notebook-size computing device that uses a microprocessor as its
central processing unit, or CPU (see Computer). Microcomputers are also called personal
computers (PCs), home computers, small-business computers, and micros. The smallest, most
compact are called laptops. When they first appeared, they were considered single-user devices,
and they were capable of handling only four, eight, or 16 bits of information at one time. More
recently the distinction between microcomputers and large, mainframe computers (as well as the
smaller mainframe-type systems called minicomputers) has become blurred, as newer
microcomputer models have increased the speed and data-handling capabilities of their CPUs
into the 32-bit, multiuser range.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
(ii)
Supercomputer
I INTRODUCTION
Supercomputer, computer designed to perform calculations as fast as current technology allows
and used to solve extremely complex problems. Supercomputers are used to design automobiles,
aircraft, and spacecraft; to forecast the weather and global climate; to design new drugs and
chemical compounds; and to make calculations that help scientists understand the properties of
particles that make up atoms as well as the behavior and evolution of stars and galaxies.
Supercomputers are also used extensively by the military for weapons and defense systems
research, and for encrypting and decoding sensitive intelligence information. See Computer;
Encryption; Cryptography.
Supercomputers differ from other types of computers in that they are designed to work on
a single problem at a time, devoting all their resources to the solution of the problem. Other
powerful computers such as mainframes and workstations are specifically designed so that they
can work on numerous problems, and support numerous users, simultaneously. Because of their
high cost—usually in the hundreds of thousands to millions of dollars—supercomputers are
shared resources. Supercomputers are so expensive that usually only large companies,
universities, and government agencies and laboratories can afford them.
II HOW SUPERCOMPUTERS WORK
The two major components of a supercomputer are the same as any other computer—a central
processing unit (CPU) where instructions are carried out, and the memory in which data and
instructions are stored. The CPU in a supercomputer is similar in function to a standard personal
computer (PC) CPU, but it usually has a different type of transistor technology that minimizes
transistor switching time. Switching time is the length of time that it takes for a transistor in the
CPU to open or close, which corresponds to a piece of data moving or changing value in the
computer. This time is extremely important in determining the absolute speed at which a CPU
can operate. By using very high performance circuits, architectures, and, in some cases, even
special materials, supercomputer designers are able to make CPUs that are 10 to 20 times faster
than state-of-the-art processors for other types of commercial computers.
Supercomputer memory also has the same function as memory in other computers, but it is
optimized so that retrieval of data and instructions from memory takes the least amount of time
possible. Also important to supercomputer performance is that the connections between the
memory and the CPU be as short as possible to minimize the time that information takes to travel
between the memory and the CPU.
A supercomputer functions in much the same way as any other type of computer, except that it is
designed to do calculations as fast as possible. Supercomputer designers use two main methods
to reduce the amount of time that supercomputers spend carrying out instructions—pipelining
and parallelism. Pipelining allows multiple operations to take place at the same time in the
supercomputer’s CPU by grouping together pieces of data that need to have the same sequence
of operations performed on them and then feeding them through the CPU one after the other. The
general idea of parallelism is to process data and instructions in parallel rather than in sequence.
In pipelining, the various logic circuits (electronic circuits within the CPU that perform
arithmetic calculations) used on a specific calculation are continuously in use, with data
streaming from one logic unit to the next without interruption. For instance, a sequence of
operations on a large group of numbers might be to add adjacent numbers together in pairs
beginning with the first and second numbers, then to multiply these results by some constant, and
finally to store these results in memory. The addition operation would be Step 1, the
multiplication operation would be Step 2, and the assigning of the result to a memory location
would be Step 3 in the sequence. The CPU could perform the sequence of operations on the first
pair of numbers, store the result in memory and then pass the second pair of numbers through,
and continue on like this. For a small group of numbers this would be fine, but since
supercomputers perform calculations on massive groups of numbers this technique would be
inefficient, because only one operation at a time is being performed.
Pipelining overcomes the source of inefficiency associated with the CPU performing a sequence
of operations on only one piece of data at a time until the sequence is finished. The pipeline
method would be to perform Step 1 on the first pair of data and move it to Step 2. As the result of the first operation moves to Step 2, the second pair of data moves into Step 1. Steps 1 and 2 are then performed simultaneously on their respective data, and the results are moved ahead in the pipeline, or the sequence of operations performed on a group of data. Hence the third pair of numbers is in Step 1, the second pair is in Step 2, and the first pair is in Step 3. The remainder of the calculations proceed in this way, with the specific logic units in the sequence always operating simultaneously on data.
The example used above to illustrate pipelining can also be used to illustrate the concept of
parallelism (see Parallel Processing). A computer that parallel-processed data would perform
Step 1 on multiple pieces of data simultaneously, then move these to Step 2, then to Step 3, each
step being performed on the multiple pieces of data simultaneously. One way to do this is to have
multiple logic circuits in the CPU that perform the same sequence of operations. Another way is
to link together multiple CPUs, synchronize them (meaning that they all perform an operation at
exactly the same time) and have each CPU perform the necessary operation on one of the pieces
of data.
Pipelining and parallelism are combined and used to greater or lesser extent in all
supercomputers. Until the early 1990s, parallelism achieved through the interconnection of CPUs
was limited to between 2 and 16 CPUs connected in parallel. However, the rapid increase in
processing speed of off-the-shelf microprocessors used in personal computers and workstations
made possible massively parallel processing (MPP) supercomputers. While the individual
processors used in MPP supercomputers are not as fast as specially designed supercomputer
CPUs, they are much less expensive and because of this, hundreds or even thousands of these
processors can be linked together to achieve extreme parallelism.
III SUPERCOMPUTER PERFORMANCE
Supercomputers are used to create mathematical models of complex phenomena. These models
usually contain long sequences of numbers that are manipulated by the supercomputer with a
kind of mathematics called matrix arithmetic. For example, to accurately predict the weather,
scientists use mathematical models that contain current temperature, air pressure, humidity, and
wind velocity measurements at many neighboring locations and altitudes. Using these numbers
as data, the computer makes many calculations to simulate the physical interactions that will
likely occur during the forecast period.
When supercomputers perform matrix arithmetic on large sets of numbers, it is often necessary
to multiply many pairs of numbers together and to then add up each of their individual products.
A simple example of such a calculation is: (4 × 6) + (7 × 2) + (9 × 5) + (8 × 8) + (2 × 9) = 165.
In real problems, the strings of numbers used in calculations are usually much longer, often
containing hundreds or thousands of pairs of numbers. Furthermore, the numbers used are not
simple integers but more complicated types of numbers called floating point numbers that allow
a wide range of digits before and after the decimal point, for example 5,063,937.9120834.
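The worked sum above is exactly a dot product, the basic kernel of matrix arithmetic. Written out (a sketch added here, using the same numbers as the text):

    # (4*6) + (7*2) + (9*5) + (8*8) + (2*9) = 165
    a = [4.0, 7.0, 9.0, 8.0, 2.0]
    b = [6.0, 2.0, 5.0, 8.0, 9.0]

    total = sum(x * y for x, y in zip(a, b))
    print(total)   # 165.0

    # Operation count: 5 multiplications + 4 additions = 9 floating-point
    # operations; real workloads chain thousands of such pairs.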
The various operations of adding, subtracting, multiplying, and dividing floating-point numbers
are collectively called floating-point operations. An important way of measuring a
supercomputer’s performance is in the peak number of floating-point operations per second
(FLOPS) that it can do. In the mid-1990s, the peak computational rate for state-of-the-art
supercomputers was between 1 and 200 Gigaflops (billion floating-point operations per second),
depending on the specific model and configuration of the supercomputer.
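The same FLOPS yardstick can be applied to any machine. As a sketch (added here, not from the article; it assumes the NumPy library is installed), the achieved rate of a dense matrix multiply can be estimated by dividing its operation count by its run time:

    import time
    import numpy as np

    # An n x n matrix multiply performs about 2 * n**3 floating-point
    # operations (n multiplies and roughly n adds per output entry).
    n = 1000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    print(f"~{2 * n**3 / elapsed / 1e9:.1f} gigaflops achieved")

On this measure an ordinary modern computer comfortably exceeds early supercomputers such as the Cray-1 mentioned below.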
In July 1995, computer scientists at the University of Tokyo, in Japan, broke the 1 teraflops (1
trillion floating-point operations per second) mark with a computer they designed to perform
astrophysical simulations. Named GRAPE-4 (GRAvity PipE number 4), this MPP
supercomputer consisted of 1692 interconnected processors. In November 1996, Cray Research
debuted the CRAY T3E-900, the first commercially-available supercomputer to offer teraflops
performance. In 1997 the Intel Corporation installed the teraflop machine Janus at Sandia
National Laboratories in New Mexico. Janus is composed of 9072 interconnected processors.
Scientists use Janus for classified work such as weapons research as well as for unclassified
scientific research such as modeling the impact of a comet on the earth.
The definition of what a supercomputer is constantly changes with technological progress. The
same technology that increases the speed of supercomputers also increases the speed of other
types of computers. For instance, the first computer to be called a supercomputer, the Cray-1
developed by Cray Research and first sold in 1976, had a peak speed of 167 megaflops. This is
only a few times faster than standard personal computers today, and well within the reach of
some workstations.

Contributed By:
Steve Nelson
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
(iii)
(iv) Byte and Word
Byte, in computer science, a unit of information built from bits, the smallest units of information
used in computers. Bits have one of two absolute values, either 0 or 1. These bit values
physically correspond to whether transistors and other electronic circuitry in a computer are on
or off. A byte is usually composed of 8 bits, although bytes composed of 16 bits are also used.
See Number Systems.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
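A short sketch of the byte/bit relationship described above (added here; the bit pattern chosen is arbitrary):

    # A byte of 8 bits, each 0 or 1, can hold 2**8 = 256 distinct values.
    print(2 ** 8)                  # 256 values, 0..255

    value = 0b01000001             # the bit pattern 01000001
    print(value)                   # 65
    print(format(value, "08b"))    # '01000001' -- the byte spelled out in bits
    print(chr(value))              # 'A' -- the same byte read as ASCII text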
(v) RAM and Cache Memory
Cache (computer), in computer science, an area of memory that holds frequently accessed data or
program instructions for the purpose of speeding a computer system's performance. A cache
consists of ultrafast static random-access memory (SRAM) chips, which rapidly move data to the
central processing unit (the device in a computer that interprets and executes instructions). The
process minimizes the amount of time the processor must be idle while it waits for data. This waiting time is measured in clock cycles, the basic time units in which a processor moves and operates on data. The effectiveness of the cache depends on the speed of the chips and on the quality of the algorithm that determines which data the processor is most likely to request. See also Disk Cache.
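The caching idea itself is independent of hardware: answer repeat requests from fast storage instead of going back to the slow source. A software analogy (added here; this illustrates the principle only, not how an SRAM cache is built, and all names are invented):

    cache = {}

    def read(address):
        if address in cache:           # cache hit: fast path
            return cache[address]
        value = slow_fetch(address)    # cache miss: go to "main memory"
        cache[address] = value
        return value

    def slow_fetch(address):
        return address * 2             # stand-in for a slow memory access

    print(read(7))   # miss: fetched from the slow source, then cached
    print(read(7))   # hit: served from the cache

Hardware caches add a replacement algorithm to decide what to keep when the cache fills, which is the "quality of the algorithm" referred to above.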
RAM, in computer science, acronym for random access memory. Semiconductor-based memory
that can be read and written by the microprocessor or other hardware devices. The storage
locations can be accessed in any order. Note that the various types of ROM are also capable of random access. The term RAM, however, is generally understood to refer to volatile memory,
which can be written as well as read. See also Computer; EPROM; PROM.
Buffer, in computer science, an intermediate repository of data—a reserved
portion of memory in which data is temporarily held pending an opportunity to complete its
transfer to or from a storage device or another location in memory. Some devices, such as
printers or the adapters supporting them, commonly have their own buffers.
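A small Python sketch of buffering, with an in-memory byte buffer standing in for a device buffer purely for illustration.

    import io

    buf = io.BytesIO()                  # in-memory stand-in for a device buffer
    for chunk in (b"page 1 ", b"page 2 ", b"page 3"):
        buf.write(chunk)                # data accumulates in the buffer
    print(buf.getvalue())               # the transfer then completes in one step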
Q6:
(i)
(ii) Television
Television
I INTRODUCTION
Television, system of sending and receiving pictures and sound by means of electronic signals
transmitted through wires and optical fibers or by electromagnetic radiation. These signals are
usually broadcast from a central source, a television station, to reception devices such as
television sets in homes or relay stations such as those used by cable television service providers.
Television is the most widespread form of communication in the world. Though most people will
never meet the leader of a country, travel to the moon, or participate in a war, they can observe
these experiences through the images on their television.
Television has a variety of applications in society, business, and science. The most common use
of television is as a source of information and entertainment for viewers in their homes. Security
personnel also use televisions to monitor buildings, manufacturing plants, and numerous public
facilities. Public utility employees use television to monitor the condition of an underground
sewer line, using a camera attached to a robot arm or remote-control vehicle. Doctors can probe
the interior of a human body with a microscopic television camera without having to conduct
major surgery on the patient. Educators use television to reach students throughout the world.
People in the United States have the most television sets per person of any country, with 835 sets
per 1,000 people as of 2000. Canadians possessed 710 sets per 1,000 people during the same
year. Japan, Germany, Denmark, and Finland follow North America in the number of sets per
person.
II HOW TELEVISION WORKS
A television program is created by focusing a television camera on a scene. The camera changes
light from the scene into an electric signal, called the video signal, which varies depending on the
strength, or brightness, of light received from each part of the scene. In color television, the
camera produces an electric signal that varies depending on the strength of each color of light.
Three or four cameras are typically used to produce a television program (see Television
Production). The video signals from the cameras are processed in a control room, then combined
with video signals from other cameras and sources, such as videotape recorders, to provide the
variety of images and special effects seen during a television program.
Audio signals from microphones placed in or near the scene also flow to the control room, where
they are amplified and combined. Except in the case of live broadcasts (such as news and sports
programs) the video and audio signals are recorded on tape and edited, assembled with the use of
computers into the final program, and broadcast later. In a typical television station, the signals
from live and recorded features, including commercials, are put together in a master control
room to provide the station's continuous broadcast schedule. Throughout the broadcast day,
computers start and stop videotape machines and other program sources, and switch the various
audio and visual signals. The signals are then sent to the transmitter.
The transmitter amplifies the video and audio signals, and uses the electronic signals to
modulate, or vary, carrier waves (oscillating electric currents that carry information). The carrier
waves are combined (diplexed), then sent to the transmitting antenna, usually placed on the
tallest available structure in a given broadcast area. In the antenna, the oscillations of the carrier
waves generate electromagnetic waves of energy that radiate horizontally throughout the
atmosphere. The waves excite weak electric currents in all television-receiving antennas within
range. These currents have the characteristics of the original picture and sound currents. The
currents flow from the antenna attached to the television into the television receiver, where they
are electronically separated into audio and video signals. These signals are amplified and sent to
the picture tube and the speakers, where they produce the picture and sound portions of the
program.
III THE TELEVISION CAMERA
The television camera is the first tool used to produce a television program. Most cameras have
three basic elements: an optical system for capturing an image, a pickup device for translating
the image into electronic signals, and an encoder for encoding signals so they may be
transmitted.
A Optical System
The optical system of a television camera includes a fixed lens that is used to focus the scene
onto the front of the pickup device. Color cameras also have a system of prisms and mirrors that
separate incoming light from a scene into the three primary colors: red, green, and blue. Each
beam of light is then directed to its own pickup device. Almost any color can be reproduced by
combining these colors in the appropriate proportions. Most inexpensive consumer video
cameras use a filter that breaks light from an image into the three primary colors.
B Pickup Device
The pickup device takes light from a scene and translates it into electronic signals. The first
pickup devices used in cameras were camera tubes. The first camera tube used in television was
the iconoscope. Invented in the 1920s, it needed a great deal of light to produce a signal, so it
was impractical to use in a low-light setting, such as an outdoor evening scene. The image-
orthicon tube and the vidicon tube were invented in the 1940s and were a vast improvement on
the iconoscope. They needed only about as much light to record a scene as human eyes need to
see. Instead of camera tubes, most modern cameras now use light-sensitive integrated circuits
(tiny, electronic devices) called charge-coupled devices (CCDs).
When recording television images, the pickup device replaces the function of film used in
making movies. In a camera tube pickup device, the front of the tube contains a layer of
photosensitive material called a target. In the image-orthicon tube, the target material is
photoemissive—that is, it emits electrons when it is struck by light. In the vidicon camera tube,
the target material is photoconductive—that is, it conducts electricity when it is struck by light.
In both cases, the lens of a camera focuses light from a scene onto the front of the camera tube,
and this light causes changes in the target material. The light image is transformed into an
electronic image, which can then be read from the back of the target by a beam of electrons (tiny,
negatively charged particles).
The beam of electrons is produced by an electron gun at the back of the camera tube. The beam
is controlled by a system of electromagnets that make the beam systematically scan the target
material. Whenever the electron beam hits the bright parts of the electronic image on the target
material, the tube emits a high voltage, and when the beam hits a dark part of the image, the tube
emits a low voltage. This varying voltage is the electronic television signal.
A charge-coupled device (CCD) can be much smaller than a camera tube and is much more
durable. As a result, cameras with CCDs are more compact and portable than those using a
camera tube. The image they create is less vulnerable to distortion and is therefore clearer. In a
CCD, the light from a scene strikes an array of photodiodes arranged on a silicon chip.
Photodiodes are devices that conduct electricity when they are struck by light; they send this
electricity to tiny capacitors. The capacitors store the electrical charge, with the amount of charge
stored depending on the strength of the light that struck the photodiode. The CCD converts the
incoming light from the scene into an electrical signal by releasing the charges from the
photodiodes in an order that follows the scanning pattern that the receiver will follow in re-
creating the image.
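The readout order can be sketched in Python as follows; the 4-by-6 array of random charge values is an arbitrary stand-in for a real photodiode array.

    import numpy as np

    charges = np.random.rand(4, 6)      # toy array of stored charge levels
    video_signal = []
    for row in charges:                 # release charges line by line, top to bottom
        video_signal.extend(row)        # left to right within each line
    print(len(video_signal))            # 24 samples, one per photodiode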
C Encoder
In color television, the signals from the three camera tubes or charge-coupled devices are first
amplified, then sent to the encoder before leaving the camera. The encoder combines the three
signals into a single electronic signal that contains the brightness information of the colors (luminance). It then adds another signal that contains the code used to
combine the colors (color burst), and the synchronization information used to direct the
television receiver to follow the same scanning pattern as the camera. The color television
receiver uses the color burst part of the signal to separate the three colors again.
IV SCANNING
Television cameras and television receivers use a procedure called scanning to record visual
images and re-create them on a television screen. The television camera records an image, such
as a scene in a television show, by breaking it up into a series of lines and scanning over each
line with the beam or beams of electrons contained in the camera tube. The pattern is created in a
CCD camera by the array of photodiodes. One scan of an image produces one static picture, like
a single frame in a film. The camera must scan a scene many times per second to record a
continuous image. In the television receiver, another electron beam—or set of electron beams, in
the case of color television—uses the signals recorded by the camera to reproduce the original
image on the receiver's screen. Just like the beam or beams in the camera, the electron beam in
the receiver must scan the screen many times per second to reproduce a continuous image.
In order for television to work, television images must be scanned and recorded in the same
manner as television receivers reproduce them. In the United States, broadcasters and television
manufacturers have agreed on a standard of breaking images down into 525 horizontal lines, and
scanning images 30 times per second. In Europe, most of Asia, and Australia, images are broken
down into 625 lines, and they are scanned 25 times per second. Special equipment can be used to
make television images that have been recorded in one standard fit a television system that uses a
different standard. Telecine equipment (from the words television and cinema) is used to convert
film and slide images to television signals. The images from film projectors or slides are directed
by a system of mirrors toward the telecine camera, which records the images as video signals.
The scanning method that is most commonly used today is called interlaced scanning. It
produces a clear picture that does not fade. When an image is scanned line by line from top to
bottom, the top of the image on the screen will begin to fade by the time the electron beam
reaches the bottom of the screen. With interlaced scanning, odd-numbered lines are scanned first,
and the remaining even-numbered lines are scanned next. A full image is still produced 30 times
a second, but the electron beam travels from the top of the screen to the bottom of the screen
twice for every time a full image is produced.
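A short Python sketch of the interlaced scan order for the 525-line standard described above; the line count is the only input.

    LINES = 525
    odd_field = list(range(1, LINES + 1, 2))     # odd-numbered lines, scanned first
    even_field = list(range(2, LINES + 1, 2))    # even-numbered lines, scanned second
    scan_order = odd_field + even_field          # two fields make one full frame
    print(scan_order[:5])            # [1, 3, 5, 7, 9]
    print(scan_order[263:268])       # [2, 4, 6, 8, 10] -- the even field begins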
V TRANSMISSION OF TELEVISION SIGNALS
The audio and video signals of a television program are broadcast through the air by a
transmitter. The transmitter superimposes the information in the camera's electronic signals onto
carrier waves. The transmitter amplifies the carrier waves, making them much stronger, and
sends them to a transmitting antenna. This transmitting antenna radiates the carrier waves in all
directions, and the waves travel through the air to antennas connected to television sets or relay
stations.
A The Transmitter
The transmitter superimposes the information from the electronic television signal onto carrier
waves by modulating (varying) either the wave's amplitude, which corresponds to the wave's
strength, or the wave's frequency, which corresponds to the number of times the wave oscillates
each second (see Radio: Modulation). The amplitude of one carrier wave is modulated to carry
the video signal (amplitude modulation, or AM) and the frequency of another wave is modulated
to carry the audio signal (frequency modulation, or FM). These waves are combined to produce a
carrier wave that contains both the video and audio information. The transmitter first generates
and modulates the wave at a low power of several watts. After modulation, the transmitter
amplifies the carrier signal to the desired power level, sometimes many kilowatts (1,000 watts),
depending on how far the signal needs to travel, and then sends the carrier wave to the
transmitting antenna.
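The following Python sketch illustrates amplitude modulation in its simplest form; the carrier and signal frequencies are arbitrary illustrative values, not broadcast standards.

    import numpy as np

    t = np.linspace(0, 1e-3, 10000)                   # one millisecond of time
    carrier = np.sin(2 * np.pi * 200e3 * t)           # 200 kHz carrier wave
    video = 0.5 * (1 + np.sin(2 * np.pi * 1e3 * t))   # slow "video" signal, 0 to 1
    am_wave = (1 + video) * carrier                   # amplitude follows the signal
    print(round(float(am_wave.max()), 2))             # envelope peaks near 2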
The frequency of carrier waves is measured in hertz (Hz), which is equal to the number of wave
peaks that pass by a point every second. The frequency of the modulated carrier wave varies,
covering a range, or band, of about 4 million hertz, or 4 megahertz (4 MHz). This band is much
wider than the band needed for radio broadcasting, which is about 10,000 Hz, or 10 kilohertz (10
kHz). Television stations that broadcast in the same area send out carrier waves on different
bands of frequencies, each called a channel, so that the signals from different stations do not mix.
To accommodate all the channels, which are spaced at least 6 MHz apart, television carrier
frequencies are very high. Six megahertz is only a small slice of bandwidth when television stations broadcast at carrier frequencies between about 50 and 800 MHz.
In the United States and Canada, there are two ranges of frequency bands that cover 67 different
channels. The first range is called very high frequency (VHF), and it includes frequencies from
54 to 72 MHz, from 76 to 88 MHz, and from 174 to 216 MHz. These frequencies correspond to
channels 2 through 13 on a television set. The second range, ultrahigh frequency (UHF), includes
frequencies from 470 MHz to 806 MHz, and it corresponds to channels 14 through 69 (see Radio
and Television Broadcasting).
The high-frequency waves radiated by transmitting antennas can travel only in a straight line,
and may be blocked by obstacles in between the transmitting and receiving antennas. For this
reason, transmitting antennas must be placed on tall buildings or towers. In practice, these
transmitters have a range of about 120 km (75 mi). In addition to being blocked, some television
signals may reflect off buildings or hills and reach a receiving antenna a little later than the
signals that travel directly to the antenna. The result is a ghost, or second image, that appears on
the television screen. Television signals may, however, be sent clearly from almost any point on
earth to any other—and from spacecraft to earth—by means of cables, microwave relay stations,
and communications satellites.
B Cable Transmission
Cable television was first developed in the late 1940s to serve shadow areas—that is, areas that
are blocked from receiving signals from a station's transmitting antenna. In these areas, a
community antenna receives the signal, and the signal is then redistributed to the shadow areas
by coaxial cable (a large cable with a wire core that can transmit the wide band of frequencies
required for television) or, more recently, by fiber-optic cable. Viewers in most areas can now
subscribe to a cable television service, which provides a wide variety of television programs and
films adapted for television that are transmitted by cable directly to the viewer's television set.
Digital data-compression techniques, which convert television signals to digital code in an
efficient way, have increased cable's capacity to 500 or more channels.
C Microwave Relay Transmission
Microwave relay stations are tall towers that receive television signals, amplify them, and
retransmit them as a microwave signal to the next relay station. Microwaves are electromagnetic
waves that are much shorter than normal television carrier waves and can travel farther. The
stations are placed about 50 km (30 mi) apart. Television networks once relied on relay stations
to broadcast to affiliate stations located in cities far from the original source of the broadcast.
The affiliate stations received the microwave transmission and rebroadcast it as a normal
television signal to the local area. This system has now been replaced almost entirely by satellite
transmission in which networks send or uplink their program signals to a satellite that in turn
downlinks the signals to affiliate stations.
D Satellite Transmission
Communications satellites receive television signals from a ground station, amplify them, and
relay them back to the earth over an antenna that covers a specified terrestrial area. The satellites
circle the earth in a geosynchronous orbit, which means they stay above the same place on the
earth at all times. Instead of a normal aerial antenna, receiving dishes are used to receive the
signal and deliver it to the television set or station. The dishes can be fairly small for home use,
or large and powerful, such as those used by cable and network television stations.
Satellite transmissions are used to efficiently distribute television and radio programs from one
geographic location to another by networks; cable companies; individual broadcasters; program
providers; and industrial, educational, and other organizations. Programs intended for specific
subscribers are scrambled so that only the intended recipients, with appropriate decoders, can
receive the program.
Direct-broadcast satellites (DBS) are used worldwide to deliver TV programming directly to TV
receivers through small home dishes. The Federal Communications Commission (FCC) licensed
several firms in the 1980s to begin DBS service in the United States. The actual launch of DBS
satellites, however, was delayed due to the economic factors involved in developing a digital
video compression system. The arrival in the early 1990s of digital compression made it possible
for a single DBS satellite to carry more than 200 TV channels. DBS systems in North America
are operating in the Ku band (12.0-18.0 GHz). DBS home systems consist of the receiving dish
antenna and a low-noise amplifier that boosts the antenna signal level and feeds it to a coaxial
cable. A receiving box converts the superhigh frequency (SHF) signals to lower frequencies and
puts them on channels that the home TV set can display.
VI TELEVISION RECEIVER
The television receiver translates the pulses of electric current from the antenna or cable back
into images and sound. A traditional television set integrates the receiver, audio system, and
picture tube into one device. However, some cable TV systems use a separate component such as
a set-top box as a receiver. A high-definition television (HDTV) set integrates the receiver
directly into the set like a traditional TV. However, some televisions receive high-definition
signals and display them on a monitor. In these instances, an external receiver is required.
A Tuner
The tuner blocks all signals other than that of the desired channel. Blocking is done by the radio
frequency (RF) amplifier. The RF amplifier is set to amplify a frequency band, 6 MHz wide,
transmitted by a television station; all other frequencies are blocked. A channel selector
connected to the amplifier determines the particular frequency band that is amplified. When a
new channel is selected, the amplifier is reset accordingly. In this way, the band, or channel,
picked out by the home receiver is changed. Once the viewer selects a channel, the incoming
signal is amplified, and the video, audio, and scanning signals are separated from the higher-
frequency carrier waves by a process called demodulation. The tuner amplifies the weak signal
intercepted by the antenna and partially demodulates (decodes) it by converting the carrier
frequency to a lower frequency—the intermediate frequency. Intermediate-frequency amplifiers
further increase the strength of the signals received from the antenna. After the incoming signals
have been amplified, audio, scanning, and video signals are separated.
B Audio System
The audio system consists of a discriminator, which translates the audio portion of the carrier
wave back into an electronic audio signal; an amplifier; and a speaker. The amplifier strengthens
the audio signal from the discriminator and sends it to the speaker, which converts the electrical
waves into sound waves that travel through the air to the listener.
C Picture Tube
The television picture tube receives video signals from the tuner and translates the signals back
into images. The images are created by an electron gun in the back of the picture tube, which
shoots a beam of electrons toward the back of the television screen. A black-and-white picture
tube contains just one electron gun, while a color picture tube contains three electron guns, one
for each of the primary colors of light (red, green, and blue). Part of the video signal goes to a
magnetic coil that directs the beam and makes it scan the screen in the same manner as the
camera originally scanned the scene. The rest of the signal directs the strength of the electron
beam as it strikes the screen. The screen is coated with phosphor, a substance that glows when it
is struck by electrons (see Luminescence). The stronger the electron beam, the stronger the glow
and the brighter that section of the scene appears.
In color television, a portion of the video signal is used to separate out the three color signals,
which are then sent to their corresponding electron beams. The screen is coated by tiny phosphor
strips or dots that are arranged in groups of three: one strip or dot that emits blue, one that emits
green, and one that emits red. Before each electron beam hits the screen, it passes through a
shadow mask located just behind the screen. The shadow mask is a layer of opaque material that
is covered with slots or holes. It partially blocks the beam corresponding to one color and
prevents it from hitting dots of another color. As a result, the electron beam directed by signals
for the color blue can strike and light up only blue dots. The result is similar for the beams
corresponding to red and green. Images in the three different colors are produced on the
television screen. The eye automatically combines these images to produce a single image
having the entire spectrum of colors formed by mixing the primary colors in various proportions.
VII TELEVISION'S HISTORY
The scientific principles on which television is based were discovered in the course of basic
research. Only much later were these concepts applied to television as it is known today. The
first practical television system began operating in the 1940s.
In 1873 the Scottish scientist James Clerk Maxwell predicted the existence of the
electromagnetic waves that make it possible to transmit ordinary television broadcasts. Also in
1873 the English scientist Willoughby Smith and his assistant Joseph May noticed that the
electrical conductivity of the element selenium changes when light falls on it. This property,
known as photoconductivity, is used in the vidicon television camera tube. In 1888 the German
physicist Wilhelm Hallwachs noticed that certain substances emit electrons when exposed to
light. This effect, called photoemission, was applied to the image-orthicon television camera
tube.
Although several methods of changing light into electric current were discovered, it was some
time before the methods were applied to the construction of a television system. The main
problem was that the currents produced were weak and no effective method of amplifying them
was known. Then, in 1906, the American engineer Lee De Forest patented the triode vacuum
tube. By 1920 the tube had been improved to the point where it could be used to amplify electric
currents for television.
A Nipkow Disk
Some of the earliest work on television began in 1884, when the German engineer Paul Nipkow
designed the first true television mechanism. In front of a brightly lit picture, he placed a
scanning disk (called a Nipkow disk) with a spiral pattern of holes punched in it. As the disk
revolved, the first hole would cross the picture at the top. The second hole passed across the
picture a little lower down, the third hole lower still, and so on. In effect, he designed a disk with
its own form of scanning. With each complete revolution of the disk, all parts of the picture
would be briefly exposed in turn. The disk revolved quickly, accomplishing the scanning within
one-fifteenth of a second. Similar disks rotated in the camera and receiver. Light passing through
these disks created crude television images.
Nipkow's mechanical scanner was used from 1923 to 1925 in experimental television systems
developed in the United States by the inventor Charles F. Jenkins, and in England by the inventor
John L. Baird. The pictures were crude but recognizable. The receiver also used a Nipkow disk
placed in front of a lamp whose brightness was controlled by the signal from the light-sensitive
tube behind the disk in the transmitter. In 1926 Baird demonstrated a system that used a 30-hole
Nipkow disk.
B Electronic Television
Simultaneous to the development of a mechanical scanning method, an electronic method of
scanning was conceived in 1908 by the English inventor A. A. Campbell-Swinton. He proposed
using a screen to collect a charge whose pattern would correspond to the scene, and an electron
gun to neutralize this charge and create a varying electric current. This concept was used by the
Russian-born American physicist Vladimir Kosma Zworykin in his iconoscope camera tube of
the 1920s. A similar arrangement was later used in the image-orthicon tube.
The American inventor and engineer Philo Taylor Farnsworth also devised an electronic
television system in the 1920s. He called his television camera, which converted each element of
an image into an electrical signal, an image dissector. Farnsworth continued to improve his
system in the 1930s, but his project lost its financial backing at the beginning of World War II
(1939-1945). Many aspects of Farnsworth's image dissector were also used in Zworykin's more
successful iconoscope camera.
Cathode rays, or beams of electrons in evacuated glass tubes, were first noted by the British
chemist and physicist Sir William Crookes in 1878. By 1908 Campbell-Swinton and a Russian,
Boris Rosing, had independently suggested that a cathode-ray tube (CRT) be used to reproduce
the television picture on a phosphor-coated screen. The CRT was developed for use in television
during the 1930s by the American electrical engineer Allen B. DuMont. DuMont's method of
picture reproduction is essentially the same as the one used today.
The first home television receiver was demonstrated in Schenectady, New York, on January 13,
1928, by the American inventor Ernst F. W. Alexanderson. The images on the 76-mm (3-in)
screen were poor and unsteady, but the set could be used in the home. A number of these
receivers were built by the General Electric Company (GE) and distributed in Schenectady. On
May 10, 1928, station WGY began regular broadcasting to this area.
C Public Broadcasting
The first public broadcasting of television programs took place in London in 1936. Broadcasts
from two competing firms were shown. Marconi-EMI produced a 405-line frame at 25 frames
per second, and Baird Television produced a 240-line picture at 25 frames per second. In early
1937 the Marconi system, clearly superior, was chosen as the standard. In 1941 the United States
adopted a 525-line, 30-image-per-second standard.
The first regular television broadcasts began in the United States in 1939, but after two years
they were suspended until shortly after the end of World War II in 1945. A television
broadcasting boom began just after the war in 1946, and the industry grew rapidly. The
development of color television had always lagged a few steps behind that of black-and-white
(monochrome) television. At first, this was because color television was technically more
complex. Later, however, the growth of color television was delayed because it had to be
compatible with monochrome—that is, color television would have to use the same channels as
monochrome television and be receivable in black and white on monochrome sets.
D Color Television
It was realized as early as 1904 that color television was possible using the three primary colors
of light: red, green, and blue. In 1928 Baird demonstrated color television using a Nipkow disk
in which three sets of openings scanned the scene. A fairly refined color television system was
introduced in New York City in 1940 by the Hungarian-born American inventor Peter Goldmark.
In 1951 public broadcasting of color television was begun using Goldmark's system. However,
the system was incompatible with monochrome television, and the experiment was dropped at
the end of the year. Compatible color television was perfected in 1953, and public broadcasting
in color was revived a year later.
Other developments that improved the quality of television were larger screens and better
technology for broadcasting and transmitting television signals. Early television screens were
either 18 or 25 cm (7 or 10 in) diagonally across. Television screens now come in a range of
sizes. Those that use built-in cathode-ray tubes (CRTs) measure as large as 89 or 100 cm (35 or
40 in) diagonally. Projection televisions (PTVs), first introduced in the 1970s, now come with
screens as large as 2 m (7 ft) diagonally. The most common are rear-projection sets in which
three CRTs beam their combined light indirectly to a screen via an assembly of lenses and
mirrors. Another type of PTV is the front-projection set, which is set up like a motion picture
projector to project light across a room to a separate screen that can be as large as a wall in a
home allows. Newer types of PTVs use liquid-crystal display (LCD) technology or an array of
micro mirrors, also known as a digital light processor (DLP), instead of cathode-ray tubes.
Manufacturers have also developed very small, portable television sets with screens that are 7.6
cm (3 in) diagonally across.
E Television in Space
Television evolved from an entertainment medium to a scientific medium during the exploration
of outer space. Knowing that broadcast signals could be sent from transmitters in space, the
National Aeronautics and Space Administration (NASA) began developing satellites with
television cameras. Unmanned spacecraft of the Ranger and Surveyor series relayed thousands of
close-up pictures of the moon's surface back to earth for scientific analysis and preparation for
lunar landings. The successful U.S. manned landing on the moon in July 1969 was documented
with live black-and-white broadcasts made from the surface of the moon. NASA's use of
television helped in the development of photosensitive camera lenses and more-sophisticated
transmitters that could send images from a quarter-million miles away.
Since 1960 television cameras have also been used extensively on orbiting weather satellites.
Video cameras trained on Earth record pictures of cloud cover and weather patterns during the
day, and infrared cameras (cameras that record light waves radiated at infrared wavelengths)
detect surface temperatures. The ten Television Infrared Observation Satellites (TIROS)
launched by NASA paved the way for the operational satellites of the Environmental Science
Services Administration (ESSA), which in 1970 became a part of the National Oceanic and
Atmospheric Administration (NOAA). The pictures returned from these satellites aid not only
weather prediction but also understanding of global weather systems. High-resolution cameras
mounted in Landsat satellites have been successfully used to provide surveys of crop, mineral,
and marine resources.
F Home Recording
In time, the process of watching images on a television screen made people interested in either
producing their own images or watching programming at their leisure, rather than during
standard broadcasting times. It became apparent that programming on videotape—which had
been in use since the 1950s—could be adapted for use by the same people who were buying
televisions. Affordable videocassette recorders (VCRs) were introduced in the 1970s and in the
1980s became almost as common as television sets.
During the late 1990s and early 2000s the digital video disc (DVD) player had the most
successful product launch in consumer electronics history. According to the Consumer
Electronics Association (CEA), which represents manufacturers and retailers of audio and video
products, 30 million DVD players were sold in the United States in a record five-year period
from 1997 to 2001. It took compact disc (CD) players 8 years and VCRs 13 years to achieve that
30-million milestone. The same size as a CD, a DVD can store enough data to hold a full-length
motion picture with a resolution twice that of a videocassette. The DVD player also offered the
digital surround-sound quality experienced in a state-of-the-art movie theater. Beginning in 2001
some DVD players also offered home recording capability.
G Digital Television
Digital television receivers, which convert the analog, or continuous, electronic television signals
received by an antenna into an electronic digital code (a series of ones and zeros), are currently
available. The analog signal is first sampled and stored as a digital code, then processed, and
finally retrieved. This method provides a cleaner signal that is less vulnerable to distortion, but in
the event of technical difficulties, the viewer is likely to receive no picture at all rather than the
degraded picture that sometimes occurs with analog reception. The difference in quality between
digital television and regular television is similar to the difference between a compact disc
recording (using digital technology) and an audiotape or long-playing record.
The high-definition television (HDTV) system was developed in the 1980s. It uses 1,080 lines
and a wide-screen format, providing a significantly clearer picture than the traditional 525- and
625-line television screens. Each line in HDTV also contains more information than normal
formats. HDTV is transmitted using digital technology. Because it takes a huge amount of coded
information to represent a visual image—engineers believe HDTV will need about 30 million
bits (ones and zeros of the digital code) each second—data-compression techniques have been
developed to reduce the number of bits that need to be transmitted. With these techniques, digital
systems need to continuously transmit codes only for a scene in which images are changing; the
systems can compress the recurring codes for images that remain the same (such as the
background) into a single code. Digital technology is being developed that will offer sharper
pictures on wider screens, and HDTV with cinema-quality images.
A fully digital system was demonstrated in the United States in the 1990s. A common world
standard for digital television, the MPEG-2, was agreed on in April 1993 at a meeting of
engineers representing manufacturers and broadcasters from 18 countries. Because HDTV
receivers initially cost much more than regular television sets, and broadcasts of HDTV and
regular television are incompatible, the transition from one format to the next could take many
years. The method endorsed by the U.S. Congress and the FCC to ease this transition is to give
existing television networks a second band of frequencies on which to broadcast, allowing
networks to broadcast in both formats at the same time. Engineers are also working on making
HDTV compatible with computers and telecommunications equipment so that HDTV technology
may be applied to other systems besides home television, such as medical devices, security
systems, and computer-aided manufacturing (CAM).
H Flat Panel Display
In addition to getting clearer, televisions are also getting thinner. Flat panel displays, some just a
few centimeters thick, offer an alternative to bulky cathode ray tube televisions. Even the largest
flat panel display televisions are thin enough to be hung on the wall like a painting. Many flat
panel TVs use liquid-crystal display (LCD) screens that make use of a special substance that
changes properties when a small electric current is applied to it. LCD technology has already
been used extensively in laptop computers. LCD television screens are flat, use very little
electricity, and work well for small, portable television sets. LCD has not been as successful,
however, for larger television screens.
Flat panel TVs made from gas-plasma displays can be much larger. In gas-plasma displays, a
small electric current stimulates an inert gas sandwiched between glass panels, including one
coated with phosphors that emit light in various colors. While just 8 cm (3 in) thick, plasma
screens can be more than 150 cm (60 in) diagonally.
I Computer and Internet Integration
As online computer systems become more popular, televisions and computers are increasingly
integrated. Such technologies combine the capabilities of personal computers, television, DVD
players, and in some cases telephones, and greatly expand the kinds of services that can be
provided. For example, computer-like hard drives in set-top recorders automatically store a TV
program as it is being received so that the consumer can pause live TV, replay a scene, or skip
ahead. For programs that consumers want to record for future viewing, a hard drive makes it
possible to store a number of shows. Some set-top devices offer Internet access through a dial-up
modem or broadband connection. Others allow the consumer to browse the World Wide Web on
their TV screen. When a device has both a hard drive and a broadband connection, consumers
may be able to download a specific program, opening the way for true video on demand.
Consumers may eventually need only one system or device, known as an information appliance,
which they could use for entertainment, communication, shopping, and banking in the
convenience of their home.
Reviewed By:
Michael Antonoff
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
(iii) Microwave Oven
Microwave Oven, appliance that uses electromagnetic energy to heat and cook foods. A
microwave oven uses microwaves, very short radio waves commonly employed in radar and
satellite communications. When concentrated within a small space, these waves efficiently heat
water and other substances within foods.
In a microwave oven, an electronic vacuum tube known as a magnetron produces an oscillating
beam of microwaves. Before passing into the cooking space, the microwaves are sent through a
fanlike set of spinning metal blades called a stirrer. The stirrer scatters the microwaves,
dispersing them evenly within the oven, where they are absorbed by the food. Within the food
the microwaves orient molecules, particularly water molecules, in a specific direction. The oscillating field produced by the magnetron reverses this orientation millions of times per second. The water molecules begin to vibrate as they undergo equally rapid changes in direction. This vibration produces heat, which in turn cooks the food.
Microwaves cook food rapidly and efficiently because, unlike conventional ovens, they heat only
the food and not the air or the oven walls. The heat spreads within food by conduction (see Heat
Transfer). Microwave ovens tend to cook moist food more quickly than dry foods, because there
is more water to absorb the microwaves. However, microwaves cannot penetrate deeply into
foods, sometimes making it difficult to cook thicker foods.
Microwaves pass through many types of glass, paper, ceramics, and plastics, making many
containers composed of these materials good for holding food; microwave instructions detail
exactly which containers are safe for microwave use. Metal containers are particularly unsuitable
because they reflect microwaves and prevent food from cooking. Metal objects may also reflect
microwaves back into the magnetron and cause damage. The door of the oven should always be
securely closed and properly sealed to prevent escape of microwaves. Leakage of microwaves
affects cooking efficiency and can pose a health hazard to anyone near the oven.
The discovery that microwaves could cook food was accidental. In 1945 Percy L. Spencer, a
technician at the Raytheon Company, was experimenting with a magnetron designed to produce
short radio waves for a radar system. Standing close to the magnetron, he noticed that a candy
bar in his pocket melted even though he felt no heat. Raytheon developed this food-heating
capacity and introduced the first microwave oven, then called a radar range, in the early 1950s.
Although it was slow to catch on at first, the microwave oven has since grown steadily in
popularity to its current status as a common household appliance.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
(iv) Radar
I INTRODUCTION
Radar (Radio Detection And Ranging), remote detection system used to locate and identify
objects. Radar signals bounce off objects in their path, and the radar system detects the echoes of
signals that return. Radar can determine a number of properties of a distant object, such as its
distance, speed, direction of motion, and shape. Radar can detect objects out of the range of sight
and works in all weather conditions, making it a vital and versatile tool for many industries.
Radar has many uses, including aiding navigation in the sea and air, helping detect military
forces, improving traffic safety, and providing scientific data. One of radar’s primary uses is air
traffic control, both civilian and military. Large networks of ground-based radar systems help air
traffic controllers keep track of aircraft and prevent midair collisions. Commercial and military
ships also use radar as a navigation aid to prevent collisions between ships and to alert ships of
obstacles, especially in bad weather conditions when visibility is poor. Military forces around the
world use radar to detect aircraft and missiles, troop movement, and ships at sea, as well as to
target various types of weapons. Radar is a valuable tool for the police in catching speeding
motorists. In the world of science, meteorologists use radar to observe and forecast the weather
(see Meteorology). Other scientists use radar for remote sensing applications, including mapping
the surface of the earth from orbit, studying asteroids, and investigating the surfaces of other
planets and their moons (see Radar Astronomy).
II HOW RADAR WORKS
Radar relies on sending and receiving electromagnetic radiation, usually in the form of radio
waves (see Radio) or microwaves. Electromagnetic radiation is energy that moves in waves at or
near the speed of light. The characteristics of electromagnetic waves depend on their wavelength.
Gamma rays and X rays have very short wavelengths. Visible light is a tiny slice of the
electromagnetic spectrum with wavelengths longer than X rays, but shorter than microwaves.
Radar systems use long-wavelength electromagnetic radiation in the microwave and radio
ranges. Because of their long wavelengths, radio waves and microwaves tend to reflect better
than shorter wavelength radiation, which tends to scatter or be absorbed before it gets to the
target. Radio waves at the long-wavelength end of the spectrum will even reflect off of the
atmosphere’s ionosphere, a layer of electrically-charged particles in the earth’s atmosphere.
A radar system starts by sending out electromagnetic radiation, called the signal. The signal
bounces off objects in its path. When the radiation bounces back, part of the signal returns to the
radar system; this echo is called the return. The radar system detects the return and, depending on
the sophistication of the system, simply reports the detection or analyzes the signal for more
information. Even though radio waves and microwaves reflect better than electromagnetic waves
of other lengths, only a tiny portion—about a billionth of a billionth—of the radar signal gets
reflected back. Therefore, a radar system must be able to transmit high amounts of energy in the
signal and to detect tiny amounts of energy in the return.
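Because the signal travels at the speed of light and covers the round trip to the target and back, the delay of the return gives the distance as c × t / 2. A worked example in Python, with an assumed 100-microsecond echo delay:

    C = 299_792_458                        # speed of light, in meters per second

    def target_range(echo_delay_s):
        # round trip: the signal travels out and back, so divide by 2
        return C * echo_delay_s / 2

    print(target_range(100e-6))            # 100-microsecond echo: about 14,990 m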
A radar system is composed of four basic components: a transmitter, an antenna, a receiver, and
a display. The transmitter produces the electrical signals in the correct form for the type of radar
system. The antenna sends these signals out as electromagnetic radiation. The antenna also
collects incoming return signals and passes them to the receiver, which analyzes the return and
passes it to a display. The display enables human operators to see the data.
All radar systems perform the same basic tasks, but the way systems carry out their tasks has
some effect on the system’s parts. A type of radar called pulse radar sends out bursts of radar at
regular intervals. Pulse radar requires a method of timing the bursts from its transmitter, so this
part is more complicated than the transmitter in other radar systems. Another type of radar called
continuous-wave radar sends out a continuous signal. Continuous-wave radar gets much of its
information about the target from subtle changes in the return, or the echo of the signal. The
receiver in continuous-wave radar is therefore more complicated than in other systems.
A Transmitter System
The system surrounding the transmitter is made up of three main elements: the oscillator, the
modulator, and the transmitter itself. The transmitter supplies energy to the antenna in the form
of a high-energy electrical signal. The antenna then sends out electromagnetic radar waves as the
signal passes through it.
A1 The Oscillator
The production of a radar signal begins with an oscillator, a device that produces a pure electrical
signal at the desired frequency. Most radar systems use frequencies that fall in the radio range
(from a few million cycles per second—or hertz—to several hundred million hertz) or the microwave range (from several hundred million hertz to several tens of billions of hertz). The
oscillator must produce a precise and pure frequency to provide the radar system with an
accurate reference when it calculates the Doppler shift of the signal (for further discussion of the
Doppler shift, see the Receiver section of this article below).
A2 The Modulator
The next stage of a radar system is the modulator, which rapidly varies, or modulates, the signal
from the oscillator. In a simple pulse radar system the modulator merely turns the signal on and
off. The modulator should vary the signal, but not distort it. This requires careful design and
engineering.
A3 The Transmitter
The radar system’s transmitter increases the power of the oscillator signal. The transmitter
amplifies the power from the level of about 1 watt to as much as 1 megawatt, or 1 million watts.
Radar signals have such high power levels because so little of the original signal comes back in
the return.
A4 The Antenna
After the transmitter amplifies the radar signal to the required level, it sends the signal to the
antenna, usually a dish-shaped piece of metal. Electromagnetic waves at the proper wavelength
propagate out from the antenna as the electrical signal passes through it. Most radar antennas
direct the radiation by reflecting it from a parabolic, or concave shaped, metal dish. The output
from the transmitter feeds into the focus of the dish. The focus is the point at which radio waves
reflected from the dish travel out from the surface of the dish in a single direction. Most antennas
are steerable, meaning that they can move to point in different directions. This enables a radar
system to scan an area of space rather than always pointing in the same direction.
B Reception Elements
A radar receiver detects and often analyzes the faint echoes produced when radar waves bounce
off of distant objects and return to the radar system. The antenna gathers the weak returning
radar signals and converts them into an electric current. Because a radar antenna may both
transmit and receive signals, the duplexer determines whether the antenna is connected to the
receiver or the transmitter. The receiver determines whether the signal should be reported and
often does further analysis before sending the results to the display. The display conveys the
results to the human operator through a visual display or an audible signal.
B1 The Antenna
The receiver uses an antenna to gather the reflected radar signal. Often the receiver uses the same
antenna as the transmitter. This is possible even in some continuous-wave radar because the
modulator in the transmitter system formats the outgoing signals in such a way that the receiver
(described in following paragraphs) can recognize the difference between outgoing and incoming
signals.
B2 The Duplexer
The duplexer enables a radar system to transmit powerful signals and still receive very weak
radar echoes. The duplexer acts as a gate between the antenna and the receiver and transmitter. It
keeps the intense signals from the transmitter from passing to the receiver and overloading it, and
also ensures that weak signals coming in from the antenna go to the receiver. A pulse radar
duplexer connects the transmitter to the antenna only when a pulse is being emitted. Between
pulses, the duplexer disconnects the transmitter and connects the receiver to the antenna. If the
receiver were connected to the antenna while the pulse was being transmitted, the high power
level of the pulse would damage the receiver’s sensitive circuits. In continuous-wave radar the
receivers and transmitters operate at the same time. These systems have no duplexer. In this case,
the receiver separates the signals by frequency alone. Because the receiver must listen for weak
signals at the same time that the transmitter is operating, high power continuous-wave radar
systems use separate transmitting and receiving antennas.
B3 The Receiver
Most modern radar systems use digital equipment because this equipment can perform many
complicated functions. In order to use digital equipment, radar systems need analog-to-digital
converters to change the received signal from an analog form to a digital form.
The incoming analog signal can have any value, from 0 to tens of millions, including fractional values such as 2/3. Digital information must have discrete values, in certain regular steps, such as 0, 1, or 2, but nothing in between. A digital system might require the fraction 2/3 to be rounded off to the decimal number 0.6666667, or 0.667, or 0.7, or even 1. After the analog information has
been translated into discrete intervals, digital numbers are usually expressed in binary form, or as
series of 1s and 0s that represent numbers. The analog-to-digital converter measures the
incoming analog signal many times each second and expresses each signal as a binary number.
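A minimal Python sketch of the quantization step described above; the step size of 0.1 is assumed purely for illustration.

    def quantize(analog_value, step=0.1):
        return round(analog_value / step)      # snap to the nearest discrete level

    sample = 2 / 3                             # an "analog" fractional value
    level = quantize(sample)                   # 7, i.e. 0.7 after rounding
    print(level, format(level, "b"))           # the level and its binary form: 7 111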
Once the signal is in digital form, the receiver can perform many complex functions on it. One of
the most important functions for the receiver is Doppler filtering. Signals that bounce off of
moving objects come back with a slightly different wavelength because of an effect called the
Doppler effect. The wavelength changes as waves leave a moving object because the movement
of the object causes each wave to leave from a slightly different position than the waves before
it. If an object is moving away from the observer, each successive wave will leave from slightly
farther away, so the waves will be farther apart and the signal will have a longer wavelength. If
an object is moving toward the observer, each successive wave will leave from a position slightly
closer than the one before it, so the waves will be closer to each other and the signal will have a
shorter wavelength. Doppler shifts occur in all kinds of waves, including radar waves, sound
waves, and light waves. Doppler filtering is the receiver’s way of differentiating between
multiple targets. Usually, targets move at different speeds, so each target will have a different
Doppler shift. Following Doppler filtering, the receiver performs other functions to maximize the
strength of the return signal and to eliminate noise and other interfering signals.
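For a target moving directly toward or away from the radar at speed v, the return frequency shifts by approximately 2 v f0 / c, where f0 is the transmitted frequency and c is the speed of light. A worked example in Python, with an assumed 10-GHz carrier and a 30 m/s target:

    C = 3e8                                    # speed of light, m/s

    def doppler_shift(speed_mps, carrier_hz):
        # approximate two-way Doppler shift for a target closing at speed_mps
        return 2 * speed_mps * carrier_hz / C

    print(doppler_shift(30, 10e9))             # about 2000 Hz for a 30 m/s target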
B4 The Display
Displaying the results is the final step in converting the received radar signals into useful
information. Early radar systems used a simple amplitude scope—a display of received signal
amplitude, or strength, as a function of distance from the antenna. In such a system, a spike in the
signal strength appears at the place on the screen that corresponds to the target’s distance. A
more useful and more modern display is the plan position indicator (PPI). The PPI displays the
direction of the target in relation to the radar system (relative to north) as an angle measured from
the top of the display, while the distance to the target is represented as a distance from the center
of the display. Some radar systems that use PPI display the actual amplitude of the signal, while
others process the signal before displaying it and display possible targets as symbols. Some
simple radar systems designed to look for the presence of an object and not the object’s speed or
distance notify the user with an audible signal, such as a beep.
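The PPI geometry can be sketched in Python as follows; the scale factor that converts range to screen distance is an arbitrary display parameter.

    import math

    def ppi_position(bearing_deg, range_m, scale=0.001):
        theta = math.radians(bearing_deg)            # angle measured from north
        x = range_m * scale * math.sin(theta)        # east-west on the screen
        y = range_m * scale * math.cos(theta)        # north-south on the screen
        return x, y

    print(ppi_position(45, 10_000))     # a target 10 km to the northeast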
C Radar Frequencies
Early radar systems were capable only of detecting targets and making a crude measurement of
the distance to the target. As radar technology evolved, radar systems could measure more and
more properties. Modern technology allows radar systems to use higher frequencies, permitting
better measurement of the target’s direction and location. Advanced radar can detect individual
features of the target and show a detailed picture of the target instead of a single blurred object.
Most radar systems operate at frequencies ranging from the Very High Frequency (VHF) band, at about 150 MHz (150 million Hz), to the Extremely High Frequency (EHF) band, which may go as high as
95 GHz (95 billion Hz). Specific ranges of frequencies work well for certain applications and not
as well for others, so most radar systems are specialized to do one type of tracking or detection.
The frequency of the radar system is related to the resolution of the system. Resolution
determines how close two objects may be and still be distinguished by the radar, and how
accurately the system can determine the target’s position. Higher frequencies provide better
resolution than lower frequencies because the beam formed by the antenna is sharper. Tracking
radar, which precisely locates objects and tracks their movement, needs higher resolution and so
uses higher frequencies. On the other hand, if a radar system is used to search large areas for
targets, a narrow beam of high-frequency radar will be less efficient. Because the high-power
transmitters and large antennas that radar systems require are easier to build for lower
frequencies, lower frequency radar systems are more popular for radar that does not need
particularly good resolution.
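A common rule of thumb behind this trade-off is that an antenna of diameter D operating at wavelength λ forms a beam roughly λ/D radians wide, so for a fixed antenna size higher frequencies give sharper beams. A Python sketch, with an assumed 3-meter dish:

    C = 3e8                                        # speed of light, m/s

    def beamwidth_deg(freq_hz, dish_diameter_m):
        wavelength = C / freq_hz
        return (wavelength / dish_diameter_m) * 57.3   # radians to degrees

    print(beamwidth_deg(150e6, 3.0))    # VHF, 3 m dish: about 38 degrees (coarse)
    print(beamwidth_deg(10e9, 3.0))     # 10 GHz, same dish: about 0.6 degrees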
D Clutter
Clutter is what radar users call radar signals that do not come from actual targets. Rain, snow,
and the surface of the earth reflect energy, including radar waves. Such echoes can produce
signals that the radar system may mistake for actual targets. Clutter makes it difficult to locate
targets, especially when the system is searching for objects that are small and distant.
Fortunately, most sources of clutter move slowly if at all, so their radar echoes produce little or
no Doppler shift. Radar engineers have developed several systems to take advantage of the
difference in Doppler shifts between clutter and moving targets. Some radar systems use a
moving target indicator (MTI), which subtracts out every other radar return from the total signal.
Because the signals from stationary objects will remain the same over time, the MTI subtracts
them from the total signal, and only signals from moving targets get past the receiver. Other
radar systems actually measure the frequencies of all returning signals. Frequencies with very
low Doppler shifts are assumed to come from clutter. Those with substantial shifts are assumed
to come from moving targets.
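A minimal Python sketch of the MTI idea, using made-up return amplitudes for two successive pulses:

    returns = [
        [5.0, 1.0, 5.0],    # pulse 1: strong clutter in cells 0 and 2
        [5.0, 1.5, 5.0],    # pulse 2: clutter unchanged, moving target's echo changed
    ]
    mti_output = [b - a for a, b in zip(returns[0], returns[1])]
    print(mti_output)       # [0.0, 0.5, 0.0] -- only the moving target survives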
Clutter is actually a relative term, since the clutter for some systems could be the target for other
systems. For example, a radar system that tracks airplanes considers precipitation to be clutter,
but precipitation is the target of weather radar. The plane-tracking radar would ignore the returns
with large sizes and low Doppler shifts that represent weather features, while the weather radar
would ignore the small-sized, highly-Doppler-shifted returns that represent airplanes.
III TYPES OF RADAR
All radar systems send out electromagnetic radiation in radio or microwave frequencies and use
echoes of that radiation to detect objects, but different systems use different methods of emitting
and receiving radiation. Pulse radar sends out short bursts of radiation. Continuous wave radar
sends out a constant signal. Synthetic aperture radar and phased-array radar have special ways of
positioning and pointing the antennas that improve resolution and accuracy. Secondary radar
detects radar signals that targets send out, instead of detecting echoes of radiation.
A Simple Pulse Radar
Simple pulse radar is the most basic type of radar. In this system, the transmitter sends out short
pulses of radio frequency energy. Between pulses, the radar receiver detects echoes of radiation
that objects reflect. Most pulse radar antennas rotate to scan a wide area. Simple pulse radar
requires precise timing circuits in the duplexer to prevent the transmitter from transmitting while
the receiver is acquiring a signal from the antenna, and to keep the receiver from trying to read a
signal from the antenna while the transmitter is operating. Pulse radar is good at locating an
object, but it is not very accurate at measuring an object’s speed.
B Continuous Wave Radar
Continuous-wave (CW) radar systems transmit a constant radar signal. The transmission is
continuous, so, except in systems with very low power, the receiver cannot use the same antenna
as the transmitter because the radar emissions would interfere with the echoes that the receiver
detects. CW systems can distinguish between stationary clutter and moving targets by analyzing
the Doppler shift of the signals, without having to use the precise timing circuits that separate
the signal from the return in pulse radar. Continuous wave radar systems are excellent at
measuring the speed and direction of an object, but they are not as accurate as pulse radar at
measuring an object’s position. Some systems combine pulse and CW radar to achieve both good
range and velocity resolution. Such systems are called Pulse-Doppler radar systems.
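The two measurements rest on two standard relations: range equals the speed of light times the echo delay divided by two (the pulse travels out and back), and radial speed equals the Doppler shift times the wavelength divided by two. A brief worked example in Python, with illustrative numbers:

C = 3.0e8  # speed of light, m/s

def range_from_delay(delay_s):
    # The factor of 2 accounts for the round trip out to the target and back.
    return C * delay_s / 2.0

def speed_from_doppler(doppler_hz, carrier_hz):
    wavelength = C / carrier_hz
    return doppler_hz * wavelength / 2.0

print(range_from_delay(200e-6))          # a 200-microsecond echo -> 30,000 m
print(speed_from_doppler(2000.0, 10e9))  # a 2 kHz shift at 10 GHz -> 30 m/s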
C Synthetic Aperture Radar
Synthetic aperture radar (SAR) tracks targets on the ground from the air. The name comes from
the fact that the system uses the movement of the airplane or satellite carrying it to make the
antenna seem much larger than it actually is. The ability of radar to distinguish between two
closely spaced objects depends on the width of the beam that the antenna sends out. The
narrower the beam is, the better its resolution. Getting a narrow beam requires a big antenna. A
SAR system is limited to a relatively small antenna with a wide beam because it must fit on an
aircraft or satellite. SAR systems are called synthetic aperture, however, because the antenna
appears to be bigger than it really is. This is because the moving aircraft or satellite allows the
SAR system to repeatedly take measurements from different positions. The receiver processes
these signals to make it seem as though they came from a large stationary antenna instead of a
small moving one. Synthetic aperture radar resolution can be high enough to pick out individual
objects as small as automobiles.
Typically, an aircraft or satellite equipped with SAR flies past the target object. In inverse
synthetic aperture radar, the target moves past the radar antenna. Inverse SAR can give results as
good as normal SAR.
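A back-of-the-envelope sketch shows why the synthesized antenna can be so effective. Using the same beamwidth rule of thumb as before, the Python fragment below compares a small real antenna with the aperture synthesized over a short stretch of flight; the platform speed, collection time, and X-band frequency are all assumed values for illustration.

import math

C = 3.0e8
wavelength = C / 10e9          # assumed 10 GHz (X-band) radar
real_antenna_m = 2.0           # antenna small enough to fit on an aircraft
platform_speed = 200.0         # m/s, assumed aircraft speed
collection_time = 2.0          # seconds of echoes combined

# Echoes gathered along the flight path act like one 400-meter antenna.
synthetic_aperture_m = platform_speed * collection_time

print(math.degrees(wavelength / real_antenna_m))        # ~0.86 degrees
print(math.degrees(wavelength / synthetic_aperture_m))  # ~0.0043 degrees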
D Phased-Array Radar
Most radar systems use a single large antenna that stays in one place, but can rotate on a base to
change the direction of the radar beam. A phased-array radar antenna actually comprises many small separate antennas, each fixed in place. The system combines the signals gathered
from all the small antennas. The receiver can change the way it combines the signals from the
antennas to change the direction of the beam. A huge phased-array radar antenna can change its
beam direction electronically many times faster than any mechanical radar system can.
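The steering itself follows from simple geometry: if each small antenna in a line is fed a phase delay proportional to its position, the combined wavefront tilts toward the chosen angle. A minimal sketch, with an assumed eight-element array at half-wavelength spacing:

import math

C = 3.0e8
wavelength = C / 3e9      # assumed 3 GHz operating frequency
spacing = wavelength / 2  # common half-wavelength element spacing
steer = math.radians(30)  # desired beam direction off broadside

# Phase applied to element n so every element's signal adds up in that direction.
phases = [2 * math.pi * spacing * n * math.sin(steer) / wavelength
          for n in range(8)]
print([round(p, 3) for p in phases])
# radians; change 'steer' and the beam re-points with no mechanical motion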
E Secondary Radar
A radar system that sends out radar signals and reads the echoes that bounce back is a primary
radar system. Secondary radar systems read coded radar signals that the target emits in response
to signals received, instead of signals that the target reflects. Air traffic control depends heavily
on the use of secondary radar. Aircraft carry small radar transmitters called beacons or
transponders. Receivers at the air traffic control tower search for signals from the transponders.
The transponder signals not only tell controllers the location of the aircraft, but can also carry
encoded information about the target. For example, the signal may contain a code that indicates
whether the aircraft is an ally, or it may contain encoded information from the aircraft’s altimeter
(altitude indicator).
IV RADAR APPLICATIONS
Many industries depend on radar to carry out their work. Civilian aircraft and maritime industries
use radar to avoid collisions and to keep track of aircraft and ship positions. Military craft also
use radar for collision avoidance, as well as for tracking military targets. Radar is important to
meteorologists, who use it to track weather patterns. Radar also has many other scientific
applications.
A Air-Traffic Control
Radar is a vital tool in avoiding midair aircraft collisions. The international air traffic control
system uses both primary and secondary radar. A network of long-range radar systems called Air
Route Surveillance Radar (ARSR) tracks aircraft as they fly between airports. Airports use
medium-range radar systems called Airport Surveillance Radar to track aircraft more accurately
while they are near the airport.
B Maritime Navigation
Radar also helps ships navigate through dangerous waters and avoid collisions. Unlike air-traffic
radar, with its centralized networks that monitor many craft, maritime radar depends almost
entirely on radar systems installed on individual vessels. These radar systems search the surface
of the water for landmasses; navigation aids, such as lighthouses and channel markers; and other
vessels. For a ship’s navigator, echoes from landmasses and other stationary objects are just as
important as those from moving objects. Consequently, marine radar systems do not include
clutter removal circuits. Instead, ship-based radar depends on high-resolution distance and
direction measurements to differentiate between land, ships, and unwanted signals. Marine radar
systems have become available at such low cost that many pleasure craft are equipped with them,
especially in regions where fog is common.
C Military Defense and Attack
Historically, the military has played the leading role in the use and development of radar. The
detection and interception of opposing military aircraft in air defense has been the predominant
military use of radar. The military also uses airborne radar to scan large battlefields for the
presence of enemy forces and equipment and to pick out precise targets for bombs and missiles.
C1 Air Defense
A typical surface-based air defense system relies upon several radar systems. First, a lower
frequency radar with a high-powered transmitter and a large antenna searches the airspace for all
aircraft, both friend and foe. A secondary radar system reads the transponder signals sent by each
aircraft to distinguish between allies and enemies. After enemy aircraft are detected, operators
track them more precisely by using high-frequency waves from special fire control radar
systems. The air defense system may attempt to shoot down threatening aircraft with gunfire or
missiles, and radar sometimes guides both gunfire and missiles (see Guided Missiles).
Longer-range air defense systems use missiles with internal guidance. These systems track a
target using data from a radar system on the missile. Such missile-borne radar systems are called
seekers. The seeker uses radar signals from the missile or radar signals from a transmitter on the
ground to determine the position of the target relative to the missile, then passes the information
to the missile’s guidance system.
The military uses surface-to-air systems for defense against ballistic missiles as well as aircraft
(see Defense Systems). During the Cold War both the United States and the Union of Soviet
Socialist Republics (USSR) did a great deal of research into defense against intercontinental
ballistic missiles (ICBMs) and submarine-launched ballistic missiles (SLBMs). The United
States and the USSR signed the Anti-Ballistic Missile (ABM) treaty in 1972. This treaty limited
each of the superpowers to a single, limited capability system. The U.S. system consisted of a
low-frequency (UHF) phased-array radar around the perimeter of the country, another phased-
array radar to track incoming missiles more accurately, and several very high speed missiles to
intercept the incoming ballistic missiles. The second radar guided the interceptor missiles.
Airborne air defense systems incorporate the same functions as ground-based air defense, but
special aircraft carry the large area search radar systems. This is necessary because it is difficult
for high-performance fighter aircraft to carry both large radar systems and weapons.
Modern warfare uses air-to-ground radar to detect targets on the ground and to monitor the
movement of troops. Advanced Doppler techniques and synthetic aperture radar have greatly
increased the accuracy and usefulness of air-to-ground radar since their introduction in the 1960s
and 1970s. Military forces around the world use air-to-ground radar for weapon aiming and for
battlefield surveillance. The United States used the Joint Surveillance Target Attack Radar System (JSTARS) in the Persian Gulf War (1991), demonstrating modern radar’s ability to
provide information about enemy troop concentrations and movements during the day or night,
regardless of weather conditions.
C2 Countermeasures
The military uses several techniques to attempt to avoid detection by enemy radar. One common
technique is jamming—that is, sending deceptive signals to the enemy’s radar system. During
World War II (1939-1945), flyers under attack jammed enemy radar by dropping large clouds of
chaff—small pieces of aluminum foil or some other material that reflects radar well. “False”
returns from the chaff hid the aircraft’s exact location from the enemy’s air defense radar.
Modern jamming uses sophisticated electronic systems that analyze enemy radar, then send out
false radar echoes that mask the actual target echoes or deceive the radar about a target’s
location.
Stealth technology is a collection of methods that reduce the radar echoes from aircraft and other
radar targets (see Stealth Aircraft). Special paint can absorb radar signals, and sharp angles in the
aircraft design can reflect radar signals in deceiving directions. Improvements in jamming and
stealth technology force the continual development of high-power transmitters, antennas good at
detecting weak signals, and very sensitive receivers, as well as techniques for improved clutter
rejection.
D Traffic Safety
Since the 1950s, police have used radar to detect motorists who are exceeding the speed limit.
Most older police radar “guns” used Doppler technology to determine the target vehicle’s speed. Such systems were simple, but because their radar beam was relatively wide, they sometimes picked up returns from the wrong vehicle and produced false results. The wide beam also meant that stray radar signals could be detected by motorists with radar detectors. Newer police radar systems, developed in the 1980s and 1990s, use laser
light to form a narrow, highly selective radar beam. The narrow beam helps ensure that the radar
returns signals from a single, selected car and reduces the chance of false results. Instead of
relying on the Doppler effect to measure speed, these systems use pulse radar to measure the
distance to the car many times, then calculate the speed by dividing the change in distance by the
change in time. Laser radar is also more reliable than conventional radar for detecting speeding motorists because its narrow beam is much harder for radar detectors to pick up.
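A small sketch of the pulse-based speed calculation may make it concrete. In the hypothetical readings below, the laser unit ranges the car every tenth of a second and divides the total change in distance by the elapsed time; all numbers are invented for illustration.

def speed_kmh(distances_m, interval_s):
    # Average speed over evenly spaced range readings of an approaching car.
    change_m = distances_m[0] - distances_m[-1]
    elapsed_s = interval_s * (len(distances_m) - 1)
    return (change_m / elapsed_s) * 3.6  # convert m/s to km/h

# Ten readings, 0.1 s apart, closing 3 m per reading (30 m/s).
readings = [300.0 - 3.0 * i for i in range(10)]
print(speed_kmh(readings, 0.1))  # 108.0 km/h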
E Meteorology
Meteorologists use radar to learn about the weather. Networks of radar systems installed across
many countries throughout the world detect and display areas of rain, snow, and other
precipitation. Weather radar systems use Doppler radar to determine the speed of the wind within
the storm. The radar signals bounce off of water droplets or ice crystals in the atmosphere.
Gaseous water vapor does not reflect radar waves as well as the liquid droplets of water or solid
ice crystals, so radar returns from rain or snow are stronger than those from clouds. Dust in the
atmosphere also reflects radar, but the returns are only significant when the concentration of dust
is much higher than usual. The Terminal Doppler Weather Radar can detect small, localized, but
hazardous wind conditions, especially if precipitation or a large amount of dust accompanies the
storm. Many airports use this advanced radar to make landing safer.
F Scientific Applications
Scientists use radar in several space-related applications. The Spacetrack system is a cooperative
effort of the United States, Canada, and the United Kingdom. It uses data from several large
surveillance and tracking radar systems (including the Ballistic Missile Early Warning System)
to detect and track all objects in orbit around the earth. This helps scientists and engineers keep
an eye on space junk—abandoned satellites, discarded pieces of rockets, and other unused
fragments of spacecraft that could pose a threat to operating spacecraft. Other special-purpose
radar systems track specific satellites that emit a beacon signal. One of the most important of
these systems is the Global Positioning System (GPS), operated by the U.S. Department of
Defense. GPS provides highly accurate navigational data for the U.S. military and for anyone
who owns a GPS receiver.
During space flights, radar gives precise measurements of the distances between the spacecraft
and other objects. In the U.S. Surveyor missions to the moon in the 1960s, radar measured the
altitude of the probe above the moon’s surface to help the probe control its descent. In the Apollo missions, which landed astronauts on the moon during the 1960s and 1970s, radar measured the altitude above the lunar surface of the Lunar Module, the part of the Apollo spacecraft that carried two astronauts from orbit around the moon down to the surface. Apollo also
used radar to measure the distance between the Lunar Module and the Command and Service
Module, the part of the spacecraft that remained in orbit around the moon.
Astronomers have used ground-based radar to observe the moon, some of the larger asteroids in
our solar system, and a few of the planets and their moons. Radar observations provide
information about the orbit and surface features of the object.
The U.S. Magellan space probe mapped the surface of the planet Venus with radar from 1990 to
1994. Magellan’s radar was able to penetrate the dense cloud layer of the Venusian atmosphere
and provide images of much better quality than radar measurements from Earth.
Many nations have used satellite-based radar to map portions of the earth’s surface. Radar can
show conditions on the surface of the earth and can help determine the location of various
resources such as oil, water for irrigation, and mineral deposits. In 1995 the Canadian Space
Agency launched a satellite called RADARsat to provide radar imagery to commercial,
government, and scientific users.
V HISTORY
Although British physicist James Clerk Maxwell predicted the existence of radio waves in the
1860s, it was not until the 1890s that British-born American inventor Elihu Thomson and German
physicist Heinrich Hertz independently confirmed their existence. Scientists soon realized that
radio waves could bounce off of objects, and by 1904 Christian Hülsmeyer, a German inventor,
had used radio waves in a collision avoidance device for ships. Hülsmeyer’s system was only
effective for a range of about 1.5 km (about 1 mi). The first long-range radar systems were not
developed until the 1920s. In 1922 Italian radio pioneer Guglielmo Marconi demonstrated a low-
frequency (60 MHz) radar system. In 1924 English physicist Edward Appleton and his graduate
student from New Zealand, Miles Barnett, proved the existence of the ionosphere, an electrically
charged upper layer of the atmosphere, by reflecting radio waves off of it. Scientists at the U.S.
Naval Research Laboratory in Washington, D.C., became the first to use radar to detect aircraft
in 1930.
A Radar in World War II
None of the early demonstrations of radar generated much enthusiasm. The commercial and
military value of radar did not become readily apparent until the mid-1930s. Before World War
II, the United States, France, and the United Kingdom were all
carrying out radar research. Beginning in 1935, the British built a network of ground-based
aircraft detection radar, called Chain Home, under the direction of Sir Robert Watson-Watt.
Chain Home was fully operational from 1938 until the end of World War II in 1945 and was
instrumental in Britain’s defense against German bombers.
The British recognized the value of radar with frequencies much higher than the radio waves
used for most systems. A breakthrough in radar technology came in 1939 when two British
scientists, physicist Henry Boot and biophysicist John Randall, developed the resonant-cavity
magnetron. This device generates high-frequency radio pulses with a large amount of power, and
it made the development of microwave radar possible. In 1940 the Massachusetts Institute of Technology (MIT) Radiation Laboratory was formed in Cambridge, Massachusetts, bringing
together U.S. and British radar research. In March 1942 scientists demonstrated the detection of
ships from the air. This technology became the basis of antiship and antisubmarine radar for the
U.S. Navy.
The U.S. Army operated air surveillance radar at the start of World War II. The army also used
early forms of radar to direct antiaircraft guns. Initially the radar systems were used to aim
searchlights so the soldier aiming the gun could see where to fire, but the systems evolved into
fire-control radar that aimed the guns automatically.
B Radar during the Cold War
With the end of World War II, interest in radar development declined. Some experiments
continued, however; for instance, in 1946 the U.S. Army Signal Corps bounced radar signals off
of the moon, ushering in the field of radar astronomy. The growing hostility between the United
States and the Union of Soviet Socialist Republics—the so-called Cold War—renewed military
interest in radar improvements. After the Soviets detonated their first atomic bomb in 1949,
interest in radar development, especially for air defense, surged. Major programs included the
installation of the Distant Early Warning (DEW) network of long-range radar across the northern
reaches of North America to warn against bomber attacks. As the potential threat of attack by
ICBMs increased, the Ballistic Missile Early Warning System (BMEWS) was installed at sites in the United Kingdom, Greenland, and Alaska.
C Modern Radar
Radar found many applications in civilian and military life and became more sophisticated and
specialized for each application. The use of radar in air traffic control grew quickly during the
Cold War, especially with the jump in air traffic that occurred in the 1960s. Today almost all
commercial and private aircraft have transponders. Transponders send out radar signals encoded
with information about an aircraft and its flight that other aircraft and air traffic controllers can
use. American traffic engineer John Barker discovered in 1947 that moving automobiles would
reflect radar waves, which could be analyzed to determine the car’s speed. Police began using
traffic radar in the 1950s, and the accuracy of traffic radar has increased markedly since the
1980s.
Doppler radar came into use in the 1960s and was first dedicated to weather forecasting in the
1970s. In the 1990s the United States had a nationwide network of more than 130 Doppler radar
stations to help meteorologists track weather patterns.
Earth-observing satellites such as those in the SEASAT program began to use radar to measure
the topography of the earth in the late 1970s. The Magellan spacecraft mapped most of the
surface of the planet Venus in the 1990s. The Cassini spacecraft, scheduled to reach Saturn in
2004, carries radar instruments for studying the surface of Saturn’s moon Titan.
As radar continues to improve, so does the technology for evading radar. Stealth aircraft feature
radar-absorbing coatings and deceptive shapes to reduce the possibility of radar detection. The
Lockheed F-117A, first flown in 1981, and the Northrop B-2, first flown in 1989, are two of the
latest additions to the U.S. stealth aircraft fleet. In the area of civilian radar avoidance,
companies are introducing increasingly sophisticated radar detectors, designed to warn motorists
of police using traffic radar.
Contributed By:
Robert E. Millett
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
(v)
1. Tape Recording
In analog tape recording, electrical signals from a microphone are transformed into magnetic
signals. These signals are encoded onto a thin plastic ribbon of recording tape. Recording tape is
coated with tiny magnetic particles. Chromium dioxide and ferric oxide are two magnetic
materials commonly used. A chemical binder fixes the particles to the tape, and a back coating prevents the magnetic signal from transferring from one layer of wound tape to the next.
Tape is wound onto reels, which can vary in diameter and size. Professional reel-to-reel tape,
which is 6.35 mm (0.25 in) wide, is wound on large metal or plastic reels. Reel-to-reel tapes must
be loaded onto a reel-to-reel tape recorder by hand. Cassette tape is only 3.81 mm (0.15 in) wide
and is completely self-enclosed for convenience. Regardless of size, all magnetic tape is drawn
from a supply reel on the left side of the recorder to a take-up reel on the right. A drive shaft,
called a capstan, rolls against a pinch roller and pulls the tape along. Various guides and rollers
are used to mechanically regulate the speed and tension of the tape, since any variations in speed
or tension will affect sound quality.
As the tape is drawn from the supply reel to the take-up reel, it passes over a series of three
magnetic coils called heads. The erase head is activated only while recording. It generates a
current that places the tape's magnetic particles in a neutral position in order to remove any
previous sounds. The record head transforms the electrical signal coming into the recorder into a
magnetic flux and thus applies the original electrical signal onto the tape. The sound wave is now
physically present on the analog tape. The playback head reads the magnetic field on the tape
and converts this field back to electric energy.
Unwanted noise, such as hiss, is a frequent problem with recording on tape. To combat this
problem, sound engineers developed noise reduction systems that help reduce unwanted sounds.
Many different systems exist, such as the Dolby System, which is used to reduce hiss on musical
recordings and motion-picture soundtracks. Most noise occurs around the weakest sounds on a
tape recording. Noise reduction systems work by boosting weak signals during recording. When
the tape is played, the boosted signals are reduced to their normal levels. This reduction to
normal levels also minimizes any noise that might have been present.
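The principle can be sketched numerically. In the toy model below, a weak signal is boosted before it meets the tape hiss and cut back by the same factor on playback, which cuts the hiss with it; real systems such as Dolby apply this frequency band by frequency band, while this illustration, with invented signal and noise levels, does not.

import numpy as np

rng = np.random.default_rng(0)
quiet = 0.01 * np.sin(np.linspace(0.0, 20.0, 1000))  # weak recorded signal
hiss = 0.005 * rng.standard_normal(1000)             # tape noise

BOOST = 10.0
plain_playback = quiet + hiss                 # recorded with no noise reduction
nr_playback = (quiet * BOOST + hiss) / BOOST  # boosted onto tape, cut on playback

print(np.std(plain_playback - quiet))  # residual noise ~0.005
print(np.std(nr_playback - quiet))     # residual noise ~0.0005, ten times lower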
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
Q7:
(i)
Deoxyribonucleic Acid
I INTRODUCTION
Deoxyribonucleic Acid (DNA), genetic material of all cellular organisms and most viruses. DNA
carries the information needed to direct protein synthesis and replication. Protein synthesis is the
production of the proteins needed by the cell or virus for its activities and development.
Replication is the process by which DNA copies itself for each descendant cell or virus, passing
on the information needed for protein synthesis. In most cellular organisms, DNA is organized
on chromosomes located in the nucleus of the cell.
II STRUCTURE
A molecule of DNA consists of two strands, each composed of a large number of chemical compounds, called nucleotides, linked together in a chain. These strands are arranged like a
ladder that has been twisted into the shape of a winding staircase, called a double helix. Each
nucleotide consists of three units: a sugar molecule called deoxyribose, a phosphate group, and
one of four different nitrogen-containing compounds called bases. The four bases are adenine
(A), guanine (G), thymine (T), and cytosine (C). The deoxyribose molecule occupies the center
position in the nucleotide, flanked by a phosphate group on one side and a base on the other. The
phosphate group of each nucleotide is also linked to the deoxyribose of the adjacent nucleotide in
the chain. These linked deoxyribose-phosphate subunits form the parallel side rails of the ladder.
The bases face inward toward each other, forming the rungs of the ladder.
The nucleotides in one DNA strand have a specific association with the corresponding
nucleotides in the other DNA strand. Because of the chemical affinity of the bases, nucleotides
containing adenine are always paired with nucleotides containing thymine, and nucleotides
containing cytosine are always paired with nucleotides containing guanine. The complementary
bases are joined to each other by weak chemical bonds called hydrogen bonds.
In 1953 American biochemist James D. Watson and British biophysicist Francis Crick published
the first description of the structure of DNA. Their model proved to be so important for the
understanding of protein synthesis, DNA replication, and mutation that they were awarded the
1962 Nobel Prize for physiology or medicine for their work.
III PROTEIN SYNTHESIS
DNA carries the instructions for the production of proteins. A protein is composed of smaller
molecules called amino acids, and the structure and function of the protein is determined by the
sequence of its amino acids. The sequence of amino acids, in turn, is determined by the sequence
of nucleotide bases in the DNA. A sequence of three nucleotide bases, called a triplet, is the
genetic code word, or codon, that specifies a particular amino acid. For instance, the triplet GAC
(guanine, adenine, and cytosine) is the codon for the amino acid leucine, and the triplet CAG
(cytosine, adenine, and guanine) is the codon for the amino acid valine. A protein consisting of
100 amino acids is thus encoded by a DNA segment consisting of 300 nucleotides. Of the two
polynucleotide chains that form a DNA molecule, only one strand contains the information
needed for the production of a given amino acid sequence. The other strand aids in replication.
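The triplets quoted above are read from the DNA template strand: each base is transcribed into its complementary mRNA base, and the resulting mRNA codon names the amino acid. The short Python sketch below traces the article's two examples through that logic; the two-entry codon table is only a fragment of the full 64-codon genetic code.

DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}
CODON_TABLE = {"CUG": "leucine", "GUC": "valine"}  # illustrative fragment only

def template_triplet_to_amino_acid(triplet):
    # Transcribe the template-strand triplet into its mRNA codon, then look it up.
    codon = "".join(DNA_TO_MRNA[base] for base in triplet)
    return CODON_TABLE.get(codon, "unknown in this fragment")

print(template_triplet_to_amino_acid("GAC"))  # leucine, as stated above
print(template_triplet_to_amino_acid("CAG"))  # valine, as stated above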
Protein synthesis begins with the separation of a DNA molecule into two strands. In a process
called transcription, a section of one strand acts as a template, or pattern, to produce a new strand
called messenger RNA (mRNA). The mRNA leaves the cell nucleus and attaches to the
ribosomes, specialized cellular structures that are the sites of protein synthesis. Amino acids are
carried to the ribosomes by another type of RNA, called transfer RNA (tRNA). In a process
called translation, the amino acids are linked together in a particular sequence, dictated by the
mRNA, to form a protein.
A gene is a sequence of DNA nucleotides that specify the order of amino acids in a protein via
an intermediary mRNA molecule. Substituting one DNA nucleotide with another containing a
different base causes all descendant cells or viruses to have the altered nucleotide base sequence.
As a result of the substitution, the sequence of amino acids in the resulting protein may also be
changed. Such a change in a DNA molecule is called a mutation. Most mutations are the result of
errors in the replication process. Exposure of a cell or virus to radiation or to certain chemicals
increases the likelihood of mutations.
IV REPLICATION
In most cellular organisms, replication of a DNA molecule takes place in the cell nucleus and
occurs just before the cell divides. Replication begins with the separation of the two
polynucleotide chains, each of which then acts as a template for the assembly of a new
complementary chain. As the old chains separate, each nucleotide in the two chains attracts a
complementary nucleotide that has been formed earlier by the cell. The nucleotides are joined to
one another by hydrogen bonds to form the rungs of a new DNA molecule. As the
complementary nucleotides are fitted into place, an enzyme called DNA polymerase links them
together by bonding the phosphate group of one nucleotide to the sugar molecule of the adjacent
nucleotide, forming the side rail of the new DNA molecule. This process continues until a new
polynucleotide chain has been formed alongside the old one, forming a new double-helix
molecule.
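The pairing rule that drives this process is simple enough to state in a few lines of Python. The sketch below, with an invented sample sequence, builds the complementary strand for a template and shows that complementing twice recovers the original, which is why each daughter double helix carries the parent's information.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(template):
    # Each base attracts its partner: A with T, G with C.
    return "".join(PAIR[base] for base in template)

old_strand = "ATGCCGTA"
new_strand = complementary_strand(old_strand)
print(new_strand)                                      # TACGGCAT
print(complementary_strand(new_strand) == old_strand)  # True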
V TOOLS AND PROCEDURES
Several tools and procedures are used by scientists for the study and manipulation of
DNA. Specialized enzymes, called restriction enzymes, found in bacteria act like molecular
scissors to cut the phosphate backbones of DNA molecules at specific base sequences. Strands of
DNA that have been cut with restriction enzymes are left with single-stranded tails that are called
sticky ends, because they can easily realign with tails from certain other DNA fragments.
Scientists take advantage of restriction enzymes and the sticky ends generated by these enzymes
to carry out recombinant DNA technology, or genetic engineering. This technology involves
removing a specific gene from one organism and inserting the gene into another organism.
Another tool for working with DNA is a procedure called polymerase chain reaction (PCR). This
procedure uses the enzyme DNA polymerase to make copies of DNA strands in a process that
mimics the way in which DNA replicates naturally within cells. Scientists use PCR to obtain vast
numbers of copies of a given segment of DNA.
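The arithmetic behind PCR's power is simple doubling: each cycle can copy every segment present, so n cycles yield up to 2 to the power n copies. The lines below assume ideal doubling, which real reactions only approximate.

for cycles in (10, 20, 30):
    print(cycles, "cycles -> up to", 2 ** cycles, "copies of one DNA segment")
# 30 cycles -> up to 1073741824 copies of one DNA segment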
DNA fingerprinting, also called DNA typing, makes it possible to compare samples of DNA
from various sources in a manner that is analogous to the comparison of fingerprints. In this
procedure, scientists use restriction enzymes to cleave a sample of DNA into an assortment of
fragments. Solutions containing these fragments are placed at the surface of a gel to which an
electric current is applied. The electric current causes the DNA fragments to move through the
gel. Because smaller fragments move more quickly than larger ones, this process, called
electrophoresis, separates the fragments according to their size. The fragments are then marked
with probes and exposed on X-ray film, where they form the DNA fingerprint—a pattern of
characteristic black bars that is unique for each type of DNA.
A procedure called DNA sequencing makes it possible to determine the precise order, or
sequence, of nucleotide bases within a fragment of DNA. Most versions of DNA sequencing use
a technique called primer extension, developed by British molecular biologist Frederick Sanger.
In primer extension, specific pieces of DNA are replicated and modified, so that each DNA
segment ends in a fluorescent form of one of the four nucleotide bases. Modern DNA
sequencers, pioneered by American molecular biologist Leroy Hood, incorporate both lasers and
computers. Scientists have completely sequenced the genetic material of several microorganisms,
including the bacterium Escherichia coli. In 1998, scientists achieved the milestone of
sequencing the complete genome of a multicellular organism—a roundworm identified as
Caenorhabditis elegans. The Human Genome Project, an international research collaboration, has
been established to determine the sequence of all of the three billion nucleotide base pairs that
make up the human genetic material.
A laser-based instrument known as optical tweezers enables scientists to manipulate the three-dimensional structure of DNA molecules. The instrument's laser beams act like tweezers—attaching to the ends of a DNA molecule and pulling on them. By manipulating these
laser beams, scientists can stretch, or uncoil, fragments of DNA. This work is helping reveal how
DNA changes its three-dimensional shape as it interacts with enzymes.
VI APPLICATIONS
Research into DNA has had a significant impact on medicine. Through recombinant DNA
technology, scientists can modify microorganisms so that they become so-called factories that
produce large quantities of medically useful drugs. This technology is used to produce insulin,
which is a drug used by diabetics, and interferon, which is used by some cancer patients. Studies
of human DNA are revealing genes that are associated with specific diseases, such as cystic
fibrosis and breast cancer. This information is helping physicians to diagnose various diseases,
and it may lead to new treatments. For example, physicians are using a technology called
chimeraplasty, which involves a synthetic molecule containing both DNA and RNA strands, in
an effort to develop a treatment for a form of hemophilia.
Forensic science uses techniques developed in DNA research to identify individuals who have
committed crimes. DNA from semen, skin, or blood taken from the crime scene can be compared
with the DNA of a suspect, and the results can be used in court as evidence.
DNA has helped taxonomists determine evolutionary relationships among animals, plants, and
other life forms. Closely related species have more similar DNA than do species that are
distantly related. One surprising finding to emerge from DNA studies is that vultures of the
Americas are more closely related to storks than to the vultures of Europe, Asia, or Africa (see
Classification).
Techniques of DNA manipulation are used in farming, in the form of genetic engineering and
biotechnology. Strains of crop plants to which genes have been transferred may produce higher
yields and may be more resistant to insects. Cattle have been similarly treated to increase milk
and beef production, as have hogs, to yield more meat with less fat.
VII SOCIAL ISSUES
Despite the many benefits offered by DNA technology, some critics argue that its development
should be monitored closely. One fear raised by such critics is that DNA fingerprinting could
provide a means for employers to discriminate against members of various ethnic groups. Critics
also fear that studies of people’s DNA could permit insurance companies to deny health
insurance to those people at risk for developing certain diseases. The potential use of DNA
technology to alter the genes of embryos is a particularly controversial issue.
The use of DNA technology in agriculture has also sparked controversy. Some people question
the safety, desirability, and ecological impact of genetically altered crop plants. In addition,
animal rights groups have protested against the genetic engineering of farm animals.
Despite these and other areas of disagreement, many people agree that DNA technology offers a
mixture of benefits and potential hazards. Many experts also agree that an informed public can
help assure that DNA technology is used wisely.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
Ribonucleic Acid
I INTRODUCTION
Ribonucleic Acid (RNA), genetic material of certain viruses (RNA viruses) and, in cellular
organisms, the molecule that directs the middle steps of protein production. In RNA viruses, the
RNA directs two processes—protein synthesis (production of the virus's protein coat) and
replication (the process by which RNA copies itself). In cellular organisms, another type of
genetic material, called deoxyribonucleic acid (DNA), carries the information that determines
protein structure. But DNA cannot act alone and relies upon RNA to transfer this crucial
information during protein synthesis (production of the proteins needed by the cell for its
activities and development).
Like DNA, RNA consists of a chain of chemical compounds called nucleotides. Each nucleotide
is made up of a sugar molecule called ribose, a phosphate group, and one of four different
nitrogen-containing compounds called bases. The four bases are adenine, guanine, uracil, and
cytosine. These components are joined together in the same manner as in a deoxyribonucleic
acid (DNA) molecule. RNA differs chemically from DNA in two ways: The RNA sugar
molecule contains an oxygen atom not found in DNA, and RNA contains the base uracil in the
place of the base thymine in DNA.
II CELLULAR RNA
In cellular organisms, RNA is a single-stranded polynucleotide chain, a strand of many
nucleotides linked together. There are three types of RNA. Ribosomal RNA (rRNA) is found in
the cell's ribosomes, the specialized structures that are the sites of protein synthesis). Transfer
RNA (tRNA) carries amino acids to the ribosomes for incorporation into a protein. Messenger
RNA (mRNA) carries the genetic blueprint copied from the sequence of bases in a cell's DNA.
This blueprint specifies the sequence of amino acids in a protein. All three types of RNA are
formed as needed, using specific sections of the cell's DNA as templates.
III VIRAL RNA
Some RNA viruses have double-stranded RNA—that is, their RNA molecules consist of two
parallel polynucleotide chains. The base of each RNA nucleotide in one chain pairs with a
complementary base in the second chain—that is, adenine pairs with uracil, and guanine pairs
with cytosine. For these viruses, the process of RNA replication in a host cell follows the same
pattern as that of DNA replication, a method of replication called semi-conservative replication.
In semi-conservative replication, each newly formed double-stranded RNA molecule contains
one polynucleotide chain from the parent RNA molecule, and one complementary chain formed
through the process of base pairing. The Colorado tick fever virus, which causes a mild feverish illness, is a double-stranded RNA virus.
There are two types of single-stranded RNA viruses. After entering a host cell, one type, polio
virus, becomes double-stranded by making an RNA strand complementary to its own. During
replication, although the two strands separate, only the recently formed strand attracts
nucleotides with complementary bases. Therefore, the polynucleotide chain that is produced as a
result of replication is exactly the same as the original RNA chain.
The other type of single-stranded RNA viruses, called retroviruses, includes the human
immunodeficiency virus (HIV), which causes AIDS, and other viruses that cause tumors. After
entering a host cell, a retrovirus makes a DNA strand complementary to its own RNA strand
using the host's DNA nucleotides. This new DNA strand then replicates and forms a double helix
that becomes incorporated into the host cell's chromosomes, where it is replicated along with the
host DNA. While in a host cell, the RNA-derived viral DNA produces single-stranded RNA
viruses that then leave the host cell and enter other cells, where the replication process is
repeated.
IV RNA AND THE ORIGIN OF LIFE
In 1981, American biochemist Thomas Cech discovered that certain RNA molecules appear to
act as enzymes, molecules that speed up, or catalyze, some reactions inside cells. Until this
discovery biologists thought that all enzymes were proteins. Like other enzymes, these RNA
catalysts, called ribozymes, show great specificity with respect to the reactions they speed up.
The discovery of ribozymes added to the evidence that RNA, not DNA, was the earliest genetic
material. Many scientists think that the earliest genetic molecule was simple in structure and
capable of enzymatic activity. Furthermore, the molecule would necessarily exist in all
organisms. The enzyme ribonuclease-P, which exists in all organisms, is made of protein and a
form of RNA that has enzymatic activity. Based on this evidence, some scientists suspect that the
RNA portion of ribonuclease-P may be the modern equivalent of the earliest genetic molecule,
the molecule that first enabled replication to occur in primitive cells.
Contributed By:
Louis Levine
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
(ii) Brass (alloy)
Brass (alloy), alloy of copper and zinc. Harder than copper, it is ductile and can be hammered
into thin leaves. Formerly any alloy of copper, especially one with tin, was called brass, and it is
probable that the “brass” of ancient times was of copper and tin (see Bronze). The modern alloy
came into use about the 16th century.
The malleability of brass varies with its composition and temperature and with the presence of
foreign metals, even in minute quantities. Some kinds of brass are malleable only when cold,
others only when hot, and some are not malleable at any temperature. All brass becomes brittle if
heated to a temperature near the melting point. See Metalwork.
To prepare brass, zinc is mixed directly with copper in crucibles or in a reverberatory or cupola
furnace. The ingots are rolled when cold. The bars or sheets can be rolled into rods or cut into
strips that can be drawn out into wire.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
Bronze
I INTRODUCTION
Bronze, metal alloy containing copper and other elements. The term bronze was originally
applied to an alloy of copper containing tin, but the term is now used to describe a variety of
copper-rich material, including aluminum bronze, manganese bronze, and silicon bronze.
Bronze was developed about 3500 BC by the ancient Sumerians in the Tigris-Euphrates Valley.
Historians are unsure how this alloy was discovered, but believe that bronze may have first been
made accidentally when rocks rich in ores of copper and tin were used to build campfire rings
(enclosures for preventing fires from spreading). As fire heated these stones, the metals may
have mixed, forming bronze. This theory is supported by the fact that bronze was not developed
in North America, where natural tin and copper ores are rarely found in the same rocks.
Around 3000 BC, bronze-making spread to Persia, where bronze objects such as ornaments,
weapons, and chariot fittings have been found. Bronzes appeared in both Egypt and China
around 2000 BC. The earliest bronze castings (objects made by pouring liquid metal into molds)
were made in sand; later, clay and stone molds were used. Zinc, lead, and silver were added to
bronze alloys by Greek and Roman metalworkers for use in tools, weapons, coins, and art
objects. During the Renaissance, a series of cultural movements that occurred in Europe in the
14th, 15th, and 16th centuries, bronze was used to make guns, and artists such as Michelangelo
and Benvenuto Cellini used bronze for sculpting. See also Metalwork; Founding.
Today, bronze is used for making products ranging from household items such as doorknobs,
drawer handles, and clocks to industrial products such as engine parts, bearings, and wire.
II TYPES
Tin bronzes, the original bronzes, are alloys of copper and tin. They may contain from 5 to 22
percent tin. When a tin bronze contains at least 10 percent tin, the alloy is hard and has a low
melting point. Leaded tin bronzes, used for casting, contain 5 to 10 percent tin, 1.5 to 25 percent
lead, and 0 to 4.5 percent zinc. Manganese bronze contains 39 percent zinc, 1 percent tin, and 0.5
to 4 percent manganese. Aluminum bronze contains 5 to 10 percent aluminum. Silicon bronze
contains 1.5 to 3 percent silicon.
Bronze is made by heating and mixing the molten metal constituents. When the molten mixture
is poured into a mold and begins to harden, the bronze expands and fills the entire mold. Once
the bronze has cooled, it shrinks slightly and can easily be removed from the mold.
III CHARACTERISTICS AND USES
Bronze is stronger and harder than any other common metal alloy except steel. It does not easily
break under stress, is corrosion resistant, and is easy to form into finished shapes by molding,
casting, or machining (see also Engineering).
The strongest bronze alloys contain tin and a small amount of lead. Tin, silicon, or aluminum is
often added to bronze to improve its corrosion resistance. As bronze weathers, a brown or green
film forms on the surface. This film inhibits corrosion. For example, many bronze statues erected
hundreds of years ago show little sign of corrosion. Bronzes have a low melting point, a
characteristic that makes them useful for brazing—that is, for joining two pieces of metal. When
used as brazing material, bronze is heated above 430°C (800°F), but not above the melting point
of the metals being joined. The molten bronze fuses to the other metals, forming a solid joint
after cooling.
Lead is often added to make bronze easier to machine. Silicon bronze is machined into piston
rings and screening, and because of its resistance to chemical corrosion it is also used to make
chemical containers. Manganese bronze is used for valve stems and welding rods. Aluminum
bronzes are used in engine parts and in marine hardware.
Bronze containing 10 percent or more tin is most often rolled or drawn into wires, sheets, and
pipes. Tin bronze, in a powdered form, is sintered (heated without being melted), pressed into a
solid mass, saturated with oil, and used to make self-lubricating bearings.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
(iii) Lymph
Lymph, common name for the fluid carried in the lymphatic system. Lymph is diluted blood
plasma containing large numbers of white blood cells, especially lymphocytes, and occasionally
a few red blood cells. Because of the number of living cells it contains, lymph is classified as a
fluid tissue.
Lymph diffuses into and is absorbed by the lymphatic capillaries from the spaces between the
various cells constituting the tissues. In these spaces lymph is known as tissue fluid, plasma that
has permeated the blood capillary walls and surrounded the cells to bring them nutriment and to
remove their waste substances. The lymph contained in the lacteals of the small intestine is
known as chyle.
The synovial fluid that lubricates joints is almost identical with lymph, as is the serous fluid found in the abdominal and pleural cavities. The fluid contained within the semicircular canals of the
ear, although known as endolymph, is not true lymph.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
Blood
I INTRODUCTION
Blood, vital fluid found in humans and other animals that provides important nourishment to all
body organs and tissues and carries away waste materials. Sometimes referred to as “the river of
life,” blood is pumped from the heart through a network of blood vessels collectively known as
the circulatory system.
An adult human has about 5 to 6 liters (about 1.3 to 1.6 gal) of blood, which is roughly 7 to 8 percent of total body weight. Infants and children have comparatively smaller volumes of blood, roughly
proportionate to their smaller size. The volume of blood in an individual fluctuates. During
dehydration, for example while running a marathon, blood volume decreases. Blood volume
increases in circumstances such as pregnancy, when the mother’s blood needs to carry extra
oxygen and nutrients to the baby.
II ROLE OF BLOOD
Blood carries oxygen from the lungs to all the other tissues in the body and, in turn, carries waste
products, predominantly carbon dioxide, back to the lungs where they are released into the air.
When oxygen transport fails, a person dies within a few minutes. Food that has been processed
by the digestive system into smaller components such as proteins, fats, and carbohydrates is also
delivered to the tissues by the blood. These nutrients provide the materials and energy needed by
individual cells for metabolism, or the performance of cellular function. Waste products
produced during metabolism, such as urea and uric acid, are carried by the blood to the kidneys,
where they are transferred from the blood into urine and eliminated from the body. In addition to
oxygen and nutrients, blood also transports special chemicals, called hormones, that regulate
certain body functions. The movement of these chemicals enables one organ to control the
function of another even though the two organs may be located far apart. In this way, the blood
acts not just as a means of transportation but also as a communications system.
The blood is more than a pipeline for nutrients and information; it is also responsible for the
activities of the immune system, helping fend off infection and fight disease. In addition, blood
carries the means for stopping itself from leaking out of the body after an injury. The blood does
this by carrying special cells and proteins, known as the coagulation system, that start to form
clots within a matter of seconds after injury.
Blood is vital to maintaining a stable body temperature; in humans, body temperature normally
fluctuates within a degree of 37.0° C (98.6° F). Heat production and heat loss in various parts of
the body are balanced out by heat transfer via the bloodstream. This is accomplished by varying
the diameter of blood vessels in the skin. When a person becomes overheated, the vessels dilate
and an increased volume of blood flows through the skin. Heat dissipates through the skin,
effectively lowering the body temperature. The increased flow of blood in the skin makes the
skin appear pink or flushed. When a person is cold, the skin may become pale as the vessels
narrow, diverting blood from the skin and reducing heat loss.
III COMPOSITION OF BLOOD
About 55 percent of the blood is composed of a liquid known as plasma. The rest of the blood is
made of three major types of cells: red blood cells (also known as erythrocytes), white blood
cells (leukocytes), and platelets (thrombocytes).
A Plasma
Plasma consists predominantly of water and salts. The kidneys carefully maintain the salt
concentration in plasma because small changes in its concentration will cause cells in the body to
function improperly. In extreme conditions this can result in seizures, coma, or even death. The
pH of plasma, the common measurement of the plasma’s acidity, is also carefully controlled by
the kidneys within the neutral range of 6.8 to 7.7. Plasma also contains other small molecules,
including vitamins, minerals, nutrients, and waste products. The concentrations of all of these
molecules must be carefully regulated.
Plasma is usually yellow in color due to proteins dissolved in it. However, after a person eats a
fatty meal, that person’s plasma temporarily develops a milky color as the blood carries the
ingested fats from the intestines to other organs of the body.
Plasma carries a large number of important proteins, including albumin, gamma globulin, and
clotting factors. Albumin is the main protein in blood. It helps regulate the water content of
tissues and blood. Gamma globulin is composed of tens of thousands of unique antibody
molecules. Antibodies neutralize or help destroy infectious organisms. Each antibody is designed
to target one specific invading organism. For example, chicken pox antibody will target chicken
pox virus, but will leave an influenza virus unharmed. Clotting factors, such as fibrinogen, are
involved in forming blood clots that seal leaks after an injury. Plasma that has had the clotting
factors removed is called serum. Both serum and plasma are easy to store and have many
medical uses.
B Red Blood Cells
Red blood cells make up almost 45 percent of the blood volume. Their primary function is to
carry oxygen from the lungs to every cell in the body. Red blood cells are composed
predominantly of a protein and iron compound, called hemoglobin, that captures oxygen
molecules as the blood moves through the lungs, giving blood its red color. As blood passes
through body tissues, hemoglobin then releases the oxygen to cells throughout the body. Red
blood cells are so packed with hemoglobin that they lack many components, including a nucleus,
found in other cells.
The membrane, or outer layer, of the red blood cell is flexible, like a soap bubble, and is able to
bend in many directions without breaking. This is important because the red blood cells must be
able to pass through the tiniest blood vessels, the capillaries, to deliver oxygen wherever it is
needed. The capillaries are so narrow that the red blood cells, normally shaped like a disk with a
concave top and bottom, must bend and twist to maneuver single file through them.
C Blood Type
There are several types of red blood cells and each person has red blood cells of just one type.
Blood type is determined by the occurrence or absence of substances, known as recognition
markers or antigens, on the surface of the red blood cell. Type A blood has just marker A on its
red blood cells while type B has only marker B. If neither A nor B markers are present, the blood
is type O. If both the A and B markers are present, the blood is type AB. Another marker, the Rh
antigen (also known as the Rh factor), is present or absent regardless of the presence of A and B
markers. If the Rh marker is present, the blood is said to be Rh positive, and if it is absent, the
blood is Rh negative. The most common blood type is A positive—that is, blood that has an A
marker and also an Rh marker. More than 20 additional red blood cell types have been
discovered.
Blood typing is important for many medical reasons. If a person loses a lot of blood, that person
may need a blood transfusion to replace some of the lost red blood cells. Since everyone makes
antibodies against substances that are foreign, or not of their own body, transfused blood must be
matched so as not to contain these substances. For example, a person who is blood type A
positive will not make antibodies against the A or Rh markers, but will make antibodies against
the B marker, which is not on that person’s own red blood cells. If blood containing the B marker
(from types B positive, B negative, AB positive, or AB negative) is transfused into this person,
then the transfused red blood cells will be rapidly destroyed by the patient’s anti-B antibodies. In
this case, the transfusion will do the patient no good and may even result in serious harm. For a
successful blood transfusion into an A positive blood type individual, blood that is type O
negative, O positive, A negative, or A positive is needed because these blood types will not be
attacked by the patient’s anti-B antibodies.
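The matching rule in this example generalizes: a donor's red cells are safe for a recipient only if they carry no marker, among A, B, and Rh, that the recipient's own cells lack, since the recipient holds antibodies against the missing markers. A small Python sketch of that rule, simplified to these three markers (real cross-matching considers many more):

MARKERS = {
    "O-": set(),         "O+": {"Rh"},
    "A-": {"A"},         "A+": {"A", "Rh"},
    "B-": {"B"},         "B+": {"B", "Rh"},
    "AB-": {"A", "B"},   "AB+": {"A", "B", "Rh"},
}

def compatible(donor, recipient):
    # Safe if the donor's cells introduce no marker the recipient lacks.
    return MARKERS[donor] <= MARKERS[recipient]

print([d for d in MARKERS if compatible(d, "A+")])
# ['O-', 'O+', 'A-', 'A+'] -- the same four types named in the text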
D White Blood Cells
White blood cells only make up about 1 percent of blood, but their small number belies their
immense importance. They play a vital role in the body’s immune system—the primary defense
mechanism against invading bacteria, viruses, fungi, and parasites. They often accomplish this
goal through direct attack, which usually involves identifying the invading organism as foreign,
attaching to it, and then destroying it. This process is referred to as phagocytosis.
White blood cells also produce antibodies, which are released into the circulating blood to target
and attach to foreign organisms. After attachment, the antibody may neutralize the organism, or
it may elicit help from other immune system cells to destroy the foreign substance. There are
several varieties of white blood cells, including neutrophils, monocytes, and lymphocytes, all of
which interact with one another and with plasma proteins and other cell types to form the
complex and highly effective immune system.
E Platelets and Clotting
The smallest cells in the blood are the platelets, which are designed for a single purpose—to
begin the process of coagulation, or forming a clot, whenever a blood vessel is broken. As soon
as an artery or vein is injured, the platelets in the area of the injury begin to clump together and
stick to the edges of the cut. They also release messengers into the blood that perform a variety
of functions: constricting the blood vessels to reduce bleeding, attracting more platelets to the
area to enlarge the platelet plug, and initiating the work of plasma-based clotting factors, such as
fibrinogen. Through a complex mechanism involving many steps and many clotting factors, the
plasma protein fibrinogen is transformed into long, sticky threads of fibrin. Together, the
platelets and the fibrin create an intertwined meshwork that forms a stable clot. This self-sealing
aspect of the blood is crucial to survival.
IV PRODUCTION AND ELIMINATION OF BLOOD CELLS
Blood is produced in the bone marrow, a tissue in the central cavity inside almost all of the bones
in the body. In infants, the marrow in most of the bones is actively involved in blood cell
formation. By later adult life, active blood cell formation gradually ceases in the bones of the
arms and legs and concentrates in the skull, spine, ribs, and pelvis.
Red blood cells, white blood cells, and platelets grow from a single precursor cell, known as a
hematopoietic stem cell. Remarkably, experiments have suggested that as few as 10 stem cells
can, in four weeks, multiply into 30 trillion red blood cells, 30 billion white blood cells, and 1.2
trillion platelets—enough to replace every blood cell in the body.
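As a rough plausibility check on these figures (an illustrative calculation, not from the source): growing from 10 cells to about 3.1 × 10^13 cells requires 10 × 2^n ≈ 3.1 × 10^13, that is, n = log2(3.1 × 10^12) ≈ 41.5 doublings. Spread over four weeks (672 hours), that works out to roughly one cell division every 16 hours along each lineage.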
Red blood cells have the longest average life span of any of the cellular elements of blood. A red
blood cell lives 100 to 120 days after being released from the marrow into the blood. Over that
period of time, red blood cells gradually age. Spent cells are removed by the spleen and, to a
lesser extent, by the liver. The spleen and the liver also remove any red blood cells that become
damaged, regardless of their age. The body efficiently recycles many components of the
damaged cells, including parts of the hemoglobin molecule, especially the iron contained within
it.
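Taken together with the production figures above, this life span implies a brisk steady-state turnover (a back-of-the-envelope estimate): 3 × 10^13 red cells / 120 days ≈ 2.5 × 10^11 cells per day, or roughly three million red blood cells replaced every second.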
The majority of white blood cells have a relatively short life span. They may survive only 18 to
36 hours after being released from the marrow. However, some of the white blood cells are
responsible for maintaining what is called immunologic memory. These memory cells retain
knowledge of what infectious organisms the body has previously been exposed to. If one of those
organisms returns, the memory cells initiate an extremely rapid response designed to kill the
foreign invader. Memory cells may live for years or even decades before dying.
Memory cells make immunizations possible. An immunization, also called a vaccination or an
inoculation, is a method of using a vaccine to make the human body immune to certain diseases.
A vaccine consists of an infectious agent that has been weakened or killed in the laboratory so
that it cannot produce disease when injected into a person, but can spark the immune system to
generate memory cells and antibodies specific for the infectious agent. If the infectious agent
should ever invade that vaccinated person in the future, these memory cells will direct the cells
of the immune system to target the invader before it has the opportunity to cause harm.
Platelets have a life span of seven to ten days in the blood. They either participate in clot
formation during that time or, when they have reached the end of their lifetime, are eliminated by
the spleen and, to a lesser extent, by the liver.
V BLOOD DISEASES
Many diseases are caused by abnormalities in the blood. These diseases are categorized by which
component of the blood is affected.
A Red Blood Cell Diseases
One of the most common blood diseases worldwide is anemia, which is characterized by an
abnormally low number of red blood cells or low levels of hemoglobin. One of the major
symptoms of anemia is fatigue, due to the failure of the blood to carry enough oxygen to all of
the tissues.
The most common type of anemia, iron-deficiency anemia, occurs because the marrow fails to
produce sufficient red blood cells. When insufficient iron is available to the bone marrow, it
slows down its production of hemoglobin and red blood cells. The most common causes of iron-
deficiency anemia are certain infections that result in gastrointestinal blood loss and the
consequent chronic loss of iron. Adding supplemental iron to the diet is often sufficient to cure
iron-deficiency anemia.
Some anemias are the result of increased destruction of red blood cells, as in the case of sickle-
cell anemia, a genetic disease most common in persons of African ancestry. The red blood cells
of sickle-cell patients assume an unusual crescent shape, causing them to become trapped in
some blood vessels, blocking the flow of other blood cells to tissues and depriving them of
oxygen.
B White Blood Cell Diseases
Some white blood cell diseases are characterized by an insufficient number of white blood cells.
This can be caused by the failure of the bone marrow to produce adequate numbers of normal
white blood cells, or by diseases that lead to the destruction of crucial white blood cells. These
conditions result in severe immune deficiencies characterized by recurrent infections.
Any disease in which excess white blood cells are produced, particularly immature white blood
cells, is called leukemia, or blood cancer. Many cases of leukemia are linked to gene
abnormalities, resulting in unchecked growth of immature white blood cells. If this growth is not
halted, it often results in the death of the patient. These genetic abnormalities are not inherited in
the vast majority of cases, but rather occur after birth. Although some causes of these
abnormalities are known, for example exposure to high doses of radiation or the chemical
benzene, most remain poorly understood.
Treatment for leukemia typically involves the use of chemotherapy, in which strong drugs are
used to target and kill leukemic cells, permitting normal cells to regenerate. In some cases, bone
marrow transplants are effective. Much progress has been made over the last 30 years in the
treatment of this disease. In one type of childhood leukemia, more than 80 percent of patients can
now be cured of their disease.
C Coagulation Diseases
One disease of the coagulation system is hemophilia, a genetic bleeding disorder in which one of
the plasma clotting factors, usually factor VIII, is produced in abnormally low quantities,
resulting in uncontrolled bleeding from minor injuries. Although individuals with hemophilia are
able to form a good initial platelet plug when blood vessels are damaged, they are not easily able
to form the meshwork that holds the clot firmly intact. As a result, bleeding may occur some
time after the initial traumatic event. Treatment for hemophilia relies on giving transfusions of
factor VIII. Factor VIII can be isolated from the blood of normal blood donors but it also can be
manufactured in a laboratory through a process known as gene cloning.
VI BLOOD BANKS
The Red Cross and a number of other organizations run programs, known as blood banks, to
collect, store, and distribute blood and blood products for transfusions. When blood is donated,
its blood type is determined so that only appropriately matched blood is given to patients needing
a transfusion. Before using the blood, the blood bank also tests it for the presence of disease-
causing organisms, such as hepatitis viruses and human immunodeficiency virus (HIV), the
cause of acquired immunodeficiency syndrome (AIDS). This blood screening dramatically
reduces, but does not fully eliminate, the risk to the recipient of acquiring a disease through a
blood transfusion. Blood donation, which is extremely safe, generally involves giving about 400
to 500 ml (about 1 pt) of blood, which is only about 7 percent of a person’s total blood.
VII BLOOD IN NONHUMANS
One-celled organisms have no need for blood. They are able to absorb nutrients, expel wastes,
and exchange gases with their environment directly. Simple multicelled marine animals, such as
sponges, jellyfishes, and anemones, also do not have blood. They use the seawater that bathes
their cells to perform the functions of blood. However, all more complex multicellular animals
have some form of a circulatory system using blood. In some invertebrates, there are no cells
analogous to red blood cells. Instead, hemoglobin, or the analogous copper-based compound hemocyanin,
circulates dissolved in the plasma.
The blood of complex multicellular animals tends to be similar to human blood, but there are
also some significant differences, typically at the cellular level. For example, fish, amphibians,
and reptiles possess red blood cells that have a nucleus, unlike the red blood cells of mammals.
The immune system of invertebrates is more primitive than that of vertebrates, lacking the
functionality associated with the white blood cell and antibody system found in mammals. Some
arctic fish species produce proteins in their blood that act as a type of antifreeze, enabling them
to survive in environments where the blood of other animals would freeze. Nonetheless, the
essential transportation, communication, and protection functions that make blood essential to
the continuation of life occur throughout much of the animal kingdom.
(IV)
Heavy water
Almost all the hydrogen in water has an atomic weight of 1. The American chemist Harold
Clayton Urey discovered in 1932 the presence in water of a small amount (1 part in 6000) of so-
called heavy water, or deuterium oxide (D2O); deuterium is the hydrogen isotope with an atomic
weight of 2. In 1951 the American chemist Aristid Grosse discovered that naturally occurring
water contains also minute traces of tritium oxide (T2O); tritium is the hydrogen isotope with an
atomic weight of 3. See Atom.
Hard Water
Hardness of natural waters is caused largely by calcium and magnesium salts and to a small
extent by iron, aluminum, and other metals. Hardness resulting from the bicarbonates and
carbonates of calcium and magnesium is called temporary hardness and can be removed by
boiling, which also sterilizes the water. The residual hardness is known as noncarbonate, or
permanent, hardness. The methods of softening noncarbonate hardness include the addition of
sodium carbonate and lime, and filtration through natural or artificial zeolites, which absorb the hardness-producing metallic ions and release sodium ions to the water. See Ion Exchange;
Zeolite. Sequestering agents in detergents serve to inactivate the substances that make water
hard.
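The zeolite softening step can be written schematically as follows, with Z standing for the zeolite framework (a simplified representation, not a balanced structural formula): Ca2+ + Na2Z = CaZ + 2Na+. The exhausted zeolite is regenerated by flushing it with concentrated brine, which drives the exchange in the reverse direction.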
Iron, which causes an unpleasant taste in drinking water, may be removed by aeration and
sedimentation or by passing the water through iron-removing zeolite filters, or the iron may be
stabilized by addition of such salts as polyphosphates. For use in laboratory applications, water is
either distilled or demineralized by passing it through ion-absorbing compounds.
(v)
Smallpox, highly contagious viral disease that is often fatal. The disease is chiefly characterized
by a skin rash that develops on the face, chest, back, and limbs. Over the course of a week the
rash develops into pustular (pus-filled) pimples resembling boils. In extreme cases the pustular
pimples run together—usually an indication of a fatal infection. Death may result from a
secondary bacterial infection of the pustules, from cell damage caused by the viral infection, or
from heart attack or shock. In the latter stages of nonfatal cases, smallpox pustules become
crusted, often leaving the survivor with permanent, pitted scars.
Smallpox is caused by a virus. An infected person spreads virus particles into the air in the form
of tiny droplets emitted from the mouth by speaking, coughing, or simply breathing. The virus
can then infect anyone who inhales the droplets. By this means, smallpox can spread extremely
rapidly from person to person.
Smallpox has afflicted humanity for thousands of years, causing epidemics from ancient times
through the 20th century. No one is certain where the smallpox virus came from, but scientists
speculate that several thousand years ago the virus made a trans-species jump into humans from
an animal—likely a rodent species such as a mouse. The disease probably did not become
established among humans until the beginnings of agriculture gave rise to the first large
settlements in the Nile valley (northeastern Africa) and Mesopotamia (now eastern Syria,
southeastern Turkey, and Iraq) more than 10,000 years ago.
Over the next several centuries smallpox established itself as a widespread disease in Europe,
Asia, and across Africa. During the 16th and 17th centuries, a time when Europeans made
journeys of exploration and conquest to the Americas and other continents, smallpox went with
them. By 1518 the disease reached the Americas aboard a Spanish ship that landed at the island
of Hispaniola (now the Dominican Republic and Haiti) in the West Indies. Before long smallpox
had killed half of the Taíno people, the native population of the island. The disease followed the
Spanish conquistadors into Mexico and Central America in 1520. With fewer than 500 men, the
Spanish explorer Hernán Cortés was able to conquer the great Aztec Empire under the emperor
Montezuma in what is now Mexico. One of Cortés's men was infected with smallpox, triggering
an epidemic that ultimately killed an estimated 3 million Aztecs, one-third of the population. A
similar path of devastation was left among the people of the Inca Empire of South America.
Smallpox killed the Inca emperor Huayna Capac in 1525, along with an estimated 100,000 Incas
in the capital city of Cuzco. The Incas and Aztecs are only two of many examples of smallpox
cutting a swath through a native population in the Americas, easing the way for Europeans to
conquer and colonize new territory. It can truly be said that smallpox changed history.
Yet the story of smallpox is also the story of great biomedical advancement and of ultimate
victory. As the result of a worldwide effort of vaccination and containment, the last naturally
occurring infection of smallpox occurred in 1977. Stockpiles of the virus now exist only in
secured laboratories. Some experts, however, are concerned about the potential use of smallpox
as a weapon of bioterrorism. Thus, despite being deliberately and successfully eradicated,
smallpox may still pose a threat to humanity.
Measles, also rubeola, acute, highly contagious, fever-producing disease caused by a filterable
virus, different from the virus that causes the less serious disease German measles, or rubella.
Measles is characterized by small red dots appearing on the surface of the skin, irritation of the
eyes (especially on exposure to light), coughing, and a runny nose. About 12 days after first
exposure, the fever, sneezing, and runny nose appear. Coughing and swelling of the neck glands
often follow. Four days later, red spots appear on the face or neck and then on the trunk and
limbs. In 2 or 3 days the rash subsides and the fever falls; some peeling of the involved skin
areas may take place. Infection of the middle ear may also occur.
Measles was formerly one of the most common childhood diseases. Since the development of an
effective vaccine in 1963, it has become much less frequent. By 1988 annual measles cases in the
United States had been reduced to fewer than 3,500, compared with about 500,000 per year in
the early 1960s. However, the number of new cases jumped to more than 18,000 in 1989 and to
nearly 28,000 in 1990. Most of these cases occurred among inner-city preschool children and
recent immigrants, but adolescents and young adults, who may have lost immunity (see
Immunization) from their childhood vaccinations, also experienced an increase. The number of
new cases declined rapidly in the 1990s and by 1999 fewer than 100 cases were reported. The
reasons for this resurgence and subsequent decline are not clearly understood. In other parts of
the world measles is still a common childhood disease and according to the World Health
Organization (WHO), about 1 million children die from measles each year. In the U.S., measles
is rarely fatal; should the virus spread to the brain, however, it can cause death or brain damage
(see Encephalitis).
No specific treatment for measles exists. Patients are kept isolated from other susceptible
individuals, usually resting in bed, and are treated with aspirin, cough syrup, and skin lotions to
lessen fever, coughing, and itching. The disease usually confers immunity after one attack, and
an immune pregnant woman passes the antibody in the globulin fraction of the blood serum,
through the placenta, to her fetus.
(vi)
PIG IRON
The basic materials used for the manufacture of pig iron are iron ore, coke, and limestone. The
coke is burned as a fuel to heat the furnace; as it burns, the coke gives off carbon monoxide,
which combines with the iron oxides in the ore, reducing them to metallic iron. This is the basic
chemical reaction in the blast furnace; it has the equation: Fe2O3 + 3CO = 3CO2 + 2Fe. The
limestone in the furnace charge is used as an additional source of carbon monoxide and as a
“flux” to combine with the infusible silica present in the ore to form fusible calcium silicate.
Without the limestone, iron silicate would be formed, with a resulting loss of metallic iron.
Calcium silicate plus other impurities form a slag that floats on top of the molten metal at the
bottom of the furnace. Ordinary pig iron as produced by blast furnaces contains iron, about 92
percent; carbon, 3 or 4 percent; silicon, 0.5 to 3 percent; manganese, 0.25 to 2.5 percent;
phosphorus, 0.04 to 2 percent; and a trace of sulfur.
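The reduction equation above also fixes the weights of material involved. As a worked example (illustrative arithmetic using standard molar masses, not figures from the source): producing 2 mol of iron (2 × 55.85 ≈ 111.7 g) consumes 1 mol of Fe2O3 (≈ 159.7 g) and 3 mol of CO (3 × 28 = 84 g). Since 159.7/111.7 ≈ 1.43 and 84/111.7 ≈ 0.75, each tonne of iron requires very roughly 1,430 kg of iron oxide and 750 kg of carbon monoxide, before allowing for impurities and incomplete reaction.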
A typical blast furnace consists of a cylindrical steel shell lined with a refractory, a heat-resistant nonmetallic material such as firebrick. The shell is tapered at the top and at the bottom and is
widest at a point about one-quarter of the distance from the bottom. The lower portion of the
furnace, called the bosh, is equipped with several tubular openings or tuyeres through which the
air blast is forced. Near the bottom of the bosh is a hole through which the molten pig iron flows
when the furnace is tapped, and above this hole, but below the tuyeres, is another hole for
draining the slag. The top of the furnace, which is about 27 m (about 90 ft) in height, contains
vents for the escaping gases, and a pair of round hoppers closed with bell-shaped valves through
which the charge is introduced into the furnace. The materials are brought up to the hoppers in
small dump cars or skips that are hauled up an inclined external skip hoist.
Blast furnaces operate continuously. The raw material to be fed into the furnace is divided into a
number of small charges that are introduced into the furnace at 10- to 15-min intervals. Slag is
drawn off from the top of the melt about once every 2 hr, and the iron itself is drawn off or
tapped about five times a day.
The air used to supply the blast in a blast furnace is preheated to temperatures between
approximately 540° and 870° C (approximately 1,000° and 1,600° F). The heating is performed
in stoves, cylinders containing networks of firebrick. The bricks in the stoves are heated for
several hours by burning blast-furnace gas, the waste gases from the top of the furnace. Then the
flame is turned off and the air for the blast is blown through the stove. The weight of air used in
the operation of a blast furnace exceeds the total weight of the other raw materials employed.
An important development in blast furnace technology, the pressurizing of furnaces, was
introduced after World War II. By “throttling” the flow of gas from the furnace vents, the
pressure within the furnace may be built up to 1.7 atm or more. The pressurizing technique
makes possible better combustion of the coke and higher output of pig iron. The output of many
blast furnaces can be increased 25 percent by pressurizing. Experimental installations have also
shown that the output of blast furnaces can be increased by enriching the air blast with oxygen.
The process of tapping consists of knocking out a clay plug from the iron hole near the bottom of
the bosh and allowing the molten metal to flow into a clay-lined runner and then into a large,
brick-lined metal container, which may be either a ladle or a rail car capable of holding as much
as 100 tons of metal. Any slag that may flow from the furnace with the metal is skimmed off
before it reaches the container. The container of molten pig iron is then transported to the
steelmaking shop.
Modern-day blast furnaces are operated in conjunction with basic oxygen furnaces and
sometimes the older open-hearth furnaces as part of a single steel-producing plant. In such plants
the molten pig iron is used to charge the steel furnaces. The molten metal from several blast
furnaces may be mixed in a large ladle before it is converted to steel, to minimize any
irregularities in the composition of the individual melts.
STAINLESS STEEL
Stainless steels contain chromium, nickel, and other alloying elements that keep them bright and
rust resistant in spite of moisture or the action of corrosive acids and gases. Some stainless steels
are very hard; some have unusual strength and will retain that strength for long periods at
extremely high and low temperatures. Because of their shining surfaces architects often use them
for decorative purposes. Stainless steels are used for the pipes and tanks of petroleum refineries
and chemical plants, for jet planes, and for space capsules. Surgical instruments and equipment
are made from these steels, and they are also used to patch or replace broken bones because the
steels can withstand the action of body fluids. In kitchens and in plants where food is prepared,
handling equipment is often made of stainless steel because it does not taint the food and can be
easily cleaned.
(VII)
Alloy, substance composed of two or more metals. Alloys, like pure metals, possess metallic
luster and conduct heat and electricity well, although not generally as well as do the pure metals
of which they are formed. Compounds that contain both a metal or metals and certain nonmetals,
particularly those containing carbon, are also called alloys. The most important of these is steel.
Simple carbon steels consist of about 0.5 percent manganese and up to 0.8 percent carbon, with
the remaining material being iron.
An alloy may consist of an intermetallic compound, a solid solution, an intimate mixture of
minute crystals of the constituent metallic elements, or any combination of solutions or mixtures
of the foregoing. Intermetallic compounds, such as NaAu2, CuSn, and CuAl2, do not follow the
ordinary rules of valency. They are generally hard and brittle; although in the past they were of little use where strength was required, many new developments have made such compounds increasingly important. Alloys consisting of solutions or mixtures of two metals
generally have lower melting points than do the pure constituents. A mixture with a melting
point lower than that of any other mixture of the same constituents is called a eutectic; ordinary 63/37 tin-lead solder, for example, melts at about 183°C, below the melting points of both tin (232°C) and lead (327°C). The eutectoid, the solid-phase analog of the eutectic, frequently has better physical characteristics than do alloys of different proportions.
The properties of alloys are frequently far different from those of their constituent elements, and
such properties as strength and corrosion resistance may be considerably greater for an alloy than
for any of the separate metals. For this reason, alloys are more generally used than pure metals.
Steel is stronger and harder than wrought iron, which is approximately pure iron, and is used in
far greater quantities. The alloy steels, mixtures of steel with such metals as chromium,
manganese, molybdenum, nickel, tungsten, and vanadium, are stronger and harder than steel
itself, and many of them are also more corrosion-resistant than iron or steel. An alloy can often
be made to match a predetermined set of characteristics. An important case in which particular
characteristics are necessary is the design of rockets, spacecraft, and supersonic aircraft. The
materials used in these vehicles and their engines must be light in weight, very strong, and able
to sustain very high temperatures. To withstand these high temperatures and reduce the overall
weight, lightweight, high-strength alloys of aluminum, beryllium, and titanium have been
developed. To resist the heat generated during reentry into the atmosphere of the earth, alloys
containing heat-resistant metals such as tantalum, niobium, tungsten, cobalt, and nickel are being
used in space vehicles.
A wide variety of special alloys containing metals such as beryllium, boron, niobium, hafnium,
and zirconium, which have particular nuclear absorption characteristics, are used in nuclear
reactors. Niobium-tin alloys are used as superconductors at extremely low temperatures. Special
copper, nickel, and titanium alloys, designed to resist the corrosive effects of boiling salt water,
are used in desalination plants.
Historically, most alloys have been prepared by mixing the molten materials. More recently,
powder metallurgy has become important in the preparation of alloys with special characteristics.
In this process, the alloys are prepared by mixing dry powders of the materials, squeezing them
together under high pressure, and then heating them to temperatures just below their melting
points. The result is a solid, homogeneous alloy. Mass-produced products may be prepared by
this technique at great savings in cost. Among the alloys made possible by powder metallurgy
are the cermets. These alloys of metal and carbon (carbides), boron (borides), oxygen (oxides),
silicon (silicides), and nitrogen (nitrides) combine the advantages of the high-temperature
strength, stability, and oxidation resistance of the ceramic compound with the ductility and shock resistance of the metal. Another
alloying technique is ion implantation, which has been adapted from the processes used to
produce computer chips; beams of ions of carbon, nitrogen, and other elements are fired into
selected metals in a vacuum chamber to produce a strong, thin layer of alloy on the metal
surface. Bombarding titanium with nitrogen, for example, can produce a superior alloy for
prosthetic implants.
Sterling silver, 14-karat gold, white gold, and platinum-iridium are precious metal alloys. Babbitt metal, brass, bronze, Dow-metal, German silver, gunmetal, Monel metal, pewter, and
solder are alloys of less precious metals. Commercial aluminum is, because of impurities,
actually an alloy. Alloys of mercury with other metals are called amalgams.
Amalgam
Mercury combines with all the common metals except iron and platinum to form alloys that are
called amalgams. In one method of extracting gold and silver from their ores, the metals are
combined with mercury to make them dissolve; the mercury is then removed by distillation. This
method is no longer commonly used, however.
(viii)
Isotope, one of two or more species of atom having the same atomic number, hence
constituting the same element, but differing in mass number. As atomic number is equivalent to
the number of protons in the nucleus, and mass number is the sum total of the protons plus the
neutrons in the nucleus, isotopes of the same element differ from one another only in the number
of neutrons in their nuclei. See Atom.
Isobars
i•so•bar
(plural i•so•bars)
noun
1. line showing weather patterns: a line drawn on a weather map that connects places with equal
atmospheric pressure. Isobars are often used collectively to indicate the movement or formation
of weather systems.
2. atom with same mass number: one of two or more atoms or elements that have the same mass
number but different atomic numbers
[Mid-19th century. < Greek isobaros "of equal weight"]
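Both definitions reduce to comparing the atomic number Z (protons) and the mass number A (protons plus neutrons). A minimal sketch; deuterium and tritium are the hydrogen isotopes from the heavy-water entry above, while argon-40 and calcium-40 are a standard isobar pair:

# Classify the relationship between two nuclides, each written as (Z, A):
# Z = atomic number (protons), A = mass number (protons + neutrons).

def relation(nuclide1, nuclide2):
    (z1, a1), (z2, a2) = nuclide1, nuclide2
    if z1 == z2 and a1 != a2:
        return "isotopes"   # same element, different neutron counts
    if a1 == a2 and z1 != z2:
        return "isobars"    # same mass number, different elements
    return "neither (or identical)"

print(relation((1, 2), (1, 3)))      # deuterium vs. tritium -> 'isotopes'
print(relation((18, 40), (20, 40)))  # argon-40 vs. calcium-40 -> 'isobars'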
(ix)
Vein (anatomy)
Vein (anatomy), in anatomy, blood vessel that conducts the deoxygenated blood from the
capillaries back to the heart. Three exceptions to this description exist: the pulmonary veins
return blood from the lungs, where it has been oxygenated, to the heart; the portal veins receive
blood from the pyloric, gastric, cystic, superior mesenteric, and splenic veins and, entering the
liver, break up into small branches that pass through all parts of that organ; and the umbilical vein conveys oxygenated blood from the placenta to the fetus. Veins enlarge as they proceed,
gathering blood from their tributaries. They finally pour the blood through the superior and
inferior venae cavae into the right atrium of the heart. Their coats are similar to those of the
arteries, but thinner, and often transparent. See Circulatory System; Heart; Varicose Vein.
Artery, one of the tubular vessels that conveys blood from the heart to the tissues of the body.
Two arteries have direct connection with the heart: (1) the aorta, which, with its branches,
conveys oxygenated blood from the left ventricle to every part of the body; and (2) the
pulmonary artery, which conveys blood from the right ventricle to the lungs, whence it is
returned bearing oxygen to the left side of the heart (see Heart: Structure and Function). Arteries
in their ultimate minute branchings are connected with the veins by capillaries. They are named
usually from the part of the body where they are found, as the brachial (arm) or the metacarpal
(wrist) artery; or from the organ which they supply, as the hepatic (liver) or the ovarian artery.
The facial artery is the branch of the external carotid artery that passes up over the lower jaw and
supplies the superficial portion of the face; the hemorrhoidal arteries are three vessels that supply
the lower end of the rectum; the intercostal arteries are the arteries that supply the space between
the ribs; the lingual artery is the branch of the external carotid artery that supplies the tongue.
The arteries expand and then constrict with each beat of the heart, a rhythmic movement that
may be felt as the pulse.
Disorders of the arteries may involve inflammation, infection, or degeneration of the walls of the
arterial blood vessels. The most common arterial disease, and the one which is most often a
contributory cause of death, particularly in old people, is arteriosclerosis, known popularly as
hardening of the arteries. The hardening usually is preceded by atherosclerosis, an accumulation
of fatty deposits on the inner lining of the arterial wall. The deposits reduce the normal flow of
the blood through the artery. One of the substances associated with atherosclerosis is cholesterol.
As arteriosclerosis progresses, calcium is deposited and scar tissue develops, causing the wall to
lose its elasticity. Localized dilatation of the arterial wall, called an aneurysm, may also develop.
Arteriosclerosis may affect any or all of the arteries of the body. If the blood vessels supplying
the heart muscle are affected, the disease may lead to a painful condition known as angina
pectoris. See Heart: Heart Diseases.
The presence of arteriosclerosis in the wall of an artery can precipitate formation of a clot, or
thrombus (see Thrombosis). Treatment consists of clot-dissolving enzymes called urokinase and
streptokinase, which were approved for medical use in 1979. Studies indicate that compounds
such as aspirin and sulfinpyrazone, which inhibit platelet reactivity, may act to prevent formation
of a thrombus, but whether they can or should be taken in tolerable quantities over a long period
of time for this purpose has not yet been determined.
Embolism is the name given to the obstruction of an artery by a clot carried to it from another
part of the body. Such floating clots may be caused by arteriosclerosis, but are most commonly a
consequence of the detachment of a mass of fibrin from a diseased heart. Any artery may be
obstructed by embolism; the consequences are most serious in the brain, the retina, and the
limbs.
Aorta, principal artery of the body that carries oxygenated blood to most other arteries in the
body. In humans the aorta rises from the left ventricle (lower chamber) of the heart, arches back
and downward through the thorax, passes through the diaphragm into the abdomen, and divides
into the right and left iliac arteries at about the level of the fourth lumbar vertebra. The aorta
gives rise to the coronary arteries, which supply the heart muscle with blood, and to the
innominate, subclavian, and carotid arteries, which supply the head and arms. The descending
part of the aorta gives rise, in the thorax, to the intercostal arteries that branch in the body wall.
In the abdomen it gives off the coeliac artery, which divides into the gastric, hepatic, and splenic
arteries, which supply the stomach, liver, and spleen, respectively; the mesenteric arteries to the
intestines; the renal arteries to the kidneys; and small branches to the body wall and to
reproductive organs. The aorta is subject to a condition known as atherosclerosis, in which fat
deposits attach to the aortic walls. If left untreated, this condition may lead to hypertension or to
an aneurysm (a swelling of the vessel wall), which can be fatal.
VALVES
In passing through the system, blood pumped by the heart follows a winding course through the
right chambers of the heart, into the lungs, where it picks up oxygen, and back into the left
chambers of the heart. From these it is pumped into the main artery, the aorta, which branches
into increasingly smaller arteries until it passes through the smallest, known as arterioles.
Beyond the arterioles, the blood passes through a vast number of tiny, thin-walled structures
called capillaries. Here, the blood gives up its oxygen and its nutrients to the tissues and absorbs
from them carbon dioxide and other waste products of metabolism. The blood completes its
circuit by passing through small veins that join to form increasingly larger vessels until it reaches
the largest veins, the inferior and superior venae cavae, which return it to the right side of the
heart. Blood is propelled mainly by contractions of the heart; contractions of skeletal muscle also
contribute to circulation. Valves in the heart and in the veins ensure its flow in one direction.
Q10:
Gland
Gland, any structure of animals, plants, or insects that produces chemical secretions or
excretions. Glands are classified by shape, such as tubular and saccular, or saclike, and by
structure, such as simple and compound. Types of the simple tubular and the simple saccular
glands are, respectively, the sweat and the sebaceous glands (see Skin). The kidney is a
compound tubular gland, and the tear-producing glands are compound saccular (see Eye). The
so-called lymph glands are erroneously named and are in reality nodes (see Lymphatic System).
“Swollen glands” are actually infected lymph nodes.
Glands are of two principal types: (1) those of internal secretion, called endocrine, and (2) those
of external secretion, called exocrine. Some glands such as the pancreas produce both internal
and external secretions. Because endocrine glands produce and release hormones (see Hormone)
directly into the bloodstream without passing through a canal, they are called ductless. For the
functions and diseases of endocrine glands, see Endocrine System.
In animals, insects, and plants, exocrine glands secrete chemical substances for a variety of
purposes. In plants, they produce water, protective sticky fluids, and nectars. The materials for
the eggs of birds, the shells of mussels, the cocoons of caterpillars and silkworms, the webs of
spiders, and the wax of honeycombs are other examples of exocrine secretions.
Endocrine System
I INTRODUCTION
Endocrine System, group of specialized organs and body tissues that produce, store, and secrete
chemical substances known as hormones. As the body's chemical messengers, hormones transfer
information and instructions from one set of cells to another. Because of the hormones they
produce, endocrine organs have a great deal of influence over the body. Among their many jobs
are regulating the body's growth and development, controlling the function of various tissues,
supporting pregnancy and other reproductive functions, and regulating metabolism.
Endocrine organs are sometimes called ductless glands because they have no ducts connecting
them to specific body parts. The hormones they secrete are released directly into the
bloodstream. In contrast, the exocrine glands, such as the sweat glands or the salivary glands,
release their secretions directly to target areas—for example, the skin or the inside of the mouth.
Some of the body's glands are described as endo-exocrine glands because they secrete hormones
as well as other types of substances. Even some nonglandular tissues produce hormone-like
substances—nerve cells produce chemical messengers called neurotransmitters, for example.
The earliest reference to the endocrine system comes from ancient Greece, in about 400 BC.
However, it was not until the 16th century that accurate anatomical descriptions of many of the
endocrine organs were published. Research during the 20th century has vastly improved our
understanding of hormones and how they function in the body. Today, endocrinology, the study
of the endocrine glands, is an important branch of modern medicine. Endocrinologists are
medical doctors who specialize in researching and treating disorders and diseases of the
endocrine system.
II COMPONENTS OF THE ENDOCRINE SYSTEM
The primary glands that make up the human endocrine system are the hypothalamus, pituitary,
thyroid, parathyroid, adrenal, pineal body, and reproductive glands—the ovary and testis. The
pancreas, an organ often associated with the digestive system, is also considered part of the
endocrine system. In addition, some nonendocrine organs are known to actively secrete
hormones. These include the brain, heart, lungs, kidneys, liver, thymus, skin, and placenta.
Almost all body cells can either produce or convert hormones, and some secrete hormones. For
example, glucagon, a hormone that raises glucose levels in the blood when the body needs extra
energy, is made in the pancreas but also in the wall of the gastrointestinal tract. However, it is the
endocrine glands that are specialized for hormone production. They efficiently manufacture
chemically complex hormones from simple chemical substances—for example, amino acids and
carbohydrates—and they regulate their secretion more efficiently than any other tissues.
The hypothalamus, found deep within the brain, directly controls the pituitary gland. It is
sometimes described as the coordinator of the endocrine system. When information reaching the
brain indicates that changes are needed somewhere in the body, nerve cells in the hypothalamus
secrete body chemicals that either stimulate or suppress hormone secretions from the pituitary
gland. Acting as liaison between the brain and the pituitary gland, the hypothalamus is the
primary link between the endocrine and nervous systems.
Located in a bony cavity just below the base of the brain is one of the endocrine system's most
important members: the pituitary gland. Often described as the body’s master gland, the pituitary
secretes several hormones that regulate the function of the other endocrine glands. Structurally,
the pituitary gland is divided into two parts, the anterior and posterior lobes, each having
separate functions. The anterior lobe regulates the activity of the thyroid and adrenal glands as
well as the reproductive glands. It also regulates the body's growth and stimulates milk
production in women who are breast-feeding. Hormones secreted by the anterior lobe include
adrenocorticotropic hormone (ACTH), thyroid-stimulating hormone (TSH), luteinizing hormone (LH),
follicle-stimulating hormone (FSH), growth hormone (GH), and prolactin. The anterior lobe also
secretes endorphins, chemicals that act on the nervous system to reduce sensitivity to pain.
The posterior lobe of the pituitary gland contains the nerve endings (axons) from the
hypothalamus, which stimulate or suppress hormone production. This lobe secretes antidiuretic
hormones (ADH), which control water balance in the body, and oxytocin, which controls muscle
contractions in the uterus.
The thyroid gland, located in the neck, secretes hormones in response to stimulation by TSH
from the pituitary gland. The thyroid secretes hormones—for example, thyroxine and triiodothyronine—that regulate growth and metabolism, and play a role in brain development
during childhood.
The parathyroid glands are four small glands located at the four corners of the thyroid gland. The
hormone they secrete, parathyroid hormone, regulates the level of calcium in the blood.
Located on top of the kidneys, the adrenal glands have two distinct parts. The outer part, called
the adrenal cortex, produces a variety of hormones called corticosteroids, which include cortisol.
These hormones regulate salt and water balance in the body, prepare the body for stress, regulate
metabolism, interact with the immune system, and influence sexual function. The inner part, the
adrenal medulla, produces catecholamines, such as epinephrine, also called adrenaline, which
increase the blood pressure and heart rate during times of stress.
The reproductive components of the endocrine system, called the gonads, secrete sex hormones
in response to stimulation from the pituitary gland. Located in the pelvis, the female gonads, the
ovaries, produce eggs. They also secrete a number of female sex hormones, including estrogen
and progesterone, which control development of the reproductive organs, stimulate the
appearance of female secondary sex characteristics, and regulate menstruation and pregnancy.
Located in the scrotum, the male gonads, the testes, produce sperm and also secrete a number of
male sex hormones, or androgens. The androgens, the most important of which is testosterone,
regulate development of the reproductive organs, stimulate male secondary sex characteristics,
and stimulate muscle growth.
The pancreas is positioned in the upper abdomen, just under the stomach. The major part of the
pancreas, called the exocrine pancreas, functions as an exocrine gland, secreting digestive
enzymes into the gastrointestinal tract. Distributed through the pancreas are clusters of endocrine
cells that secrete insulin, glucagon, and somatostatin. These hormones all participate in regulating
energy and metabolism in the body.
The pineal body, also called the pineal gland, is located in the middle of the brain. It secretes
melatonin, a hormone that may help regulate the wake-sleep cycle. Research has shown that
disturbances in the secretion of melatonin are responsible, in part, for the jet lag associated with
long-distance air travel.
III HOW THE ENDOCRINE SYSTEM WORKS
Hormones from the endocrine organs are secreted directly into the bloodstream, where special
proteins usually bind to them, helping to keep the hormones intact as they travel throughout the
body. The proteins also act as a reservoir, allowing only a small fraction of the hormone
circulating in the blood to affect the target tissue. Specialized proteins in the target tissue, called
receptors, bind with the hormones in the bloodstream, inducing chemical changes in response to
the body’s needs. Typically, only minute concentrations of a hormone are needed to achieve the
desired effect.
Too much or too little hormone can be harmful to the body, so hormone levels are regulated by a
feedback mechanism. Feedback works something like a household thermostat. When the heat in
a house falls, the thermostat responds by switching the furnace on, and when the temperature is
too warm, the thermostat switches the furnace off. Usually, the change that a hormone produces
also serves to regulate that hormone's secretion. For example, parathyroid hormone causes the
body to increase the level of calcium in the blood. As calcium levels rise, the secretion of
parathyroid hormone then decreases. This feedback mechanism allows for tight control over
hormone levels, which is essential for ideal body function. Other mechanisms may also influence
feedback relationships. For example, if an individual becomes ill, the adrenal glands increase the
secretions of certain hormones that help the body deal with the stress of illness. The adrenal
glands work in concert with the pituitary gland and the brain to increase the body’s tolerance of
these hormones in the blood, preventing the normal feedback mechanism from decreasing
secretion levels until the illness is gone.
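The thermostat analogy can be made concrete with a toy simulation in which secretion rises while the regulated quantity sits below its set point and decays once the target is reached. Every constant below is invented purely for illustration; this sketches the feedback idea, not actual parathyroid physiology.

# Toy negative-feedback loop in the spirit of the thermostat analogy:
# hormone secretion responding to a low blood-calcium level.
# All numbers are invented for illustration only.

SET_POINT = 9.5              # target blood calcium, mg/dL
calcium, hormone = 8.0, 0.0  # start below target, no hormone circulating

for step in range(60):
    error = SET_POINT - calcium
    # Secretion rises when calcium is low and decays as it recovers:
    hormone = max(0.0, 0.9 * hormone + 0.5 * error)
    # Hormone action raises calcium; the 0.05 term models constant losses:
    calcium += 0.1 * hormone - 0.05

print(round(calcium, 2))  # ends close to the 9.5 set point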
Long-term changes in hormone levels can influence the endocrine glands themselves. For
example, if hormone secretion is chronically low, the increased stimulation by the feedback
mechanism leads to growth of the gland. This can occur in the thyroid if a person's diet has
insufficient iodine, which is essential for thyroid hormone production. Constant stimulation from
the pituitary gland to produce the needed hormone causes the thyroid to grow, eventually
producing a medical condition known as goiter.
IV DISEASES OF THE ENDOCRINE SYSTEM
Endocrine disorders are classified in two ways: disturbances in the production of hormones, and
the inability of tissues to respond to hormones. The first type, called production disorders, is
divided into hypofunction (insufficient activity) and hyperfunction (excess activity).
Hypofunction disorders can have a variety of causes, including malformations in the gland itself.
Sometimes one of the enzymes essential for hormone production is missing, or the hormone
produced is abnormal. More commonly, hypofunction is caused by disease or injury.
Tuberculosis can appear in the adrenal glands, autoimmune diseases can affect the thyroid, and
treatments for cancer—such as radiation therapy and chemotherapy—can damage any of the
endocrine organs. Hypofunction can also result when target tissue is unable to respond to
hormones. In many cases, the cause of a hypofunction disorder is unknown.
Hyperfunction can be caused by glandular tumors that secrete hormone without responding to
feedback controls. In addition, some autoimmune conditions create antibodies that have the side
effect of stimulating hormone production. Infection of an endocrine gland can have the same
result.
Accurately diagnosing an endocrine disorder can be extremely challenging, even for an astute
physician. Many diseases of the endocrine system develop over time, and clear, identifying
symptoms may not appear for many months or even years. An endocrinologist evaluating a
patient for a possible endocrine disorder relies on the patient's history of signs and symptoms, a
physical examination, and the family history—that is, whether any endocrine disorders have
been diagnosed in other relatives. A variety of laboratory tests—for example, a
radioimmunoassay—are used to measure hormone levels. Tests that directly stimulate or
suppress hormone production are also sometimes used, and genetic testing for deoxyribonucleic
acid (DNA) mutations affecting endocrine function can be helpful in making a diagnosis. Tests
based on diagnostic radiology show anatomical pictures of the gland in question. A functional
image of the gland can be obtained with radioactive labeling techniques used in nuclear
medicine.
One of the most common diseases of the endocrine system is diabetes mellitus, which occurs in
two forms. The first, called diabetes mellitus Type 1, is caused by inadequate secretion of insulin
by the pancreas. Diabetes mellitus Type 2 is caused by the body's inability to respond to insulin.
Both types have similar symptoms, including excessive thirst, hunger, and urination as well as
weight loss. Laboratory tests that detect glucose in the urine and elevated levels of glucose in the
blood usually confirm the diagnosis. Treatment of diabetes mellitus Type 1 requires regular
injections of insulin; some patients with Type 2 can be treated with diet, exercise, or oral
medication. Diabetes can cause a variety of complications, including kidney problems, pain due
to nerve damage, blindness, and coronary heart disease. Recent studies have shown that
controlling blood sugar levels reduces the risk of developing diabetes complications
considerably.
Diabetes insipidus is caused by a deficiency of vasopressin, one of the antidiuretic hormones
(ADH) secreted by the posterior lobe of the pituitary gland. Patients often experience increased
thirst and urination. Treatment is with drugs, such as synthetic vasopressin, that help the body
maintain water and electrolyte balance.
Hypothyroidism is caused by an underactive thyroid gland, which results in a deficiency of
thyroid hormone. Hypothyroid disorders include myxedema and cretinism, the latter more properly known as congenital hypothyroidism. Myxedema develops in older adults, usually after age 40,
and causes lethargy, fatigue, and mental sluggishness. Congenital hypothyroidism, which is
present at birth, can cause more serious complications including mental retardation if left
untreated. Screening programs exist in most countries to test newborns for this disorder. By
providing the body with replacement thyroid hormones, almost all of the complications are
completely avoidable.
Addison's disease is caused by decreased function of the adrenal cortex. Weakness, fatigue,
abdominal pains, nausea, dehydration, fever, and hyperpigmentation (tanning without sun
exposure) are among the many possible symptoms. Treatment involves providing the body with
replacement corticosteroid hormones as well as dietary salt.
Cushing's syndrome is caused by excessive secretion of glucocorticoids, the subgroup of
corticosteroid hormones that includes hydrocortisone, by the adrenal glands. Symptoms may
develop over many years prior to diagnosis and may include obesity, physical weakness, easily
bruised skin, acne, hypertension, and psychological changes. Treatment may include surgery,
radiation therapy, chemotherapy, or blockage of hormone production with drugs.
Thyrotoxicosis is due to excess production of thyroid hormones. Its most common cause
is Graves' disease, an autoimmune disorder in which specific antibodies are produced,
stimulating the thyroid gland. Thyrotoxicosis is eight to ten times more common in women than
in men. Symptoms include nervousness, sensitivity to heat, heart palpitations, and weight loss.
Many patients experience protruding eyes and tremors. Drugs that inhibit thyroid activity,
surgery to remove the thyroid gland, and radioactive iodine that destroys the gland are common
treatments.
Acromegaly and gigantism both are caused by a pituitary tumor that stimulates production of
excessive growth hormone, causing abnormal growth in particular parts of the body. Acromegaly
is rare and usually develops over many years in adult subjects. Gigantism occurs when the excess
of growth hormone begins in childhood.
Human hormones significantly affect the activity of every cell in the body. They influence
mental acuity, physical agility, and body build and stature. Growth hormone is a hormone
produced by the pituitary gland. It regulates growth by stimulating the formation of bone and the
uptake of amino acids, molecules vital to building muscle and other tissue.
Sex hormones regulate the development of sexual organs, sexual behavior, reproduction, and
pregnancy. For example, gonadotropins, also secreted by the pituitary gland, are sex hormones
that stimulate egg and sperm production. The gonadotropin that stimulates production of sperm
in men and formation of ovary follicles in women is called follicle-stimulating hormone. When
a follicle-stimulating hormone binds to an ovary cell, it stimulates the enzymes needed for the
synthesis of estradiol, a female sex hormone. Another gonadotropin called luteinizing hormone
regulates the production of eggs in women and the production of the male sex hormone
testosterone. Produced in the male gonads, or testes, testosterone regulates changes to the male
body during puberty, influences sexual behavior, and plays a role in growth. The female sex
hormones, called estrogens, regulate female sexual development and behavior as well as some
aspects of pregnancy. Progesterone, a female hormone secreted in the ovaries, regulates
menstruation and stimulates lactation in humans and other mammals.
Other hormones regulate metabolism. For example, thyroxine, a hormone secreted by the thyroid
gland, regulates rates of body metabolism. Glucagon and insulin, secreted in the pancreas,
control levels of glucose in the blood and the availability of energy for the muscles. A number of
hormones, including insulin, glucagon, cortisol, growth hormone, epinephrine, and
norepinephrine, maintain glucose levels in the blood. While insulin lowers the blood glucose, all
the other hormones raise it. In addition, several other hormones participate indirectly in the
regulation. A protein called somatostatin blocks the release of insulin, glucagon, and growth
hormone, while another hormone, gastric inhibitory polypeptide, enhances insulin release in
response to glucose absorption. This complex system permits blood glucose concentration to
remain within a very narrow range, despite external conditions that may vary to extremes.
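The division of labor just described (insulin lowers blood glucose; the counter-regulatory hormones raise it) can be tabulated directly from the text. A plain lookup for orientation, not a clinical model:

# Direction in which each hormone pushes blood glucose, per the text above.
GLUCOSE_EFFECT = {
    "insulin": "lowers",
    "glucagon": "raises",
    "cortisol": "raises",
    "growth hormone": "raises",
    "epinephrine": "raises",
    "norepinephrine": "raises",
}

raising = [name for name, effect in GLUCOSE_EFFECT.items() if effect == "raises"]
print(len(raising), "of", len(GLUCOSE_EFFECT), "hormones listed raise blood glucose")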
Hormones also regulate blood pressure and other involuntary body functions. Epinephrine, also
called adrenaline, is a hormone secreted in the adrenal gland. During periods of stress,
epinephrine prepares the body for physical exertion by increasing the heart rate, raising the blood
pressure, and releasing sugar stored in the liver for quick energy.
[Image: Insulin Secretion. A light micrograph of a section of the human pancreas showing an islet of Langerhans, a cluster of modified glandular cells that secrete insulin; the blood vessels in the islet carry the insulin to the rest of the body. Insulin deficiency causes diabetes mellitus.]
Hormones are sometimes used to treat medical problems, particularly diseases of the endocrine
system. In people with diabetes mellitus type 1, for example, the pancreas secretes little or no
insulin. Regular injections of insulin help maintain normal blood glucose levels. Sometimes, an
illness or injury not directly related to the endocrine system can be helped by a dose of a
particular hormone. Steroid hormones are often used as anti-inflammatory agents to treat the
symptoms of various diseases, including cancer, asthma, or rheumatoid arthritis. Oral
contraceptives, or birth control pills, use small, regular doses of female sex hormones to prevent
pregnancy.
Initially, hormones used in medicine were collected from extracts of glands taken from humans
or animals. For example, pituitary growth hormone was collected from the pituitary glands of
dead human bodies, or cadavers, and insulin was extracted from cattle and hogs. As technology
advanced, insulin molecules collected from animals were altered to produce the human form of
insulin.
With improvements in biochemical technology, many hormones are now made in laboratories
from basic chemical compounds. This eliminates the risk of transferring contaminating agents
sometimes found in the human and animal sources. Advances in genetic engineering even enable
scientists to introduce a gene of a specific protein hormone into a living cell, such as a bacterium,
which causes the cell to secrete excess amounts of a desired hormone. This technique, known as
recombinant DNA technology, has vastly improved the availability of hormones.
Recombinant DNA has been especially useful in producing growth hormone, once only available
in limited supply from the pituitary glands of human cadavers. Treatments using the hormone
were far from ideal because the cadaver hormone was often in short supply. Moreover, some of
the pituitary glands used to make growth hormone were contaminated with particles called
prions, which could cause diseases such as Creutzfeldt-Jakob disease, a fatal brain disorder. The
advent of recombinant technology made growth hormone widely available for safe and effective
therapy.
Q11:
Flower
I INTRODUCTION
Flower, reproductive organ of most seed-bearing plants. Flowers carry out the multiple roles of
sexual reproduction, seed development, and fruit production. Many plants produce highly visible
flowers that have a distinctive size, color, or fragrance. Almost everyone is familiar with
beautiful flowers such as the blossoms of roses, orchids, and tulips. But many plants—including
oaks, beeches, maples, and grasses—have small, green or gray flowers that typically go
unnoticed.
Whether eye-catching or inconspicuous, all flowers produce the male or female sex cells
required for sexual reproduction. Flowers are also the site of fertilization, which is the union of a
male and female sex cell to produce a fertilized egg. The fertilized egg then develops into an
embryonic (immature) plant, which forms part of the developing seed. Neighboring structures of
the flower enclose the seed and mature into a fruit.
Botanists estimate that there are more than 240,000 species of flowering plants. However,
flowering plants are not the only seed-producing plants. Pines, firs, and cycads are among the
few hundred plants that bear their seeds on the surface of cones, rather than within a fruit.
Botanists call the cone-bearing plants gymnosperms, which means naked seeds; they refer to
flowering plants as angiosperms, which means enclosed seeds.
Flowering plants are more widespread than any other group of plants. They bloom on every
continent, from the bogs and marshes of the Arctic tundra to the barren soils of Antarctica.
Deserts, grasslands, rainforests, and other biomes display distinctive flower species. Even
streams, rivers, lakes, and swamps are home to many flowering plants.
In their diverse environments, flowers have evolved to become irreplaceable participants in the
complex, interdependent communities of organisms that make up ecosystems. The seeds or fruits
that flowers produce are food sources for many animals, large and small. In addition, many
insects, bats, hummingbirds, and small mammals feed on nectar, a sweet liquid produced by
many flowers, or on flower products known as pollen grains. The animals that eat flowers, seeds,
and fruits are prey for other animals—lizards, frogs, salamanders, and fish, for example—which
in turn are devoured by yet other animals, such as owls and snakes. Thus, flowers provide a
bountiful feast that sustains an intricate web of predators and prey (see Food Web).
Flowers play diverse roles in the lives of humans. Wildflowers of every hue brighten the
landscape, and the attractive shapes and colors of cultivated flowers beautify homes, parks, and
roadsides. The fleshy fruits that flowers produce, such as apples, grapes, strawberries, and
oranges, are eaten worldwide, as are such hard-shelled fruits as pecans and other nuts. Flowers
also produce wheat, rice, oats, and corn—the grains that are dietary mainstays throughout the
world. People even eat unopened flowers, such as those of broccoli and cauliflower, which are
popular vegetables. Natural dyes come from flowers, and fragrant flowers, such as jasmine and
damask rose, are harvested for their oils and made into perfumes. Certain flowers, such as red
clover blossoms, are collected for their medicinal properties, and edible flowers, such as
nasturtiums, add color and flavor to a variety of dishes. Flowers also are used to symbolize
emotions, as is evidenced by their use from ancient times in significant rituals, such as weddings
and funerals.
II PARTS OF A FLOWER
Flowers typically are composed of four parts, or whorls, arranged in concentric rings attached to
the tip of the stem. From innermost to outermost, these whorls are the (1) pistil, (2) stamens, (3)
petals, and (4) sepals.
A Pistil
The innermost whorl, located in the center of the flower, is the female reproductive structure, or
pistil. Often vase-shaped, the pistil consists of three parts: the stigma, the style, and the ovary.
The stigma, a slightly flared and sticky structure at the top of the pistil, functions by trapping
pollen grains, the structures that give rise to the sperm cells necessary for fertilization. The style
is a narrow stalk that supports the stigma. The style rises from the ovary, a slightly swollen
structure seated at the base of the flower. Depending on the species, the ovary contains one or
more ovules, each of which holds one egg cell. After fertilization, the ovules develop into seeds,
while the ovary enlarges into the fruit. If a flower has only one ovule, the fruit will contain one
seed, as in a peach. The fruit of a flower with many ovules, such as a tomato, will have many
seeds. An ovary that contains one or more ovules also is called a carpel, and a pistil may be
composed of one to several carpels.
B Stamens
The next whorl consists of the male reproductive structures, several to many stamens arranged
around the pistil. A stamen consists of a slender stalk called the filament, which supports the
anther, a tiny compartment where pollen forms. When a flower is still an immature, unopened
bud, the filaments are short and serve to transport nutrients to the developing pollen. As the
flower opens, the filaments lengthen and hold the anthers higher in the flower, where the pollen
grains are more likely to be picked up by visiting animals, wind, or in the case of some aquatic
plants, by water. The animals, wind, or water might then carry the pollen to the stigma of an
appropriate flower. The placement of pollen on the stigma is called pollination. Pollination
initiates the process of fertilization.
C Petals
Petals, the next whorl, surround the stamens and collectively are termed the corolla. Many petals
have bright colors, which attract animals that carry out pollination, collectively termed
pollinators. Three groups of pigments—alone or in combination—produce a veritable rainbow of
petal colors: anthocyanins yield shades of violet, blue, and red; betalains create reds; and
carotenoids produce yellows and orange. Petal color can be modified in several ways. Texture,
for example, can play a role in the overall effect—a smooth petal is shiny, while a rough one
appears velvety. If cells inside the petal are filled with starch, they create a white layer that
makes pigments appear brighter. Petals with flat air spaces between cells shimmer iridescently.
In some flowers, the pigments form distinct patterns, invisible to humans but visible to bees, who
can see ultraviolet light. Like the landing strips of an airport, these patterns, called nectar guides,
direct bees to the nectar within the flower. Nectar is made in specialized glands located at or near
the petal’s base. Some flowers secrete copious amounts of nectar and attract big pollinators with
large appetites, such as bats. Other flowers, particularly those that depend on wind or water to
transport their pollen, may secrete little or no nectar. The petals of many species also are the
source of the fragrances that attract pollinators. In these species, the petals house tiny glands that
produce essential, or volatile, oils that vaporize easily, often releasing a distinctive aroma. One
flower can make dozens of different essential oils, which mingle to yield the flower’s unique
fragrance.
D Sepals
The sepals, the outermost whorl, together are called the calyx. In the flower bud, the sepals
tightly enclose and protect the petals, stamens, and pistil from rain or insects. The sepals unfurl
as the flower opens and often resemble small green leaves at the flower’s base. In some flowers,
the sepals are colorful and work with the petals to attract pollinators.
E Variations in Structure
Like virtually all forms in nature, flowers display many variations in their structure. Most
flowers have all four whorls—pistil, stamens, petals, and sepals. Botanists call these complete
flowers. But some flowers are incomplete, meaning they lack one or more whorls. Incomplete
flowers are most common in plants whose pollen is dispersed by the wind or water. Since these
flowers do not need to attract pollinators, most have no petals, and some even lack sepals.
Certain wind-pollinated flowers do have small sepals and petals that create eddies in the wind,
directing pollen to swirl around and settle on the flower. In still other flowers, the petals and
sepals are fused into a single structure called a floral tube.
Flowers that lack either stamens or a pistil are said to be imperfect. The petal-like rays on the
edge of a sunflower, for example, are actually tiny, imperfect flowers that lack stamens.
Imperfect flowers can still function in sexual reproduction. A flower that lacks a pistil but has
stamens produces pollen, and a flower with a pistil but no stamens provides ovules and can
develop into fruits and seeds. Flowers that have only stamens are termed staminate, and flowers
that have only a pistil are called pistillate.
Although a single flower can be either staminate or pistillate, a plant species must have both to
reproduce sexually. In some species with imperfect flowers, the staminate and pistillate flowers
occur on the same plant. Such plants, known as monoecious species, include corn. The tassel at
the top of the corn plant consists of hundreds of tiny staminate flowers, and the ears, which are
located laterally on the stem, contain clusters of pistillate flowers. The silks of corn are very long
styles leading to the ovaries, which, when ripe, form the kernels of corn. In dioecious species—
such as date, willow, and hemp—staminate and pistillate flowers are found on different plants. A
date tree, for example, will develop male or female flowers but not both. In dioecious species, at
least two plants, one bearing staminate flowers and one bearing pistillate flowers, are needed for
pollination and fertilization.
Other variations are found in the types of stems that support flowers. In some species, flowers
are attached to only one main stem, called the peduncle. In others, flowers are attached to smaller
stems, called pedicels, that branch from the peduncle. The peduncle and pedicels orient a flower
so that its pollinator can reach it. In the morning glory, for example, pedicels hold the flowers in
a horizontal position. This enables their hummingbird pollinators to feed since they do not crawl
into the flower as other pollinators do, but hover near the flower and lick the nectar with their
long tongues. Scientists assign specific terms to the different flower and stem arrangements to
assist in the precise identification of a flower. A plant with just one flower at the tip of the
peduncle—a tulip, for example—is termed solitary. In a spike, such as sage, flowers are attached
to the sides of the peduncle.
Sometimes flowers are grouped together in a cluster called an inflorescence. In an indeterminate
inflorescence, the lower flowers bloom first, and blooming proceeds over a period of days from
the bottom to the top of the peduncle or pedicels. As long as light, water, temperature, and
nutrients are favorable, the tip of the peduncle or pedicel continues to add new buds. There are
several types of indeterminate inflorescences. These include the raceme, formed by a series of
pedicels that emerge from the peduncle, as in snapdragons and lupines; and the panicle, in which
the series of pedicels branches and rebranches, as in lilac.
In determinate inflorescences, called cymes, the peduncle is capped by a flower bud, which
prevents the stem from elongating and adding more flowers. However, new flower buds appear
on side pedicels that form below the central flower, and the flowers bloom from the top to the
bottom of the pedicels. Flowers that bloom in cymes include chickweed and phlox.
III SEXUAL REPRODUCTION
Sexual reproduction mixes the hereditary material from two parents, creating a population of
genetically diverse offspring. Such a population can better withstand environmental changes.
Unlike animals, flowers cannot move from place to place, yet sexual reproduction requires the
union of the egg from one parent with the sperm from another parent. Flowers overcome their
lack of mobility through the all-important process of pollination. Pollination occurs in several
ways. In most flowers pollinated by insects and other animals, the pollen escapes through pores
in the anthers. As pollinators forage for food, the pollen sticks to their body and then rubs off on
the flower's stigma, or on the stigma of the next flower they visit. In plants that rely on wind for
pollination, the anthers burst open, releasing a cloud of yellow, powdery pollen that drifts to
other flowers. In a few aquatic plants, pollen is released into the water, where it floats to other
flowers.
Pollen consists of thousands of microscopic pollen grains. A tough pollen wall surrounds each
grain. In most flowers, the pollen grains released from the anthers contain two cells. If a pollen
grain lands on the stigma of the same species, the pollen grain germinates—one cell within the
grain emerges through the pollen wall and contacts the surface of the stigma, where it begins to
elongate. The lengthening cell grows through the stigma and style, forming a pollen tube that
transports the other cell within the pollen down the style to the ovary. As the tube grows, the cell
within it divides to produce two sperm cells, the male sex cells. In some species, the sperm are
produced before the pollen is released from the anther.
Independently of the pollen germination and pollen tube growth, developmental changes occur
within the ovary. The ovule produces several specialized structures—among them, the egg, or
female sex cell. The pollen tube grows into the ovary, crosses the ovule wall, and releases the
two sperm cells into the ovule. One sperm unites with the egg, triggering hormonal changes that
transform the ovule into a seed. The outer wall of the ovule develops into the seed coat, while the
fertilized egg grows into an embryonic plant. The growing embryonic plant relies on a starchy,
nutrient-rich food in the seed called endosperm. Endosperm develops from the union of the
second sperm with the two polar nuclei, also known as the central cell nuclei, structures also
produced by the ovule. As the seed grows, hormones are released that stimulate the walls of the
ovary to expand, and it develops into the fruit. The mature fruit often is hundreds or even
thousands of times larger than the tiny ovary from which it grew, and the seeds also are quite
large compared to the minuscule ovules from which they originated. The fruits, which are unique
to flowering plants, play an extremely important role in dispersing seeds. Animals eat fruits, such
as berries and grains. The seeds pass through the digestive tract of the animal unharmed and are
deposited in a wide variety of locations, where they germinate to produce the next generation of
flowering plants, thus continuing the species. Other fruits are dispersed far and wide by wind or
water; the fruit of maple trees, for example, has a winglike structure that catches the wind.
IV FLOWERING AND THE LIFE CYCLE
The life cycle of a flowering plant begins when the seed germinates. It progresses through the
growth of roots, stems, and leaves; formation of flower buds; pollination and fertilization; and
seed and fruit development. The life cycle ends with senescence, or old age, and death.
Depending on the species, the life cycle of a plant may last one, two, or many years. Plants called
annuals carry out their life cycle within one year. Biennial plants live for two years: The first
year they produce leaves, and in the second year they produce flowers and fruits and then die.
Perennial plants live for more than one year. Some perennials bloom every year, while others,
like agave, live for years without flowering and then in a few weeks produce thousands of
flowers, fruits, and seeds before dying.
Whatever the life cycle, most plants flower in response to certain cues. A number of factors
influence the timing of flowering. The age of the plant is critical—most plants must be at least
one or two weeks old before they bloom; presumably they need this time to accumulate the
energy reserves required for flowering. The number of hours of darkness is another factor that
influences flowering. Many species bloom only when the night is just the right length—a
phenomenon called photoperiodism. Poinsettias, for example, flower in winter when the nights
are long, while spinach blooms when the nights are short—late spring through late summer.
Temperature, light intensity, and moisture also affect the time of flowering. In the desert, for
example, heavy rains that follow a long dry period often trigger flowers to bloom.
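Photoperiodism boils down to comparing the measured night length with a species-specific critical value. A minimal sketch follows; the 12-hour threshold is an assumption for illustration, since the real critical night length varies by species.

# Illustrative sketch of photoperiodism; the 12.0 h critical value is assumed.

def will_flower(night_hours: float, critical_hours: float, long_night_plant: bool) -> bool:
    """Long-night (short-day) plants flower when nights are at least the
    critical length; short-night (long-day) plants when nights are shorter."""
    if long_night_plant:
        return night_hours >= critical_hours
    return night_hours < critical_hours

for night in (9.0, 12.5, 14.0):
    print(f"night={night:4.1f} h  "
          f"poinsettia-like: {will_flower(night, 12.0, True)}  "
          f"spinach-like: {will_flower(night, 12.0, False)}")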
V EVOLUTION OF FLOWERS
Flowering plants are thought to have evolved around 135 million years ago from cone-bearing
gymnosperms. Scientists had long proposed that the first flower most likely resembled today’s
magnolias or water lilies, two types of flowers that lack some of the specialized structures found
in most modern flowers. But in the late 1990s scientists compared the genetic material
deoxyribonucleic acid (DNA) of different plants to determine their evolutionary relationships.
From these studies, scientists identified a small, cream-colored flower from the genus Amborella
as the only living relative to the first flowering plant. This rare plant is found only on the South
Pacific island of New Caledonia.
The evolution of flowers dramatically changed the face of the earth. On a planet where algae, ferns,
and cycads tinged the earth with a monochromatic green hue, flowers emerged to paint the earth
with vivid shades of red, pink, orange, yellow, blue, violet, and white. Flowering plants spread
rapidly, in part because their fruits so effectively disperse seeds. Today, flowering plants occupy
virtually all areas of the planet, with about 240,000 species known.
Many flowers and pollinators coevolved—that is, they influenced each other’s traits during the
process of evolution. For example, any population of flowers displays a range of color,
fragrance, size, and shape—hereditary traits that can be passed from one generation to the next.
Certain traits or combinations of traits appeal more to pollinators, so pollinators are more likely
to visit these attractive plants. The appealing plants have a greater chance of being pollinated
than others and, thus, are likely to produce more seeds. The seeds develop into plants that display
the inherited appealing traits. Similarly, in a population of pollinators, there are variations in
hereditary traits, such as wing size and shape, length and shape of tongue, ability to detect
fragrance, and so on. For example, pollinators whose bodies are small enough to reach inside
certain flowers gather pollen and nectar more efficiently than larger-sized members of their
species. These efficient, well-fed pollinators have more energy for reproduction. Their offspring
inherit the traits that enable them to forage successfully in flowers, and from generation to
generation, these traits are preserved. The pollinator preference seen today for certain flower
colors, fragrances, and shapes often represents hundreds of thousands of years of coevolution.
Coevolution often results in exquisite adaptations between flower and pollinator. These
adaptations can minimize competition for nectar and pollen among pollinators and also can
minimize competition among flowers for pollinators. Comet orchids, for example, have narrow
flowers almost half a meter (a foot and a half) long. These flowers are pollinated only by a species of hawk
moth that has a narrow tongue just the length of the flowers. The flower shape prevents other
pollinators from consuming the nectar, guarantees the moths a meal, and ensures the likelihood
of pollination and fertilization.
Most flowers and pollinators, however, are not as precisely matched to each other, but adaptation
still plays a significant role in their interactions. For example, hummingbirds are particularly
attracted to the color red. Hummingbird-pollinated flowers typically are red, and they often are
narrow, an adaptation that suits the long tongues of hummingbirds. Bats are large pollinators that
require relatively more energy than other pollinators. They visit big flowers like those of saguaro
cactus, which supply plenty of nectar or pollen. Bats avoid little flowers that do not offer enough
reward.
Other examples of coevolution are seen in the bromeliads and orchids that grow in dark forests.
These plants often have bright red, purple, or white sepals or petals, which make them visible to
pollinators. Night-flying pollinators, such as moths and bats, detect white flowers most easily,
and flowers that bloom at sunset, such as yucca, datura, and cereus, usually are white.
The often delightful and varied fragrances of flowers also reveal the hand of coevolution. In
some cases, insects detect fragrance before color. They follow faint aromas to flowers that are
too far away to be seen, recognizing petal shape and color only when they are very close to the
flower. Some night-blooming flowers emit sweet fragrances that attract night-flying moths. At
the other extreme, carrion flowers, flowers pollinated by flies, give off the odor of rotting meat to
attract their pollinators.
Flowers and their pollinators also coevolved to influence each other’s life cycles. Among species
that flower in response to a dark period, some measure the critical night length so accurately that
all species of the region flower in the same week or two. This enables related plants to
interbreed, and provides pollinators with enough pollen and nectar to live on so that they too can
reproduce. The process of coevolution also has resulted in synchronization of floral and insect
life cycles. Sometimes flowering occurs the week that insect pollinators hatch or emerge from
dormancy, or bird pollinators return from winter migration, so that they feed on and pollinate the
flowers. Flowering also is timed so that fruits and seeds are produced when animals are present
to feed on the fruits and disperse the seeds.
VI FLOWERS AND EXTINCTION
Like the amphibians, reptiles, insects, birds, and mammals that are experiencing alarming
extinction rates, a number of wildflower species also are endangered. The greatest threat lies in
the furious pace at which land is cleared for new houses, industries, and shopping malls to
accommodate rapid population growth. Such clearings are making the meadow, forest, and
wetland homes of wildflowers ever more scarce. Among the flowers so endangered is the rosy
periwinkle of Madagascar, a plant whose compounds have greatly reduced the death rates from
childhood leukemia and Hodgkin’s disease. Flowering plants, many with other medicinal
properties, also are threatened by global warming from increased combustion of fossil fuels;
increased ultraviolet light from ozone layer breakdown; and acid rain from industrial emissions.
Flowering plants native to a certain region also may be threatened by introduced species. Yellow
toadflax, for example, a garden plant brought to the United States and Canada from Europe, has
become a notorious weed, spreading to many habitats and preventing the growth of native
species. In some cases, unusual wildflowers such as orchids are placed at risk when they are
collected extensively to be sold.
Many of the threats that endanger flowering plants also place their pollinators at risk. When a
species of flower or pollinator is threatened, the coevolution of pollinators and flowers may
prove to be disadvantageous. If a flower species dies out, its pollinators will lack food and may
also die out, and the predators that depend on the pollinators also become threatened. In cases
where pollinators are adapted to only one or a few types of flowers, the loss of those plants can
disrupt an entire ecosystem. Likewise, if pollinators are damaged by ecological changes, plants
that depend on them will not be pollinated, seeds will not be formed, and new generations of
plants cannot grow. The fruits that these flowers produce may become scarce, affecting the food
supply of humans and other animals that depend on them.
Worldwide, more than 300 species of flowering plants are endangered, or at immediate risk of
extinction. Another two dozen or so are considered threatened, or likely to become extinct in the
near future. Of these species, fewer than 50 were the focus of preservation plans in the late
1990s. Various regional, national, and international organizations have marshaled their resources
in response to the critical need for protecting flowering plants and their habitats. In the United
States, native plant societies work to conserve regional plants in every state. The United States
Fish and Wildlife Endangered Species Program protects habitats for threatened and endangered
species throughout the United States, as do the Canadian Wildlife Service in Canada, the
Ministry for Social Development in Mexico, and similar agencies in other countries. At the
international level, the International Plant Conservation Programme at Cambridge, England,
collects information and provides education worldwide on plant species at risk, and the United
Nations Environmental Programme supports a variety of efforts that address the worldwide crisis
of endangered species.
Pollination
I INTRODUCTION
Pollination, transfer of pollen grains from the male structure of a plant to the female structure of
a plant. The pollen grains contain cells that will develop into male sex cells, or sperm. The
female structure of a plant contains the female sex cells, or eggs. Pollination prepares the plant
for fertilization, the union of the male and female sex cells. Virtually all grains, fruits,
vegetables, wildflowers, and trees must be pollinated and fertilized to produce seed or fruit, and
pollination is vital for the production of critically important agricultural crops, including corn,
wheat, rice, apples, oranges, tomatoes, and squash.
Pollen grains are microscopic in size, ranging in diameter from less than 0.01 mm (about
0.0004 in) to a little over 0.5 mm (about 0.02 in). Millions of pollen grains waft along in
the clouds of pollen seen in the spring, often causing the sneezing and watery eyes associated
with pollen allergies. The outer covering of pollen grains, called the pollen wall, may be
intricately sculpted with designs that in some instances can be used to distinguish between plant
species. A chemical in the wall called sporopollenin makes the wall resistant to decay.
Although the single cell inside the wall is viable, or living, for only a few weeks, the distinctive
patterns of the pollen wall can remain intact for thousands or millions of years, enabling
scientists to identify the plant species that produced the pollen. Scientists track long-term climate
changes by studying layers of pollen deposited in lake beds. In a dry climate, for example, desert
species such as tanglehead grass and vine mesquite grass thrive, and their pollen drifts over
lakes, settling in a layer at the bottom. If a climate change brings increased moisture, desert
species are gradually replaced by forest species such as pines and spruce, whose pollen forms a
layer on top of the grass pollen. Scientists take samples of mud from the lake bottom and analyze
the pollen in the mud to identify plant species. Comparing the identified species with their
known climate requirements, scientists can trace climate shifts over the millennia.
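That inference can be pictured as a tally per sediment layer: whichever group of indicator species dominates a layer hints at the climate when the layer was deposited. A minimal sketch; the counts and the two-way species grouping are invented for illustration.

# Minimal sketch of reading a pollen core; counts and groupings are invented.
# Layers are ordered bottom (oldest) to top (youngest).

DESERT_TAXA = {"tanglehead grass", "vine mesquite grass"}
FOREST_TAXA = {"pine", "spruce"}

core = [
    {"tanglehead grass": 120, "vine mesquite grass": 80, "pine": 5},
    {"tanglehead grass": 40, "pine": 90, "spruce": 60},
    {"pine": 150, "spruce": 110, "vine mesquite grass": 10},
]

for depth, layer in enumerate(core):
    desert = sum(n for taxon, n in layer.items() if taxon in DESERT_TAXA)
    forest = sum(n for taxon, n in layer.items() if taxon in FOREST_TAXA)
    climate = "dry" if desert > forest else "moist"
    print(f"layer {depth} (0 = oldest): inferred {climate} climate")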
II HOW POLLINATION WORKS
Most plants have specialized reproductive structures—cones or flowers—where the gametes, or
sex cells, are produced. Cones are the reproductive structures of spruce, pine, fir, cycads, and
certain other gymnosperms and are of two types: male and female. On conifers such as fir,
spruce, and pine trees, the male cones are produced in the spring. The cones form in clusters of
10 to 50 on the tips of the lower branches. Each cone typically measures 1 to 4 cm (0.4 to 1.5 in)
and consists of numerous soft, green, spirally attached scales shaped like a bud. Thousands of
pollen grains are produced on the lower surface of each scale, and are released to the wind when
they mature in late spring. The male cones dry out and shrivel up after their pollen is shed. The
female cones typically develop on the upper branches of the same tree that produces the male
cones. They form as individual cones or in groups of two or three. A female cone is two to five
times longer than the male cone, and starts out with green, spirally attached scales. The scales
open the first spring to take in the drifting pollen. After pollination, the scales close for one to
two years to protect the developing seed. During this time the scales gradually become brown
and stiff, the cones typically associated with conifers. When the seeds are mature, the scales of
certain species separate and the mature seeds are dispersed by the wind. In other species, small
animals such as gray jays, chipmunks, or squirrels break the scales apart before swallowing some
of the enclosed seeds. They cache, or hide, other seeds in a variety of locations, which results in
effective seed dispersal, and eventually germination, since the animals do not always return for
the stored seeds.
Pollination occurs in cone-bearing plants when the wind blows pollen from the male to the
female cone. Some pollen grains are trapped by the pollen drop, a sticky substance produced by
the ovule, the egg-containing structure that becomes the seed. As the pollen drop dries, it draws a
pollen grain through a tiny hole into the ovule, and the events leading to fertilization begin. The
pollen grain germinates and produces a short tube, a pollen tube, which grows through the tissues
of the ovule and contacts the egg. A sperm cell moves through the tube to the egg where it unites
with it in fertilization. The fertilized egg develops into an embryonic plant, and at the same time,
tissues in the ovule undergo complex changes. The inner tissues become food for the embryo,
and the outer wall of the ovule hardens into a seedcoat. The ovule thus becomes a seed—a tough
structure containing an embryonic plant and its food supply. The seed remains tucked in the
closed cone scale until it matures and the cone scales open. Each scale of a cone bears two seeds
on its upper surface.
In plants with flowers, such as roses, maple trees, and corn, pollen is produced within the male
parts of the plant, called the stamens, and the female sex cells, or eggs, are produced within the
female part of the plant, the pistil. With the help of wind, water, insects, birds, or small
mammals, pollen is transferred from the stamens to the stigma, a sticky surface on the pistil.
Pollination may be followed by fertilization. The pollen on the stigma germinates to produce a
long pollen tube, which grows down through the style, or neck of the pistil, and into the ovary,
located at the base of the pistil. Depending on the species, one, several, or many ovules are
embedded deep within the ovary. Each ovule contains one egg.
Fertilization occurs when a sperm cell carried by the pollen tube unites with the egg. As the
fertilized egg begins to develop into an embryonic plant, it produces a variety of hormones to
stimulate the outer wall of the ovule to harden into a seedcoat, and tissues of the ovary enlarge
into a fruit. The fruit may be a fleshy fruit, such as an apple, orange, tomato, or squash, or a dry
fruit, such as an almond, walnut, wheat grain, or rice grain. Unlike conifer seeds, which lie
exposed on the cone scales, the seeds of flowering plants are contained within a ripened ovary, a
fleshy or dry fruit.
III POLLINATION METHODS
In order for pollination to be successful, pollen must be transferred between plants of the same
species—for example, a rose flower must always receive rose pollen and a pine tree must always
receive pine pollen. Plants typically rely on one of two methods of pollination: cross-pollination
or self-pollination, but some species are capable of both.
Most plants are designed for cross-pollination, in which pollen is transferred between different
plants of the same species. Cross-pollination ensures that beneficial genes are transmitted
relatively rapidly to succeeding generations. If a beneficial gene occurs in just one plant, that
plant’s pollen or eggs can produce seeds that develop into numerous offspring carrying the
beneficial gene. The offspring, through cross-pollination, transmit the gene to even more plants
in the next generation. Cross-pollination introduces genetic diversity into the population at a rate
that enables the species to cope with a changing environment. New genes ensure that at least
some individuals can endure new diseases, climate changes, or new predators, enabling the
species as a whole to survive and reproduce.
Plant species that use cross-pollination have special features that enhance this method. For
instance, some plants have pollen grains that are lightweight and dry so that they are easily swept
up by the wind and carried for long distances to other plants. Other plants have pollen and eggs
that mature at different times, preventing the possibility of self-pollination.
In self-pollination, pollen is transferred from the stamens to the pistil within one flower. The
resulting seeds and the plants they produce inherit the genetic information of only one parent,
and the new plants are genetically identical to the parent. The advantage of self-pollination is the
assurance of seed production when no pollinators, such as bees or birds, are present. It also sets
the stage for rapid propagation—weeds typically self-pollinate, and they can produce an entire
population from a single plant. The primary disadvantage of self-pollination is that it results in
genetic uniformity of the population, which makes the population vulnerable to extinction by, for
example, a single devastating disease to
which all the genetically identical plants are equally susceptible. Another disadvantage is that
beneficial genes do not spread as rapidly as in cross-pollination, because one plant with a
beneficial gene can transmit it only to its own offspring and not to other plants. Self-pollination
evolved later than cross-pollination, and may have developed as a survival mechanism in harsh
environments where pollinators were scarce.
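The difference in how fast a beneficial gene spreads can be caricatured in a toy model: under cross-pollination a carrier plant can sire carrier seeds on several neighbors each generation, while a self-pollinating carrier only multiplies its own line. Every number below is invented; this is a caricature of the argument, not population genetics.

# Toy caricature only: spread of one beneficial gene, cross- vs self-pollination.
# Assumes each carrier pollinates `mates` neighbors per generation (cross), or
# produces one carrier offspring of its own (self); all numbers are invented.

def carriers_after(generations: int, cross: bool, mates: int = 3) -> int:
    carriers = 1
    for _ in range(generations):
        if cross:
            carriers += carriers * mates  # carriers also seed their neighbors
        else:
            carriers += carriers          # selfing only extends its own line
    return carriers

for g in range(1, 5):
    print(f"gen {g}: cross={carriers_after(g, True):4d}  self={carriers_after(g, False):3d}")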
IV POLLEN TRANSFER
Unlike animals, plants are literally rooted to the spot, and so cannot move to combine sex cells
from different plants; for this reason, species have evolved effective strategies for accomplishing
cross-pollination. Some plants simply allow their pollen to be carried on the wind, as is the case
with wheat, rice, corn, and other grasses, and pines, firs, cedars, and other conifers. This method
works well if the individual plants are growing close together. To ensure success, huge amounts
of pollen must be produced, most of which never reaches another plant.
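The need for "huge amounts" is a numbers game: if each wind-borne grain has only a minuscule, independent chance of landing on a receptive stigma, releasing millions of grains still yields a few expected successes. A sketch with invented figures:

# Why wind pollination demands enormous pollen output: a toy expectation.
# The per-grain success probability is an assumption, not a measured value.

grains_released = 5_000_000
p_success = 1e-6          # assumed chance that any one grain reaches a stigma

expected_hits = grains_released * p_success
p_at_least_one = 1 - (1 - p_success) ** grains_released  # 1 - P(all grains fail)

print(f"expected successful grains: {expected_hits:.1f}")
print(f"chance of at least one success: {p_at_least_one:.3f}")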
Most plants, however, do not rely on the wind. These plants employ pollinators—bees,
butterflies, and other insects, as well as birds, bats, and mice—to transport pollen between
sometimes widely scattered plants. While this strategy enables plants to expend less energy
making large amounts of pollen, they must still use energy to produce incentives for their
pollinators. For instance, birds and insects may be attracted to a plant by its tasty food in the
form of nectar, a sugary, energy-rich fluid that bees eat and also use for making honey. Bees and
other pollinators may be attracted by a plant’s pollen, a nutritious food that is high in protein and
provides almost every known vitamin, about 25 trace minerals, and 22 amino acids. As a
pollinator enters a flower or probes it for nectar, typically located deep in the flower, or grazes
on the pollen itself, the sticky pollen attaches to parts of its body. When the pollinator visits the
next flower in search of more nectar or pollen, it brushes against the stigma and pollen grains rub
off onto the stigma. In this way, pollinators inadvertently transfer pollen from flower to flower.
Some flowers supply wax that bees use for construction material in their hives. In the Amazonian
rain forest, the males of certain bee species travel long distances to visit orchid flowers, from
which they collect oil used to make a powerful chemical, called a pheromone, used to attract
female bees for mating. The bees carry pollen between flowers as they collect the oils from the
orchids.
Flowers are designed to attract pollinators, and the unique shape, color, and even scent of a
flower appeals to specific pollinators. Birds see the color red particularly well and are prone to
pollinating red flowers. The long red floral tubes of certain flowers are designed to attract
hummingbirds but discourage small insects that might take the nectar without transferring pollen.
Flowers that are pollinated by bats are usually large, light in color, heavily scented, and open at
night, when bats are most active. Many of the brighter pink, orange, and yellow flowers are
marked by patterns on the petals that can be seen only with ultraviolet light. These patterns act as
maps to the nectar glands typically located at the base of the flower. Bees are able to see
ultraviolet light and use the colored patterns to find nectar efficiently.
These interactions between plants and animals are mutualistic, since both species benefit from
the interaction. Undoubtedly plants have evolved flower structures that successfully attract
specific pollinators. And in some cases the pollinators may have adapted their behaviors to take
advantage of the resources offered by specific kinds of flowers.
V CURRENT TOPICS
Scientists control pollination by transferring pollen by hand from stamens to stigmas. Using
these artificial pollination techniques, scientists study how traits are inherited in plants, and they
also breed plants with selected traits—roses with larger blooms, for example, or apple trees that
bear more fruit. Scientists also use artificial pollination to investigate temperature and moisture
requirements for pollination in different species, the biochemistry of pollen germination, and
other details of the pollination process.
Some farmers are concerned about the decline in numbers of pollinating insects, especially
honey bees. In recent years many fruit growers have found their trees have little or no fruit,
thought to be the result of too few honey bee pollinators. Wild populations of honey bees are
nearly extinct in some areas of the northern United States and southern Canada. Domestic honey
bees—those kept in hives by beekeepers—have declined by as much as 80 percent since the late
1980s. The decline of wild and domestic honey bees is due largely to mite infestations in their
hives—the mites eat the young, developing bees. Bees and other insect pollinators are also
seriously harmed by chemical toxins in their environment. These toxins, such as the insecticides
Diazinon and Malathion, either kill pollinators directly or harm them by damaging the
environment in which they live.
Fertilization
I INTRODUCTION
Fertilization, the process in which gametes—a male's sperm and a female's egg or ovum—fuse
together, producing a single cell that develops into an adult organism. Fertilization occurs in both
plants and animals that reproduce sexually—that is, when a male and a female are needed to
produce an offspring (see Reproduction). This article focuses on animal fertilization. For
information on plant fertilization see the articles on Seed, Pollination, and Plant Propagation.
Fertilization is a precise period in the reproductive process. It begins when the sperm contacts the
outer surface of the egg and it ends when the sperm's nucleus fuses with the egg's nucleus.
Fertilization is not instantaneous—it may take 30 minutes in sea urchins and up to several hours
in mammals. After nuclear fusion, the fertilized egg is called a zygote. When the zygote divides
to a two-cell stage, it is called an embryo.
Fertilization is necessary to produce a single cell that contains a full complement of genes. When
a cell undergoes meiosis, gametes are formed—a sperm cell or an egg cell. Each gamete contains
only half the genetic material of the original cell. During sperm and egg fusion in fertilization,
the full amount of genetic material is restored: half contributed by the male parent and half
contributed by the female. In humans, for example, there are 46 chromosomes (carriers of
genetic material) in each human body cell—except in the sperm and egg, which each have 23
chromosomes. As soon as fertilization is complete, the zygote that is formed has a complete set
of 46 chromosomes containing genetic information from both parents.
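The chromosome arithmetic described above is easy to state precisely: meiosis halves the diploid count, and fertilization restores it. A trivial check, using the human numbers from the text:

# Trivial check of the chromosome arithmetic in the text above.

DIPLOID = 46                   # chromosomes per human body cell

def meiosis(diploid_count: int) -> int:
    """Meiosis halves the chromosome number to form a gamete."""
    assert diploid_count % 2 == 0
    return diploid_count // 2

sperm = meiosis(DIPLOID)       # 23 chromosomes
egg = meiosis(DIPLOID)         # 23 chromosomes
zygote = sperm + egg           # fertilization restores the full set
print(sperm, egg, zygote)      # -> 23 23 46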
The fertilization process also activates cell division. Without activation from the sperm, an egg
typically remains dormant and soon dies. In general, it is fertilization that sets the egg on an
irreversible pathway of cell division and embryo development.
II THE FERTILIZATION PROCESS
Fertilization is complete when the sperm's nucleus fuses with the egg's nucleus. Researchers
have identified several specific steps in this process. The first step is the sperm approaching the
egg. In some organisms, sperm simply swim at random until they encounter an egg. In others, the eggs
secrete a chemical substance that attracts the sperm toward the eggs. For example, in one species
of sea urchin (an aquatic animal often used in fertilization research), the sperm swim toward a
small protein molecule in the egg's protective outer layer, or surface coat. In humans there is
evidence that sperm are attracted to the fluid surrounding the egg.
The second step of fertilization is the attachment of several sperm to the egg's surface coat. All
animal eggs have surface coats, which are variously named the vitelline envelope (in abalone and
frogs) or the zona pellucida (in mammals). This attachment step may last for just a few seconds
or for several minutes.
The third step is a complex process in which the sperm penetrate the egg’s surface coat. The
head, or front end, of the sperm of almost all animals except fish contains an acrosome, a
membrane-enclosed compartment. The acrosome releases proteins that dissolve the surface coat
of an egg of the same species.
In mammals, a molecule of the egg’s surface coat triggers the sperm's acrosome to explosively
release its contents onto the surface coat, where the proteins dissolve a tiny hole. A single sperm
is then able to make a slitlike channel in the surface coat, through which it swims to reach the
egg's cell membrane. In fish eggs that do not have acrosomes, specialized channels, called
micropyles, enable a single sperm to swim down through the egg's surface coat to reach the cell
membrane. When more than one sperm enters the egg, the resulting zygote typically develops
abnormally.
The next step in fertilization—the fusion of sperm and egg cell membranes—is poorly
understood. When the membranes fuse, a single sperm and the egg become one cell. This process
takes only seconds, and it is directly observable by researchers. Specific proteins on the surface
of the sperm appear to induce this fusion process, but the exact mechanism is not yet known.
After fusion of the cell membranes the sperm is motionless. The egg extends cytoplasmic fingers
to surround the sperm and pull it into the egg's cytoplasm. Filaments called microtubules begin
to grow from the inner surface of the egg cell's membrane inward toward the cell's center,
resembling spokes of a bicycle wheel growing from the rim inward toward the wheel's hub. As
the microtubules grow, the sperm and egg nuclei are pushed toward the egg's center. Finally, in a
process that is also poorly understood, the egg and sperm nuclear envelopes (outer membranes)
fuse, permitting the chromosomes from the egg and sperm to mix within a common space. A
zygote is formed, and development of an embryo begins.
III TYPES OF FERTILIZATION
Two types of fertilization occur in animals: external and internal. In external fertilization the egg
and sperm come together outside of the parents' bodies. Animals such as sea urchins, starfish,
clams, mussels, frogs, corals, and many fish reproduce in this way. The gametes are released, or
spawned, by the adults into the ocean or a pond. Fertilization takes place in this watery
environment, where embryos start to develop.
A disadvantage to external fertilization is that the meeting of egg and sperm is somewhat left to
chance. Swift water currents, water temperature changes, predators, and a variety of other
interruptions can prevent fertilization from occurring. A number of adaptations help ensure that
offspring will successfully be produced. The most important adaptation is the production of
literally millions of sperm and eggs—if even a tiny fraction of these gametes survive to become
zygotes, many offspring will still result.
Males and females also use behavioral clues, chemical signals, or other stimuli to coordinate
spawning so that sperm and eggs appear in the water at the same time and in the same place. In
animals that use external fertilization, there is no parental care for the developing embryos.
Instead, the eggs of these animals contain a food supply in the form of a yolk that nourishes the
embryos until they hatch and are able to feed on their own.
Internal fertilization takes place inside the female's body. The male typically has a penis or other
structure that delivers sperm into the female's reproductive tract. All mammals, reptiles, and
birds as well as some invertebrates, including snails, worms, and insects, use internal
fertilization. Internal fertilization does not necessarily require that the developing embryo
remains inside the female's body. In honey bees, for example, the queen bee deposits the
fertilized eggs into special compartments in the honeycomb. These compartments are supplied
with food resources for the young bees to use as they develop.
Various adaptations have evolved in the reproductive process of internal-fertilizing organisms.
Because the sperm and egg are always protected inside the male's and female's bodies—and are
deliberately placed into close contact during mating—relatively few sperm and eggs are
produced. Many animals in this group provide extensive parental care of their young. In most
mammals, including humans, two specialized structures in the female's body further help to
protect and nourish the developing embryo. One is the uterus, which is the cushioned chamber
where the embryo matures before birth; the other is the placenta, which is a blood-rich organ that
supplies nutrients to the embryo and also removes its wastes (see Pregnancy and Childbirth).
IV RESEARCH ISSUES
Although reproduction is well studied in many kinds of organisms, fertilization is one of the least
understood of all fundamental biological processes. Our knowledge of this fascinating topic has
been vastly improved by many recent discoveries. For example, researchers have discovered how
to clone the genes that direct the fertilization process.
Yet many important questions still remain. Scientists are actively trying to determine issues such
as how sperm and egg cells recognize that they are from the same species; what molecules sperm
use to attach to egg coats; and how signals on the sperm's surface are relayed inside to trigger the
acrosome reaction. With continued study, answers to these questions will one day be known.
Q12:
(i)
(ii) Research companies developing compressed natural gas (CNG) and methanol (most of which
is made from natural gas today but can be made from garbage, trees, or seaweed) have been
given government subsidies to get these efforts off the ground. But with oil prices still low,
consumers have not had much incentive to accept the inconveniences of finding supply stations,
more time-consuming fueling processes, reduced power output, and reduced driving range.
Currently, all the alternatives to gas have drawbacks in terms of cost, ease of transport, and
efficiency that prohibit their spread. But that could change rapidly if another oil crisis like that of
the 1970s develops and if research continues.
Any fuel combustion contributes to greenhouse gas emissions, however, and automakers
anticipate stricter energy-consumption standards in the future. In the United States
onerous gasoline or energy taxes are less likely than a sudden tightening of CAFE standards,
which have not changed for cars since 1994. Such restriction could, for example, put an end to
the current boom in sales of large sport-utility vehicles that get relatively poor gas mileage.
Therefore, long-term research focuses on other means of propulsion, including cars powered by
electricity.
(iii) Polyvinyl chloride (PVC) is prepared from the organic compound vinyl chloride (CH2=CHCl).
PVC is the most widely used of the amorphous plastics. PVC is lightweight, durable, and
waterproof. Chlorine atoms bonded to the carbon backbone of its molecules give PVC its hard
and flame-resistant properties.
In its rigid form, PVC is weather-resistant and is extruded into pipe, house siding, and gutters.
Rigid PVC is also blow molded into clear bottles and is used to form other consumer products,
including compact discs and computer casings.
PVC can be softened with certain chemicals. This softened form of PVC is used to make shrink-
wrap, food packaging, rainwear, shoe soles, shampoo containers, floor tile, gloves, upholstery,
and other products. Most softened PVC plastic products are manufactured by extrusion, injection
molding, or casting.
(iv)
(v) Antibiotics
I INTRODUCTION
Antibiotics (Greek anti, “against”; bios, “life”) are chemical compounds used to kill or inhibit the
growth of infectious organisms. Originally the term antibiotic referred only to organic
compounds, produced by bacteria or molds, that are toxic to other microorganisms. The term is
now used loosely to include synthetic and semisynthetic organic compounds. Antibiotic refers
generally to antibacterials; however, because the term is loosely defined, it is preferable to
specify compounds as being antimalarials, antivirals, or antiprotozoals. All antibiotics share the
property of selective toxicity: They are more toxic to an invading organism than they are to an
animal or human host. Penicillin is the most well-known antibiotic and has been used to fight
many infectious diseases, including syphilis, gonorrhea, tetanus, and scarlet fever. Another
antibiotic, streptomycin, has been used to combat tuberculosis.
II HISTORY
Although the mechanisms of antibiotic action were not scientifically understood until the late
20th century, the principle of using organic compounds to fight infection has been known since
ancient times. Crude plant extracts were used medicinally for centuries, and there is anecdotal
evidence for the use of cheese molds for topical treatment of infection. The first observation of
what would now be called an antibiotic effect was made in the 19th century by French chemist
Louis Pasteur, who discovered that certain saprophytic bacteria can kill anthrax bacilli. In the
first decade of the 20th century, German physician and chemist Paul Ehrlich began
experimenting with the synthesis of organic compounds that would selectively attack an
infecting organism without harming the host organism. His experiments led to the development,
in 1909, of salvarsan, a synthetic compound containing arsenic, which exhibited selective action
against spirochetes, the bacteria that cause syphilis. Salvarsan remained the only effective
treatment for syphilis until the purification of penicillin in the 1940s. In the 1920s British
bacteriologist Sir Alexander Fleming, who later discovered penicillin, found a substance called
lysozyme in many bodily secretions, such as tears and sweat, and in certain other plant and
animal substances. Lysozyme has some antimicrobial activity, but it is not clinically useful.
Penicillin, the archetype of antibiotics, is a derivative of the mold Penicillium notatum. Penicillin
was discovered accidentally in 1928 by Fleming, who showed its effectiveness in laboratory
cultures against many disease-producing bacteria. This discovery marked the beginning of the
development of antibacterial compounds produced by living organisms. Penicillin in its original
form could not be given by mouth because it was destroyed in the digestive tract and the
preparations had too many impurities for injection. No progress was made until the outbreak of
World War II stimulated renewed research and the Australian pathologist Sir Howard Florey and
German-British biochemist Ernst Chain purified enough of the drug to show that it would protect
mice from infection. Florey and Chain then used the purified penicillin on a human patient who
had staphylococcal and streptococcal septicemia with multiple abscesses and osteomyelitis. The
patient, gravely ill and near death, was given intravenous injections of a partly purified
preparation of penicillin every three hours. Because so little was available, the patient's urine was
collected each day, and the penicillin was extracted from it and used again. After five days the
patient's condition improved vastly. However, with each passage through the body, some
penicillin was lost. Eventually the supply ran out and the patient died.
The first antibiotic to be used successfully in the treatment of human disease was tyrothricin,
isolated from certain soil bacteria by American bacteriologist Rene Dubos in 1939. This
substance is too toxic for general use, but it is employed in the external treatment of certain
infections. Other antibiotics produced by a group of soil bacteria called actinomycetes have
proved more successful. One of these, streptomycin, discovered in 1944 by American biologist
Selman Waksman and his associates, was, in its time, the major treatment for tuberculosis.
Since antibiotics came into general use in the 1950s, they have transformed the patterns of
disease and death. Many diseases that once headed the mortality tables—such as tuberculosis,
pneumonia, and septicemia—now hold lower positions. Surgical procedures, too, have been
improved enormously, because lengthy and complex operations can now be carried out without a
prohibitively high risk of infection. Chemotherapy has also been used in the treatment or
prevention of protozoal and fungal diseases, especially malaria, a major killer in economically
developing nations (see Third World). Slow progress is being made in the chemotherapeutic
treatment of viral diseases. New drugs have been developed and used to treat shingles (see
herpes) and chicken pox. There is also a continuing effort to find a cure for acquired
immunodeficiency syndrome (AIDS), caused by the human immunodeficiency virus (HIV).
III CLASSIFICATION
Antibiotics can be classified in several ways. The most common method classifies them
according to their action against the infecting organism. Some antibiotics attack the cell wall;
some disrupt the cell membrane; and the majority inhibit the synthesis of nucleic acids and
proteins, the polymers that make up the bacterial cell. Another method classifies antibiotics
according to which bacterial strains they affect: staphylococcus, streptococcus, or Escherichia
coli, for example. Antibiotics are also classified on the basis of chemical structure, as penicillins,
cephalosporins, aminoglycosides, tetracyclines, macrolides, or sulfonamides, among others.
A Mechanisms of Action
Most antibiotics act by selectively interfering with the synthesis of one of the large-molecule
constituents of the cell—the cell wall or proteins or nucleic acids. Some, however, act by
disrupting the cell membrane (see Cell Death and Growth Suppression below). Some important
and clinically useful drugs interfere with the synthesis of peptidoglycan, the most important
component of the cell wall. These drugs include the β-lactam antibiotics, which are classified
according to chemical structure into penicillins, cephalosporins, and carbapenems. All these
antibiotics contain a β-lactam ring as a critical part of their chemical structure, and they inhibit
synthesis of peptidoglycan, an essential part of the cell wall. They do not interfere with the
synthesis of other intracellular components. The continuing buildup of materials inside the cell
exerts ever greater pressure on the membrane, which is no longer properly supported by
peptidoglycan. The membrane gives way, the cell contents leak out, and the bacterium dies.
These antibiotics do not affect human cells because human cells do not have cell walls.
Many antibiotics operate by inhibiting the synthesis of various intracellular bacterial molecules,
including DNA, RNA, ribosomes, and proteins. The synthetic sulfonamides are among the
antibiotics that indirectly interfere with nucleic acid synthesis. Nucleic-acid synthesis can also be
stopped by antibiotics that inhibit the enzymes that assemble these polymers—for example,
DNA polymerase or RNA polymerase. Examples of such antibiotics are actinomycin, rifamycin,
and rifampicin, the last two being particularly valuable in the treatment of tuberculosis. The
quinolone antibiotics inhibit synthesis of an enzyme responsible for the coiling and uncoiling of
the chromosome, a process necessary for DNA replication and for transcription to messenger
RNA. Some antibacterials affect the assembly of messenger RNA, thus causing its genetic
message to be garbled. When these faulty messages are translated, the protein products are
nonfunctional. There are also other mechanisms: The tetracyclines compete with incoming
transfer-RNA molecules; the aminoglycosides cause the genetic message to be misread and a
defective protein to be produced; chloramphenicol prevents the linking of amino acids to the
growing protein; and puromycin causes the protein chain to terminate prematurely, releasing an
incomplete protein.
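The mechanisms just described lend themselves to a simple lookup table pairing each class with its target. The Python sketch below only restates the text above; it is a mnemonic summary rather than a pharmacology reference, and the helper classes_targeting is introduced here purely for illustration.

# Summary of the mechanisms described in this section; restates the
# article's class-to-target pairs and is not a clinical reference.
ANTIBIOTIC_TARGETS = {
    "penicillins": "peptidoglycan (cell wall) synthesis",
    "cephalosporins": "peptidoglycan (cell wall) synthesis",
    "carbapenems": "peptidoglycan (cell wall) synthesis",
    "sulfonamides": "nucleic-acid synthesis (indirect)",
    "rifampicin": "RNA polymerase (transcription)",
    "quinolones": "chromosome coiling and uncoiling (DNA replication)",
    "tetracyclines": "ribosome: compete with incoming transfer RNA",
    "aminoglycosides": "ribosome: cause the genetic message to be misread",
    "chloramphenicol": "ribosome: block linking of amino acids",
    "puromycin": "ribosome: premature termination of the protein chain",
}

def classes_targeting(keyword):
    """Return the antibiotic classes whose described target mentions keyword."""
    return [name for name, target in ANTIBIOTIC_TARGETS.items()
            if keyword in target]

print(classes_targeting("cell wall"))  # the beta-lactam classes
print(classes_targeting("ribosome"))   # the protein-synthesis inhibitors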
B Range of Effectiveness
In some species of bacteria the cell wall consists primarily of a thick layer of peptidoglycan.
Other species have a much thinner layer of peptidoglycan and an outer as well as an inner
membrane. When bacteria are subjected to Gram's stain, these differences in structure affect the
differential staining of the bacteria with a dye called gentian violet. The differences in staining
coloration (gram-positive bacteria appear purple and gram-negative bacteria appear colorless or
reddish, depending on the process used) are the basis of the classification of bacteria into gram-
positive (those with thick peptidoglycan) and gram-negative (those with thin peptidoglycan and
an outer membrane), because the staining properties correlate with many other bacterial
properties. Antibacterials can be further subdivided into narrow-spectrum and broad-spectrum
agents. The narrow-spectrum penicillins act against many gram-positive bacteria.
Aminoglycosides, also narrow-spectrum, act against many gram-negative as well as some gram-
positive bacteria. The tetracyclines and chloramphenicols are both broad-spectrum drugs because
they are effective against both gram-positive and gram-negative bacteria.
C Cell Death and Growth Suppression
Antibiotics may also be classed as bactericidal (killing bacteria) or bacteriostatic (stopping
bacterial growth and multiplication). Bacteriostatic drugs are nonetheless effective because
bacteria that are prevented from growing will die off after a time or be killed by the defense
mechanisms of the host. The tetracyclines and the sulfonamides are among the bacteriostatic
antibiotics. Antibiotics that damage the cell membrane cause the cell's metabolites to leak out,
thus killing the organism. Such compounds, including penicillins and cephalosporins, are
therefore classed as bactericidal.
IV TYPES OF ANTIBIOTICS
Following is a list of some of the more common antibiotics and examples of some of their
clinical uses. This section does not include all antibiotics or all of their clinical applications.
A Penicillins
Penicillins are bactericidal, inhibiting formation of the cell wall. There are four types of
penicillins: the narrow-spectrum penicillin-G types, ampicillin and its relatives, the
penicillinase-resistant penicillins, and the extended-spectrum penicillins that are active against
Pseudomonas. Penicillin-
G types are effective against gram-positive strains of streptococci, staphylococci, and some
gram-negative bacteria such as meningococcus. Penicillin-G is used to treat such diseases as
syphilis, gonorrhea, meningitis, anthrax, and yaws. The related penicillin V has a similar range
of action but is less effective. Ampicillin and amoxicillin have a range of effectiveness similar to
that of penicillin-G, with a slightly broader spectrum, including some gram-negative bacteria.
The penicillinase-resistants are penicillins that combat bacteria that have developed resistance to
penicillin-G. The antipseudomonal penicillins are used against infections caused by gram-
negative Pseudomonas bacteria, a particular problem in hospitals. They may be administered as a
prophylactic in patients with compromised immune systems, who are at risk from gram-negative
infections.
Side effects of the penicillins, while relatively rare, can include immediate and delayed allergic
reactions—specifically, skin rashes, fever, and anaphylactic shock, which can be fatal.
B Cephalosporin
Like the penicillins, cephalosporins have a β-lactam ring structure that interferes with synthesis
of the bacterial cell wall and so are bactericidal. Cephalosporins are more effective than
penicillin against gram-negative bacilli and equally effective against gram-positive cocci.
Cephalosporins may be used to treat strains of meningitis and as a prophylactic for orthopedic,
abdominal, and pelvic surgery. Rare hypersensitive reactions from the cephalosporins include
skin rash and, less frequently, anaphylactic shock.
C Aminoglycosides
Streptomycin is the oldest of the aminoglycosides. The aminoglycosides inhibit bacterial protein
synthesis in many gram-negative and some gram-positive organisms. They are sometimes used
in combination with penicillin. The members of this group tend to be more toxic than other
antibiotics. Rare adverse effects associated with prolonged use of aminoglycosides include
damage to the vestibular region of the ear, hearing loss, and kidney damage.
D Tetracyclines
Tetracyclines are bacteriostatic, inhibiting bacterial protein synthesis. They are broad-spectrum
antibiotics effective against strains of streptococci, gram-negative bacilli, rickettsiae (the bacteria
that cause typhus), and spirochetes (the bacteria that cause syphilis). They are also used
to treat urinary-tract infections and bronchitis. Because of their wide range of effectiveness,
tetracyclines can sometimes upset the balance of resident bacteria that are normally held in check
by the body's immune system, leading to secondary infections in the gastrointestinal tract and
vagina, for example. Tetracycline use is now limited because of the increase of resistant bacterial
strains.
E Macrolides
The macrolides are bacteriostatic, binding with bacterial ribosomes to inhibit protein synthesis.
Erythromycin, one of the macrolides, is effective against gram-positive cocci and is often used as
a substitute for penicillin against streptococcal and pneumococcal infections. Other uses for
macrolides include diphtheria and bacteremia. Side effects may include nausea, vomiting, and
diarrhea; infrequently, there may be temporary auditory impairment.
F Sulfonamides
The sulfonamides are synthetic bacteriostatic, broad-spectrum antibiotics, effective against most
gram-positive and many gram-negative bacteria. However, because many gram-negative bacteria
have developed resistance to the sulfonamides, these antibiotics are now used only in very
specific situations, including treatment of urinary-tract infection, against meningococcal strains,
and as a prophylactic for rheumatic fever. Side effects may include disruption of the
gastrointestinal tract and hypersensitivity.
V PRODUCTION
The production of a new antibiotic is lengthy and costly. First, the organism that makes the
antibiotic must be identified and the antibiotic tested against a wide variety of bacterial species.
Then the organism must be grown on a scale large enough to allow the purification and chemical
analysis of the antibiotic and to demonstrate that it is unique. This is a complex procedure
because there are several thousand compounds with antibiotic activity that have already been
discovered, and these compounds are repeatedly rediscovered. After the antibiotic has been
shown to be useful in the treatment of infections in animals, larger-scale preparation can be
undertaken.
Commercial development requires a high yield and an economic method of purification.
Extensive research may be needed to increase the yield by selecting improved strains of the
organism or by changing the growth medium. The organism is then grown in large steel vats, in
submerged cultures with forced aeration. The naturally fermented product may be modified
chemically to produce a semisynthetic antibiotic. After purification, the effect of the antibiotic on
the normal function of host tissues and organs (its pharmacology), as well as its possible toxic
actions (toxicology), must be tested on a large number of animals of several species. In addition,
the effective forms of administration must be determined. Antibiotics may be topical, applied to
the surface of the skin, eye, or ear in the form of ointments or creams. They may be oral, or given
by mouth, and either allowed to dissolve in the mouth or swallowed, in which case they are
absorbed into the bloodstream through the intestines. Antibiotics may also be parenteral, or
injected intramuscularly, intravenously, or subcutaneously; antibiotics are administered
parenterally when fast absorption is required.
In the United States, once these steps have been completed, the manufacturer may file an
Investigational New Drug Application with the Food and Drug Administration (FDA). If
approved, the antibiotic can be tested on volunteers for toxicity, tolerance, absorption, and
excretion. If subsequent tests on small numbers of patients are successful, the drug can be used
on a larger group, usually in the hundreds. Finally a New Drug Application can be filed with the
FDA, and, if this application is approved, the drug can be used generally in clinical medicine.
These procedures, from the time the antibiotic is discovered in the laboratory until it undergoes
clinical trial, usually extend over several years.
VI RISKS AND LIMITATIONS
The use of antibiotics is limited because bacteria have evolved defenses against certain
antibiotics. One of the main mechanisms of defense is inactivation of the antibiotic. This is the
usual defense against penicillins and chloramphenicol, among others. Another form of defense
involves a mutation that changes the bacterial enzyme affected by the drug in such a way that the
antibiotic can no longer inhibit it. This is the main mechanism of resistance to the compounds
that inhibit protein synthesis, such as the tetracyclines.
All these forms of resistance are transmitted genetically by the bacterium to its progeny. Genes
that carry resistance can also be transmitted from one bacterium to another by means of
plasmids, small DNA molecules that exist separately from the chromosome and contain only a few genes, including the resistance gene.
Some bacteria conjugate with others of the same species, forming temporary links during which
the plasmids are passed from one to another. If two plasmids carrying resistance genes to
different antibiotics are transferred to the same bacterium, their resistance genes can be
assembled onto a single plasmid. The combined resistances can then be transmitted to another
bacterium, where they may be combined with yet another type of resistance. In this way,
plasmids are generated that carry resistance to several different classes of antibiotic. In addition,
plasmids have evolved that can be transmitted from one species of bacteria to another, and these
can transfer multiple antibiotic resistance between very dissimilar species of bacteria.
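The way resistance genes accumulate on a single plasmid can be pictured with a toy model in which each plasmid is a set of genes and conjugation merges the donor's genes into the recipient's plasmid. The sketch below is only an illustration of the process described above; the gene names are hypothetical.

# Toy model of plasmid-mediated resistance transfer: plasmids are sets of
# resistance genes, and conjugation merges the donor's genes into the
# recipient's plasmid. Gene names are hypothetical.
def conjugate(recipient_plasmid, donor_plasmid):
    """Return the recipient's plasmid after acquiring the donor's genes."""
    return recipient_plasmid | donor_plasmid  # set union

plasmid_a = {"penicillin_resistance"}
plasmid_b = {"tetracycline_resistance"}

merged = conjugate(plasmid_a, plasmid_b)  # two resistances on one plasmid
multi = conjugate(merged, {"streptomycin_resistance"})
print(multi)  # a single plasmid now carrying resistance to three drug classes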
The problem of resistance has been exacerbated by the use of antibiotics as prophylactics,
intended to prevent infection before it occurs. Indiscriminate and inappropriate use of antibiotics
for the treatment of the common cold and other common viral infections, against which they
have no effect, removes antibiotic-sensitive bacteria and allows the development of antibiotic-
resistant bacteria. Similarly, the use of antibiotics in poultry and livestock feed has promoted the
spread of drug resistance and has led to the widespread contamination of meat and poultry by
drug-resistant bacteria such as Salmonella.
In the 1970s, tuberculosis seemed to have been nearly eradicated in the developed countries,
although it was still prevalent in developing countries. Now its incidence is increasing, partly due
to resistance of the tubercle bacillus to antibiotics. Some bacteria, particularly strains of
staphylococci, are resistant to so many classes of antibiotics that the infections they cause are
almost untreatable. When such a strain invades a surgical ward in a hospital, it is sometimes
necessary to close the ward altogether for a time. Similarly, plasmodia, the causative organisms
of malaria, have developed resistance to antimalarial drugs, while, at the same time, the mosquitoes that
carry plasmodia have become resistant to the insecticides that were once used to control them.
Consequently, although malaria had been almost entirely eliminated, it is now again rampant in
Africa, the Middle East, Southeast Asia, and parts of Latin America. Furthermore, the discovery
of new antibiotics is now much less common than in the past.
(vi) Ceramics
I INTRODUCTION
Ceramics (Greek keramos, "potter's clay"), originally the art of making pottery, now a general
term for the science of manufacturing articles prepared from pliable, earthy materials that are
made rigid by exposure to heat. Ceramic materials are nonmetallic, inorganic compounds—
primarily compounds of oxygen, but also compounds of carbon, nitrogen, boron, and silicon.
Ceramics includes the manufacture of earthenware, porcelain, bricks, and some kinds of tile and
stoneware.
Ceramic products are used not only for artistic objects and tableware, but also for industrial and
technical items, such as sewer pipe and electrical insulators. Ceramic insulators have a wide
range of electrical properties. The electrical properties of a recently discovered family of
ceramics based on a copper-oxide mixture allow these ceramics to become superconductive, or
to conduct electricity with no resistance, at temperatures much higher than those at which metals
do (see Superconductivity). In space technology, ceramic materials are used to make components
for space vehicles.
The rest of this article will deal only with ceramic products that have industrial or technical
applications. Such products are known as industrial ceramics. The term industrial ceramics also
refers to the science and technology of developing and manufacturing such products.
II PROPERTIES
Ceramics possess chemical, mechanical, physical, thermal, electrical, and magnetic properties
that distinguish them from other materials, such as metals and plastics. Manufacturers customize
the properties of ceramics by controlling the type and amount of the materials used to make
them.
A Chemical Properties
Industrial ceramics are primarily oxides (compounds of oxygen), but some are carbides
(compounds of carbon and heavy metals), nitrides (compounds of nitrogen), borides (compounds
of boron), and silicides (compounds of silicon). For example, aluminum oxide can be the main
ingredient of a ceramic—the important alumina ceramics contain 85 to 99 percent aluminum
oxide. Primary components, such as the oxides, can also be chemically combined to form
complex compounds that are the main ingredient of a ceramic. Examples of such complex
compounds are barium titanate (BaTiO3) and zinc ferrite (ZnFe2O4). Another material that may
be regarded as a ceramic is the element carbon (in the form of diamond or graphite).
Ceramics are more resistant to corrosion than plastics and metals are. Ceramics generally do not
react with most liquids, gases, alkalies, and acids. Most ceramics have very high melting points,
and certain ceramics can be used up to temperatures approaching their melting points. Ceramics
also remain stable over long time periods.
B Mechanical Properties
Ceramics are extremely strong, showing considerable stiffness under compression and bending.
Bend strength, the stress a material can withstand when bent before it breaks, is often used to determine the
strength of a ceramic. One of the strongest ceramics, zirconium dioxide, has a bend strength
similar to that of steel. Zirconias (ZrO2) retain their strength up to temperatures of 900° C (1652°
F), while silicon carbides and silicon nitrides retain their strength up to temperatures of 1400° C
(2552° F). These silicon materials are used in high-temperature applications, such as to make
parts for gas-turbine engines. Although ceramics are strong, temperature-resistant, and resilient,
these materials are brittle and may break when dropped or when quickly heated and cooled.
C Physical Properties
Most industrial ceramics are compounds of oxygen, carbon, or nitrogen with lighter metals or
semimetals. Thus, ceramics are less dense than most metals. As a result, a light ceramic part may
be just as strong as a heavier metal part. Ceramics are also extremely hard, resisting wear and
abrasion. The hardest known substance is diamond, followed by boron nitride in cubic-crystal
form. Aluminum oxide and silicon carbide are also extremely hard materials and are often used
to cut, grind, sand, and polish metals and other hard materials.
D Thermal Properties
Most ceramics have high melting points, meaning that even at high temperatures, these materials
resist deformation and retain strength under pressure. Silicon carbide and silicon nitride, for
example, withstand temperature changes better than most metals do. Large and sudden changes
in temperature, however, can weaken ceramics. Materials that undergo less expansion or
contraction per degree of temperature change can withstand sudden changes in temperature
better than materials that undergo greater deformation. Silicon carbide and silicon nitride expand
and contract less during temperature changes than most other ceramics do. These materials are
therefore often used to make parts, such as turbine rotors used in jet engines, that can withstand
extreme variations in temperature.
E Electrical Properties
Certain ceramics conduct electricity. Chromium dioxide, for example, conducts electricity as
well as most metals do. Other ceramics, such as silicon carbide, do not conduct electricity as
well, but may still act as semiconductors. (A semiconductor is a material with greater electrical
conductivity than an insulator has but with less than that of a good conductor.) Other types of
ceramics, such as aluminum oxide, do not conduct electricity at all. These ceramics are used as
insulators—devices used to separate elements in an electrical circuit to keep the current on the
desired pathway. Certain ceramics, such as porcelain, act as insulators at lower temperatures but
conduct electricity at higher temperatures.
F Magnetic Properties
Ceramics containing iron oxide (Fe2O3) can have magnetic properties similar to those of iron,
nickel, and cobalt magnets (see Magnetism). These iron oxide-based ceramics are called ferrites.
Other magnetic ceramics include oxides of nickel, manganese, and barium. Ceramic magnets,
used in electric motors and electronic circuits, can be manufactured with high resistance to
demagnetization. When electrons become highly aligned, as they do in ceramic magnets, they
create a powerful magnetic field which is more difficult to disrupt (demagnetize) by breaking the
alignment of the electrons.
III MANUFACTURE
Industrial ceramics are produced from powders that have been tightly squeezed and then heated
to high temperatures. Traditional ceramics, such as porcelain, tiles, and pottery, are formed from
powders made from minerals such as clay, talc, silica, and feldspar. Most industrial ceramics,
however, are formed from highly pure powders of specialty chemicals such as silicon carbide,
alumina, and barium titanate.
The minerals used to make ceramics are dug from the earth and are then crushed and ground into
fine powder. Manufacturers often purify this powder by mixing it in solution and allowing a
chemical precipitate (a uniform solid that forms within a solution) to form. The precipitate is
then separated from the solution, and the powder is heated to drive off impurities, including
water. The result is typically a highly pure powder with particle sizes of about 1 micrometer (a
micrometer is 0.000001 meter, or 0.00004 in).
A Molding
After purification, small amounts of wax are often added to bind the ceramic powder and make it
more workable. Plastics may also be added to the powder to give the desired pliability and
softness. The powder can then be shaped into different objects by various molding processes.
These molding processes include slip casting, pressure casting, injection molding, and extrusion.
After the ceramic is molded, it is heated in a process known as densification to make the material
stronger and more dense.
A1 Slip Casting
Slip casting is a molding process used to form hollow ceramic objects. A slurry of ceramic
powder and water, called a slip, is poured into a mold that has porous walls. The capillary action
(forces created by surface tension and by wetting the sides of a tube) of the porous walls draws
the water out through the mold, leaving a solid layer of ceramic inside.
A2 Pressure Casting
In pressure casting, ceramic powder is poured into a mold, and pressure is then applied to the
powder. The pressure condenses the powder into a solid layer of ceramic that is shaped to the
inside of the mold.
A3 Injection Molding
Injection molding is used to make small, intricate objects. This method uses a piston to force the
ceramic powder through a heated tube into a mold, where the powder cools, hardening to the
shape of the mold. When the object has solidified, the mold is opened and the ceramic piece is
removed.
A4 Extrusion
Extrusion is a continuous process in which ceramic powder is heated in a long barrel. A rotating
screw then forces the heated material through an opening of the desired shape. As the continuous
form emerges from the die opening, the form cools, solidifies, and is cut to the desired length.
Extrusion is used to make products such as ceramic pipe, tiles, and brick.
B Densification
The process of densification uses intense heat to condense a ceramic object into a strong, dense
product. After being molded, the ceramic object is heated in an electric furnace to temperatures
between 1000° and 1700° C (1832° and 3092° F). As the ceramic heats, the powder particles
coalesce, much as water droplets join at room temperature. As the ceramic particles merge, the
object becomes increasingly dense, shrinking by up to 20 percent of its original size. The goal of
this heating process is to maximize the ceramic’s strength by obtaining an internal structure that
is compact and extremely dense.
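The figures quoted in this section are easy to verify. Below is a minimal sketch checking the Celsius-to-Fahrenheit conversions of the quoted furnace range and the effect of the maximum 20 percent shrinkage on a hypothetical 100-mm part (the part size is an assumption chosen for illustration).

# Check the unit conversions and shrinkage figure quoted above.
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(c_to_f(1000))  # 1832.0 F, matching the low end of the furnace range
print(c_to_f(1700))  # 3092.0 F, matching the high end

# A molded part 100 mm across that shrinks by the quoted maximum of
# 20 percent during densification ends up at 80 mm.
initial_mm = 100.0
final_mm = initial_mm * (1 - 0.20)
print(final_mm)  # 80.0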
IV APPLICATIONS
Ceramics are valued for their mechanical properties, including strength, durability, and hardness.
Their electrical and magnetic properties make them valuable in electronic applications, where
they are used as insulators, semiconductors, conductors, and magnets. Ceramics also have
important uses in the aerospace, biomedical, construction, and nuclear industries.
A Mechanical Applications
Industrial ceramics are widely used for applications requiring strong, hard, and abrasion-resistant
materials. For example, machinists use metal-cutting tools tipped with alumina, as well as tools
made from silicon nitrides, to cut, shape, grind, sand, and polish cast iron, nickel-based alloys,
and other metals. Silicon nitrides, silicon carbides, and certain types of zirconias are used to
make components such as valves and turbocharger rotors for high-temperature diesel and gas-
turbine engines. The textile industry uses ceramics for thread guides that can resist the cutting
action of fibers traveling through these guides at high speed.
B Electrical and Magnetic Applications
Ceramic materials have a wide range of electrical properties. Hence, ceramics are used as
insulators (poor conductors of electricity), semiconductors (greater conductivity than insulators
but less than good conductors), and conductors (good conductors of electricity).
Ceramics such as aluminum oxide (Al2O3) do not conduct electricity at all and are used to make
insulators. Stacks of disks made of this material are used to suspend high-voltage power lines
from transmission towers. Similarly, thin plates of aluminum oxide, which remain electrically
and chemically stable when exposed to high-frequency currents, are used to hold microchips.
Other ceramics make excellent semiconductors. Small semiconductor chips, often made from
barium titanate (BaTiO3) and strontium titanate (SrTiO3), may contain hundreds of thousands of
transistors, making possible the miniaturization of electronic devices.
Scientists have discovered a family of copper-oxide-based ceramics that become
superconductive at higher temperatures than do metals. Superconductivity refers to the ability of
a cooled material to conduct an electric current with no resistance. This phenomenon can occur
only at extremely low temperatures, which are difficult to maintain. However, in 1988
researchers discovered a copper oxide ceramic that becomes superconductive at -148° C (-234°
F). This temperature is far higher than the temperatures at which metals become superconductors
(see Superconductivity).
Thin insulating films of ceramic material such as barium titanate and strontium titanate are
capable of storing large quantities of electricity in extremely small volumes. Devices capable of
storing electrical charge are known as capacitors. Engineers form miniature capacitors from
ceramics and use them in televisions, stereos, computers, and other electronic products.
Ferrites (ceramics containing iron oxide) are widely used as low-cost magnets in electric motors.
These magnets help convert electric energy into mechanical energy. In an electric motor, an
electric current is passed through a magnetic field created by a ceramic magnet. As the current
passes through the magnetic field, the motor coil turns, creating mechanical energy. Unlike metal
magnets, ferrites conduct electric currents at high frequencies (currents that increase and
decrease rapidly in voltage). Because ferrites conduct high-frequency currents, they do not lose
as much power as metal conductors do. Ferrites are also used in video, radio, and microwave
equipment. Manganese zinc ferrites are used in magnetic recording heads, and bits of ferric
oxides are the active component in a variety of magnetic recording media, such as recording tape
and computer diskettes (see Sound Recording and Reproduction; Floppy Disk).
C Aerospace
Aerospace engineers use ceramic materials and cermets (durable, highly heat-resistant alloys
made by combining powdered metal with an oxide or carbide and then pressing and baking the
mixture) to make components for space vehicles. Such components include heat-shield tiles for
the space shuttle and nosecones for rocket payloads.
D Bioceramics
Certain advanced ceramics are compatible with bone and tissue and are used in the biomedical
field to make implants for use within the body. For example, specially prepared, porous alumina
will bond with bone and other natural tissue. Medical and dental specialists use this ceramic to
make hip joints, dental caps, and dental bridges. Ceramics such as calcium hydroxyl phosphates
are compatible with bone and are used to reconstruct fractured or diseased bone (see
Bioengineering; Dentistry).
E Nuclear Power
Engineers use uranium ceramic pellets to generate nuclear power. These pellets are produced in
fuel fabrication plants from the gas uranium hexafluoride (UF6). The pellets are then packed into
hollow tubes called fuel rods and are transported to nuclear power plants.
F Building and Construction
Manufacturers use ceramics to make bricks, tiles, piping, and other construction materials.
Ceramics for these purposes are made primarily from clay and shale. Household fixtures such as
sinks and bathtubs are made from feldspar- and clay-based ceramics.
G Coatings
Because ceramic materials are harder and have better corrosion resistance than most metals,
manufacturers often coat metal with ceramic enamel. Manufacturers apply ceramic enamel by
injecting a compressed gas containing ceramic powder into the flame of a hydrocarbon-oxygen
torch burning at about 2500° C (about 4500° F). The semimolten powder particles adhere to the
metal, cooling to form a hard enamel. Household appliances, such as refrigerators, stoves,
washing machines, and dryers, are often coated with ceramic enamel.
(vii) Greenhouse Effect
I INTRODUCTION
Greenhouse Effect, the capacity of certain gases in the atmosphere to trap heat emitted from the
Earth’s surface, thereby insulating and warming the Earth. Without the thermal blanketing of the
natural greenhouse effect, the Earth’s climate would be about 33 Celsius degrees (about 59
Fahrenheit degrees) cooler—too cold for most living organisms to survive.
The greenhouse effect has warmed the Earth for over 4 billion years. Now scientists are growing
increasingly concerned that human activities may be modifying this natural process, with
potentially dangerous consequences. Since the advent of the Industrial Revolution in the 1700s,
humans have devised many inventions that burn fossil fuels such as coal, oil, and natural gas.
Burning these fossil fuels, as well as other activities such as clearing land for agriculture or urban
settlements, releases some of the same gases that trap heat in the atmosphere, including carbon
dioxide, methane, and nitrous oxide. These atmospheric gases have risen to levels higher than at
any time in the last 420,000 years. As these gases build up in the atmosphere, they trap more heat
near the Earth’s surface, causing Earth’s climate to become warmer than it naturally would be.
Scientists call this unnatural heating effect global warming and blame it for an increase in the
Earth’s surface temperature of about 0.6 Celsius degrees (about 1 Fahrenheit degree) over the
last nearly 100 years. Without remedial measures, many scientists fear that global temperatures
will rise 1.4 to 5.8 Celsius degrees (2.5 to 10.4 Fahrenheit degrees) by 2100. These warmer
temperatures could melt parts of polar ice caps and most mountain glaciers, causing a rise in sea
level of up to 1 m (40 in) within a century or two, which would flood coastal regions. Global
warming could also affect weather patterns, causing, among other problems, prolonged drought
or increased flooding in some of the world’s leading agricultural regions.
II HOW THE GREENHOUSE EFFECT WORKS
The greenhouse effect results from the interaction between sunlight and the layer of greenhouse
gases in the Earth's atmosphere that extends up to 100 km (60 mi) above Earth's surface. Sunlight
is composed of a range of radiant energies known as the solar spectrum, which includes visible
light, infrared light, gamma rays, X rays, and ultraviolet light. When the Sun’s radiation reaches
the Earth’s atmosphere, some 25 percent of the energy is reflected back into space by clouds and
other atmospheric particles. About 20 percent is absorbed in the atmosphere. For instance, gas
molecules in the uppermost layers of the atmosphere absorb the Sun’s gamma rays and X rays.
The Sun’s ultraviolet radiation is absorbed by the ozone layer, located 19 to 48 km (12 to 30 mi)
above the Earth’s surface.
About 50 percent of the Sun’s energy, largely in the form of visible light, passes through the
atmosphere to reach the Earth’s surface. Soils, plants, and oceans on the Earth’s surface absorb
about 85 percent of this heat energy, while the rest is reflected back into the atmosphere—most
effectively by reflective surfaces such as snow, ice, and sandy deserts. In addition, some of the
Sun’s radiation that is absorbed by the Earth’s surface becomes heat energy in the form of long-
wave infrared radiation, and this energy is released back into the atmosphere.
Certain gases in the atmosphere, including water vapor, carbon dioxide, methane, and nitrous
oxide, absorb this infrared radiant heat, temporarily preventing it from dispersing into space. As
these atmospheric gases warm, they in turn emit infrared radiation in all directions. Some of this
heat returns back to Earth to further warm the surface in what is known as the greenhouse effect,
and some of this heat is eventually released to space. This heat transfer creates equilibrium
between the total amount of heat that reaches the Earth from the Sun and the amount of heat that
the Earth radiates out into space. This equilibrium or energy balance—the exchange of energy
between the Earth’s surface, atmosphere, and space—is important to maintain a climate that can
support a wide variety of life.
The heat-trapping gases in the atmosphere behave like the glass of a greenhouse. They let much
of the Sun’s rays in, but keep most of that heat from directly escaping. Because of this, they are
called greenhouse gases. Without these gases, heat energy absorbed and reflected from the
Earth’s surface would easily radiate back out to space, leaving the planet with an inhospitable
temperature close to –19°C (–2°F), instead of the present average surface temperature of 15°C
(59°F).
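The –19°C figure can be approximated with a standard radiative-balance estimate that is not part of this article: a planet without greenhouse gases settles at the temperature where absorbed sunlight equals emitted infrared radiation. The solar constant and albedo below are common textbook approximations, not values taken from the text.

# Back-of-envelope radiative balance (not from the article): without
# greenhouse gases, a planet settles where absorbed sunlight equals
# emitted infrared, S(1 - A)/4 = sigma * T**4.
SOLAR_CONSTANT = 1361.0  # W/m^2, mean solar irradiance at Earth (textbook value)
ALBEDO = 0.30            # fraction of sunlight reflected back to space (approx.)
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

t_eq = ((SOLAR_CONSTANT * (1 - ALBEDO)) / (4 * SIGMA)) ** 0.25
print(round(t_eq - 273.15, 1))  # about -18.6 C, close to the -19 C quoted above

The gap between this bare-rock estimate and the present 15°C average is consistent with the roughly 33 Celsius degrees of natural greenhouse warming cited in the introduction.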
To appreciate the importance of the greenhouse gases in creating a climate that helps sustain
most forms of life, compare Earth to Mars and Venus. Mars has a thin atmosphere that contains
low concentrations of heat-trapping gases. As a result, Mars has a weak greenhouse effect
resulting in a largely frozen surface that shows no evidence of life. In contrast, Venus has an
atmosphere containing high concentrations of carbon dioxide. This heat-trapping gas prevents
heat radiated from the planet’s surface from escaping into space, resulting in surface
temperatures that average 462°C (864°F)—too hot to support life.
III TYPES OF GREENHOUSE GASES
Earth’s atmosphere is primarily composed of nitrogen (78 percent) and oxygen (21 percent).
These two most common atmospheric gases have chemical structures that restrict absorption of
infrared energy. Only the few greenhouse gases, which make up less than 1 percent of the
atmosphere, offer the Earth any insulation. Greenhouse gases occur naturally or are
manufactured. The most abundant naturally occurring greenhouse gas is water vapor, followed
by carbon dioxide, methane, and nitrous oxide. Human-made chemicals that act as greenhouse
gases include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and
hydrofluorocarbons (HFCs).
Since the 1700s, human activities have substantially increased the levels of greenhouse gases in
the atmosphere. Scientists are concerned that expected increases in the concentrations of
greenhouse gases will powerfully enhance the atmosphere’s capacity to retain infrared radiation,
leading to an artificial warming of the Earth’s surface.
A Water Vapor
Water vapor is the most common greenhouse gas in the atmosphere, accounting for about 60 to
70 percent of the natural greenhouse effect. Humans do not have a significant direct impact on
water vapor levels in the atmosphere. However, as human activities increase the concentration of
other greenhouse gases in the atmosphere (producing warmer temperatures on Earth), the
evaporation of oceans, lakes, and rivers, as well as water evaporation from plants, increases,
raising the amount of water vapor in the atmosphere.
B Carbon Dioxide
Carbon dioxide constantly circulates in the environment through a variety of natural processes
known as the carbon cycle. Volcanic eruptions and the decay of plant and animal matter both
release carbon dioxide into the atmosphere. In respiration, animals break down food to release
the energy required to build and maintain cellular activity. A byproduct of respiration is the
formation of carbon dioxide, which is exhaled from animals into the environment. Oceans, lakes,
and rivers absorb carbon dioxide from the atmosphere. Through photosynthesis, plants collect
carbon dioxide and use it to make their own food, in the process incorporating carbon into new
plant tissue and releasing oxygen to the environment as a byproduct.
In order to provide energy to heat buildings, power automobiles, and fuel electricity-producing
power plants, humans burn objects that contain carbon, such as the fossil fuels oil, coal, and
natural gas; wood or wood products; and some solid wastes. When these products are burned,
they release carbon dioxide into the air. In addition, humans cut down huge tracts of trees for
lumber or to clear land for farming or building. This process, known as deforestation, can both
release the carbon stored in trees and significantly reduce the number of trees available to absorb
carbon dioxide.
As a result of these human activities, carbon dioxide in the atmosphere is accumulating faster
than the Earth’s natural processes can absorb the gas. By analyzing air bubbles trapped in glacier
ice that is many centuries old, scientists have determined that carbon dioxide levels in the
atmosphere have risen by 31 percent since 1750. And since carbon dioxide increases can remain
in the atmosphere for centuries, scientists expect these concentrations to double or triple in the
next century if current trends continue.
C Methane
Many natural processes produce methane, also known as natural gas. Decomposition of carbon-
containing substances found in oxygen-free environments, such as wastes in landfills, releases
methane. Ruminating animals such as cattle and sheep belch methane into the air as a byproduct
of digestion. Microorganisms that live in damp soils, such as rice fields, produce methane when
they break down organic matter. Methane is also emitted during coal mining and the production
and transport of other fossil fuels.
Methane has more than doubled in the atmosphere since 1750, and could double again in the
next century. Atmospheric concentrations of methane are far lower than those of carbon dioxide, and
methane only stays in the atmosphere for a decade or so. But scientists consider methane an
extremely effective heat-trapping gas—one molecule of methane is 20 times more efficient at
trapping infrared radiation emitted from the Earth's surface than a molecule of carbon dioxide.
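The contrast between methane's short residence time and carbon dioxide's persistence can be illustrated with a simple exponential-decay sketch. The 12-year lifetime used below is an assumed round figure consistent with the "decade or so" above, and single-exponential decay is a simplification of real atmospheric chemistry.

# Illustrative decay of a methane pulse, assuming a ~12-year atmospheric
# lifetime; exponential decay is a simplification of the real chemistry.
import math

LIFETIME_YEARS = 12.0

def fraction_remaining(years):
    """Fraction of an initial methane pulse still airborne after `years`."""
    return math.exp(-years / LIFETIME_YEARS)

for t in (10, 50, 100):
    print(t, round(fraction_remaining(t), 3))
# After a decade roughly 40 percent remains; after a century essentially
# none, whereas a carbon dioxide increase can persist for centuries.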
D Nitrous Oxide
Nitrous oxide is released by the burning of fossil fuels, and automobile exhaust is a large source
of this gas. In addition, many farmers use nitrogen-containing fertilizers to provide nutrients to
their crops. When these fertilizers break down in the soil, they emit nitrous oxide into the air.
Plowing fields also releases nitrous oxide.
Since 1750 nitrous oxide has risen by 17 percent in the atmosphere. Although this increase is
smaller than for the other greenhouse gases, nitrous oxide traps heat about 300 times more
effectively than carbon dioxide and can stay in the atmosphere for a century.
E Fluorinated Compounds
Some of the most potent greenhouse gases emitted are produced solely by human activities.
Fluorinated compounds, including CFCs, HCFCs, and HFCs, are used in a variety of
manufacturing processes. For each of these synthetic compounds, one molecule is several
thousand times more effective in trapping heat than a single molecule of carbon dioxide.
CFCs, first synthesized in 1928, were widely used in the manufacture of aerosol sprays, blowing
agents for foams and packing materials, as solvents, and as refrigerants. Nontoxic and safe to use
in most applications, CFCs are harmless in the lower atmosphere. However, in the upper
atmosphere, ultraviolet radiation breaks down CFCs, releasing chlorine into the atmosphere. In
the mid-1970s, scientists began observing that higher concentrations of chlorine were destroying
the ozone layer in the upper atmosphere. Ozone protects the Earth from harmful ultraviolet
radiation, which can cause cancer and other damage to plants and animals. Beginning in 1987
with the Montréal Protocol on Substances that Deplete the Ozone Layer, representatives from 47
countries established control measures that limited the consumption of CFCs. By 1992 the
Montréal Protocol was amended to completely ban the manufacture and use of CFCs worldwide,
except in certain developing countries and for use in special medical processes such as asthma
inhalers.
Scientists devised substitutes for CFCs, developing HCFCs and HFCs. Since HCFCs still release
ozone-destroying chlorine in the atmosphere, production of this chemical will be phased out by
the year 2030, providing scientists some time to develop a new generation of safer, effective
chemicals. HFCs, which do not contain chlorine and only remain in the atmosphere for a short
time, are now considered the most effective and safest substitute for CFCs.
F Other Synthetic Chemicals
Experts are concerned about other industrial chemicals that may have heat-trapping abilities. In
2000 scientists observed rising concentrations of a previously unreported compound called
trifluoromethyl sulphur pentafluoride. Although present in extremely low concentrations in the
environment, the gas still poses a significant threat because it traps heat more effectively than all
other known greenhouse gases. The gas is undisputedly a product of industrial processes, but its
exact sources remain uncertain.
IV OTHER FACTORS AFFECTING THE GREENHOUSE EFFECT
Aerosols, also known as particulates, are airborne particles that absorb, scatter, and reflect
radiation back into space. Clouds, windblown dust, and particles that can be traced to erupting
volcanoes are examples of natural aerosols. Human activities, including the burning of fossil
fuels and slash-and-burn farming techniques used to clear forestland, contribute additional
aerosols to the atmosphere. Although aerosols are not considered a heat-trapping greenhouse gas,
they do affect the transfer of heat energy radiated from the Earth to space. The effect of aerosols
on climate change is still debated, but scientists believe that light-colored aerosols cool the
Earth’s surface, while dark aerosols like soot actually warm the atmosphere. The increase in
global temperature in the last century is lower than many scientists predicted when only taking
into account increasing levels of carbon dioxide, methane, nitrous oxide, and fluorinated
compounds. Some scientists believe that aerosol cooling may be the cause of this unexpectedly
reduced warming.
However, scientists do not expect that aerosols will ever play a significant role in offsetting
global warming. As pollutants,
aerosols typically pose a health threat, and the manufacturing or agricultural processes that
produce them are subject to air-pollution control efforts. As a result, scientists do not expect
aerosols to increase as fast as other greenhouse gases in the 21st century.
V UNDERSTANDING THE GREENHOUSE EFFECT
Although concern over the effect of increasing greenhouse gases is a relatively recent
development, scientists have been investigating the greenhouse effect since the early 1800s.
French mathematician and physicist Jean Baptiste Joseph Fourier, while exploring how heat is
conducted through different materials, was the first to compare the atmosphere to a glass vessel
in 1827. Fourier recognized that the air around the planet lets in sunlight, much like a glass roof.
In the 1850s British physicist John Tyndall investigated the transmission of radiant heat through
gases and vapors. Tyndall found that nitrogen and oxygen, the two most common gases in the
atmosphere, had no heat-absorbing properties. He then went on to measure the absorption of
infrared radiation by carbon dioxide and water vapor, publishing his findings in 1863 in a paper
titled “On Radiation Through the Earth’s Atmosphere.”
Swedish chemist Svante August Arrhenius, best known for his Nobel Prize-winning work in
electrochemistry, also advanced understanding of the greenhouse effect. In 1896 he calculated
that doubling the natural concentrations of carbon dioxide in the atmosphere would increase
global temperatures by 4 to 6 Celsius degrees (7 to 11 Fahrenheit degrees), a calculation that is
not too far from today’s estimates using more sophisticated methods. Arrhenius correctly
predicted that when Earth’s temperature warms, water vapor evaporation from the oceans
increases. The higher concentration of water vapor in the atmosphere would then contribute to
the greenhouse effect and global warming.
The predictions about carbon dioxide and its role in global warming set forth by Arrhenius were
virtually ignored for over half a century, until scientists began to detect a disturbing change in
atmospheric levels of carbon dioxide. In 1957 researchers at the Scripps Institution of
Oceanography, based in San Diego, California, began monitoring carbon dioxide levels in the
atmosphere from Hawaii’s remote Mauna Loa Observatory located 3,000 m (11,000 ft) above
sea level. When the study began, carbon dioxide concentrations in the Earth’s atmosphere were
315 molecules of gas per million molecules of air (abbreviated parts per million or ppm). Each
year carbon dioxide concentrations increased—to 323 ppm by 1970 and 335 ppm by 1980. By
1988 atmospheric carbon dioxide had increased to 350 ppm, an 11 percent increase in only 31
years.
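Those concentrations make the growth easy to check: the rise from 315 ppm to 350 ppm works out to about 11 percent, and the implied average annual growth rate over the period follows directly.

# Check the Mauna Loa figures quoted above.
ppm_1957, ppm_1988 = 315.0, 350.0

increase = (ppm_1988 - ppm_1957) / ppm_1957
print(round(increase * 100, 1))  # 11.1 percent over the 31 years

# Implied average compound growth rate over that period:
annual_rate = (ppm_1988 / ppm_1957) ** (1 / 31) - 1
print(round(annual_rate * 100, 2))  # about 0.34 percent per year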
As other researchers confirmed these findings, scientific interest in the accumulation of
greenhouse gases and their effect on the environment slowly began to grow. In 1988 the World
Meteorological Organization and the United Nations Environment Programme established the
Intergovernmental Panel on Climate Change (IPCC). The IPCC was the first international
collaboration of scientists to assess the scientific, technical, and socioeconomic information
related to the risk of human-induced climate change. The IPCC creates periodic assessment
reports on advances in scientific understanding of the causes of climate change, its potential
impacts, and strategies to control greenhouse gases. The IPCC played a critical role in
establishing the United Nations Framework Convention on Climate Change (UNFCCC). The
UNFCCC, which provides an international policy framework for addressing climate change
issues, was adopted by the United Nations General Assembly in 1992.
Today scientists around the world monitor atmospheric greenhouse gas concentrations and create
forecasts about their effects on global temperatures. Air samples from sites spread across the
globe are analyzed in laboratories to determine levels of individual greenhouse gases. Sources of
greenhouse gases, such as automobiles, factories, and power plants, are monitored directly to
determine their emissions. Scientists gather information about climate systems and use this
information to create and test computer models that simulate how climate could change in
response to changing conditions on the Earth and in the atmosphere. These models act as high-
tech crystal balls to project what may happen in the future as greenhouse gas levels rise. Models
can only provide approximations, and predictions based on these models often spark
controversy within the science community. Nevertheless, the basic concept of global warming is
widely accepted by most climate scientists.
VI EFFORTS TO CONTROL GREENHOUSE GASES
Due to overwhelming scientific evidence and growing political interest, global warming is
currently recognized as an important national and international issue. Since 1992 representatives
from over 160 countries have met regularly to discuss how to reduce worldwide greenhouse gas
emissions.
In 1997 representatives met in Kyōto, Japan, and produced an agreement, known as the Kyōto
Protocol, which requires industrialized countries to reduce their emissions by 2012 to an average
of 5 percent below 1990 levels. To help countries meet this agreement cost-effectively,
negotiators developed a system in which nations that have no obligations or that have
successfully met their reduced emissions obligations could profit by selling or trading their extra
emissions quotas to other countries that are struggling to reduce their emissions. In 2004
Russia’s cabinet approved the treaty, paving the way for it to go into effect in 2005. More than
126 countries have ratified the protocol. Australia and the United States are the only
industrialized nations that have failed to support it.
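The quota-trading scheme described in this section can be pictured with a toy ledger; the countries, quotas, and emissions below are hypothetical numbers chosen only to show the mechanism, not figures from the protocol.

# Toy sketch of the quota-trading idea described above. Quotas and
# emissions are in arbitrary units; the countries are hypothetical.
def surplus(quota, emissions):
    """Positive result: spare quota that may be sold; negative: shortfall."""
    return quota - emissions

countries = {"A": (100, 90), "B": (100, 115)}  # (quota, actual emissions)

for name, (quota, emitted) in countries.items():
    s = surplus(quota, emitted)
    status = "can sell" if s > 0 else "must buy"
    print(f"country {name}: {status} {abs(s)} units")
# Country A's 10 spare units can be traded to cover part of B's 15-unit
# shortfall, which is the cost-sharing mechanism the protocol envisages.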
(viii) Greenhouse Effect
I INTRODUCTION
Greenhouse Effect, the capacity of certain gases in the atmosphere to trap heat emitted from the
Earth’s surface, thereby insulating and warming the Earth. Without the thermal blanketing of the
natural greenhouse effect, the Earth’s climate would be about 33 Celsius degrees (about 59
Fahrenheit degrees) cooler—too cold for most living organisms to survive.
The greenhouse effect has warmed the Earth for over 4 billion years. Now scientists are growing
increasingly concerned that human activities may be modifying this natural process, with
potentially dangerous consequences. Since the advent of the Industrial Revolution in the 1700s,
humans have devised many inventions that burn fossil fuels such as coal, oil, and natural gas.
Burning these fossil fuels, as well as other activities such as clearing land for agriculture or urban
settlements, releases some of the same gases that trap heat in the atmosphere, including carbon
dioxide, methane, and nitrous oxide. These atmospheric gases have risen to levels higher than at
any time in the last 420,000 years. As these gases build up in the atmosphere, they trap more heat
near the Earth’s surface, causing Earth’s climate to become warmer than it would naturally.
Scientists call this unnatural heating effect global warming and blame it for an increase in the
Earth’s surface temperature of about 0.6 Celsius degrees (about 1 Fahrenheit degree) over the
last nearly 100 years. Without remedial measures, many scientists fear that global temperatures
will rise 1.4 to 5.8 Celsius degrees (2.5 to 10.4 Fahrenheit degrees) by 2100. These warmer
temperatures could melt parts of polar ice caps and most mountain glaciers, causing a rise in sea
level of up to 1 m (40 in) within a century or two, which would flood coastal regions. Global
warming could also affect weather patterns causing, among other problems, prolonged drought
or increased flooding in some of the world’s leading agricultural regions.
II HOW THE GREENHOUSE EFFECT WORKS
The greenhouse effect results from the interaction between sunlight and the layer of greenhouse
gases in the Earth's atmosphere that extends up to 100 km (60 mi) above Earth's surface. Sunlight
is composed of a range of radiant energies known as the solar spectrum, which includes visible
light, infrared light, gamma rays, X rays, and ultraviolet light. When the Sun’s radiation reaches
the Earth’s atmosphere, some 25 percent of the energy is reflected back into space by clouds and
other atmospheric particles. About 20 percent is absorbed in the atmosphere. For instance, gas
molecules in the uppermost layers of the atmosphere absorb the Sun’s gamma rays and X rays.
The Sun’s ultraviolet radiation is absorbed by the ozone layer, located 19 to 48 km (12 to 30 mi)
above the Earth’s surface.
About 50 percent of the Sun’s energy, largely in the form of visible light, passes through the
atmosphere to reach the Earth’s surface. Soils, plants, and oceans on the Earth’s surface absorb
about 85 percent of this heat energy, while the rest is reflected back into the atmosphere—most
effectively by reflective surfaces such as snow, ice, and sandy deserts. In addition, some of the
Sun’s radiation that is absorbed by the Earth’s surface becomes heat energy in the form of long-
wave infrared radiation, and this energy is released back into the atmosphere.
Certain gases in the atmosphere, including water vapor, carbon dioxide, methane, and nitrous
oxide, absorb this infrared radiant heat, temporarily preventing it from dispersing into space. As
these atmospheric gases warm, they in turn emit infrared radiation in all directions. Some of this
heat returns back to Earth to further warm the surface in what is known as the greenhouse effect,
and some of this heat is eventually released to space. This heat transfer creates equilibrium
between the total amount of heat that reaches the Earth from the Sun and the amount of heat that
the Earth radiates out into space. This equilibrium or energy balance—the exchange of energy
between the Earth’s surface, atmosphere, and space—is important to maintain a climate that can
support a wide variety of life.
The heat-trapping gases in the atmosphere behave like the glass of a greenhouse. They let much
of the Sun’s rays in, but keep most of that heat from directly escaping. Because of this, they are
called greenhouse gases. Without these gases, heat energy absorbed and reflected from the
Earth’s surface would easily radiate back out to space, leaving the planet with an inhospitable
temperature close to –19°C (2°F), instead of the present average surface temperature of 15°C
(59°F).
To appreciate the importance of the greenhouse gases in creating a climate that helps sustain
most forms of life, compare Earth to Mars and Venus. Mars has a thin atmosphere that contains
low concentrations of heat-trapping gases. As a result, Mars has a weak greenhouse effect
resulting in a largely frozen surface that shows no evidence of life. In contrast, Venus has an
atmosphere containing high concentrations of carbon dioxide. This heat-trapping gas prevents
heat radiated from the planet’s surface from escaping into space, resulting in surface
temperatures that average 462°C (864°F)—too hot to support life.
III TYPES OF GREENHOUSE GASES
Earth’s atmosphere is primarily composed of nitrogen (78 percent) and oxygen (21 percent).
These two most common atmospheric gases have chemical structures that restrict absorption of
infrared energy. Only the few greenhouse gases, which make up less than 1 percent of the
atmosphere, offer the Earth any insulation. Greenhouse gases occur naturally or are
manufactured. The most abundant naturally occurring greenhouse gas is water vapor, followed
by carbon dioxide, methane, and nitrous oxide. Human-made chemicals that act as greenhouse
gases include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and
hydrofluorocarbons (HFCs).
Since the 1700s, human activities have substantially increased the levels of greenhouse gases in
the atmosphere. Scientists are concerned that expected increases in the concentrations of
greenhouse gases will powerfully enhance the atmosphere’s capacity to retain infrared radiation,
leading to an artificial warming of the Earth’s surface.
A Water Vapor
Water vapor is the most common greenhouse gas in the atmosphere, accounting for about 60 to
70 percent of the natural greenhouse effect. Humans do not have a significant direct impact on
water vapor levels in the atmosphere. However, as human activities increase the concentration of
other greenhouse gases in the atmosphere (producing warmer temperatures on Earth), the
evaporation of oceans, lakes, and rivers, as well as water evaporation from plants, increase and
raise the amount of water vapor in the atmosphere.
B Carbon Dioxide
Carbon dioxide constantly circulates in the environment through a variety of natural processes
known as the carbon cycle. Volcanic eruptions and the decay of plant and animal matter both
release carbon dioxide into the atmosphere. In respiration, animals break down food to release
the energy required to build and maintain cellular activity. A byproduct of respiration is the
formation of carbon dioxide, which is exhaled from animals into the environment. Oceans, lakes,
and rivers absorb carbon dioxide from the atmosphere. Through photosynthesis, plants collect
carbon dioxide and use it to make their own food, in the process incorporating carbon into new
plant tissue and releasing oxygen to the environment as a byproduct.
In order to provide energy to heat buildings, power automobiles, and fuel electricity-producing
power plants, humans burn objects that contain carbon, such as the fossil fuels oil, coal, and
natural gas; wood or wood products; and some solid wastes. When these products are burned,
they release carbon dioxide into the air. In addition, humans cut down huge tracts of trees for
lumber or to clear land for farming or building. This process, known as deforestation, can both
release the carbon stored in trees and significantly reduce the number of trees available to absorb
carbon dioxide.
As a result of these human activities, carbon dioxide in the atmosphere is accumulating faster
than the Earth’s natural processes can absorb the gas. By analyzing air bubbles trapped in glacier
ice that is many centuries old, scientists have determined that carbon dioxide levels in the
atmosphere have risen by 31 percent since 1750. Because added carbon dioxide can remain in the atmosphere for centuries, scientists expect these concentrations to double or triple in the next century if current trends continue.
C Methane
Many natural processes produce methane, also known as natural gas. The decomposition of carbon-containing substances in oxygen-free environments, such as wastes in landfills, releases methane. Ruminating animals such as cattle and sheep belch methane into the air as a byproduct
of digestion. Microorganisms that live in damp soils, such as rice fields, produce methane when
they break down organic matter. Methane is also emitted during coal mining and the production
and transport of other fossil fuels.
Methane has more than doubled in the atmosphere since 1750, and could double again in the
next century. Atmospheric concentrations of methane are far lower than those of carbon dioxide, and methane stays in the atmosphere for only a decade or so. But scientists consider methane an extremely effective heat-trapping gas—one molecule of methane is 20 times more efficient at trapping infrared radiation from the Earth’s surface than a molecule of carbon dioxide.
D Nitrous Oxide
Nitrous oxide is released by the burning of fossil fuels, and automobile exhaust is a large source
of this gas. In addition, many farmers use nitrogen-containing fertilizers to provide nutrients to
their crops. When these fertilizers break down in the soil, they emit nitrous oxide into the air.
Plowing fields also releases nitrous oxide.
Since 1750 nitrous oxide has risen by 17 percent in the atmosphere. Although this increase is
smaller than for the other greenhouse gases, nitrous oxide traps heat about 300 times more
effectively than carbon dioxide and can stay in the atmosphere for a century.
E Fluorinated Compounds
Some of the most potent greenhouse gases are produced solely by human activities.
Fluorinated compounds, including CFCs, HCFCs, and HFCs, are used in a variety of
manufacturing processes. For each of these synthetic compounds, one molecule is several
thousand times more effective in trapping heat than a single molecule of carbon dioxide.
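The per-molecule multipliers quoted in this and the preceding sections tell only half the story; a gas's total contribution also depends on how much of it the atmosphere holds. The Python sketch below weighs the two together. The concentrations are round illustrative values supplied for this example (the article does not give them), so the output indicates only rough relative standings, not a genuine radiative-forcing calculation.

# (gas, illustrative concentration in ppm, per-molecule multiplier from the text)
gases = [
    ("carbon dioxide", 370.0,    1),  # the reference gas
    ("methane",          1.8,   20),  # "20 times more efficient"
    ("nitrous oxide",    0.3,  300),  # "about 300 times"
    ("a CFC",         0.0005, 5000),  # "several thousand times"
]

for name, ppm, multiplier in gases:
    weighted = ppm * multiplier  # concentration scaled by per-molecule potency
    print(f"{name:15s} {ppm:9.4f} ppm  x{multiplier:5d}  ->  {weighted:8.1f}")

Weighted this way, carbon dioxide still dominates because of its sheer abundance, which is why it receives the most attention despite being the weakest of these gases molecule for molecule.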
CFCs, first synthesized in 1928, were widely used in aerosol sprays, as blowing agents for foams and packing materials, as solvents, and as refrigerants. Nontoxic and safe to use
in most applications, CFCs are harmless in the lower atmosphere. However, in the upper
atmosphere, ultraviolet radiation breaks down CFCs, releasing chlorine into the atmosphere. In
the mid-1970s, scientists began observing that higher concentrations of chlorine were destroying
the ozone layer in the upper atmosphere. Ozone protects the Earth from harmful ultraviolet
radiation, which can cause cancer and other damage to plants and animals. Beginning in 1987
with the Montréal Protocol on Substances that Deplete the Ozone Layer, representatives from 47
countries established control measures that limited the consumption of CFCs. By 1992 the
Montréal Protocol was amended to completely ban the manufacture and use of CFCs worldwide,
except in certain developing countries and for certain medical applications, such as asthma inhalers.
Scientists devised substitutes for CFCs, developing HCFCs and HFCs. Because HCFCs still release ozone-destroying chlorine into the atmosphere, production of these chemicals will be phased out by the year 2030, giving scientists time to develop a new generation of safer, effective
chemicals. HFCs, which do not contain chlorine and only remain in the atmosphere for a short
time, are now considered the most effective and safest substitute for CFCs.
F Other Synthetic Chemicals
Experts are concerned about other industrial chemicals that may have heat-trapping abilities. In
2000 scientists observed rising concentrations of a previously unreported compound called
trifluoromethyl sulphur pentafluoride. Although present in extremely low concentrations in the environment, the gas still poses a significant threat because, molecule for molecule, it traps heat more effectively than any other known greenhouse gas. Although the gas is almost certainly a byproduct of industrial processes, its exact sources remain uncertain.
IV OTHER FACTORS AFFECTING THE GREENHOUSE EFFECT
Aerosols, also known as particulates, are airborne particles that absorb radiation or scatter and reflect it back into space. Clouds, windblown dust, and particles that can be traced to erupting
volcanoes are examples of natural aerosols. Human activities, including the burning of fossil
fuels and slash-and-burn farming techniques used to clear forestland, contribute additional
aerosols to the atmosphere. Although aerosols are not considered a heat-trapping greenhouse gas,
they do affect the transfer of heat energy radiated from the Earth to space. The effect of aerosols
on climate change is still debated, but scientists believe that light-colored aerosols cool the
Earth’s surface, while dark aerosols like soot actually warm the atmosphere. The increase in
global temperature in the last century is lower than many scientists predicted when only taking
into account increasing levels of carbon dioxide, methane, nitrous oxide, and fluorinated
compounds. Some scientists believe that aerosol cooling may be the cause of this unexpectedly
reduced warming.
However, scientists do not expect that aerosols will ever play a significant role in offsetting
global warming. As pollutants, aerosols typically pose a health threat, and the manufacturing or
agricultural processes that produce them are subject to air-pollution control efforts. As a result,
scientists do not expect aerosols to increase as fast as other greenhouse gases in the 21st century.
V UNDERSTANDING THE GREENHOUSE EFFECT
Although concern over the effect of increasing greenhouse gases is a relatively recent
development, scientists have been investigating the greenhouse effect since the early 1800s.
French mathematician and physicist Jean Baptiste Joseph Fourier, while exploring how heat is
conducted through different materials, was the first to compare the atmosphere to a glass vessel
in 1827. Fourier recognized that the air around the planet lets in sunlight, much like a glass roof.
In the 1850s British physicist John Tyndall investigated the transmission of radiant heat through
gases and vapors. Tyndall found that nitrogen and oxygen, the two most common gases in the
atmosphere, had no heat-absorbing properties. He then went on to measure the absorption of
infrared radiation by carbon dioxide and water vapor, publishing his findings in 1863 in a paper
titled “On Radiation Through the Earth’s Atmosphere.”
Swedish chemist Svante August Arrhenius, best known for his Nobel Prize-winning work in
electrochemistry, also advanced understanding of the greenhouse effect. In 1896 he calculated
that doubling the natural concentrations of carbon dioxide in the atmosphere would increase
global temperatures by 4 to 6 Celsius degrees (7 to 11 Fahrenheit degrees), a calculation that is
not too far from today’s estimates using more sophisticated methods. Arrhenius correctly
predicted that when Earth’s temperature warms, water vapor evaporation from the oceans
increases. The higher concentration of water vapor in the atmosphere would then contribute to
the greenhouse effect and global warming.
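A common modern shorthand for this result treats warming as roughly proportional to the logarithm of the carbon dioxide ratio, so that every doubling adds about the same temperature increment. The Python sketch below uses that convention; it is a simplification rather than Arrhenius's original derivation, and the sensitivity value simply takes the low end of his 4 to 6 Celsius-degree range.

import math

def warming(c_ratio, per_doubling=4.0):
    # per_doubling: degC added by each doubling of carbon dioxide
    # (low end of the range quoted above; an assumption, not a derived value)
    return per_doubling * math.log2(c_ratio)

print(f"{warming(2.0):.1f} degC for a doubling")          # 4.0 by construction
print(f"{warming(1.31):.1f} degC for a 31 percent rise")  # about 1.6 degC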
The predictions about carbon dioxide and its role in global warming set forth by Arrhenius were
virtually ignored for over half a century, until scientists began to detect a disturbing change in
atmospheric levels of carbon dioxide. In 1957 researchers at the Scripps Institution of
Oceanography, based in San Diego, California, began monitoring carbon dioxide levels in the
atmosphere from Hawaii’s remote Mauna Loa Observatory, located about 3,400 m (11,000 ft) above
sea level. When the study began, carbon dioxide concentrations in the Earth’s atmosphere were
315 molecules of gas per million molecules of air (abbreviated parts per million or ppm). Each
year carbon dioxide concentrations increased—to 323 ppm by 1970 and 335 ppm by 1980. By
1988 atmospheric carbon dioxide had increased to 350 ppm, an 11 percent increase in only 31
years.
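The arithmetic behind these figures is easy to check, and it also yields the average compound growth rate over the period, as the short calculation below shows.

readings = {1957: 315, 1970: 323, 1980: 335, 1988: 350}  # ppm, from the text

rise = readings[1988] - readings[1957]
percent = 100 * rise / readings[1957]
years = 1988 - 1957
annual = (readings[1988] / readings[1957]) ** (1 / years) - 1  # compound rate

print(f"{rise} ppm rise = {percent:.0f} percent in {years} years")  # 11 percent
print(f"average growth: {annual * 100:.2f} percent per year")       # about 0.34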
As other researchers confirmed these findings, scientific interest in the accumulation of
greenhouse gases and their effect on the environment slowly began to grow. In 1988 the World
Meteorological Organization and the United Nations Environment Programme established the
Intergovernmental Panel on Climate Change (IPCC). The IPCC was the first international
collaboration of scientists to assess the scientific, technical, and socioeconomic information
related to the risk of human-induced climate change. The IPCC creates periodic assessment
reports on advances in scientific understanding of the causes of climate change, its potential
impacts, and strategies to control greenhouse gases. The IPCC played a critical role in
establishing the United Nations Framework Convention on Climate Change (UNFCCC). The
UNFCCC, which provides an international policy framework for addressing climate change
issues, was adopted by the United Nations General Assembly in 1992.
Today scientists around the world monitor atmospheric greenhouse gas concentrations and create
forecasts about their effects on global temperatures. Air samples from sites spread across the
globe are analyzed in laboratories to determine levels of individual greenhouse gases. Sources of
greenhouse gases, such as automobiles, factories, and power plants, are monitored directly to
determine their emissions. Scientists gather information about climate systems and use this
information to create and test computer models that simulate how climate could change in
response to changing conditions on the Earth and in the atmosphere. These models act as high-
tech crystal balls to project what may happen in the future as greenhouse gas levels rise. Models can only provide approximations, and predictions based on them sometimes spark controversy within the scientific community. Nevertheless, the basic concept of global warming is accepted by most climate scientists.
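To give a flavor of what such simulations do, the Python sketch below steps a global-mean temperature anomaly forward in time under a steadily rising carbon dioxide forcing. It is a deliberately crude toy, not any model actually used by climate scientists: the logarithmic forcing formula with its 5.35 W/m² coefficient is a widely used approximation from the scientific literature, while the climate sensitivity, response time, and growth rate are round numbers assumed only for demonstration.

import math

SENSITIVITY = 0.8    # degC of eventual warming per W/m^2 of forcing (assumed)
RESPONSE_TIME = 8.0  # years the ocean-dominated system needs to catch up (assumed)

def co2_forcing(year, c0=315.0, growth=0.004):
    # Carbon dioxide assumed to grow 0.4 percent per year from its 1957 level;
    # the 5.35 * ln(...) form is a standard approximation, not exact physics.
    c = c0 * (1 + growth) ** (year - 1957)
    return 5.35 * math.log(c / c0)  # W/m^2

anomaly = 0.0
for year in range(1957, 2101):
    equilibrium = SENSITIVITY * co2_forcing(year)       # warming the forcing implies
    anomaly += (equilibrium - anomaly) / RESPONSE_TIME  # relax toward it each year
print(f"toy-model warming by 2100: {anomaly:.1f} degC")

Real climate models differ from this toy in almost every respect, resolving the atmosphere and oceans in three dimensions, but the underlying idea of integrating forcing forward through time is the same.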
VI EFFORTS TO CONTROL GREENHOUSE GASES
Due to overwhelming scientific evidence and growing political interest, global warming is
currently recognized as an important national and international issue. Since 1992 representatives
from over 160 countries have met regularly to discuss how to reduce worldwide greenhouse gas
emissions.
In 1997 representatives met in Kyōto, Japan, and produced an agreement, known as the Kyōto
Protocol, which requires industrialized countries to reduce their emissions by 2012 to an average
of 5 percent below 1990 levels. To help countries meet this agreement cost-effectively,
negotiators developed a system in which nations that have no obligations or that have
successfully met their reduced emissions obligations could profit by selling or trading their extra
emissions quotas to other countries that are struggling to reduce their emissions. In 2004
Russia’s cabinet approved the treaty, paving the way for it to go into effect in 2005. More than
126 countries have ratified the protocol. Australia and the United States are the only
industrialized nations that have failed to support it.
(ix) Pasteurization
Pasteurization, process of heating a liquid, particularly milk, to a temperature between 55° and
70° C (131° and 158° F), to destroy harmful bacteria without materially changing the
composition, flavor, or nutritive value of the liquid. The process is named after the French
chemist Louis Pasteur, who devised it in 1865 to inhibit fermentation of wine and milk. Milk is
pasteurized by heating it at a temperature of 63° C (145° F) for 30 minutes, rapidly cooling it, and
then storing it at a temperature below 10° C (50° F). Beer and wine are pasteurized by being
heated at about 60° C (140° F) for about 20 minutes; a newer method involves heating at 70° C
(158° F) for about 30 seconds and filling the container under sterile conditions.
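For reference, the time-and-temperature combinations described above can be collected and their unit conversions checked, as in the brief Python sketch below (the labels are invented for this illustration; actual food-safety regulations specify precise legal minimums).

# Combinations of temperature and holding time mentioned in the text.
processes = {
    "milk, holding method":        (63, 30 * 60),  # degC, seconds
    "beer and wine, older method": (60, 20 * 60),
    "beer and wine, newer method": (70, 30),
}

for name, (celsius, seconds) in processes.items():
    fahrenheit = celsius * 9 / 5 + 32  # convert to the Fahrenheit scale
    print(f"{name}: {celsius} degC ({fahrenheit:.0f} degF) for {seconds} seconds")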
(x) Immunization
I INTRODUCTION
Immunization, also called vaccination or inoculation, a method of stimulating resistance in the
human body to specific diseases using microorganisms—bacteria or viruses—that have been
modified or killed. These treated microorganisms do not cause the disease, but rather trigger the
body's immune system to build a defense mechanism that continuously guards against the
disease. If a person immunized against a particular disease later comes into contact with the
disease-causing agent, the immune system is immediately able to respond defensively.
Immunization has dramatically reduced the incidence of a number of deadly diseases. For
example, a worldwide vaccination program resulted in the global eradication of smallpox in
1980, and in most developed countries immunization has essentially eliminated diphtheria,
poliomyelitis, and neonatal tetanus. The number of cases of Haemophilus influenzae type b
meningitis in the United States has dropped 95 percent among infants and children since 1988,
when the vaccine for that disease was first introduced. In the United States, more than 90 percent
of children receive all the recommended vaccinations by their second birthday. About 85 percent
of Canadian children are immunized by age two.
II TYPES OF IMMUNIZATION
Scientists have developed two approaches to immunization: active immunization, which
provides long-lasting immunity, and passive immunization, which gives temporary immunity. In
active immunization, all or part of a disease-causing microorganism or a modified product of that
microorganism is injected into the body to make the immune system respond defensively.
Passive immunity is conferred by injecting antibodies taken from the blood of an actively immunized human being or animal.
A Active Immunization
Vaccines that provide active immunization are made in a variety of ways, depending on the type
of disease and the organism that causes it. The active components of a vaccine are
antigens, substances found in the disease-causing organism that the immune system recognizes
as foreign. In response to the antigen, the immune system develops either antibodies or white
blood cells called T lymphocytes, which are special attacker cells. Immunization mimics real
infection but presents little or no risk to the recipient. Some immunizing agents provide complete
protection against a disease for life. Other agents provide partial protection, meaning that the
immunized person can contract the disease, but in a less severe form. Vaccines that contain live organisms are usually considered risky for people who have a damaged immune system, such as those infected with the
virus that causes acquired immunodeficiency syndrome (AIDS) or those receiving chemotherapy
for cancer or organ transplantation. Without a healthy defense system to fight infection, these
people may develop the disease that the vaccine is trying to prevent. Some immunizing agents
require repeated inoculations—or booster shots—at specific intervals. Tetanus shots, for
example, are recommended every ten years throughout life.
In order to make a vaccine that confers active immunization, scientists use an organism or part of
one that has been modified so that it has a low risk of causing illness but still triggers the body’s
immune defenses against disease. One type of vaccine contains live organisms that have been
attenuated—that is, their virulence has been weakened. This procedure is used to protect against
yellow fever, measles, smallpox, and many other viral diseases. Immunization can also occur
when a person receives an injection of killed or inactivated organisms that are relatively harmless
but that still contain antigens. This type of vaccination is used to protect against diseases such as poliomyelitis, typhoid fever, and diphtheria.
Some vaccines use only parts of an infectious organism that contain antigens, such as a protein from the cell wall or a flagellum. Known as acellular vaccines, they produce the desired immunity with a
lower risk of producing potentially harmful immune reactions that may result from exposure to
other parts of the organism. Acellular vaccines include the Haemophilus influenzae type b
vaccine for meningitis and newer versions of the whooping cough vaccine. Scientists use genetic
engineering techniques to refine this approach further by isolating a gene or genes within an
infectious organism that code for a particular antigen. The subunit vaccines produced by this
method cannot cause disease and are safe to use in people who have an impaired immune
system. Subunit vaccines for hepatitis B and pneumococcus infection, which causes pneumonia,
became available in the late 1990s.
Active immunization can also be carried out using bacterial toxins that have been treated with chemicals so that they are no longer toxic (such treated toxins are known as toxoids), even though their antigens remain intact. This
procedure uses the toxins produced by genetically engineered bacteria rather than the organism
itself and is used in vaccinating against tetanus, botulism, and similar toxic diseases.
B Passive Immunization
Passive immunization is performed without injecting any antigen. In this method, vaccines
contain antibodies obtained from the blood of an actively immunized human being or animal.
The antibodies last for two to three weeks, and during that time the person is protected against
the disease. Although short-lived, passive immunization provides immediate protection, unlike
active immunization, which can take weeks to develop. Consequently, passive immunization can
be lifesaving when a person has been infected with a deadly organism.
Occasionally there are complications associated with passive immunization. Diseases such as
botulism and rabies once posed a particular problem. Immune globulin (antibody-containing
plasma) for these diseases was once derived from the blood serum of horses. Although this
animal material was specially treated before administration to humans, serious allergic reactions
were common. Today, human-derived immune globulin is more widely available and the risk of
side effects is reduced.
III IMMUNIZATION RECOMMENDATIONS
More than 50 vaccines for preventable diseases are licensed in the United States. The American
Academy of Pediatrics and the U.S. Public Health Service recommend a series of immunizations
beginning at birth. The initial series for children is complete by the time they reach the age of
two, but booster vaccines are required for certain diseases, such as diphtheria and tetanus, in
order to maintain adequate protection. When new vaccines are introduced, it is uncertain how
long full protection will last. Recently, for example, it was discovered that a single injection of
measles vaccine, first licensed in 1963 and administered to children at the age of 15 months, did
not confer protection through adolescence and young adulthood. As a result, in the 1980s a series
of measles epidemics occurred on college campuses throughout the United States among
students who had been vaccinated as infants. To forestall future epidemics, health authorities
now recommend that a booster dose of the measles, mumps, and rubella (also known as German
measles) vaccine be administered at the time a child first enters school.
Not only children but also adults can benefit from immunization. Many adults in the United
States are not sufficiently protected against tetanus, diphtheria, measles, mumps, and German
measles. Health authorities recommend that most adults 65 years of age and older, and those
with respiratory illnesses, be immunized against influenza (yearly) and pneumococcus (once).
IV HISTORY OF IMMUNIZATION
The use of immunization to prevent disease predated the knowledge of both infection and
immunology. In China in approximately 600 BC, smallpox material was inoculated through the
nostrils. Inoculation of healthy people with a tiny amount of material from smallpox sores was
first attempted in England in 1718 and later in America. Those who survived the inoculation
became immune to smallpox. American statesman Thomas Jefferson traveled from his home in
Virginia to Philadelphia, Pennsylvania, to undergo this risky procedure.
A significant breakthrough came in 1796 when British physician Edward Jenner discovered that
he could immunize patients against smallpox by inoculating them with material from cowpox
sores. Cowpox is a far milder disease that, unlike smallpox, carries little risk of death or
disfigurement. Jenner inserted matter from cowpox sores into cuts he made on the arm of a
healthy eight-year-old boy. The boy caught cowpox. However, when Jenner exposed the boy to
smallpox eight weeks later, the child did not contract the disease. The vaccination with cowpox
had made him immune to the smallpox virus. Today we know that the cowpox virus antigens are
so similar to those of the smallpox virus that they trigger the body's defenses against both
diseases.
In 1885 Louis Pasteur created the first successful vaccine against rabies for a young boy who had
been bitten 14 times by a rabid dog. Over the course of ten days, Pasteur injected progressively
more virulent rabies organisms into the boy, causing the boy to develop immunity in time to
avert death from this disease.
Another major milestone in the use of vaccination to prevent disease occurred with the efforts of
two American physician-researchers. In 1954 Jonas Salk introduced an injectable vaccine
containing an inactivated virus to counter the epidemic of poliomyelitis. Subsequently, Albert
Sabin made great strides in the fight against this paralyzing disease by developing an oral
vaccine containing a live weakened virus. Since the introduction of the polio vaccine, the disease
has been nearly eliminated in many parts of the world.
As more vaccines are developed, a new generation of combined vaccines is becoming available that will allow physicians to administer a single shot for multiple diseases. Work is also under
way to develop additional orally administered vaccines and vaccines for sexually transmitted
infections. Possible future vaccines may include, for example, one that would temporarily
prevent pregnancy. Such a vaccine would still operate by stimulating the immune system to
recognize and attack antigens, but in this case the antigens would be those of the hormones that
are necessary for pregnancy.