
Sustainability and Green Technology

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ An e-book on strategies for sustainability authored by the Class of 2012 Massachusetts Academy of Mathematics and Science ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Contents

Chapter 1: Solar Energy
Paul Harrington, Anna Brill, Juliana Kew, and Derrick Harney

Chapter 2: Wind Energy
Cai Debenham and Rebecca Church

Chapter 3: Ocean Energy
Ashleigh Panagiotou, Curtis Belmonte, and Eva Jaskoviak

Chapter 4: Energy Conservation
Andrew Meloche, Mary Reynolds, and Amy Rockwood

Chapter 5: Electric Cars and Automotive Design
Ryan Norby, Daniel Kasza, and Linda Xu

Chapter 6: Air Quality and Pollution
Megan Steele, Richer Leung, and Jessica Latta

Chapter 7: Bioremediation
Jeffrey Copeland, Christian DiMare, and Nicholas Donnelly

Chapter 8: Recycling and Waste Management
Christopher Starbard, Megan McHugh, and Stefan Cepko

Chapter 9: Biofuels
Alexander Witt, Kris Skende, and Andrei Cimpan

Chapter 10: Water Quality and Pollution
Conor Pappas, Luke Derry, and Ram Nambi

Chapter 11: Fuel Cells
Gregory Granito, Arjun Grama, Vincent Capone, and Derek Bain

Chapter 12: Agriculture
Adan Rivas, George Slavin, and Ryan Kutzko

Chapter 13: Endangered Plants and Animals
Haylee Caravalho, Hunter Lindquist, and Ryan Nutile

Chapter 14: Invasive Plants and Animals
Benjamin Wright, Joseph Gencarelli, and Zachary King

Chapter 15: Restoration Ecology
Kathleen Ross, Grace DiFrancesco, and Susmitha Saripalli

Chapter 1 Solar Energy


Paul Harrington, Anna Brill, Juliana Kew, and Derrick Harney

Introduction

Solar power is quickly becoming a major contender in the race for renewable energy. About 89 petawatts of power are available at any given point in time, roughly 6,000 times the power consumed by all human civilization. Every 14.5 seconds, the earth absorbs as much solar energy as humanity consumes in 24 hours (Naam, 2011). Developing efficient methods to convert the immense potential of solar energy into usable power is vital to the future of renewable resources.

Edmond Becquerel first harnessed solar energy in 1839. He discovered the photovoltaic effect while experimenting with an electrode: a voltage was produced whenever light fell on the electrode. These original solar cells were incredibly inefficient, converting only about 1 percent of light into electrical energy (Solar Cell, 2011). Another method of harvesting the sun's energy is to use sunlight to generate heat rather than electricity. Passive solar energy is used to heat and cool buildings: the site, structure, and materials of a building are all chosen to maximize the heating and lighting effect of the sunlight falling on it, thereby lowering, or even eliminating, the fuel requirement. These technologies have existed for a very long time, but only recently has interest, and with it progress in the field, picked up.

In ancient civilizations, the sun was revered as a god because the people knew and respected its immense power. The first practical use of solar power was the drying of food in order to preserve it. As time progressed, solar power spread into different trades. Solar furnaces were built that used sunlight to heat metals so that they could be forged. One of the first large-scale solar furnaces was built by the French chemist Lavoisier, who used compound lenses to reach temperatures of approximately 1,750 degrees Celsius. Once steam engines were invented, different versions were developed that used solar power to turn water into steam.
Printing presses and other devices were made that utilized this solar steam power system, but they were deemed impractical because of their high costs. Eventually, engineers improved upon solar steam power, and an American inventor, John Ericsson, built the first working engine powered solely by solar energy. Over the next few decades, inventors such as Frank Shuman and Father Himalaya improved solar power technology. Eventually, engineers found the crown jewel of solar power: photovoltaics. In 1839, Becquerel discovered the photovoltaic effect; the same effect was later observed in selenium and, eventually, in the silicon cells that are used today (Kalogirou, 2009). Knowledge of the photoelectric effect has advanced at an increasing rate. The French physicist Edmond Becquerel first noticed this effect in 1839. Later, in 1905, the German physicist Albert Einstein explained the nature of light and the phenomenon on which the photoelectric effect is based, winning a Nobel Prize in physics. The first functional

module was built by Bell Laboratories in 1954 and billed as a solar battery. After space programs began using the technology, its cost decreased and its effectiveness increased (Knier, 2002).

How the Sun Produces Energy

The vast amount of energy that the sun continually radiates comes from nuclear fusion, the process of two atoms combining to form a single atom. In solar nuclear fusion, hydrogen atoms are smashed together to form helium atoms. When the atoms collide and combine their nuclei, vast amounts of energy are released. For these reactions to occur, intense temperatures and pressures must be reached, and a high density of hydrogen atoms is needed. The sun fulfills all of these conditions simply because of its sheer size: the gravity exerted on its particles produces the enormous internal pressure and heat the reactions require, and the same gravity forces the atoms close together, producing the necessary density. Energy from solar fusion is generated inside the core of the sun and is then carried to the surface by convection cycles. Once it reaches the surface, the energy radiates outward as heat and light. After eight minutes of traveling through space at the speed of light, this energy reaches the Earth, where it can be harnessed by many different solar systems (Layton & Freudenrich, 2000). Solar power can be harnessed directly using photovoltaic cells, but it also produces other forms of energy. Wind is generated by uneven heat distribution in the atmosphere, which is caused by the varying angle of the sun's rays. Plants rely on the sun to produce their food, on which animals in turn rely to survive. Fossil fuels are formed from the decayed bodies of plants and animals; therefore, energy gained from fossil fuels is also derived from the sun (Sen, 2008).
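The mass-energy bookkeeping behind solar fusion can be checked directly with E = mc². The sketch below uses standard CODATA particle masses and the commonly cited 1,370 W/m² solar constant; none of these specific numeric constants come from the text itself.

```python
import math

# Energy released when four protons fuse into one helium-4 nucleus,
# estimated from the mass defect via E = mc^2 (CODATA masses, in kg).
M_PROTON = 1.67262192e-27
M_HELIUM4 = 6.64465786e-27    # helium-4 nucleus (alpha particle)
M_ELECTRON = 9.1093837e-31    # positrons have the same mass
C = 2.99792458e8              # speed of light, m/s
J_PER_MEV = 1.602176634e-13

# 4 p -> He-4 + 2 e+ + 2 neutrinos (neutrino mass is negligible)
mass_defect = 4 * M_PROTON - (M_HELIUM4 + 2 * M_ELECTRON)
energy_mev = mass_defect * C**2 / J_PER_MEV
print(f"energy per fusion: {energy_mev:.1f} MeV")   # about 24.7 MeV

# Total power the Earth intercepts from the 1,370 W/m^2 solar constant:
EARTH_RADIUS = 6.371e6  # meters
intercepted = 1370 * math.pi * EARTH_RADIUS ** 2
print(f"power intercepted by Earth: {intercepted:.2e} W")  # about 1.75e17 W
```

The 24.7 MeV figure rises to roughly 26.7 MeV once the two positrons annihilate with electrons, which is the number usually quoted for the proton-proton chain.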
Aside from the small amount of energy from the Earth's core, nearly every kind of energy that exists on Earth can be traced back to the sun. The 1,370 watts of power per square meter that the sun delivers to the Earth is called the solar constant because it is a nearly static value. It arrives as electromagnetic radiation ranging from ultraviolet down to infrared. All of this radiation, ultraviolet, visible, and infrared alike, comes from nuclear fusion occurring within the sun. Four hydrogen protons are bound together by the immense temperature and pressure at the core of the sun, becoming a single helium alpha particle and releasing two positrons and two neutrinos (Nawrocka, 2007). The positrons have the same mass as electrons and an equal but opposite charge; they annihilate almost instantly with two electrons. The neutrinos, in contrast, are nearly massless, carry no charge, and leave the sun at close to the speed of light. They can, with difficulty, be detected and used to make inferences about the state of the sun's core. The difference between the mass of the four hydrogen protons and the mass of the helium is accounted for by Einstein's relation E = mc²; it is released as energy in the photons and in the motion of the neutrinos (Caughlan & Hartmann, 2008). From this reaction comes all the heat and light of the sun.

Photovoltaic cells are small cells made largely of silicon, which are arranged into a grid pattern to generate electricity when exposed to direct sunlight. They operate by separating positive charges from negative charges to generate a current (Stillmann, 2008). When photons, the packets of energy emitted by the sun, strike the electrons within the photovoltaic cell, the

electrons are energized and achieve a higher level of potential. At the higher potential, the electrons are able to break free of the nuclei with which they are associated. They then traverse the material as free electrons until they reach a gate called a diode. The diode allows electrons to pass in only one direction, creating a buildup of negative charge on one side. Conductive wire connects the positive and negative terminals of the cell and provides a path along which the congregated electrons flow. This flow of electrons is the current. The formation of the diodes in a photovoltaic cell requires a more intricate explanation. It begins with the material that forms the cell, in most cases a silicon crystal. Silicon is desirable first because it contains four valence electrons and second because it is comparatively cheap. The four valence electrons of a silicon atom can form four bonds with other silicon atoms to construct a neat, static crystal. None of these electrons is free-moving, which makes for a very stable structure but an inefficient solar cell, because there is nothing with which the photons can interact. To correct this fault, the silicon is doped with other elements. Phosphorus, arsenic, and boron are three elements that can be mixed with silicon to change the properties of the crystal. Phosphorus and arsenic have five valence electrons compared to silicon's four; wherever they join the lattice, there is an extra electron held loosely in place. Boron, in contrast, has only three valence electrons. Where it bonds with four silicon atoms, there is an imbalance and a tendency for the atom to attract stray electrons. When phosphorus or arsenic is added to silicon, the result is N-type silicon (N for negative, referencing the abundance of free electrons); when boron is added, P-type silicon is the result (P for positive). It is important to note that both types of silicon are neutrally charged; only the atomic bonds are imbalanced.
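Whether a photon can energize an electron at all depends on silicon's band gap, about 1.1 eV for the crystalline form. A short sketch (the function names are ours, not from any library):

```python
# Can a photon of a given wavelength promote an electron across
# silicon's band gap? Photon energy: E = h * c / wavelength.
H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
EV = 1.602176634e-19     # joules per electron-volt
SI_BAND_GAP_EV = 1.1     # approximate band gap of crystalline silicon

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

def can_free_electron(wavelength_nm: float) -> bool:
    return photon_energy_ev(wavelength_nm) >= SI_BAND_GAP_EV

print(can_free_electron(500))    # green visible light -> True
print(can_free_electron(1500))   # far infrared -> False
```

Photons below the gap pass through the cell or end up as heat, which is one reason a single-junction silicon cell cannot approach 100% efficiency.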
In a photovoltaic cell, layers of N-type and P-type silicon alternate throughout the structure. When the two are in contact, electrons from the N-layers flood into the P-layers to fill the gaps left in the covalent bonds. Many cross at the junction of two layers, and a negative charge accumulates in the P-layers. An electric field forms there, between the positive and negative regions, providing the voltage necessary for power. This junction is also a diode: when photons liberate electrons from the silicon crystal, the electrons are able to flow from the P-layers into the N-layers but not vice versa. Negative charge builds in each N-layer, and conductive diodes between the layers allow the charges to merge. These charges are then connected to the mirror image on the positive side. The electrons flow from the negative side to the positive side to produce current (Neufville, 2008). Many photovoltaic cells can be connected together, organized into strips and sheets, to harvest more usable amounts of energy; exposing a larger area produces more electricity (Starr, Fravel, Sekhar, Glasscock, & Reddoch, 2008). To increase the efficiency of the cells, a non-reflective layer is often added in front of the silicon units (Solar Cell, 2011). Silicon itself is highly reflective and would rebuff a large fraction of the light hitting it before the photons could strike any electrons, but a non-reflective material traps the photons. Above this layer is a protective shield such as glass to ward off the harmful effects of environment and weather. Photovoltaic cells rely on direct sunlight. Unfortunately, solar panels cannot be constantly angled toward the sun, which means that unaided solar cells are not operating at peak

power throughout the day. Currently, there are two different solutions to this problem: solar trackers and solar concentrators (Knier, 2002). Solar trackers, or heliostats, mimic the ability of some plants to follow the sun. They can increase the overall efficiency of photovoltaic cells by ensuring that the cells always directly face the incoming sunlight. Unfortunately, trackers require motive and processing power, which detracts from the benefits gained (Solar Concentrators, n.d.). Solar concentrators direct the sunlight from a large area onto a smaller one. They increase the efficiency of a photovoltaic cell by using a Fresnel lens to concentrate sunlight on part of the cell. This intensifies the photoelectric effect and reduces the number of panels required. In addition, solar concentrators are completely passive: the system does not move, so it does not require any data processing (Solar Concentrators, n.d.). Photovoltaic solar panels have several issues that need to be addressed if they are ever to be considered as replacements for oil. Current models of photovoltaic grids average approximately 15% efficiency, which has not progressed far from the 11% efficiency achieved when silicon photovoltaic cells were introduced in 1958. Although laboratory tests can achieve efficiencies of up to 30%, this technology is not yet available on a commercial level and may not be for some time. The cost of electricity from a photovoltaic unit is also a major barrier, at approximately $5,000 per kW. This price is relatively reasonable and not nearly as high as those of the 20th century (approximately $1,000,000 per kW); however, it still requires more expense and effort than electricity generated from fossil fuels. The surface area required to install a photovoltaic panel is another issue that must be overcome.
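The efficiency and surface-area constraints can be put together in a back-of-the-envelope estimate. The household draw and day-night average insolation below are illustrative assumptions, not figures from the text:

```python
# How many square meters of 15%-efficient panels would cover an
# average home's draw? Load and insolation are assumed round numbers.
AVG_HOME_LOAD_W = 1200       # assumed average household draw, watts
AVG_INSOLATION_W_M2 = 200    # assumed 24-hour average sunlight, W/m^2
EFFICIENCY = 0.15            # panel efficiency cited in the text

area_m2 = AVG_HOME_LOAD_W / (AVG_INSOLATION_W_M2 * EFFICIENCY)
print(f"panel area needed: {area_m2:.0f} m^2")  # 40 m^2
```

Forty square meters is most of a typical roof, which is why roof size, pitch, and orientation matter so much.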
In order for photovoltaic panels to function properly, houses must be designed for their use, with large south-facing surfaces to capture as much sunlight as possible. The roofs on these houses must also meet specifications in size and pitch for the solar panels to work at peak efficiency. Even then, the panels must be replaced every few years because the silicon cells within them degrade with use (Kalogirou, 2009). If these issues can be overcome, solar power may well replace oil as a power source; until that time, however, solar power from photovoltaic cells cannot compete with the relatively cheap prices of oil power production.

Solar Heating

Solar heating, either active or passive, harvests the heat produced by the sun. In active solar heating, sunlight falls on a blackened metal plate sandwiched between two glass plates, and pipes full of water beneath the plates absorb the heat. The now-hot water is either transferred to a holding tank or used immediately to heat a building. Passive solar energy uses architectural design to maximize the amount of heat absorbed from the sun. For instance, a well-insulated house with south-facing windows can effectively capture heat from the sun and reduce or even eliminate heating and lighting expenses. Interior surfaces also play a huge part in solar heating: often, brick or stone is used on the walls to reradiate heat throughout the building (Solar Heating, 2011). A study conducted in Australia over a period of ten years looked at the uses of solar heating and its impacts on the environment and economy. Prior to this study, most homes in Australia were heated by burning coal. Researchers studied five different hot water systems over the ten years: an electric-boosted solar hot water system, a gas-boosted solar

hot water system, an electric storage hot water system, a gas storage hot water system, and a gas instantaneous hot water system. All of the systems were used to provide heat for a standard four-person home. The study found that a typical payback period for the solar heating systems ranged from half a year to no more than two years. It also showed that electric storage systems performed better than the other systems, though not by a very wide margin (Crawford, 2003).
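The payback periods the study reports follow from simple arithmetic: the extra up-front cost of the solar system divided by the yearly fuel savings. The dollar amounts below are hypothetical, chosen only to land inside the study's reported range:

```python
# Payback period = extra installation cost / annual fuel savings.
def payback_years(extra_cost: float, annual_savings: float) -> float:
    return extra_cost / annual_savings

# Hypothetical figures: a $600 premium that saves $400 per year.
print(payback_years(600, 400))  # 1.5 years
```

Any combination of premium and savings with a ratio between 0.5 and 2 is consistent with the study's half-year-to-two-year range.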

This is an example of a house that uses a solar energy capturing system. The roof is equipped with photovoltaic panels, and the entire house is oriented to capture the most sunlight possible.

Solar power can also be harnessed to separate pure water from seawater. Ocean water can be collected in tanks and exposed to intense sunlight in order to evaporate the water. The water vapor is then collected and condensed into pure water. The salt is left behind in the vat and can be collected for use as well. This method of solar desalinization has been used for many centuries: sailors used it to obtain potable water from the salty ocean water around them. Eventually, this technology was brought to a much larger scale with sizeable glass lenses and high-volume tanks. While this technology is still being researched, it is currently not efficient enough to be used commercially (Kalogirou, 2009). Solar power can also be used to remove the moisture from crops, the oldest way of utilizing solar power. The electromagnetic radiation produced by the sun causes the water in plant matter to evaporate, leaving behind only the biomass. Dried plant matter stays preserved far longer than plant matter that still contains all of its water, so farmers expose plants to intense sunlight to evaporate most of the water within them and prolong the time the plants stay fresh. Solar power is used either on its own or as a supplement to an additional heat source in order to reduce the amount of energy needed from that source (Kalogirou, 2009).

Current Technologies

Applications of typical solar panels are limited because the panels are heavy and expensive, with a limited range of possible installation locations. A recent prototype of a see-through solar

window, developed by New Energy Technologies, eliminates a number of problems with typical solar panels. The Solar Window, as it is called, is a window coated with a transparent, electricity-generating substance. In September, New Energy Technologies released a four-inch-square prototype of its window, and in February, a twelve-inch-square prototype followed. This advancement uses the world's smallest functional solar cells, each less than a quarter the size of a grain of rice (Harrell, 2011). The cells are mixed into a batch of clear, semiconducting material and sprayed onto windows. If see-through solar windows were installed in only one tenth of the estimated five million commercial buildings and more than eighty million single-family homes in America, more than eight million houses would be powered mainly by clean solar energy (Harrell, 2011).

This image shows a plate of glass coated with solar cells.

Traditional solar panels run at about 20% efficiency, which critics point out as a flaw in harvesting solar energy. However, efficiency is not the only factor that makes solar power competitive on the energy market; the other factor is price. Even though some solar panels might not achieve the same efficiency as more expensive versions, inexpensive solar panels are still commercially competitive. Thin-film solar panels use much less silicon than the traditional solar panels seen on rooftops. The average efficiency of thin-film solar panels is lower than that of arrays made of crystalline silicon. However, because these panels use much less silicon, they are much less expensive to produce. As a result, companies like First Solar and Nanosolar have been able to distribute more inexpensive solar panels, and more houses and businesses are being powered by thin-film solar panels despite their reduced efficiency (Walsh, 2010). Recently, the design start-up SMIT has been developing a solar panel that mimics ivy plants. Solar panels are seen most often on rooftops, but SMIT hopes to expand their range to vertical walls. The drawback to installing solar panels on vertical walls is that at high noon the panels are parallel to the sun's rays and therefore cannot collect as much sunlight as panels installed on rooftops. SMIT has taken inspiration from ivy plants, which grow on vertical walls but still obtain the sunlight they need to survive. The product developed by SMIT contains thousands of four-ounce photovoltaic cells arranged on a steel-mesh grid in a pattern determined by analysis to collect the most sunlight. According to SMIT, 4,000 leaves can cover two three-story walls and generate ten kilowatt-hours of power a day, a third of an average home's needs (LaBarre, 2010).
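The price-versus-efficiency trade-off is easiest to see as cost per peak watt, the comparison buyers actually make. All prices, areas, and efficiencies below are illustrative assumptions, not vendor figures:

```python
# Cost per watt under the standard 1000 W/m^2 panel-rating condition.
PEAK_SUN_W_M2 = 1000

def cost_per_watt(price_usd: float, area_m2: float, efficiency: float) -> float:
    return price_usd / (PEAK_SUN_W_M2 * area_m2 * efficiency)

# Two hypothetical panels of equal size:
crystalline = cost_per_watt(price_usd=400, area_m2=1.6, efficiency=0.20)
thin_film = cost_per_watt(price_usd=150, area_m2=1.6, efficiency=0.11)
print(f"crystalline: ${crystalline:.2f}/W")  # $1.25/W
print(f"thin-film:   ${thin_film:.2f}/W")    # $0.85/W
```

Even at roughly half the efficiency, the cheaper panel wins on dollars per watt, which is the argument the paragraph above makes for thin film.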

This solar cell mimics the ability of ivy on walls to maximize the solar energy absorbed by each leaf.

Solar energy can also be converted into a form that is more easily stored. An innovative technology developed at MIT uses solar cells and catalysts to split water into hydrogen and oxygen and then stores the energy released. A solar cell about the size of a playing card is placed in a bucket of water in bright sunlight. With the help of catalysts, the cell splits the water into hydrogen and oxygen, and the energy is stored in a fuel cell. With just one bucket of water, the solar cell could produce enough power to supply a house in a developing country with electricity for one day (Adhikari, 2011). This technology has existed for about ten years, but previous versions were very unstable and expensive. The team at MIT has developed a newer version that uses widely available resources and can draw on almost any water source. With these updates to a relatively old design, the team at MIT has helped create a technology that could be widely available in just a few years. Developing less expensive solar cells, and therefore solar panels, is very important to the popularization of solar energy. The standard method of building solar panels is outdated and very expensive. Printable solar panels have become an appealing alternative to that complicated and expensive process. An ink developed by Xerox helps make printing less expensive. Previously, printable ink had been impractical because the liquid silver used as a signal conductor had to be kept at a high temperature; unfortunately, that heat rendered the plastics used in other parts of the solar panels unusable. The new silver compound developed by Xerox allows circuits to be printed at room temperature, greatly reducing the cost of printing solar cells (Fox, 2009).
Now companies like Nanosolar, which sit on the very cutting edge of solar energy technology, can print circuits, and therefore solar cells, inexpensively.
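As a rough plausibility check on the bucket-of-water claim above: the hydrogen obtainable from a bucket of water holds a bounded amount of chemical energy. The 10-liter bucket size is an assumption; the heating value of hydrogen is a standard figure:

```python
# Chemical energy available from the hydrogen in one bucket of water.
BUCKET_LITERS = 10              # assumed bucket size
MOLAR_MASS_WATER = 18.015       # g/mol
HHV_H2_KJ_PER_MOL = 286         # higher heating value of H2, kJ/mol

moles_h2 = BUCKET_LITERS * 1000 / MOLAR_MASS_WATER  # one H2 per H2O
energy_kwh = moles_h2 * HHV_H2_KJ_PER_MOL / 3600    # 3600 kJ per kWh
print(f"chemical energy stored: {energy_kwh:.0f} kWh")  # about 44 kWh
```

Real cells split only part of the water and lose energy at every conversion step, but even a small fraction of 44 kWh is consistent with a day's electricity for a modest household.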

A device that prints solar cells.

Applications

Solar energy can be used to power cars, airplanes, and ships, but applications to these conventional forms of transportation have been somewhat limited. One problem is that solar cells are expensive and sometimes fragile, so people hesitate to put them on moving platforms; however, the cells are improving all the time. Another issue is the inconsistency of the sun: some days bring less light, and even sunny days give way to night. To be viable, a system needs batteries to store the power of the sun for future use, but batteries add weight and complexity. Nevertheless, solar-powered transportation is progressing rapidly.

The Constance Solar Shuttle, of the same model as the Serpentine, ferries 40 passengers.

The sun can also power boats. The Serpentine Solar Shuttle runs at only 5 mph and has a maximum range of 82 miles, but it is completely solar-powered and admirably suited to its task as a passenger ferry. It carries 42 people and functions even on overcast days (Richard, 2009).

Solar-powered planes are another interesting application of recent developments in solar technology. Odysseus, developed under DARPA, is one such solar-powered plane, with the goal of staying aloft for five years. The plane uses three wing segments arranged in a vertical zigzag designed to absorb the maximum amount of sunlight. Each of the three pieces is designed to take off separately and dock in mid-air, where the winds are less violent on the 164-foot-long structure. The three pieces are interchangeable and can rotate with respect to each other to change the zigzag shape according to the level of sunlight. It is very important for the plane to absorb the maximum amount of solar energy because storage options are very limited on a plane that must weigh less than 7,000 pounds (Platoni, 2009). If Odysseus completes its five-year voyage, the possibilities for future applications are limitless.

A giant solar plane that is planned to stay aloft for five consecutive years.

In 2010, a Swiss plane completed the first 24-hour flight powered exclusively by the sun. The Solar Impulse has wings spanning 207 feet, covered by 12,000 photovoltaic cells. Batteries weighing 882 lbs., a quarter of the craft's net weight, keep plane and pilot aloft through the night (Engeler, 2010). Solar technology also has great potential in space. Devices in or beyond Earth orbit are not affected by atmospheric interference, meaning that no light is reflected away before they have a chance to collect it. They can also be much closer to the sun, where they intercept more sunlight before it can diffuse through space. The earliest solar cells were employed on satellites in Earth orbit, even before the cells became efficient or robust enough to function on the surface of the Earth. Orbiting arrays of cells are unaffected by weather and vandalism and have relatively direct contact with sunlight, compared with cells on the surface of the Earth, shielded by a constant layer of gases. The International Space Station generates 75 to 90 kilowatts of power with the acre-spanning solar array attached to it. This array is the most powerful design that has been sent into orbit; it comprises eight flexible wings containing two solar array blankets each. A blanket has 82 panels of 200 solar cells, for a total of 262,400 cells used to power the entire space station. The arrays are mechanized to track the sun as the station orbits, ensuring that they are always operating at the maximum possible efficiency (International Space Station, 2010).
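The station's cell count multiplies out exactly as the text states:

```python
# ISS solar array bookkeeping from the figures in the text.
wings = 8
blankets_per_wing = 2
panels_per_blanket = 82
cells_per_panel = 200

total_cells = wings * blankets_per_wing * panels_per_blanket * cells_per_panel
print(total_cells)  # 262400
```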

Under the correct circumstances, solar energy can be used directly as a motive force. The pressure of the photons emitted by the sun exerts a small force on everything in their path. Usually this force is negligible, but proper engineering can render it useful. To make use of the force, spacecraft must have vast, light wings, called solar sails, attached to very light bodies. The gossamer-thin sails receive the force of many photons over a large wing area, and the higher force coupled with a lower mass produces a noticeable acceleration. Over time, this acceleration builds the velocity of the craft to a useful level. The German astronomer Johannes Kepler first conceived the idea of powering a spaceship with a sail in the 1600s: he saw the deflection, or wafting, of comet tails and surmised that there existed a solar wind that could be captured with the proper sail. Because a sail for sunlight must be incredibly delicate, the first mission to utilize photons for propulsion did not carry any actual sails. This was Mariner 10, which in 1974 was running short of gas for attitude control on a mission to Mercury. To compensate, controllers angled the solar arrays Mariner was carrying toward the sun. This sufficed for the minute corrections necessary because Mariner was so near the sun and thus had access to high levels of radiation (Coulter, 2008).
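The photon pressure described above can be quantified. For an ideal, perfectly reflective sail at Earth's distance from the sun, the force is F = 2SA/c, with S the solar constant; the sail area and craft mass below are illustrative assumptions:

```python
# Radiation-pressure force and acceleration on an ideal solar sail.
S = 1370.0           # solar constant at Earth's distance, W/m^2
C = 2.99792458e8     # speed of light, m/s
AREA_M2 = 100.0      # assumed sail area
MASS_KG = 5.0        # assumed total craft mass

force = 2 * S * AREA_M2 / C      # factor of 2: photons reflect, not absorb
accel = force / MASS_KG
print(f"force: {force * 1e3:.2f} mN")             # about 0.91 mN
print(f"acceleration: {accel * 1e6:.0f} um/s^2")  # about 183 um/s^2
```

Less than a millinewton sounds useless, but sustained for a year that acceleration adds several kilometers per second of velocity, which is the point of the "over time" argument above.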

An artist's rendering of the IKAROS space probe.

IKAROS, a probe launched by Japan in May 2010, demonstrates the use of a solar sail. It is a hybrid craft: the sail is covered by a film of flexible solar cells, which provides electricity and powers a secondary ion propulsion system (IKAROS, 2011). The success of IKAROS inspired a second craft, launched by NASA in January 2011. The NanoSail-D is a breadbox-sized device with a 10-square-foot sail of reflective polymer fabric. It orbits the Earth using the power of the sun, but within 120 days aerodynamic drag causes the orbit to decay, and NanoSail-D falls toward Earth before burning up in the atmosphere. Such a solar sail would be the perfect propulsion for temporary or disposable satellites: it provides electrical and kinetic energy for a certain amount of time and then incinerates the waste. With a solar sail attached, satellites can serve their purpose and then easily and neatly clear the way for the next ones (Phillips, 2011).

Solar ponds are a cheap way to access solar power. Only a simple collection facility is required to pipe fresh water through the bottom layer of a salt lake or pond. The heated water produced is renewable because the pond area that collects solar energy is so large. The stored heat is nearly inexhaustible, so energy can be collected day and night, even when the sun is not heating the pond. This makes solar ponds a powerful option for developing countries, because the plants are cheap to build and because installing such a facility does not diminish the other values of the salt lake (Rechten, n.d.). The more exact name for solar ponds, salt-gradient solar ponds, describes the layering of energy that a solar pond accesses. A salty body of water divides itself into three main layers. The topmost layer, which is the least salty, holds the least heat energy because of its proximity to the wind. The bottommost layer, which is the saltiest, holds the most heat energy, because sunlight that reaches the soil at the bottom of the pond is converted to heat. The middle layer prevents the water in the bottom layer from circulating with the topmost layer because of the difference in salinity. The water in the bottom layer therefore retains its heat and can attain a temperature as high as one hundred degrees Celsius. The average temperature, however, is around seventy degrees; eighty degrees can occur easily in the tropics, and sixty can be found even in southern Australian winters (Rechten, n.d.). Solar ponds can nevertheless be improved. Energy only accumulates at the bottom of the pond if the pond is mostly translucent and the material on the bottom is able to absorb light. Adding chemicals to the pond to improve its translucency increases power output but is detrimental to the environment, and placing black material on the bottom of a pond kills any plant growth that exists there.
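The bottom-layer temperatures just described also cap how much of the pond's heat can ever become electricity. A sketch of the Carnot limit for a typical seventy-degree bottom layer, with the twenty-five-degree ambient temperature an assumption:

```python
# Carnot limit for a heat engine run off a solar pond's bottom layer.
T_HOT = 70 + 273.15    # typical bottom-layer temperature, kelvin
T_COLD = 25 + 273.15   # assumed ambient temperature, kelvin

carnot = 1 - T_COLD / T_HOT
print(f"Carnot limit: {carnot:.1%}")  # about 13%
```

Real engines recover only a fraction of this already-low ceiling, which is why solar ponds compensate with very large, very cheap collection areas.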
Fortunately, increasing the amount of fresh water piped through the pond to be heated does not significantly damage the ecosystem. Another less invasive improvement is the addition of membranes to the pond to prevent the water from being stirred by wind. This prevents unnecessary losses of energy from heat transferring to the cooler air (Rechten, n.d.). To convert heat energy to electrical energy, most generators have turbines that are turned by steam: the heat energy boils water and produces high-pressure steam. In solar ponds, the heat produced is not concentrated enough to boil water. Instead, in a device called a Rankine engine, the heat is used to evaporate chemicals similar to those found in a refrigerator, and the high-pressure gas that is produced turns the turbine (Harrison, 2011). The energy harnessed from a solar pond can be utilized in various ways. Left as heat, it can warm greenhouses during the winter or serve as a heating system for a larger building. Other interesting applications include using the heat to flash-dry spices and vegetables that have come out of storage. The heat can also be converted into electricity, which is much more versatile (Rechten, n.d.). Passive solar building is an architectural method that takes advantage of natural sunlight. Some passive solar designs are very old, such as the Oculus of the Pantheon in Rome; another example is the Roman domus, or house, which had a central opening to let light in. A greenhouse is a further example of passive solar heating, using translucent glass to let in light while trapping heat. Today, architects use solar design to reduce the cost of running a

home. Some designs are so robust that homes can become energy neutral (Passive Solar Design, 2011).

The architectural theory of solar design has five main concepts. The first is the aperture, or collector, for light. The second is the absorber, a surface that soaks up sunlight. The third is thermal mass, the materials that retain heat. The fourth is distribution, or circulation, of heat. The fifth is control, the ability to regulate how the design behaves (Five Elements, 2011).

The first element, aperture, concerns bringing energy into the home. In a typical design, the opening faces within thirty degrees of true south. In addition, it must not be shaded by other objects from nine in the morning until three in the afternoon, the span of the average daily heating session. It is during that time that the windows allow sunlight to add energy to the home's environment (Five Elements, 2011).

The second element, the absorber, captures the energy the apertures bring in. The material could be a dark carpet, a floor, a partition, or the side of a water tank; anything that accepts light and turns it into heat can act as an absorber. This is the main energy transition in passive solar design: the change from light energy to heat energy (Five Elements, 2011).

The third element, thermal mass, is the storage medium for the heat energy generated by the absorber. More often than not, the thermal mass is the same material as the absorber; the difference is that thermal mass refers to the material that is not on the surface, the matter behind or below an exposed surface. The thermal mass releases its heat slowly over the day (Five Elements, 2011).

The fourth element, distribution, is the method of moving the heat around the house. A strictly passive design makes use of only the three basic heat transfer methods: convection, thermal radiation, and conduction.
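The aperture and absorber elements above can be roughed out numerically with a standard window heat-gain estimate. This formula is not taken from the text; the window area, glazing coefficient, and winter insolation below are all assumed round numbers.

```python
# Back-of-the-envelope solar gain through a south-facing aperture during the
# 9 a.m. to 3 p.m. collection window described above. Q = area * SHGC * insolation,
# where SHGC (solar heat gain coefficient) is the fraction of incident sunlight
# the glazing passes through. All figures here are assumptions for illustration.

window_area_m2 = 10.0      # total south-facing glazing (assumed)
shgc = 0.6                 # glazing solar heat gain coefficient (assumed)
insolation_w_m2 = 500.0    # average winter sun on a vertical south face (assumed)
hours = 6.0                # the 9 a.m. to 3 p.m. heating session

gain_kwh = window_area_m2 * shgc * insolation_w_m2 * hours / 1000.0
print(f"Daily solar gain: {gain_kwh:.1f} kWh")
```

A gain on the order of tens of kilowatt-hours per day is what lets a well-placed aperture and absorber offset a meaningful share of a home's heating load.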
Convection is heat transfer through diffusion: the warmer, more active particles move around the colder, less active ones. Thermal radiation is electromagnetic waves emitted from matter due to its heat, such as visible light from an incandescent bulb or infrared radiation from a raging bonfire. Conduction is heat transfer through a material as it approaches thermal equilibrium; the distinction from convection and radiation is that regions of matter, rather than particles or waves, transfer the heat (Five Elements, 2011).

The fifth element, control, is the adjustability of the design. This refers primarily to the aperture. During the winter months, the aperture should be exposed to full sunlight for maximum heating power; in the warmer summer months, an overhang would shade the aperture, preventing the house from overheating. Sensors such as differential thermostats, or simple devices such as fans, can be used to control airflow within the house, maximizing the efficiency of heating (Five Elements, 2011).

Simple applications of solar power include devices that use no processors or other electronics. Most of these devices are similar in that they use concentrated sunlight to generate heat energy. They include solar ovens, solar water boilers, solar furnaces, water distillation, and water disinfection. Solar ovens rely on reflection to concentrate light onto an object such as a pot or bowl. The concentrated light becomes heat energy, and can concentrate

heat to generate temperatures over one hundred degrees Celsius. Any device that uses this method to cook food is considered a solar oven; if the device boils water instead, it is known as a solar water boiler.

A solar furnace is similar to a solar oven, but it operates on a much larger scale. By concentrating more light, the furnace can reach temperatures high enough to melt and mold steel and other metals. With relatively inexpensive materials, such as a six-foot-by-six-foot frame and a few thousand one-inch mirrors, it is possible to create a focal spot roughly four thousand times as bright as natural sunlight. Industrial projects require more space and more mirrors (Beaty, 1996).

Solar energy can also be harnessed for water distillation through evaporation. If translucent material is placed over a moist patch of ground in a manner that does not allow air to escape, the evaporated water will condense on the surface of the material. This water can then be collected in a cup or other container; the water produced is pure and can be safely drunk.

It is clear that our current energy resources are growing thin and that renewable sources of energy need to be found and utilized effectively. At present, the use of solar energy is limited by the cost of the technology to harness it; collection, conversion, and storage are all expensive. Yet great advances in the field of solar technology have been made in the past few decades. Photovoltaic cells, for instance, reach 30% efficiency in laboratory settings, compared to the 4% Bell Labs was seeing in 1954. The theory behind solar energy is almost perfect: free, nearly limitless, easily accessible, zero-impact energy. All that remains is to improve the practice of collection to the point at which it becomes economically viable. The future of solar energy looks bright.
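The solar furnace figure above can be sanity-checked with a mirror count. The only assumption is that each flat mirror lights a common focal spot roughly its own size, so the ideal concentration equals the number of mirrors.

```python
# Sanity check of the Beaty (1996) solar furnace: a 6 ft x 6 ft array of
# one-inch flat mirrors, each aimed at a shared focal spot about one mirror
# in size, multiplies brightness roughly by the mirror count.

array_side_in = 6 * 12            # 6 feet, in inches
mirror_side_in = 1
mirror_count = (array_side_in // mirror_side_in) ** 2

print(mirror_count)               # 5184 mirrors
```

The ideal concentration of about 5,184 suns is the same order as the "approximately four thousand times" quoted in the text; imperfect aim and reflective losses account for the difference.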

Bibliography

Adhikari, R. (2011, March 28). Power plant. Retrieved from http://www.technewsworld.com/story/Power-Plant-One-Small-Leaf-Could-Electrify-an-Entire-Home-72156.html
Battery, electric. (2009, August 18). HowStuffWorks. Retrieved April 7, 2011, from http://www.howstuffworks.com
Beaty, W. (1996). Infinitely large solar furnace. Retrieved from http://www.amasci.com/amateur/mirror.html
Caughlan, G. R., & Hartmann, D. H. (2008). Proton-proton chain. AccessScience. http://www.accessscience.com
Concentrated solar gets a boost. (2008). Sustainable Design Update. http://www.sustainabledesignupdate.com
Consumer energy tax incentives. (n.d.). Retrieved from http://www.energy.gov/taxbreaks.htm
Coulter, D. (2008). A brief history of solar sails. NASA Science: Science News. http://science.nasa.gov
Crawford, R., & Graham, J. (2003). Net energy analysis of solar and conventional domestic hot water systems in Melbourne, Australia. Solar Energy, 76(1-3), 159-163.
Engeler, E. (2010). Solar plane completes historic 24-hour flight. msnbc. http://www.msnbc.msn.com
Faiers, A., & Neame, C. (2006). Consumer attitudes towards domestic solar power systems. Energy Policy, 34(14), 1797-1806.
Five elements of passive solar design. (2011, February 9). Retrieved from http://www.energysavers.gov/your_home/designing_remodeling/index.cfm/mytopic=10270
Fox, S. (2009, October 27). Xerox's fabric-printable circuitry coming to production, heralds electronic clothing. Popular Science, 169. Retrieved from http://www.popsci.com/technology/article/2009-10/printable-circuitry-enable-truly-ubiquitous-computing
Freudenrich, C. (2005, August 11). How nuclear fusion reactors work. HowStuffWorks. Retrieved April 7, 2011, from http://science.howstuffworks.com/fusion-reactor.htm
Harrell, E. (2011, February). See-thru solar panels? Retrieved from http://ecocentric.blogs.time.com/2011/02/03/see-thru-solar-panels/
Harrison, J. (2011, February 5). Rankine engines. Retrieved from http://www.microchap.info/rankine_engine.htm
International Space Station solar arrays fully unfurled and tensioned following spacewalk repair. (2010). Spaceref. http://www.spaceref.com
Japan Aerospace Exploration Agency, Public Affairs Department. (2009). Small solar power sail demonstrator IKAROS. Tokyo, Japan. Retrieved from http://www.jspec.jaxa.jp/e/activity/ikarosleaflet.pdf
Kalogirou, S. A. (2009). Solar energy engineering: Processes and systems. Elsevier. Online version available at http://www.knovel.com
Knier, G. (2002). How do photovoltaics work? Retrieved from http://science.nasa.gov/science-news/science-at-nasa/2002/solarcells/
LaBarre, S. (2010, August 31). Lightweight solar panels. Popular Science, 495. Retrieved from http://www.popsci.com/gadgets/article/2010-07/power-plant
Layton, J., & Freudenrich, C. (2000, October 17). How the sun works. HowStuffWorks. Retrieved April 7, 2011, from http://science.howstuffworks.com
Lerner, K. L., & Wilmoth, B. (Eds.). (2003). Solar energy. World of Earth Science (Vol. 2, pp. 534-536). Detroit: Gale. Gale Virtual Reference Library. http://go.galegroup.com/
Naam, R. (2011). Smaller, cheaper, faster: Does Moore's law apply to solar cells? Scientific American. Retrieved from http://www.scientificamerican.com
Nawrocka, M. S. (2007). Solar energy. Encyclopedia of Environment and Society. SAGE Publications. Retrieved from http://www.sage-ereference.com/environment/Article_n1004.html
Neufville, J. P. (2008). Photovoltaic effect. AccessScience. McGraw-Hill. http://www.accessscience.com
Passive solar home design. (2011, February 9). Retrieved from http://www.energysavers.gov/your_home/designing_remodeling/index.cfm/mytopic=10250
Phillips, T. (2011). Solar sail stunner. NASA Science: Science News. http://science.nasa.gov
Plantoni, K. (2009, July 22). Giant solar plane. Popular Science. Retrieved from http://www.popsci.com/military-aviation-amp-space/article/2009-06/forever-plane
Rechten, H. (n.d.). Solar ponds. Retrieved from http://www.solarponds.com/
Richard, M. G. (2009). 7 awesome solar boats you must see. Treehugger. http://www.treehugger.com
San Martin, R. (2008). Solar energy. AccessScience. McGraw-Hill. Retrieved March 24, 2011, from http://www.accessscience.com
Sen, Z. (2008). Solar energy fundamentals and modeling techniques. London: Springer-Verlag.
Small solar power sail demonstrator IKAROS. (2011). Japan Aerospace Exploration Agency. http://www.jaxa.jp
Sodis. (2011, February 28). Retrieved from http://www.sodis.ch/index_EN
Solangie, K. H., Islam, M. R., Saidur, R., Rahim, N. A., & Fayaz, H. (2011). A review on global solar energy policy. Renewable and Sustainable Energy Reviews, 15, 2149-2163.
Solar cell. (2011). In Encyclopaedia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/552875/solar-cell
Solar concentrators. (n.d.). Retrieved from http://sunsolarconcentrator.com/
Solar energy. (2011). In Encyclopaedia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/552905/solar-energy
Solar heating. (2011). In Encyclopaedia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/552946/solar-heating
Starr, E. C., Fravel, M. T., Sekhar, N., Glasscock, D. L., & Reddoch, T. W. (2008). Electric power generation. AccessScience. McGraw-Hill. http://www.accessscience.com
Stillman, G. E. (2008). Photovoltaic cell. AccessScience. McGraw-Hill. http://www.accessscience.com
Upcoming opportunities. (n.d.). Retrieved from http://www1.eere.energy.gov/solar/upcoming_opportunities.html
Walsh, B. (2010, December). Solar tower. Retrieved from http://www.time.com/time/specials/packages/article/0,28804,2030137_2030135_2021683,00.html
Walsh, B. (2010, December). Thin-film solar. Retrieved from http://www.time.com/time/specials/packages/article/0,28804,2030137_2030135_2021681,00.html
Zweibel, K., Mason, J., & Fthenakis, V. (2008, January). A solar grand plan. Scientific American, 298, 64-73.

Chapter 2 Wind Energy


Cai Debenham and Rebecca Church

Introduction

When people think of air, they generally think of it as a gas, but it is also a fluid. Air particles are indeed in a gaseous form as opposed to a liquid one, but they behave as any other fluid would: objects can move through air, it can flow, and it is affected by temperature. Warmer air is lighter and rises in the presence of denser, colder air. As the warm air rises it leaves behind a pocket, which the colder air fills in; this same principle is what causes the phenomenon of wind. As the sun heats a large surface of land, the heat is dissipated into the air around it. As this air rises, the motion of the denser air that replaces it is wind, and motion implies that energy is involved. It is this mechanical energy that human beings have been making use of since 3000 BCE (Layton, 2011).

Although the developed world uses fossil fuels as its primary means of energy production, wind energy is beneficial in many ways, offering remarkable environmental and social gains. As a result of federal aid and an overall positive reputation with the public, wind turbines are a valuable and revolutionary energy alternative (Xie & Billinton, 2011).

Wind farms provide several economic benefits. Costs for installation, insurance, maintenance, repairs, and land rent for wind turbine farms have been decreasing as machine efficiency improves and as machines can operate for more hours without downtime. Even without the decreasing costs considered, wind energy and fossil fuel energy production cost roughly the same (Kaldellis & Zafirakis, 2011). The use of wind energy can also lessen expenses for electricity overall (Traber & Kemfert, 2011).
Continued government support through feed-in tariffs, investment and production tax incentives, tradable green certificates, and tendering systems has greatly increased the economic advantage of wind energy production (Kaldellis & Zafirakis, 2011). The most helpful form of government aid has been the feed-in tariff, which removes the obstacle of market pricing and sets a definite price for wind energy; power market participants buy renewable energy units, and the energy is then passed on to consumers. This method of government support has greatly increased expansion and development in many countries, including Denmark, Germany, Spain, the United States, and China (Traber & Kemfert, 2011). Wind energy also provides hundreds of thousands of employment opportunities in Europe, and approximately 85,000 in the United States. The vast work field, involved government aid, and improved cost efficiency of technology all contribute to the economic rewards of wind energy (Kaldellis & Zafirakis, 2011).

Along with having a positive impact on the economy, renewable energy in the form of wind turbines is far better for the environment than the standard fossil fuel method of energy production. Because the energy is converted from the wind, it is renewable and does not burn through resources, and it produces none of the carbon emissions that are detrimental to the atmosphere (Traber & Kemfert, 2011). Because wind turbines are a carbon-free form of energy production that converts a natural resource into electricity, they offer a clear environmental advantage over fossil fuels. The Department of Civil and Environmental Engineering at Stanford University has been gathering evidence of the benefits of wind energy by quantifying the world's wind power potential. Researchers there have located the areas with the highest potential for low-cost wind power generation: Tasmania in Australia, the northeast and northwest coasts of North America, the southern tip of South America, and Northern Europe along the North Sea. They also established that capturing roughly 20% of this wind power could supply all of the world's energy demand and more than seven times its electricity needs (Archer & Jacobson, 2011).

History of Wind Energy

Over five thousand years ago, people on the Persian-Afghan border began using one of the first forms of wind energy technology: vertical-axis windmills. These had vertical shafts and drove millstones directly, without any gearing to improve production rates. The structure was surrounded by four walls with two off-center openings that allowed wind to flow through and turn the mill in one direction at a time. As opposed to later mills, the grinding stone was actually positioned on top of the sails (Landis, Russell, Seale, Wailes, & Woodruff, 2011). This idea evolved into the horizontal-axis windmills of the Netherlands and the Mediterranean (Kaldellis & Zafirakis, 2011).

At the end of the eleventh century CE, the crusaders of Europe brought the idea back with them and developed windmills with overall structures similar to modern horizontal-axis wind turbines, also known as HAWTs (Layton, 2011). Systems like these positioned the mill below the sails and implemented gearing to improve efficiency. To ensure that the horizontal shaft would face the wind, the structure was supported on a post that served as an axis of rotation. With time, brake systems were developed to prevent damage to the mill in more powerful winds; the mill would remain locked until a heavy lever was lifted, so that it could not be jarred back into motion. The tower mill of the fourteenth century improved upon this model by attaching the sails to a cap at the top of the structure that served as the new axis of rotation. With this modification, the entire structure no longer had to be turned toward the wind. At first the caps were still turned by hand from inside the mill, but in 1745 Edmund Lee developed a tail system that would automatically point the shaft in the ideal direction. Later, from 1772 to 1807, people began addressing the problem of the sails tearing during storms; both manual and automatic solutions were explored.
By 1784, windmill use began to decline when a steam engine successfully powered a windless mill (Landis, 2011). However, wind energy was rediscovered in 1890 as a means of generating electrical energy: the wind turbine simply replaced the mill with a generator, and by 1931 airfoil blades were incorporated to improve efficiency and sturdiness. Although windmills were in decline, interest in wind energy was reinvigorated during the oil crisis of the 1970s. Following the airfoil designs, these

modern structures made use of lift in order to turn the blades (Budenholzer & Landis, 2011).

From the 1850s until the 1970s, small machines used wind energy to pump water. It was in Cleveland, Ohio, in 1888 that the first wind machine generated electricity (Kaldellis & Zafirakis, 2011). American scientists conceived new ideas inspired by airplane propellers and wings, which led to the design and improvement of wind generators in the United States. World War I brought on the use of more wind energy machines in Denmark, and, along with France, Germany, and the United Kingdom, European investigators continued development of wind generators well after World War II. The advanced Gedser mill of Denmark and improved horizontal-axis designs in Germany during the 1960s aided the development of the horizontal-axis windmills operating in the 1970s (Kaldellis & Zafirakis, 2011). Due to the oil crisis of 1973, the customer base for wind turbines expanded into agriculture and utility-interconnected wind farm applications. As a result, the government of the United States made an investment in energy credits. The federal aid caused a boom in wind energy technology; from 1981 to 1990, 16,000 wind energy machines were installed in California (Kaldellis & Zafirakis, 2011). On the European front of the wind energy movement, the spread of wind farms increased through the 1980s and 1990s, and Europe has been the continent most involved with wind energy for the past twenty years. Wind energy conversion methods have been improving throughout the past 5,000 years; however, modern developed society still relies primarily on fossil fuels to produce electrical energy (Kaldellis & Zafirakis, 2011).

Modern Wind Turbines

The most common type of wind turbine, and the one most people picture, is the horizontal-axis wind turbine (HAWT). The first HAWTs were the windmills that people used for hundreds of years. These windmills have many simple rectangular blades that turn slowly but produce a great deal of torque. The generic modern HAWT, designed by Johannes Juul in the 1950s, consists of three aerodynamic blades, a generator, and a gearbox, all standing atop a thin, white, tubular tower. HAWT propellers come in two variations, upwind and downwind. The downwind version is rotated into place automatically by the wind, at the cost of higher stress on the structure; the upwind version has to be actively rotated into the wind, but carries a much lighter stress load.

Another common type of wind turbine is the vertical-axis wind turbine (VAWT). These have the advantage of fitting in places where a HAWT simply would not, such as in a city. The two main types of VAWTs are the Darrieus and Savonius turbines. The Savonius turbine consists of a cylinder cut in half with the two sides offset halfway, creating two scoops that use the power of wind drag to rotate the device and generate electricity. The device can easily start from rest and is very simple to build; however, it is inefficient and is usually mounted directly to the ground, where wind speeds are low.
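One standard way to compare these rotor designs, not spelled out in the chapter, is by the power available in the swept area: P = ½ρAv³Cp, where Cp is the fraction of wind power actually captured, bounded by the Betz limit of 16/27. The rotor diameter, wind speeds, and Cp below are assumed example values.

```python
import math

# Standard wind-power estimate (not from the chapter): the power a rotor can
# extract is P = 0.5 * rho * A * v^3 * Cp, where A is the swept area and Cp
# is capped by the Betz limit of 16/27 (~0.593). Inputs are assumed examples.

AIR_DENSITY = 1.225        # kg/m^3 at sea level
BETZ_LIMIT = 16 / 27       # theoretical maximum power coefficient

def rotor_power_w(diameter_m: float, wind_speed_ms: float, cp: float = 0.4) -> float:
    """Mechanical power for a rotor of the given diameter; cp=0.4 is a
    typical real-world value, well under the Betz limit."""
    assert cp <= BETZ_LIMIT
    swept_area = math.pi * (diameter_m / 2) ** 2
    return 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3 * cp

# Because power scales with the cube of wind speed, doubling the wind
# multiplies the output by eight:
print(f"{rotor_power_w(80, 6) / 1e6:.2f} MW at 6 m/s")
print(f"{rotor_power_w(80, 12) / 1e6:.2f} MW at 12 m/s")
```

The cubic dependence on wind speed is why siting (hilltops, coasts, offshore) matters so much more than incremental blade improvements, and why ground-mounted Savonius rotors give up so much output.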

The Darrieus turbine consists of multiple curved, vertical airfoils. It is much more efficient, but it bears a very high stress load and requires the use of expensive materials. The Darrieus is also incapable of self-starting, although combined with a Savonius it makes an efficient pairing.

Physics of Modern Wind Energy Technology

Lift is only present amidst a moving fluid; the key quantity to consider is the difference in relative speeds of the fluid and the object. Lift analysis can call for complex mathematics, but two simpler explanations exist as well. The first, the longer-path explanation, argues that if two particles arrive at the front of a wing and must meet at the back, then the one that travels the greater distance moves faster. By Bernoulli's equation, as a fluid speeds up its pressure decreases; in this manner, a properly shaped wing would be sucked upwards by the difference in air pressures. The error in this concept is that the air particles do not necessarily rejoin at the back of the wing as is assumed. In fact, the explanation implies that airplanes should not be able to fly upside down, when we know that is not the case. It is true, however, that there is a difference in air speeds that causes lift through a change in pressure. The second false description, the Newtonian explanation, is based on Newton's third law of equal and opposite forces. By considering air as individual particles deflected down from the wing's lower surface, it concludes that each impact also bumps the airfoil upwards by a marginal amount. However, this explanation neglects the upper portion of the wing and was also negated by one of Euler's discoveries: fluids approaching an object are deflected away before any collisions take place. The Newtonian explanation only holds true at exceedingly high speeds (approaching hypersonic) with low air densities.
Lift is actually caused by the turning of a moving fluid. As air approaches the upper portion of the blade it is compressed into the air above it, forming a pocket of low air density immediately over the airfoil. The difference in pressure brings some of the air back downwards, but it does so at the rear of the wing and leaves most of the low-pressure pocket over the middle of the blade. Meanwhile, the air approaching the bottom surface of the wing is slowed, and its flow is directed downwards away from the structure.

As oncoming air approaches the wing it divides into two groups that form a difference in air pressure above and below the wing.

Toward the back of the wing the air patterns slowly revert to their original conditions. Thus the overall difference in pressure causes the upward lift of the blade, with the greater effect contributed by the upper portion (Adkins & Brain, 2011).
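The pressure difference described above can be put into numbers with Bernoulli's relation: along a streamline, p + ½ρv² is constant, so faster air over the top surface means lower pressure there. The surface speeds and blade area below are assumed illustration values, not figures from the chapter.

```python
# Sizing the pressure difference behind lift using Bernoulli's relation.
# The two surface speeds and the blade area are hypothetical examples.

AIR_DENSITY = 1.225   # kg/m^3

def pressure_drop_pa(v_upper: float, v_lower: float) -> float:
    """Pressure on the lower surface minus pressure on the upper surface,
    from p + 0.5*rho*v^2 = constant along a streamline."""
    return 0.5 * AIR_DENSITY * (v_upper ** 2 - v_lower ** 2)

dp = pressure_drop_pa(v_upper=60.0, v_lower=50.0)   # air speeds in m/s (assumed)
blade_area_m2 = 30.0                                # one blade face (assumed)

print(f"Pressure difference: {dp:.0f} Pa")
print(f"Approximate lift on the blade: {dp * blade_area_m2 / 1000:.1f} kN")
```

Even a modest speed difference across the blade produces hundreds of pascals of pressure difference, which over a large blade area adds up to tens of kilonewtons of lift driving the rotor.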

Common Issues

In spite of the promising future of wind energy technology and its countless benefits, as with any revolution or advancement, there are many obstacles slowing development. However, different forms of wind turbine systems are designed to overcome separate obstacles and address different needs (Liserre, Cardenas, Molinas, & Rodriguez, 2011). As one example of the worry brought on by wind turbines, Dr. Nina Pierpont has pointed out potential negative effects on health. Pierpont is an American doctor who conducted provocative studies in the field of wind turbines and described a disturbing condition she terms Wind Turbine Syndrome: an ailment in which the organs governing stability, movement, and sense of location are disrupted by the low-frequency noise from wind turbines. This information is still controversial and debatable; Pierpont's research is both praised and criticized and should be neither rejected out of hand nor taken as fact (Graham, 2011).

Lack of an efficient power transmission system is also one of the leading problems for wind energy in general. Electricity is generated at locations of peak efficiency but must be transported to areas of maximum population. Traditional overhead power lines bring electricity over hundreds of miles to supply major cities with clean power. However, a few startup companies are working on new solutions. Many of the leading designs focus on power lines that run predominantly underwater. These cables are out of sight, so they do not intrude on the lifestyles of locals; they follow the natural routes of rivers, so they can also pick up hydro power along the way; and they can carry high-voltage direct current for maximum efficiency. One project aims to transport over 1,000 MW of wind and hydro energy into New York City (Blackwell, 2010; New source of clean power, 2011). Above ground, however, there are still many projects and difficulties.
In Texas, a number of start-up wind farms have faced an uncertain future due to a lack of power lines and unfavorable economic comparisons. The state is now working toward $5 billion in transmission cables to connect renewable energy sources to city grids. However, it is uncertain whether an ambitious plan that would cut through some of the more scenic routes will actually be accepted. Just the year before, a similar project that had been running since 2007 was shut down by local citizens who complained that it would interfere with their land. Taking land by eminent domain is looked down upon, even as a last resort, but it is feared that as time goes on and companies grow desperate, the idea could become a reality. In fact, Governor Rick Perry declared that solidifying the rights of private property owners is a legislative emergency, and he hopes that state lawmakers take up the case without delay. The people are not against wind farms in general, but they require that any projects be designed to go around their territory (Galbraith, 2011).

Another notable issue associated with wind power has been a lack of fault ride-through performance. When the rotor blades are not turning, or a power line shorts, the power grid experiences a drop in total voltage. Partly as a result of this issue, wind energy has come to represent only a minute portion of the electricity running through the grid. However, modern wind energy converters can now offer FACTS and STATCOM capabilities. These features allow AC-based wind farms to produce a steady flow of electricity regardless of external conditions. Originally such features were monitored and maintained manually, but they can now be made automatic. Additionally, current can be automatically adjusted to halt the flow of electricity from the turbine to the grid when a power fault is expected. Upon installation, a target voltage and a fault trigger can be established so that the system routes electricity to an alternate location when power falls below a set value; as a result, the turbine does not stop producing usable energy in the presence of external disturbances. Alternatively, a zero-power mode can be engaged to halt all electrical output of the turbine without the need for a mechanical brake that would have to be manually reset. Because these systems can be applied directly to a turbine or set of turbines, no expensive external control unit is needed; thus, wind electrical maintenance can cost less than that of other forms of power. Physical maintenance can still be problematic, but as already mentioned, several different braking systems exist to avoid damage under high winds (Beekmann, Marques, Quitmann, & Wachtel, 2009).
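The target-voltage and fault-trigger behavior described above can be sketched as a simple decision rule. This is only an illustration of the logic, not any real converter's interface; the voltage, threshold, and mode names are all hypothetical.

```python
# Minimal sketch of fault ride-through mode selection: compare grid voltage
# against a configured fault trigger and choose where the turbine's output
# goes. All values and mode names are hypothetical illustration choices.

TARGET_VOLTAGE = 690.0       # nominal grid voltage in volts (assumed)
FAULT_TRIGGER = 0.85         # fraction of target below which a fault is declared

def converter_mode(grid_voltage: float) -> str:
    """Pick an operating mode for the measured grid voltage."""
    if grid_voltage >= TARGET_VOLTAGE * FAULT_TRIGGER:
        return "feed grid"           # normal operation
    elif grid_voltage > 0.0:
        return "route to alternate"  # sagging grid: divert power, keep producing
    else:
        return "zero power"          # dead line: stop output, no manual brake reset

print(converter_mode(690.0))   # feed grid
print(converter_mode(400.0))   # route to alternate
print(converter_mode(0.0))     # zero power
```

The point of the middle branch is the one the text emphasizes: during a disturbance the turbine keeps generating rather than tripping offline, so the farm rides through the fault.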

Emerging Technology

Wind energy is a capable source of renewable energy, but the uncontrollable nature of the wind presents concerns. Unsteady winds do not provide the best aerodynamic performance for wind turbines, and sudden, uneven gusts can lower the productivity of the blades. Scientists at the L.C. Smith College of Engineering and Computer Science at Syracuse University are developing ways of overcoming the instability of wind. Their method predicts the circumstances of wind flow over the blades based on measurements, processes this information in real time, and applies it to actuators on the blades. This effectively manipulates the airflow, improves the effectiveness of the turbines, and reduces noise and vibration (Syracuse University, 2010). This new method addresses both the unsteady nature of wind and the alleged cause of Pierpont's Wind Turbine Syndrome.

While HAWTs dominate the land, the sea has become the target of several promising VAWT projects. It is difficult enough to run repairs on a land-based turbine 100 m high; doing so on the water poses extra impediments. Vertical-axis designs, however, keep the generator at the base, where all of the hardware can be easily accessed for maintenance. Also, the weight distribution of the sails in a horizontal-axis system is variable, whereas the center of gravity of a vertical-axis turbine is always directly over the center, so the main support structure does not need to withstand changes in loading. The Novel Offshore Vertical Axis (NOVA) Aerogenerator project aims to investigate and develop such a machine for the sea-based wind belts around the UK.

The Aerogenerator offshore wind turbine would produce a maximum power output of 5 MW. The original Aerogenerator design has two dominant blades in a V-shape with perpendicular sails all along them; the arms make use of tension wires for evenly distributed support. Although the structure appears quite different from the traditional Darrieus-style rotor, the two are actually quite similar: the V-wings mimic the lower portion and the sails produce the same effects as the upper half. The only portions truly missing are the center beam and the top bearing. Lift is generated along the wings and sails, as with any other wind harvester, and causes the turbine to spin; as a vertical-axis turbine, it accepts wind from any direction. This consistency, along with its low profile, also keeps the turbine from interfering with radar signals. HAWT-style wind farms constantly change direction and skew radar signals, but the Aerogenerator design always presents the same overall pattern and does not create a security gap. The NOVA project is run as a collaboration of various institutes and companies. Combining experience and resources, the consortium originally planned to produce 1 GW of electricity with 200 of the 5 MW generators by 2020 (Offshore wind, 2010). After conducting eighteen months of feasibility studies, the overall design was improved to produce twice the energy at half the weight of the original. It is also half the weight of a HAWT of equal output, in addition to having the preferable weight distribution.

The Aerogenerator X will have a peak power output of 10 MW and span 270 m.

The Aerogenerator X maintains all of the benefits of the older model while adding some of its own, and the system is more cost effective. What remain are studies on deep-sea potential and the effects of installation on local marine organisms. One problem did arise in the update, however: by removing the support cables between the wings, the overall weight stayed the same, but stresses on individual components became an issue. With the wings now loading the structure past the base, individual blade structure had to be improved, which meant adding back some weight and restructuring the wings. Material thickness was increased to tackle the issue, but this increased the strain on the drive train, and modifying the drive train led to difficulty at greater depths of water. Despite this, the new design managed to be both more cost effective and more efficient; Parliament thus aims to install 12 GW worth of offshore wind power. The first Aerogenerator X is targeted for implementation in 2014; however, the final plan for power transmission has yet to be solidified (Aerogenerator x, 2010).

In the Distant Future

One team of four recently developed a novel idea in power transmission that, rather than tackling the common issues of placement and installation, seeks to revamp preexisting lines. The overall idea is that vertical-axis wind turbines could be built straight into the far-reaching transmission towers.

This concept design incorporates wind turbines straight into the transmission towers. Because most wind farms are already in ideal locations for harvesting, the lines would be in suitable regions for extra production. It is uncertain whether or not all towers would be able to support the vibrations of the turbines, but feasibility tests of the support structures should yield results shortly. Even if only a small amount of electricity were gathered from each pylon, multiplied across the millions constructed, a substantial amount of energy could be harvested internationally. In France alone it is predicted that the modifications would cover 5% of the country's energy needs, whereas in America wind energy currently represents less than 3% of our electricity (Hurst, 2009).

Conclusion

The political decisions in the Republic of South Africa provide a prime example of social and political problems hindering the development of wind energy. While the energy feed-in tariff in 2009 aided the movement, legal aspects have also become issues in the advancement of wind energy. Accepted elements of a sustainable development are also involved in slowing the

movement. One impediment is the idea that development must be considered with intergenerational views; technology that will not be available until later in the future cannot be used to increase development now. While old views are a difficult obstacle for the wind energy movement to surpass, the evident economic and environmental benefits are pushing the renewable energy movement slowly and steadily forward (Duguay, 2011). Hopefully the promising outlook for renewable wind energy in South Africa is only foreshadowing for progress to be made in the rest of the world.

Bibliography

Adkins, B., & Brain, M. (2011). How airplanes work. Retrieved from http://science.howstuffworks.com/transport/flight/modern/airplane8.htm

Aerogenerator x: world's biggest offshore wind turbine unveiled. (2010, December 1). NBM & CW.

Archer, C. L., & Jacobson, M. Z. (2011). Evaluation of global wind power. Informally published manuscript, Department of Civil and Environmental Engineering, Stanford University, Stanford, CA. Retrieved from http://www.wind-energie.de/fileadmin/dokumente/Themen_A-Z/Potenzial%20der%20EE/Stanford_global_winds.pdf

Beekmann, A., Marques, J., Quitmann, E., & Wachtel, S. (2009). Wind converters with FACTS capabilities for optimized integration of wind power into transmission and distribution systems. Proceedings of the 2009 CIGRE/IEEE PES joint symposium (pp. 1-1). Calgary.

Blackwell, R. (2010, March 18). Underwater line would take power to New York. Globe and Mail.

Budenholzer, R., & Landis, F. (2011). Turbine. Britannica academic edition. Retrieved from http://www.britannica.com/EBchecked/topic/609552/turbine/45700/Development-of-wind-turbines

Duguay, P. M. (2011). Wind power to the people: overcoming legal, policy and social barriers to wind energy development in South Africa. Journal of World Energy Law and Business, 4(1), 1-31.

Galbraith, K. (2011, January 20). Lack of transmission lines is restricting wind power. The Texas Tribune, p. A21.

Graham, L. (2011, March 10). Wind turbines bad for health, says US doctor. The Australian, p. 2.

Hansen, M. O. L., Madsen, H. Aa., Sørensen, J. N., Sørensen, N., & Voutsinas, S. (2006, June). State of the art in wind turbine aerodynamics and aeroelasticity. Progress in Aerospace Sciences, 42(4), 285-330. doi: 10.1016/j.paerosci.2006.10.002

Hurst, T. (2009). New design integrates wind turbines into transmission towers. Clean Technica. Retrieved from http://cleantechnica.com/2009/07/06/new-design-integrates-wind-turbines-into-transmission-towers/

Janssen, S. A. (2009). Exposure-response relationships for annoyance by wind turbine noise: a comparison with other stationary sources. Proceedings of EURONOISE, 8th European conference on noise control, Edinburgh, 26-28 October 2009. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-5264

Kaldellis, J. K., & Zafirakis, D. (2011). The wind energy (r)evolution: A short review of a long history. Renewable Energy, 36, 1887-1901.

Landis, F., Russell, C., Seale, R., Wailes, R., & Woodruff, E. (2011). Energy conversion. Britannica academic edition. Retrieved from http://www.britannica.com/EBchecked/topic/187279/energy-conversion/45919/History-of-energy-conversion-technology?anchor=toc45919

Layton, J. (2011). How wind power works. Retrieved from http://science.howstuffworks.com/environmental/green-science/wind-power.htm

Liserre, M., Cardenas, R., Molinas, M., & Rodriguez, J. (2011). Overview of multi-MW wind turbines and wind parks. IEEE Transactions on Industrial Electronics, 58(4), 1081-1095.

New source of clean power for New York City one step closer to reality. (2011). Retrieved from http://www.chpexpress.com/press-releases/032111.php

Offshore wind; the shape of wings to come. (2010, February 22). The Engineer, 22.

Syracuse University. (2010, December 22). New ideas enhance efficiency of wind turbines. ScienceDaily. Retrieved March 24, 2011, from http://www.sciencedaily.com/releases/2010/12/101220110856.htm

Traber, T., & Kemfert, C. (2011). Gone with the wind? Electricity market prices and incentives to invest in thermal power plants under increasing wind energy supply. Energy Economics, 33(2), 249-256.

Turbine. (2011). In Encyclopædia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/609552/turbine

Wagner, H. J., & Mathur, J. (2009). Introduction to wind energy systems: basics, technology and operation. doi: 10.1007/978-3-642-02023-0

Wind power. (2011). In Encyclopædia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/645063/wind-power

Xie, K., & Billinton, R. (2011). Energy and reliability benefits of wind energy conversions. Renewable Energy, 36, 1983-1988.

Chapter 3 Ocean Energy


Ashleigh Panagiotou, Curtis Belmonte, and Eva Jaskoviak

Introduction

Covering approximately two-thirds of the globe, the ocean holds immense power. The crashing force of waves can erode cliffs, and powerful tides can shape beaches. As the sun's rays strike the surfaces of seas, thermal energy is transferred to the water in the form of heat. Even the simple mixing of rivers into oceans can yield tremendous amounts of energy. As our global society continues to move away from its traditional reliance upon nonrenewable fossil fuels, scientists and engineers are constantly seeking alternative clean and sustainable sources of energy. In recent years, researchers have begun to look to the ocean as an increasingly viable source of abundant power. Between their powerful waves and tides, diverse climate conditions, and unique chemical composition, the oceans have the potential to provide tremendous amounts of power to meet our ever-increasing energy demands. For this reason, ocean energy, or marine power, in its various forms has become a topic of great interest among modern advocates of green energy. Although marine power has only recently been considered a potential permanent solution to the global energy crisis, interest in utilizing the energy of the ocean has pervaded our society for hundreds of years. As early as the eighteenth century, humans had begun to look more intently at the immense force produced by ocean waves as they crashed upon the shore. Many felt that if this potent force of nature could be harnessed, it would see countless practical applications as a commercially viable source of energy. Early attempts to harness the force of

Waves crash as locals and tourists enjoy themselves in the waters of Dreamland Beach in Bali.

waves and the motion of the tides in the late eighteenth to early nineteenth century were followed a century later by research into utilizing the thermal energy that resulted from temperature differences inherent in the ocean (Harris, 2009). An additional source of marine power, the

salinity that is characteristic of ocean water, evaded thorough consideration and research until as recently as the advent of the twenty-first century (Reuters, 2009). Especially now, when energy costs are rapidly rising, what if the motion of the same seas that you use for recreation could be captured to power everything in your house? As the popularity of green technology grows, more companies are looking to harness that power.

Tidal and Wave Energy

Imagine you were to stumble across an underwater wind farm. It would look very similar to the wind turbine farms found on the coasts of oceans, with one key difference: the turbines sit arranged in rows on the floor of an ocean or river and generate electricity from the force of water rushing past them, rather than from wind. The motion of incoming and outgoing tides, or tidal action, is converted into usable electric power by devices that are very similar to those that harness wind energy. These water farms are already a reality in some areas. Thirty feet under the East River in Manhattan, six of these water turbines sit beneath the surface, rotating with the current. The six turbines, all of which are sixteen feet in diameter, can rotate up to 32 times a minute. They sit on a platform 30 feet below the water that can turn to change the direction the turbines are facing in order to harness the greatest amount of mechanical power (It came from the sea, 2008).

Weighing the Options

While marine current turbines are one solution, there exist many other devices and technologies to harness this hydropower. A device proposed by a group from Singapore comprises a floating platform, affixed to the ocean floor by concrete blocks, that carries electrical generators and water turbines (Hartono, 2000). A tidal barrage system uses a dam-like structure to block receding water, which is released at low tide. When the water is released, it flows past and turns a turbine (Tidal power, 2011).
Tidal fences are a concept similar to that of the tidal barrage. Instead of a dam, however, a row of turnstile-like turbines rotates as water flows through (Ocean tidal power, 2011).

Why Not Wave Energy?

As with other forms of energy generation, this form of marine power has its advantages and disadvantages. As abundant as water is, not all sites can house devices to harness the power of the tides. Tidal turbines work best if they are closer to shore at a depth between 65 and 98 feet (Ocean tidal energy, 2011). Some devices, including those manufactured by Verdant for the East River, need to rotate a specific number of times per minute in order to make up for the cost of the turbines or other devices. The most practical sites for these farms are bodies of water with high flow velocities, but even with the right site, the hydroelectric power system is inevitably costly. However, with the turbines subject to a high flow velocity, the energy they harness and the versatility of the resulting electrical energy potentially far outweigh this financial cost. Unlike wind, water is very predictable. A scientist can always predict the tidal patterns and often can anticipate the water flow for a day by watching the water further out in the ocean a couple of days in advance (It came from the sea, 2008).
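Part of why these underwater turbines can be so much smaller than wind turbines is that water is roughly 800 times denser than air. A minimal sketch of the standard kinetic power flux relation, P/A = ½ρv³; the wind and current speeds chosen below are illustrative, not values from the source.

```python
# Why underwater turbines can be compact: kinetic power flux through
# a turbine's swept area scales linearly with fluid density and with
# the cube of flow speed. Speeds below are illustrative assumptions.

def power_density(rho, v):
    """Kinetic power per square meter of swept area (W/m^2)."""
    return 0.5 * rho * v**3

AIR = 1.225        # kg/m^3, sea-level air
SEAWATER = 1025.0  # kg/m^3

wind = power_density(AIR, 12.0)         # a brisk 12 m/s wind
current = power_density(SEAWATER, 2.5)  # a strong 2.5 m/s tidal current

print(round(wind))     # 1058 W/m^2
print(round(current))  # 8008 W/m^2, several times the wind figure
```

Even a modest current thus carries far more power per square meter of rotor than a strong wind, which is why sixteen-foot rotors in the East River are worthwhile.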

A tidal barrage system blocks water using a type of dam and releases it at low tide. When the water is released, it flows past a turbine.

There is also a real and immediate concern that this technology may prove damaging to ocean ecosystems. Tidal barrage systems, for example, can disrupt natural ocean environments both while the huge structures are built and while they are in use. Turbines are widely considered to be the least damaging of all the technologies available. Even tidal fences have the possibility of harming animals migrating in the water, although the harm is thought to be minimal, and these devices can be designed in ways that minimize the danger.

Making Waves in the Future

Wave power is similar to tidal power but relies on the pressure differences created by waves beneath the surface of the ocean. Offshore wave energy is collected at depths of more than 130 feet, but onshore systems are situated right along shorelines. Some offshore devices power pumps using the up-and-down motion of waves, while others capture the pressure of water with long tubes that move a turbine. For example, specially engineered ships can collect water from the surface of the ocean and pass it through turbines in their hulls. Off the coast of Aguçadoura, Portugal, huge tubes floating on the surface of the water move with the ocean waves and power generators.

Wave power generators off the coast of Aguçadoura, Portugal. These large tubes move with the waves and power generators.
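The energy carried by such waves can be estimated with the standard deep-water flux formula for irregular seas, P = ρg²Hs²Te/(64π), giving power per meter of wave crest. A minimal sketch; the 2 m wave height and 8 s period below are illustrative assumptions, not measurements from the Portuguese site.

```python
import math

# Deep-water wave energy flux per meter of wave crest for irregular
# seas: P = rho * g^2 * Hs^2 * Te / (64 * pi). The wave height and
# period below are illustrative values, not site measurements.

RHO = 1025.0  # kg/m^3, seawater
G = 9.81      # m/s^2

def wave_power_per_meter(hs, te):
    """Energy flux (W per meter of crest) for significant wave
    height hs (m) and energy period te (s)."""
    return RHO * G**2 * hs**2 * te / (64 * math.pi)

# A moderate swell: 2 m significant height, 8 s period.
print(round(wave_power_per_meter(2.0, 8.0) / 1000, 1))  # 15.7 kW per meter
```

Because the flux grows with the square of wave height, storm seas carry an order of magnitude more power than calm ones, which is both the appeal and the engineering challenge of devices like these floating tubes.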

Onshore system technologies collect the energy of breaking waves. One of these devices, a tapchan, is a tapered channel that supplies a reservoir above sea level with water. The tapered channel forces waves to increase in height as they move towards a cliff encompassing the reservoir. The water fed into the reservoir then passes through water turbines to produce energy. A simpler design of an onshore system is a pendular device. This box mechanism has a flap inside that moves when water travels into the box. The movement of the flap powers a pump and generator (Ocean wave power, 2011). Unlike tidal power devices, wave power devices generally cost less because of the relatively low amount of fuel they demand. These systems have a hard time competing with other power sources, however, and, like the other forms of marine power, there is a shortage of suitable sites for these technologies. Although sites are indeed hard to find, the World Energy Council has found that Australia has many areas along the coast that would be suitable for capturing ocean energy. All along the southern coastline there exists the potential to harness power and energy. An experimental wave energy generator was recently installed at Port Kembla in Australia. Although it produced energy on a smaller scale, it showed that this form of energy can be used in the future. Although wave energy is still in its infancy and has a long way to go before the technology produces enough power to be truly cost-effective, it certainly holds potential (Wave energy to be exploited, 2010).

Ocean Thermal Energy

The earliest documented proposal for harnessing the thermal energy of the ocean is that of the eminent French physicist Jacques-Arsène d'Arsonval in 1881 (Chiles, 2009).
The basis for the technology that would accomplish such a goal was simple: basic physics tells us that the density of water depends upon its temperature, such that warmer water is less dense than colder water. Because a less dense substance is also more buoyant, warmer ocean water tends to rise to the surface, whereas colder ocean water sinks below it. The result is a significant difference in temperature between surface water and deeper ocean water. Using a heat engine, a device that operates on the energy produced by the relative temperature differential between two substances, these intrinsic marine temperature differences may be harnessed to produce power in the form of mechanical or electrical energy (Harris, 2009). Forty-nine years after d'Arsonval first advocated the prospect of ocean thermal energy conversion (OTEC), a student of his, Georges Claude, became the first to construct a working commercial OTEC plant. Built in Cuba, Claude's experimental plant subsequently produced as many as 22,000 watts of electrical power while operational (Chiles, 2009). By this point in time, however, OTEC technology was still very much in its early prototypal phase and could not compete commercially with either wind farms or traditional fossil fuel plants (Harris, 2009). Since then, countries and states worldwide, including Denmark, France, Germany, Italy, Japan, Russia, the United Kingdom (Bruch, 1994), and Hawaii (U.S. Energy Information Administration, Office of Electricity, Renewables and Uranium Statistics, 2011), have launched OTEC programs and even constructed operational OTEC plants. Along with these plants came new technological achievements that improved upon the efficiency of traditional OTEC methods and allowed the plants to achieve long-term commercial success.

The Cyclical Design

The year 1967 saw a particularly important advancement in OTEC technology. In this year, James Hilbert Anderson patented the design of the first closed cycle OTEC system, drastically improving upon the conversion efficiency of existing systems (U.S. Patent No. 3,312,054, 1967). This system comprised a fluid with a low boiling point, often ammonia, that would evaporate when heated by warm ocean water. The resulting gas would exert enough pressure to power a turbine within the system. The gas would then pass over the turbine and into a separate chamber that would use colder ocean water to cool the gas, causing it to condense and return to the original chamber to initiate the next cycle (Harris, 2009).

A diagram of a closed cycle OTEC system. In this design, warm water powers an evaporator, which vaporizes a volatile liquid contained inside in order to produce sufficient pressure to drive a turbine.

Within a matter of decades, another OTEC system, the open cycle system, provided an alternative method for converting ocean water temperature differences into usable energy (Vega, 1999). Rather than utilizing a substance with a particularly low boiling point, the open cycle system would power a turbine by evaporating warm ocean surface water directly. To do this, the system would make use of a vacuum to reduce the pressure within the warm water chamber until the water began to boil. Once the resulting water vapor had passed over the turbine, cold ocean water would cool the gas, causing it to condense into water and thus reinitiate the cycle (Harris, 2009). From these two designs emerged a third, even more efficient OTEC apparatus known as a hybrid system. This system utilized facets of each of the two previous designs in order to maximize the conversion efficiency for a given water temperature differential. Like the closed cycle design, the hybrid system would evaporate a substance with a low boiling point to power an internal turbine; however, like the open cycle design, a vacuum would first evaporate warm ocean surface water, and the resulting warm vapor would provide the heat to evaporate the aforementioned volatile liquid (Harris, 2009).

Rise of the Modern OTEC Plant

Adopting these cycle designs, a variety of OTEC plants continued to arise worldwide, based either on land or offshore on the ocean itself. Land-based plants offered relative ease of construction, greater protection from unpredictable and potentially violent ocean weather patterns, and more direct access to the onshore facilities to which they provided power. However, floating offshore plants promised more direct access to the water used to power their thermal cycles, potentially optimizing output for larger-scale operation, such as the Hawaiian plant (Vega, 1999).

Design of an onshore OTEC plant that makes use of the water not only for initial energy generation, but also for subsequent desalination and irrigation applications.

Modern OTEC plants, such as that operated by the Tokyo Electric Power Company off the coast of the island of Nauru, make use of these designs to produce electrical power outputs in excess of 100,000 watts, far greater than that of Claude's original 1930 plant (Bruch, 1994). However, these figures pale in comparison to the true potential of OTEC technology. Modern scientists estimate the theoretical capacity of global OTEC systems to be 1,000 gigawatts and the theoretical global annual production potential to be as high as 10,000 terawatt-hours (International Energy Administration, 2007).

Reconciling Theory and Reality

So what keeps us from making full use of this tremendous store of energy? The problem lies with the inherent inefficiency of current OTEC systems. The output of these thermal plants and the devices that run them is limited by their energy conversion efficiency, the percentage of the potential energy resulting from oceanic temperature differences that they can physically produce. In general, as the water temperature differential used to power an OTEC device decreases, the efficiency of the system also decreases. These plants are therefore most practical in tropical locations where the difference in water temperature between the surface and deep ocean water is greatest, typically between 20 and 25 °C (Nihous, 2007). At these temperatures, the maximum

potential efficiency of an OTEC system ranges from six to seven percent. This limiting value is known as the Carnot efficiency of the system (Berger & Berger, 1986). However, OTEC systems operating at these temperature differentials have so far failed to achieve their theoretical potentials. In fact, the typical operating efficiency of tropical OTEC plants falls 3-6% short of the theoretical maximum. This shortfall is the result of several physical factors that limit the ideal operation of these particular heat engines. The first and most notable is an effect known as microbial fouling (Berger & Berger, 1986). Ocean water contains a variety of plants, algae, and microorganisms that lower the effective thermal conductivity of the water itself. This presents a real problem for traditional OTEC systems, which operate on the principle of using ocean water as a thermal exchange medium to cool down or heat up the various moving fluids that power the devices. In fact, research indicates that microbial fouling may lower the efficiency of OTEC systems by as much as 50%, as it also tends to form microbial layers on the surface of the heat engines that are quite difficult to remove, further impairing their performance (Berger & Berger, 1986). An additional factor compromising efficiency is parasitic power loss. During the typical operation of many OTEC devices, internal components, such as the compressor that is used to expel waste water from the system, draw some of the power produced by the engine. This allows the system to operate more effectively but also limits the amount of output energy that can be supplied to external sources (Bharathan, 2011). Although they are quite efficient compared to closed cycle systems alone, open and hybrid cycle OTEC systems present their own set of difficulties. In particular, the vacuum that is used to evaporate the warm ocean water that powers the cycle is subject to various operational inefficiencies.
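The six-to-seven-percent Carnot ceiling quoted above follows directly from the water temperatures involved. A minimal check, assuming typical tropical values of 25 °C surface water and 5 °C deep water (illustrative figures consistent with the 20-25 °C differential the text cites):

```python
# The Carnot limit for a heat engine between two reservoirs:
# eta_max = 1 - T_cold / T_hot, with temperatures in kelvin.
# The 25 C / 5 C pair is an assumed typical tropical profile.

def carnot_efficiency(t_hot_c, t_cold_c):
    """Maximum theoretical heat-engine efficiency between two
    reservoirs given in degrees Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

eta = carnot_efficiency(25.0, 5.0)  # a 20 C differential
print(f"{eta:.1%}")  # 6.7%, within the 6-7% range the text cites
```

Because the absolute temperatures sit near 300 K while the differential is only ~20 K, the ceiling is inherently low; every degree of differential lost to fouling or leakage therefore costs a meaningful fraction of the plant's already-small budget.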
In order to allow for maximum efficiency, the vacuum chamber must be sealed to prevent leakage and achieve minimum pressure. If even a slight air or water leak occurs, this efficiency can be drastically compromised. Additionally, during the process of using deep ocean water to cool and condense the gaseous working fluid, the low pressure of the vacuum chamber may cause dissolved gas to evolve from the liquid. In general, gaseous substances are intrinsically less thermally conductive than liquids, so this excess gas in the chamber can decrease the thermal energy transferred by the cool ocean water heat exchanger and thus lower the efficiency of the device as a whole (Vega, 1999).

The Heated Environmental Debate

Even if we one day manage to overcome these factors limiting the efficiency of OTEC systems, we will still face challenges to their effective implementation into our society. For all of those who advocate its potential as the green energy technology of the future, there are equally many who oppose OTEC technology, believing that it may prove harmful to the environment. Some environmentalists worry about the use of refrigerants such as R-22 and R-134a in closed and hybrid cycle OTEC systems, fearing that leakage of these substances may contribute to ozone layer depletion or global warming. Others also express concern over the need to pump large volumes of deep ocean water to the surface to power these systems. By doing so, some argue, we may disrupt the delicate ocean ecology by

redistributing heat throughout different marine zones and releasing potentially contaminated discharge water back into the environment (Fujita, 1990).

What's in Store for OTEC?

As of now, marine thermal energy harnessing is still an emerging technology. Although it has its disadvantages, it nevertheless shows great potential for the energy-conscious world of tomorrow. Efforts to improve upon the currently realized efficiency and practicality of the technology are already underway. Especially of interest are potential hybrid energy conversion systems that would combine the power-harnessing potential of OTEC systems with that of solar energy panels or even marine wind, tidal, and osmotic energy conversion systems (What is sea solar power?, 2007). Additionally, scientists and engineers have proposed designs to minimize the environmental impact of this technology. One proposed variation of the traditional OTEC system would utilize the temperature difference between the ocean surface water and the frigid air in arctic regions (Akulichev, Ilyin, & Tikmenov, 1983). This method would potentially increase the energy yield of the system by providing a greater temperature differential while simultaneously obviating the need to pump large volumes of water from the ocean depths. Others have suggested introducing organisms into the environments affected by OTEC systems in order to replenish nutrients that may be lost from the water during the conversion process and thus minimize environmental impact (Fujita, 1990). With more and more of the world developing an interest in marine power as a viable alternative to fossil fuels, we may soon see ocean energy plants becoming an even bigger part of our economy and our society.
Interest in the technology has already reached a global scale, with China predicting the construction of 100,000 kW tidal power plants by the year 2020 (Wang, Yuan, & Jiao, 2011) and the International European Ocean Energy Association aiming to construct plants sufficient to generate 3.6 GW of energy by 2020 and 188 GW by 2050 (European ocean energy roadmap, 2010). Preliminary estimates even suggest that as much as 10% of future U.S. energy needs could be met by marine power plants (Yardley, 2007). With numbers like these, it seems that the future of our global energy supply may truly lie deep within the ocean.

Osmotic Power

Osmotic energy harnessing is made possible by a simple natural phenomenon: when water from a river comes into contact with water from the sea, the freshwater tends to move towards the salt water, and the salt within the ocean water tends to migrate towards the freshwater. The high chemical potential difference between the waters is turned into kinetic energy as the two attempt to achieve a balance. As a result, a larger pool of brackish water is formed. This phenomenon can be manipulated through a process known as osmosis. In his 2003 paper, R. J. Aaberg explains the process, in which a semi-permeable membrane is placed between bodies of fresh and salt water: the freshwater will first pass through the

membrane to dilute the saltwater, but the membrane will then trap the salt, preventing the two bodies from remixing and achieving equilibrium. As a consequence of the membrane, the saltwater body will always have a higher chemical concentration than the freshwater, and a consistent flux of freshwater will pass through the membrane in an effort to nullify the chemical potential difference. These osmotic forces cause an increase in the pressure of the brackish water, giving rise to a form of power production known as pressure retarded osmosis (PRO) (Skilhagen, Dugstad, & Aaberg, 2008).

Inside an Osmotic Energy Plant

Engineers have developed a power plant design that would use PRO to generate energy. Theoretically, the plant could produce an osmotic pressure of as much as twenty bars, or two million newtons per square meter. This is the equivalent of the hydraulic pressure produced by a 204-meter waterfall (Seppälä & Lampinen, 1999). Builders would need to construct the plant near a source of both fresh and salt water, likely necessitating close proximity to both a river and the open ocean. A series of pumps and pipes placed in these sources would then retrieve the water needed for the process. The plant would also likely be located below sea level, which would allow the waters to travel through the pipes by the natural force of gravity. Because this design would not require pumps to do work drawing the water into the plant, it would mean tremendous savings on the energy required for operation. The figure below shows an artistic interpretation of what such a plant would resemble.

Diagram of a PRO plant location. A PRO power plant would need to be located near sources of both fresh and salt water.

In the conventional design, river water enters the plant through a series of pipes. It continues to travel until it reaches a water filter where most of the dirt, silt, and other impurities are removed, leaving only pure water. This ensures the highest possible chemical potential difference between the two flows. Similarly, the seawater travels through a different series of pipes as it enters the plant. The brine then follows the pipes until it reaches a secondary water filter. This filter serves only to remove any unwanted objects or organisms from the seawater, as these undesirable entities could compromise the entire process.
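The pressure figures quoted earlier can be checked numerically. A minimal sketch using the van 't Hoff relation π = icRT for the osmotic pressure of seawater, and the hydrostatic relation Δp = ρgh for the waterfall equivalence; the 0.6 mol/L NaCl concentration and 20 °C temperature are assumed typical values, not figures from the source.

```python
# Two checks: (1) the van 't Hoff estimate of seawater's osmotic
# pressure, and (2) the water-column height equivalent to the 20-bar
# figure cited in the text. Seawater is approximated as 0.6 M NaCl
# (an illustrative assumption) dissociating into two ions.

R = 8.314     # J/(mol K), gas constant
G = 9.81      # m/s^2
RHO = 1000.0  # kg/m^3, freshwater

def osmotic_pressure_pa(molarity, temp_k, ions=2):
    """van 't Hoff: pi = i * c * R * T, with c converted to mol/m^3."""
    return ions * molarity * 1000 * R * temp_k

def equivalent_head_m(pressure_pa):
    """Height of a water column exerting the given pressure."""
    return pressure_pa / (RHO * G)

pi = osmotic_pressure_pa(0.6, 293.15)  # ~0.6 M NaCl at 20 C
print(round(pi / 1e5))                 # ~29 bar of osmotic potential
print(round(equivalent_head_m(20e5))) # 20 bar ~ 204 m waterfall, as cited
```

The theoretical osmotic potential (~29 bar) exceeds the 20 bar quoted for the plant, which is expected: real membranes and brackish dilution keep the usable pressure below the ideal limit.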

After the filtration process, the water remains in a constant state of motion. Eventually, the water approaches a large chamber that is divided by a semi-permeable membrane, as seen in the figure below. It is here that the process of osmosis is utilized. Both the fresh and salt water sources travel in paths parallel to the membrane modules. In accordance with osmosis, the freshwater travels across the membrane to reach equilibrium with the saltwater. Furthermore, because the river water has passed through a filter, the chemical potential difference between the two waters is greater than normal, resulting in an increase in the amount of freshwater that passes to the other side. Once there, however, it is trapped by the membrane. As more freshwater enters the saltwater side of the chamber, pressure begins to build within it.

Diagram of a PRO system. Freshwater flows across a membrane, resulting in pressurized brackish water that turns a turbine to produce energy.

The brackish water is under extreme pressure as it leaves the chamber and continues to travel down the pipes. Now that it holds much potential energy, most of the water is sent through a pipe with a turbine. The force of the water spins the blades of the turbine, thus producing power. It is this power that is then converted into usable energy and stored for commercial or private use. To make the plant more efficient, however, the remaining brackish water is sent back to the beginning of the plant, where it encounters a pressure exchanger (Cath, Childress, & Elimelech, 2006). This step is key to the practicality of the plant because the osmotic forces alone are not enough to produce the power needed for the plant to be a worthwhile investment. When the pressure produced from the first cycle is exchanged with that of the incoming brine, the saltwater is pre-pressurized, causing the end pressure to be even higher than it was the first time. When this pressurized water is then cycled back into the system again, it builds on itself more and more until the pressure produced is enough to generate a reasonable amount of energy. To complete the process, the freshwater is returned gently to the river environment and the brackish water is brought back to the sea. Between extraction and reintegration, the water itself undergoes few changes; therefore, pollution is minimal. The PRO power plant is a green technology that may prove beneficial in the future for this reason.
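The trade-off at the heart of PRO can be sketched with a standard simplified membrane model: water flux is proportional to the difference between the osmotic pressure difference and the applied hydraulic pressure, Jw = A(Δπ - Δp), and power density is W = Jw·Δp, which peaks when Δp = Δπ/2. The membrane permeability below is a hypothetical round figure, not a Statkraft specification.

```python
# Simplified pressure-retarded-osmosis model: water flux across the
# membrane is Jw = A * (d_pi - d_p), and membrane power density is
# W = Jw * d_p, maximized at d_p = d_pi / 2. The permeability value
# is a hypothetical assumption for illustration.

def pro_power_density(permeability, d_pi, d_p):
    """Power per m^2 of membrane (W/m^2) for osmotic pressure
    difference d_pi and applied hydraulic pressure d_p, in Pa."""
    flux = permeability * (d_pi - d_p)  # m/s of water through the membrane
    return flux * d_p

A_MEM = 3e-12  # m/(s*Pa), assumed membrane water permeability
D_PI = 25e5    # 25 bar osmotic pressure difference, fresh vs. seawater

best = pro_power_density(A_MEM, D_PI, D_PI / 2)  # operate at half d_pi
print(round(best, 1))  # ~4.7 W/m^2, near the 4 W/m^2 target cited later
```

Operating below Δπ/2 wastes osmotic potential and operating above it chokes the flux; this is why membrane permeability, rather than raw pressure, is the binding constraint on PRO economics.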

The Future of Osmosis
Scientists have known of osmosis for many decades, but the technology needed to utilize it has only recently become available. Since then, it has proven especially difficult to develop a truly efficient semi-permeable membrane. Statkraft is the company that has focused most on developing the necessary membrane technology; the higher the salt retention and flux of the membrane, the more power produced (Skilhagen, Dugstad, & Aaberg, 2008). Although Statkraft has developed a membrane with the desired power density of four watts per square meter, its components are extremely expensive. In addition, the membrane design is unreliable and would likely have to be replaced every few years. Thus, this particular membrane would currently prove impractical to produce on a large scale (Gerstandt et al., 2008). Many hope that osmotic power plants see greater commercialization soon, but compared to solar panels and wind turbines, the equipment involved is often larger and more costly. Furthermore, this type of ocean energy is not yet efficient enough for widespread use; it can produce only half the power of conventional hydraulic plants. For a small, coastal country like Norway, multiple osmotic plants could power approximately 10% of the country, but osmotic power holds relatively little value for the larger or land-locked regions of the world (Thorsen & Holt, 2009). One main benefit of osmotic power is that it is extremely environmentally friendly. It does not take enough water from a river or sea to change drastically the physical nature of the surrounding environment, and the process releases no radioactive toxins or harmful pollutants. Furthermore, with the plant best sited underground, the effect on the local ecosystem would be minimized.
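The four-watts-per-square-meter power density quoted above makes it easy to see why membrane cost dominates the economics. A back-of-the-envelope sizing (the 1 MW target is an arbitrary example, not a figure from the text):

```python
# Membrane area needed to hit a target output, assuming the 4 W/m^2
# power density mentioned above holds constant at scale.
POWER_DENSITY_W_PER_M2 = 4.0

def membrane_area_m2(target_power_w):
    """Membrane area (m^2) required for a given electrical output (W)."""
    return target_power_w / POWER_DENSITY_W_PER_M2

area = membrane_area_m2(1e6)  # a modest 1 MW plant
print(f"Membrane area for 1 MW: {area:,.0f} m^2")  # 250,000 m^2
```

A quarter of a square kilometer of expensive, short-lived membrane for a single megawatt illustrates why the technology is not yet commercially competitive.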
As technology improves and the membrane design becomes more efficient and less expensive, osmotic power may prove to be a very efficient green energy source.

Conclusion
All new forms of energy start off small. At present, the possibility of globally dominant, large-scale ocean energy plants is just that: an attractive possibility. As with any new technology, existing marine energy devices have limitations. Their construction and maintenance may currently prove financially demanding, and not all sites are suitable for their effective implementation. However, these problems will likely soon see solutions, and ocean energy conversion may soon produce as much electricity as, if not more than, other forms of green energy, such as traditional wind power systems. In many ways, the various types of marine energy have clear advantages over these current power sources. Osmosis, for example, is unlikely to disrupt delicate ecosystems by producing harmful byproducts or emissions, and although OTEC, tidal, and wave energy do pose the risk of secondary environmental impact, their pros may soon far outweigh their cons. So, the next time you visit the beach, take a look around. For every surfer carried to shore on a wave, just imagine: that same wave may one day power your computer, or some, if not all, of the appliances in your house. It may not happen for several years, but it is certainly far from impossible.

Bibliography

Aaberg, R. J. (2003). Osmotic power: a new and powerful renewable energy source? Refocus.
Akulichev, V. A., Ilyin, A. K., & Tikmenov, V. V. (1983). The possibility of using OTEC systems in arctic regions. Proceedings of the Oceans '83 MTS/IEEE Conference. San Francisco, CA: IEEE.
Anderson, J. H., & Anderson, J. H. (1967). U.S. Patent No. 3,312,054. Washington, D.C.: U.S. Patent and Trademark Office.
Berger, L. R., & Berger, J. A. (1986). Countermeasures to microbiofouling in simulated ocean thermal energy conversion heat exchangers with surface and deep ocean waters in Hawaii. Applied and Environmental Microbiology, 51(6), 1186-1198.
Bharathan, D. (2011). Staging Rankine cycles using ammonia for OTEC power production (NREL/TP-5500-49121). U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy. Golden, CO: National Renewable Energy Laboratory.
Bruch, V. L. (1994). An assessment of research and development leadership in ocean energy technologies. Sandia National Laboratories: Energy Policy and Planning Department. Retrieved from http://www.osti.gov/bridge/servlets/purl/10154003-z9rVWD/native/10154003.PDF
Cath, T. Y., Childress, A. E., & Elimelech, M. (2006). Forward osmosis: principles, applications, and recent developments. Journal of Membrane Science, 281, 70-87.
Chiles, J. (2009). The other renewable energy. Invention and Technology, 23(4), 24-35.
Fujita, R. M. (1990). Ocean energy raises environment issues. Retrieved March 24, 2011, from The New York Times: http://www.nytimes.com
Gerstandt, K., Peinemann, K.-V., Skilhagen, S. E., Thorsen, T., & Holt, T. (2008). Membrane processes in energy supply for an osmotic power plant. Desalination, 224, 64-70.
Greenemeier, L. (2008, March 10). It came from the sea--renewable energy, that is. Scientific American.
Harris, W. (2009). How ocean power works. Retrieved March 24, 2011, from http://science.howstuffworks.com/environmental/green-tech/energy-production/ocean-power.htm
Hartono, W. (2002). A floating tied platform for generating energy from ocean current. Renewable Energy, 25(1), 15-20.
International Energy Administration. (2007). Implementing agreement on ocean energy systems: Annual report 2007. Retrieved April 14, 2011, from www.iea-oceans.org/_fich/6/IEAOES_Annual_Report_2007.pdf
Introducing osmotic power. (2009). Osmotic Power Inc. Retrieved from http://www.osmoticpower.com/
Moskwa, W. (2009, November 24). Norway opens world's first osmotic power plant. Reuters. Retrieved from http://www.reuters.com/
Nihous, G. C. (2007). An estimate of Atlantic Ocean thermal energy conversion (OTEC) resources. Ocean Engineering, 34, 2210-2221.
Ocean energy. (2011). The California Energy Commission. Retrieved March 24, 2011, from http://www.energy.ca.gov/oceanenergy/index.html
Oceans of energy: European ocean energy roadmap 2010-2050. (2010). Imprimerie Bietlot, Belgium: European Ocean Energy Association.
Ordonez, I. (2008, December 6). Everybody into the ocean. The Wall Street Journal, p. R6.
Özger, M. (2011). Prediction of ocean wave energy from meteorological variables by fuzzy logic modeling. Expert Systems with Applications, 38(5), 6269-6274.
Özger, M., & Şen, Z. (2008). Return period and risk calculations for ocean wave energy applications. Ocean Engineering, 35(17-18), 1700-1706.
Renewable energy. (2011). In Energy Savers. Retrieved from http://www.energysavers.gov/renewable_energy/
Reuters. (2009). World's first osmotic power plant opens. Retrieved April 15, 2011, from CNET News: http://news.cnet.com
Seppälä, A., & Lampinen, M. J. (1999). Thermodynamic optimizing of pressure-retarded osmosis power generation systems. Journal of Membrane Science, 161, 115-138.
Skilhagen, S. E., Dugstad, J. E., & Aaberg, R. E. (2008). Osmotic power - power production based on the osmotic pressure difference between waters with varying salt gradients. Desalination, 220, 476-482.
Thorsen, T., & Holt, T. (2009). The potential for power production from salinity gradients by pressure retarded osmosis. Journal of Membrane Science, 335, 103-110.
Tidal power. (2011). In Encyclopædia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/595132/tidal-power
U.S. Energy Information Administration, Office of Electricity, Renewables and Uranium Statistics. (2011). Electric power monthly April 2011 with data for January 2011 (DOE/EIA-0226). Washington, DC: Government Printing Office.
Vega, L. A. (1999). Open cycle OTEC. Retrieved March 24, 2011, from OTEC News: http://www.otecnews.org/articles/vega/05_open_cycle.html
Wang, S., Yuan, P., Li, D., & Jiao, Y. (2011). An overview of ocean renewable energy in China. Renewable and Sustainable Energy Reviews, 15(5), 91-111.
Wave energy to be exploited in Australia. (2010, August 19). Retrieved from http://www.ourenergy.com/news/wave_energy_to_be_exploited_in_australia.html
Wave power. (2011). In Encyclopædia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/1487247/wave-power
What is sea solar power? (2007). Retrieved March 24, 2011, from http://www.seasolarpower.com/index.html
Yardley, W. (2007). Efforts to harvest ocean's energy open new debate front. Retrieved March 24, 2011, from The New York Times: http://www.nytimes.com

Chapter 4 Energy Conservation


Andrew Meloche, Mary Reynolds, and Amy Rockwood

Introduction
The standard of living in most of the world today depends heavily on the consumption of non-renewable resources. As the population continues to expand exponentially, the Earth's natural resources diminish ever more rapidly. Only in recent decades have we become aware of the imminent crisis that will occur if we perpetuate this reliance on non-renewable resources. Such vast changes in thinking and ways of life, however, take time to permeate the general public: not everyone is going to buy a more fuel-efficient car or find other means of transportation within the next year or so. Yet with high gas prices and incentives from governments, the switch is slowly being made, more quickly in some countries than in others (Himler, 2007). New technology is currently being researched and manufactured to reduce the consumption of non-renewable energy. Although it is a global issue, conserving energy requires the participation of each individual to contribute to the larger goal.

Background
It is hard to pinpoint exactly when this energy crisis began; however, when the oil-producing countries stopped selling to Western nations in 1973 and people grew infuriated by the resulting shortage, the world realized that there was a problem. Today the energy crisis is a progressively worsening problem. Fortunately, researchers are developing new ideas to prevent the complete depletion of precious resources. They have recognized the need to reduce the consumption of non-renewable resources, the sources that have fueled our world since the beginning of the Industrial Revolution almost 300 years ago: coal, oil, natural gas, and, more recently, nuclear power. Researchers have also tapped into new sources of energy, such as solar power, wind power, geothermal heat, biofuels, and hydropower, all of which are renewable.
These new sources, together with a growing consciousness of the need to protect the planet, sparked the development of green technology. In such an era of advancement, an evolution of energy-efficient technology is taking hold. Buildings, homes, cars, electronic devices, paper, household appliances, manufactured items, and numerous other items used daily are all becoming more energy efficient.

The 1973 energy crisis caused global oil shortages.

Energy is used in every aspect of life today. Officially, according to the U.S. Department of Energy, energy consumption is divided into four classifications: the residential, commercial, industrial, and transportation sectors of the economy. Of these four, industry consumes the most energy, about 9% more than the residential sector. On average, each American uses the equivalent of about seven gallons of gasoline per day (Energy consumption, n.d.). As this fact suggests, there is an underlying link between a country's economic standing and its energy consumption. In some cases, there may be more consumption in a third-world country due to a lack of efficiency. More often than not, however, the vast populations of larger and more technologically advanced countries cause their energy consumption to be far greater than that of smaller countries (Guanghua, Shuwen, Yixin, Yongxia, & Yunzhu, 2011). Efficient use is often overlooked when discussing ways to solve the energy crises we face today. Pollution, global warming, and especially fossil fuel depletion are all problems that can be addressed through efficient use of the energy available to us. Energy consumption per capita has stayed consistent over the past few decades (Energy use, 2011), perhaps because increases in national energy use have corresponded to increases in population. In 2007, the United States used 7,759 kilograms of oil equivalent per capita, compared with 4,019 kilograms of oil equivalent per capita in Japan in the same year. The United States uses far more energy than most other countries in the world today. The technological advances of the twenty-first century have been both harmful and beneficial to the environment: new gadgets have tremendously increased global energy consumption rates, but they have also brought attention to the global energy crisis.
Energy conservation is one of the biggest global issues facing the world today. Approximately 12,000 million tonnes of oil equivalent (Mtoe) of energy, roughly 140 trillion kilowatt-hours (kWh), is consumed annually (BP, 2011), and this figure is projected to grow to about 16,000 Mtoe by 2030 (BP, 2011). The energy resources the world uses today will not be able to meet these increasing demands; these valuable resources are depleting at a rapid pace. When they are exhausted, the world will not be able to run the devices used today; therefore, it is necessary to preserve these valuable resources through effective energy conservation programs. Alternative energy is another key factor in conservation. By 2050, oil production throughout the world will be only half of what it is today (Zhang & Neelameggham, 2011). Because oil is becoming scarcer, alternative sources of energy need to be discovered and implemented soon if our economy and society are to continue to grow. The production of renewable energy is one of the keys to energy conservation (Zhang & Neelameggham, 2011).
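The per-capita figures above can be sanity-checked with standard conversion factors. The sketch below assumes 1 kg of oil equivalent is about 11.63 kWh (the usual IEA factor) and one US gallon of gasoline holds roughly 33.7 kWh of energy; both are approximations.

```python
# Rough check of the per-capita consumption figures cited above.
KWH_PER_KGOE = 11.63    # IEA conversion: 1 kg oil equivalent in kWh
KWH_PER_GALLON = 33.7   # approximate energy content of a gallon of gasoline

def daily_gasoline_equivalent(kgoe_per_year):
    """Convert annual per-capita energy use to gallons of gasoline per day."""
    kwh_per_day = kgoe_per_year * KWH_PER_KGOE / 365
    return kwh_per_day / KWH_PER_GALLON

us = daily_gasoline_equivalent(7759)     # United States, 2007
japan = daily_gasoline_equivalent(4019)  # Japan, 2007
print(f"US: {us:.1f} gal/day; Japan: {japan:.1f} gal/day")  # ~7.3 vs ~3.8
```

The result for the United States, about seven gallons of gasoline-equivalent per person per day, agrees with the figure quoted earlier in the chapter.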

Methods of Conserving Energy
Ways of conserving energy can be subdivided into several categories. The first is efficient energy use, which refers to making existing forms of energy last longer; a good example is a car that achieves more miles per gallon. The second is decreased energy consumption, meaning a device requires less energy to run; for example, modern Energy Star washers and dryers run on less electricity and water. The final category is reduced consumption from conventional means. This category is not necessarily about new, efficient products but about how energy is used by already existing appliances; for example, devices do not have to be constantly plugged into an outlet when not in use. In each of these categories, progress has been made and already incorporated into daily life.
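The "unplug it when not in use" advice is easy to quantify. The numbers below are hypothetical, illustrative values (ten devices drawing 5 W each on standby, electricity at $0.12 per kWh), not figures from the text:

```python
# Rough annual cost of standby ("phantom") loads: devices left plugged
# in draw a few watts even when switched off.
DEVICES = 10           # assumed number of always-plugged-in devices
STANDBY_WATTS = 5      # assumed standby draw per device
PRICE_PER_KWH = 0.12   # assumed electricity price, $/kWh

kwh_per_year = DEVICES * STANDBY_WATTS * 24 * 365 / 1000
cost = kwh_per_year * PRICE_PER_KWH
print(f"{kwh_per_year:.0f} kWh/yr wasted, about ${cost:.0f}")  # ~438 kWh, ~$53
```

Even under these modest assumptions, unplugging idle devices saves a household hundreds of kilowatt-hours a year at no cost at all.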

Green Architecture
Green building is becoming more popular around the world. It involves proper structuring and resource-efficient methods throughout the construction and eventual repair of a building. Whether a building's construction, repair, operation, renovation, or even demolition is green depends on three features: the energy use of the building, the efficiency of its occupants, and the waste expelled from its use of energy (Wald, 2007). Architects are using renewable energy sources, constructing new buildings with less material, and devising more effective structural designs; these designs and technologies are implemented in new shopping areas, corporate headquarters, college campuses, laboratories, homes, and more. Passive systems require no input of energy. A passive system uses the energy of the sun through productive architectural design. Passive solar design is applied to the windows, walls, and other components of buildings to collect and store solar energy. The materials used, the design of the building, and its location are all chosen so that the maximum amount of solar energy can be captured. Additionally, architects are rethinking old techniques to develop new energy-saving designs. For example, traditional glass buildings are not energy efficient; however, the new technique of doubling glass layers with an air-ventilation cavity between them reduces the cost of the mechanical systems needed to cool or heat the building, while still giving the chic look of a glass building (Gonchar, 2010). Designs and materials are just the beginning of energy-efficient construction. In most cases, the design will take advantage of the differences in sunlight across the seasons. A traditional design would place an overhang in front of an east-facing window.
As the sun rises in the morning, the rays of the low winter sun reach beneath the overhang and the interior of the building is warmed naturally. In the summer, the overhang blocks the high sun and keeps the interior out of its direct rays ("Five elements of," 2008). Moreover, green architecture includes modifying the materials used, for instance enclosing the building in lightweight, airtight, compact insulation instead of little or no insulation (image below) (Cockram, 2011). The materials, designs, and energy sources of a building are all being transformed by the greener technologies used to operate a green building.

The amount of energy wasted through inefficient insulation.

Another way to create a more efficient building is to reduce the amount of energy used to ventilate it. Warming or cooling buildings consumes approximately one-sixth of American energy (Roof products, n.d.). To address this tremendous consumption, contractors are retrofitting old duct systems and roofs with modern technology to increase the efficiency of the air systems and keep heat in the building (Duct sealing, n.d.). In addition, some appliances use intelligence to stay efficient. For example, air conditioners and heating systems can be set to use more energy at times when it will be put to better use. Because summer afternoons are hotter, an air conditioner may be programmed to use more power then to cool the air and make the area more comfortable. Likewise, an electric heating system in a modern house may be programmed to turn itself on and off based on the occupancy of the home (Turner & Doty, 2007). It may turn on in the morning when the homeowners wake up, shut off when they leave for work, and, once they return, heat the house again until a designated shut-off time. Smart technologies such as these are the beginnings of energy conservation and efficient use. Some heating systems apply the energy of the sun to heat a liquid, which is later used to warm the building; and because most buildings discharge hot waste gases, these can be combined with the solar energy to heat the fluids that serve as the building's temperature source (Nenstein, 1978). Much thought is put into a building's design and placement before it is built. Depending upon the function of the building, passive-cooling systems may be advantageous. A passive cooling system is any system that does not require energy to operate.
One example is natural night ventilation, which opens the ventilation system of a building to the night air, generally cooler than that of the daytime (Breesch, Bossaer, & Janssens, 2005). With this system, the building is exposed to outdoor air that cools it without spending the energy needed to run air conditioners the next day. Green architecture is becoming a common feature of almost all modern buildings constructed today.

Transportation
As one of the four sectors of energy use, transportation has seen many structural reforms intended to increase efficiency and decrease energy use (Dyer, 2007). With the use of composite materials, automobiles and other classes of transportation have become more efficient in their resource usage. Much work has been done since the automobile's introduction to improve its use of energy: more advanced tires that decrease the friction between the road and the car, changes in motor oil formulas that decrease the internal friction in the engine, and breakthroughs in vehicle aerodynamics. Furthermore, ball bearings decrease the friction in the rotary joints of many pieces of equipment used today, including automobiles. Advances in decreasing the surface area and weight of automobiles, even at small scales, have drastically reduced the energy used by transportation.
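The payoff of such efficiency gains can be estimated with simple arithmetic. The figures below (12,000 miles driven per year, $3.50 per gallon, a 25 mpg conventional car versus a 50 mpg efficient one) are illustrative assumptions, not values from the text:

```python
# Estimated annual fuel-cost savings from a more efficient vehicle.
MILES_PER_YEAR = 12_000   # assumed annual mileage
PRICE_PER_GALLON = 3.50   # assumed gasoline price

def annual_fuel_cost(mpg):
    """Yearly fuel spend for a car with the given mileage rating."""
    return MILES_PER_YEAR / mpg * PRICE_PER_GALLON

savings = annual_fuel_cost(25) - annual_fuel_cost(50)
print(f"Annual savings: ${savings:.2f}")  # $840.00
```

Doubling fuel economy halves fuel spending, so the savings scale directly with gas prices; as prices rise, the efficient car pays back its premium faster.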

This energy efficient train is made of a lighter-weight material that helps reduce the friction between the cart and the rails, saving energy.

Furthermore, car companies are developing new hybrid models that achieve better miles per gallon (mpg) and rely more heavily on electric energy, which tremendously reduces the oil consumed as vehicle fuel (Hybrid Car, n.d.). Recently, the U.S. government passed a bill requiring cars to achieve better miles per gallon in order to be put on the market. As gas prices continue to escalate, more people are looking into buying hybrid cars or other alternative-drive cars. Researchers have even developed cars that operate solely on solar energy (Advanced Vehicles, 2011). Electric cars reduce greenhouse gas emissions, decrease air pollution, and lessen the dependence on oil. As advantageous as these cars are, however, they have faults: they are very expensive compared to gas-powered cars, and there is a lack of recharging stations to power them, a real problem in light of the limited range of car batteries. Electric cars are still a developing field, but they have enormous potential in the near future. The technology for energy-efficient vehicles is advancing rapidly, and researchers are focusing on developing this field; in the near future, a mass-produced green car will be a reality.

Conservation at Home
As the technology of the 21st century advances, devices are becoming smaller and faster; new music players, televisions, computers, and other electronics have better sound quality, clearer resolution, and faster connections. However, these appliances require more energy to support the advanced technology, and demand for these gadgets is expanding as well. Nevertheless, increasing technological knowledge provides an outlet to create devices that reduce energy consumption.
Many electronics companies are switching their devices to LED lighting, which lasts up to 15 times longer and uses 75% less energy than traditional lights (Light bulbs (CFLs), n.d.). LEDs supply light for televisions, computers, phones, lamps, flashlights, headlights, and other systems. Furthermore, devices are incorporating more effective power-save modes; appliances go into sleep mode faster and use less energy once there (Computer keyboard product criteria, n.d.). Another feature being added is the option to choose screen brightness; a dimmer screen uses less energy than a bright one (Computer keyboard product criteria, n.d.). Even the chargers for these items are becoming more energy conscious. Electronics companies are developing new software and devices that reduce energy consumption while still letting the gadgets perform at their highest potential. The technology used to save energy in electronic devices is also being applied to common household items. Lighting is a common factor in all four sectors of the economy; households are switching from incandescent light bulbs to compact fluorescent light bulbs (CFLs), the swirly bulbs, which use a mercury amalgam rather than the liquid mercury found in traditional fluorescent lights (Light bulbs (CFLs), n.d.). Many breakthroughs have been made since the invention of the incandescent light. If every household in the United States were to replace five of its incandescent light bulbs with the energy-efficient equivalent, the compact fluorescent bulb, the country would save as much as 800 billion kilowatt-hours (kWh) of energy (Feder, 2003). Cordless phones are also entering power-save modes more quickly, allowing for less energy consumption. Cooking appliances are reducing their energy needs as well: more gas-fueled ovens are being sold in place of less efficient electric ovens. Using gas-powered ovens can help increase energy efficiency by nearly 200% (Need for awareness about energy conservation, 1990), because this change in power source cuts out the energy lost in the transformation process.
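The five-bulb swap mentioned above can be estimated per household. The wattages and usage below (a 60 W incandescent replaced by a 14 W CFL, three hours of use per bulb per day, electricity at $0.12 per kWh) are typical illustrative values, not figures from the cited source:

```python
# Per-household savings from replacing five incandescent bulbs with CFLs.
BULBS = 5
INCANDESCENT_W = 60    # assumed wattage of the old bulb
CFL_W = 14             # assumed wattage of the replacement CFL
HOURS_PER_DAY = 3      # assumed use per bulb
PRICE_PER_KWH = 0.12   # assumed electricity price

watts_saved = BULBS * (INCANDESCENT_W - CFL_W)          # 230 W while lit
kwh_per_year = watts_saved * HOURS_PER_DAY * 365 / 1000
dollars = kwh_per_year * PRICE_PER_KWH
print(f"{kwh_per_year:.0f} kWh and ${dollars:.2f} saved per year")
```

Roughly 250 kWh per household per year is a modest individual saving, but multiplied across tens of millions of households, and over the multi-year lifetime of each bulb, the national total becomes substantial.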
Another transition people are making is the switch to more energy-efficient refrigerators, which use fewer harmful chemicals and have improved cooling systems and better insulation (Refrigerators, n.d.). Improvements in dishwashers save energy and water by reusing the energy of the spin cycles; the dishes still come out sparkling clean, without extra waste of water or energy (Dishwashers, n.d.). Outside of the kitchen, the laundry room is also seeing new appliances that reduce energy and water consumption. These advancements are similar to those in dishwashers: green washing machines recycle the water used previously, generating electricity to power the rest of the machine (image below) and cutting the energy bill by a third (Clothes washers, n.d.). There is even high-efficiency detergent for washing clothes (Clothes washers, n.d.). Along with greener housing construction, the insides of houses contain new green technologies, adding to the reduction of energy consumption.

An ENERGY STAR efficient washing machine reduces the amount of electricity and water used per wash cycle, greatly reducing the energy cost.

Industrial Conservation
Industry uses vast amounts of energy to power many different processes. These processes usually require large amounts of heat and power, supplied mostly by gases, other fuels, and electricity; steam is a very common output of industrial work. With the implementation of cogeneration at industrial sites, it is possible to be almost 90% efficient with energy use. Machines are among the leading consumers of energy, but that is slowly changing. Not only are machines decreasing their energy usage; the chemistry of consumer goods is changing too. Instead of petroleum-based plastics, new chemicals are replacing the depleting oil resources while retaining the durability and standards of the original plastic. These chemicals take less energy to make and are less harmful to the environment when they degrade. There are several applications for green technology in industry. Hundreds of items used every day have the potential to become energy-efficient products, and researchers are continuously working to expand the collection of green technologies to incorporate every item possible. Thousands of people, companies, and campuses are enlisting green technology to help generate a cleaner Earth and reduce excessive rates of energy consumption.

Emerging Technology
Although many steps have been taken to increase efficiency and decrease waste, many improvements can still be applied in the future. Cogeneration is a relatively new method of harnessing the heat given off by plants that generate electricity (Rosen, Le, & Dincer, 2005). During generation, a power plant releases its energy in one of a few ways. Depending on the location of the plant, how much heat it needs to dissipate, and the availability of other resources, it will disperse heat through cooling towers or flue gas.
These are the two most common means of releasing heat into the natural environment. Cogeneration eliminates the need to release this heat directly into the environment (Rosen, Le, & Dincer, 2005): the waste energy can instead be applied as a heating source for areas close to the plant, or as hot water that can be transported for use by central heating systems (Rosen, Le, & Dincer, 2005). Research is also being conducted on the plausible applications and cost efficiency of Enhanced Geothermal Systems (EGS). Using the heat energy of the Earth, EGS produce electricity in a clean, recyclable process through the heating and cooling of water. First, wells are drilled into naturally fractured, high-temperature rock formations. The fractures are exploited to create several wells and a system of pipes. Cool water is pumped through the wells, absorbing the heat energy of the Earth; upon resurfacing, the steam from the water is used to turn turbines, generating electricity. The water can then cycle through the process repeatedly, running all day without emitting any harmful gases or byproducts. A related geothermal approach can also heat and cool buildings: in the summer, when cooler indoor air is preferred, the system pumps heat into the ground, and during the winter months it works in reverse to heat the air within the building. The ability to transition from heat source to heat sink is itself a form of conservation, because no additional system is needed (Hepbasli & Akdemir, 2004). An Enhanced Geothermal Systems plant in Cooper Basin, Australia drilled 4 kilometers down to 250-degree-Celsius rock, and it shows promise for the further development and implementation of this technology (Walker, 2009). Not only is EGS more environmentally friendly, with zero CO2 emissions; it is also economically advantageous, being more cost effective to establish than a clean coal-burning power plant. Other countries, such as Germany, France, Switzerland, and the United States, have noticed the benefits of clean alternative methods of producing electricity and are also researching EGS.
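How much of Cooper Basin's 250 °C heat could a plant theoretically turn into electricity? The Carnot limit gives an upper bound. The 25 °C ambient (heat-rejection) temperature below is an assumption, and real plants achieve well under this bound, but it shows why hotter, deeper rock is worth drilling for:

```python
# Carnot (ideal) efficiency bound for a heat engine running between
# the ~250 C Cooper Basin rock and an assumed ~25 C ambient sink.
def carnot_efficiency(t_hot_c, t_cold_c):
    """Maximum possible fraction of heat convertible to work."""
    t_hot = t_hot_c + 273.15   # convert to kelvin
    t_cold = t_cold_c + 273.15
    return 1 - t_cold / t_hot

eff = carnot_efficiency(250, 25)
print(f"Carnot limit: {eff:.0%}")  # ~43%
```

Because the bound rises with source temperature, a well that reaches 250 °C rock can in principle convert over twice the fraction of heat that a shallow 100 °C resource could, which is the economic argument for deep drilling.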

An example of an Enhanced Geothermal System heating and cooling water as a source of energy.

Another vanguard of future green technology is an advancement of electrical infrastructure known as the Smart Grid. Currently, the power distribution system is wasteful in its transportation of electricity to the general consumer. The United States alone has a large, complex power distribution system, pictured below as a vast network of power lines spanning the country; it is reliable to the point where any outage makes news headlines, yet the energy is not tracked through its transmission and consumption, so the efficiency of the whole process is unknown. It is important for such pervasive and widely impacting technology to be efficient and environmentally friendly. In response, the term Smart Grid refers to a proposed technology in which a network of information-based equipment cooperates with the already established power distribution infrastructure. With meters strategically placed to track the patterns of electrical flow, the information could be used by both the utility companies and the ultimate consumer. If customers are more aware of the exact details of their electrical consumption and waste, the hope is that they will try to limit their excessive use. Utility companies, on the other hand, could adjust the flow and distribution to be more efficient with the knowledge of where the electricity is being used. In addition, other sources of electrical energy, such as solar and wind, could be incorporated into the distribution system easily and put to use effectively. With more data to analyze, companies can find the optimal positioning for new towers and power lines to fulfill the highest need or yield the largest profit. In effect, the Smart Grid reduces carbon dioxide emissions, allows for expansion to other sources of energy, and enables both the producer and the consumer to make educated decisions based on knowing the path of the electricity (Cleveland, 2009). However, as of now there are no protocols or standards in place for this type of technology to become widespread.
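The core Smart Grid idea, interval metering that informs decisions on both sides of the meter, can be sketched in a few lines. The data format below is entirely hypothetical; real smart meters report readings on fixed intervals to the utility, which aggregates them much like this:

```python
# Toy sketch of Smart Grid metering: interval readings from household
# meters are aggregated so the utility can see when demand peaks and
# customers can see their own usage. Hypothetical data format.
from collections import defaultdict

readings = [  # (household_id, hour_of_day, kWh consumed in that hour)
    ("A", 8, 1.2), ("A", 19, 3.4),
    ("B", 8, 0.9), ("B", 19, 2.8),
]

by_hour = defaultdict(float)
for household, hour, kwh in readings:
    by_hour[hour] += kwh

peak_hour = max(by_hour, key=by_hour.get)
print(f"Peak demand at hour {peak_hour}: {by_hour[peak_hour]:.1f} kWh")
```

From the same readings, the utility can price or shift load away from the evening peak, and each household can see its own share of it, which is exactly the two-sided visibility the Smart Grid proposal describes.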

The U.S. network of voltage lines is a complicated, inefficient system. The long-term effects of carelessly wasting non-renewable resources are evident, but are there unknown consequences to the efforts being made to conserve energy? All of the green technologies described have gone through extensive research and engineering, and they do not appear to be instigating any major new problems. However, the long-term effects of the new technology will become evident in the upcoming years. It is the possibility of unknown consequences that makes industries and governments wary of implementing these new technologies throughout. Nonetheless, the right option is to wean away from dependency on limited natural resources and focus on sustainability. Government Influence Some countries have progressed more quickly than others in this changeover to energy efficient products due to governmental and economic factors. In general, Europeans seem more accepting of expensive green home appliances than Americans. The cost of energy in Europe is extremely high, which makes switching over to energy efficient devices an easy decision. On the other hand, Americans can still afford to keep older, wasteful appliances or purchase the ones they like regardless of their efficiency because the price of

energy is not yet alarmingly high in the United States. Governments can also implement requirements and limits on energy use for purely economic reasons. For example, there is discussion of banning sales of non-fuel-efficient cars. This would be unnecessary in most European countries, which already have a majority of fuel efficient cars. However, the countries which currently import the most oil (the United States, Japan, and China) should consider government policies that can limit the national intake. Most people need an impetus to start conserving energy, and it may come from the government, which can understand the consequences on a national and international scale. In many ways, China has taken the initiative to conserve energy in combination with harvesting renewable energy sources. Spending approximately $12 million on green energy each year, China strives to be the leading country for green technology (Bradsher, 2011). One example of reduced energy consumption is that the majority of escalators remain off until a passerby triggers them; thus, an escalator does not waste energy when it is not needed. China has the largest number of wind powered plants and solar powered street lights that supplement its nonrenewable energy. The driving force behind this change in China is primarily economic rather than environmental. Nonetheless, other countries should evolve toward conserving more energy, as China has already demonstrated. The United States, too, takes initiative in conserving energy at a national level; President Obama allotted $11 billion to the research and development of the Smart Grid. Although it is a costly endeavor, increasing the efficiency of the electricity used across the nation will save citizens and companies the initial investment and more. Within the past year, a bill on the recycling of electronics was presented to the United States Congress.
Gene Green and Mike Thompson proposed the Responsible Electronics Recycling Act of 2010 on September 29 of last year (Federal Legislation, n.d.). The bill focuses on what happens to the old computers and electronics of America. Currently, 50-80% of the electronics disposed of by Americans are not actually recycled; rather, the e-waste is shipped to developing countries in Africa and Asia. The new bill would make it illegal for companies to export old electronics and would require proper recycling of the parts. It has been less expensive for companies to exploit developing countries than to dispose of the electronic waste appropriately. However, the new Responsible Electronics Recycling Act would mean that electronic waste would no longer decompose in developing countries, contaminating their water supplies and posing health issues (Kade, 2010). In this example, citizens may think they are doing the right thing, but it will take government intervention through laws to end this offense against the environment. The European Union (EU) has set high goals for conserving energy in the future. Its overarching goal is, by the year 2020, to decrease greenhouse gas emissions by 20%, to increase sources of renewable energy by 20%, and to increase energy efficiency by 20%. In order to achieve this goal, the European Union is attempting to educate the public and influence its opinion on green technology. Also, the EU hopes to implement regulations for minimum energy efficiency and labeling on products, services, and national systems. The union understands that buildings are responsible for 40% of its energy consumption and 36% of its greenhouse gas emissions. Incorporating aspects of green architecture and sustainable building will also help it achieve the goals established in the Action Plan 2020. In May of 2010, the European Union released a Directive addressing the energy performance of buildings and setting minimum

requirements. This Directive requires regular inspections of boilers and air conditioning units for buildings. Through several pieces of government-issued legislation, the European Union is on a path toward energy conservation for the future. Individual Responsibility At present, efficiency is not one of the most popular choices for commercial and residential customers alike. One fourth of the people polled reported that they would not spend money to increase the efficiency of equipment they are currently using. An additional 50% reported that they would expect an efficiency investment to pay for itself within two to three years (Wald, 2007). The unfortunate reality is that it is impossible, in some cases, to meet this three-year deadline for repayment. Especially on a large scale, it is not practical to expect a three-year turnaround when purchasing, say, forty industrial-size cooling fans that will now run 15% more efficiently.
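The survey responses above reflect simple payback arithmetic. The sketch below, using entirely hypothetical prices for the cooling-fan scenario, shows why a three-year payback can be out of reach for a modest efficiency gain.

```python
# Simple payback period: years until cumulative savings cover the upfront cost.
# All dollar figures are hypothetical, chosen only to illustrate the point.

def payback_years(investment, annual_savings):
    """Years until the efficiency investment pays for itself."""
    return investment / annual_savings

cost_of_upgrade = 40 * 2_500        # $2,500 premium per efficient fan, forty fans
old_energy_bill = 40 * 1_000        # $1,000 of electricity per fan per year
savings = old_energy_bill * 0.15    # the fans run 15% more efficiently

print(payback_years(cost_of_upgrade, savings))  # ~16.7 years, far beyond 3
```

Under these assumed numbers the payback is over sixteen years, which explains why a buyer demanding a two-to-three-year turnaround would decline the purchase.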

Green Initiatives encourage people to use green technologies and to change their habits. Despite the general public's unwillingness to become green, there are certainly many things you as an individual can do to conserve energy that would make a big impact. It is important to be aware of your carbon footprint, or consumption of energy. One way to make a major difference is to make a new habit of carpooling, bicycling, or using public transit as your main means of transportation instead of a car. In your home, school, or work environment, encourage recycling programs and green technology initiatives. Also, simple actions such as unplugging devices after each use or closing the doors of empty rooms will conserve electrical energy (Go Green, 2010). Not having to heat or cool an empty room eases the job of a heater or an air conditioner. Think of ways to limit your current use of electricity. In addition, it is important to fix any leaky faucets because the dripping can waste approximately 30,000 gallons of water per year (Earth month, 2010). As described above, energy efficient products are becoming more popular among home appliances (dishwashers, washers, etc.) and are advertised as Energy Star. Although green merchandise costs a little more initially, it will save money in the long term. Also, it is a good idea to replace all old light bulbs with efficient compact fluorescent bulbs. Many people are taking the initiative to make these small changes within their daily lives and households, and your participation is crucial to conserving energy.
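The light-bulb suggestion above can be checked with rough arithmetic. The wattages below are typical for a 60 W incandescent and a comparable compact fluorescent; the daily usage hours and electricity price are assumptions.

```python
# Estimated savings from swapping one incandescent bulb for a CFL.
# Usage hours and the electricity rate are assumed values.

incandescent_w = 60
cfl_w = 14              # a CFL with roughly similar light output
hours_per_day = 3
price_per_kwh = 0.12    # assumed residential rate in dollars

kwh_saved_per_year = (incandescent_w - cfl_w) / 1000 * hours_per_day * 365
dollars_saved = kwh_saved_per_year * price_per_kwh
print(f"{kwh_saved_per_year:.1f} kWh and ${dollars_saved:.2f} saved per bulb per year")
```

Multiplied across every bulb in a household, these small per-bulb savings add up, which is why bulb replacement is one of the most commonly recommended first steps.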

A simple action that helps save thousands of kilowatt-hours of energy. Energy conservation is essential to averting a potential energy crisis in the next few years. Action needs to be taken now if resources such as fossil fuels are going to be preserved for years to come. Government action is needed to ensure that reform for the conservation of energy is underway. Tax breaks and other incentives would be a wise way to promote these habits across the country. Eventually, regulations and laws should be set forth to limit the extreme amounts of energy used in each of the four sectors of the economy. Industry is one of the most wasteful consumers of energy. Regulations should be set for how much and what types of energy industrial processes are allowed to use. Action needs to be taken to ensure that energy is conserved for future generations.

All of those people fit in one bus, or in over twenty cars. One bus of people is five times more energy efficient than travel without carpooling.

Bibliography

Advanced vehicles. (2011). Retrieved from http://www.unep.org/transport/gfei/autotool/approaches/technology/advanced_vehicles.asp

Bradsher, K. (2011, March 4). China reportedly plans strict goals to save energy. The New York Times.

Breesch, H., Bossaer, A., & Janssens, A. (2005). Passive cooling in a low-energy office building. Solar Energy, 79(6), 682-696.

BP. (2011, January). Energy outlook 2030 summary tables. Retrieved from http://www.bp.com/sectiongenericarticle.do?categoryId=9035979&contentId=7066648

Cleveland, C. (Topic Editor). (2009). Smart Grid. The encyclopedia of earth. Retrieved March 29, 2011, from http://www.eoearth.org

Clothes washers. (n.d.). Retrieved from http://www.energystar.gov/index.

Cockram, M. (2011, April). Low energy, but high impact. Architectural Record. Retrieved from http://continuingeducation.construction.com/article.php?L=5&C=763

Computer keyboard product criteria. (n.d.). Retrieved from http://www.energystar/gov

Dishwashers. (n.d.). Retrieved from http://www.energystar.gov/

Duct sealing. (n.d.). Retrieved from http://www.energystar.gov/ia/products/heat_cool/ducts/DuctSealingBrochure04.pdf

Dyer, D. F. (2007). An overall approach to energy conservation. In Proceedings of the fifth IASTED international conference on power and energy systems (pp. 497-500). Benalmadena, Spain: ACTA Press.

Earth month: It's easy being green. (2010). Retrieved from http://www.cdc.gov/Features/EarthMonth/

Energy consumption. (n.d.). Retrieved April 11, 2011, from www.need.org

Energy use per capita. (2011). In Google Public Data Explorer. Retrieved April 10, 2011, from http://www.google.com/publicdata?ds=wbwdi&met=eg_use_pcap_kg_oe&idim=country:USA&dl=en&hl=en&q=energy+consumption

Feder, B. J. (2003, December 1). Economy & business; energy efficiency could gain favor. Retrieved from nytimes.com

Federal legislation and policy on e-waste. (n.d.). Retrieved from http://www.electronicstakeback.com/promote-good-laws/federal-legislation/

Five elements of passive solar design. (2008). Retrieved from http://www.iklimnet.com/save/passive_solar_heating.html

Go green initiative. (2007). Retrieved from http://www.gogreeninitiative.org/

Gonchar, J. (2010, July). More than skin deep. Architectural Record. Retrieved from http://continuingeducation.construction.com/article.php?L=5&C=685

Green architecture. (2011). Encyclopedia Britannica. Retrieved March 29, 2011, from http://www.britannica.com/

Hepbasli, A., & Akdemir, O. (2004). Energy and exergy analysis of a ground source (geothermal) heat pump system. Energy Conversion and Management, 45(5), 737-753.

Himley, M. (2007). Energy crisis (1973). In Encyclopedia of environment and society. Sage Publications.

Hybrid car information and resources. (n.d.). Retrieved from http://www.hybrid-car.org

Kade, A. (2010, October). The darker side of electronic waste recycling. Retrieved from http://www.environmentalgraffiti.com/

Light bulbs (CFLs). (n.d.). Retrieved from http://www.energystar.gov/

Need for awareness about energy conservation. (1990). Economic Review, 21(2). Retrieved from Gale PowerSearch (A8944257).

Nenstein, D. N. (1978). U.S. Patent No. 4,196,718. Washington, D.C.: U.S. Patent and Trademark Office.

Refrigerators. (n.d.). Retrieved from http://www.energystar.gov/

Roof products. (n.d.). Retrieved from http://www.energystar.gov/

Rosen, M. A., Le, M. N., & Dincer, I. (2005). Efficiency analysis of a cogeneration and district energy system. Applied Thermal Engineering, 25(1), 147-159.

Shuwen, N., Yongxia, D., Yunzhu, N., Yixin, L., & Guanghua, L. (2011). Economic growth, energy conservation and emissions reduction: A comparative analysis based on panel data for 8 Asian-Pacific countries. Energy Policy, 39(4), 2121-2131.

Turner, W. C., & Doty, S. (2007). Energy management handbook. Boca Raton, FL: The Fairmont Press.

Wald, M. L. (2007, May 29). Efficiency, not just alternatives, is promoted as an energy saver. New York Times, p. C1.

Walker, T. (2009, February 7). 7 green technologies of the future. Retrieved from http://www.environmentalgraffiti.com/

Zhang, L., & Neelameggham, N. R. (2011). Energy conservation, CO2, and other greenhouse gas reduction. JOM Journal of The Minerals, Metals and Materials Society, 63(1), 22.

Chapter 5 Electric Cars


Ryan Norby, Daniel Kasza, and Linda Xu

Introduction Of the devices known to be harmful to the environment, gas-powered automobiles are some of the most common, causing 28% of carbon dioxide emissions in 2006, a percentage expected to increase by 50% (Van Mierlo & Maggetto, 2006). The fumes produced from burning gasoline pollute the air with gases such as nitrogen oxides, sulfur dioxide, and carbon dioxide. In response, engineers have designed catalytic converters, which react with these gases to produce less harmful compounds, along with other devices placed inside engines to reduce harmful emissions. Hybrid electric vehicles, which run partially on gasoline and partially on electricity from a battery, reducing the amount of gasoline used, then entered the industry. Eventually, plug-in hybrid electric vehicles, which use electricity as a main power source, began to replace conventional hybrid vehicles, which produce twice as much carbon dioxide (Stephan & Sullivan, 2008). However, none of these inventions solved the issue of petroleum, which was used to power 98% of transportation systems in 2006, becoming scarcer (Van Mierlo & Maggetto, 2006). To do so, many automobile companies have begun to produce electric cars, which are completely powered by electricity instead of gasoline. History of Electric and Hybrid Cars Ányos Jedlik, a Hungarian engineer and physicist, developed the first electric motor in 1827. The following year, he attached his motor to a miniature cart and demonstrated how the cart could be moved by electricity. Although Jedlik did not patent this device, a functionally identical design was patented by Werner von Siemens several years later, preceding the designs of many other models of electric vehicles (Ányos Jedlik, n.d.). In 1834, the first direct-current electric motor was invented by Vermonter Thomas Davenport. Between 1890 and 1891, William Morrison produced a six-passenger wagon that had a speed of 14 mph.
At the 1893 World Exhibition, six different models of electric cars were featured (Hyer, 2008). These electric motors were invented before internal combustion engines because electricity was the primary energy source at the time. Decades later, however, cars using internal combustion engines began to outperform those with electric motors due to the high speeds made possible by the rapid bursts of energy supplied by gasoline. Henry Ford contributed to this widespread shift from electric to gasoline cars in the 1920s as a result of the improvements he made to the noise, vibration, and odor of gasoline-powered vehicles (Chan, 2007). From that point onward, electric cars were uncommon and only returned to public notice in the 1990s, when people began to worry about the pollution caused by the burning of fossil

fuels and the finite supply of crude oil. Tesla Motors, a company founded within the last decade in Palo Alto, California, now produces and sells fully electric sports cars. It is currently the only automaker that commercially produces a fully electric vehicle, the Tesla Roadster. There were only 22 models of hybrid vehicles available in 2010, but the 2015 production calendar shows that 108 hybrid models, both conventional and plug-in, will be available for consumers (Berman, 2010). In addition to the need for an alternative fuel source, new developments in batteries and automotive efficiency have made electric cars a viable option for general use, and many people now purchase them. Motors and Engines Electric motors are the most important parts of electric and hybrid vehicles. They not only provide traction, but can also act as brakes. Although there are many different types of motors, the two that are most widely used for vehicles are brushless direct current (DC) motors and induction motors. Both require very low maintenance and provide high efficiency, although they need additional complex electronics to function properly. Brushless DC motors are synchronous electric motors that use direct current and have an electronic commutation system. Synchronous means that the shaft rotates at the same frequency as the magnetic field that drives it. They are called brushless motors because the directions of the currents inside their coils are not controlled by mechanical commutators and brushes, but by sensors and electronic circuits. Because they have fewer mechanical parts, brushless motors are more reliable than brushed ones. They also have higher efficiency and a better power-to-weight ratio, despite the additional electronics. To be able to run the motor, the controller must monitor the position of the rotor.
Hall effect sensors or rotary encoders are often used to determine its orientation, but there is also a third design that does not use these additional sensors. Instead, the controller measures the back electromotive force (EMF) on the unused coil to acquire the needed information. However, the motor must already be running before measurements can be made because otherwise there would be no back EMF present on any of the coils. Although sensorless motors are less complicated, the additional complexity of the controller often outweighs this advantage. There are, however, small devices that have sensorless motors without the need for sophisticated controllers. In this case, the rotor is assumed to match the speed dictated by the controller; this solution only works when the load on the motor is small and constant. Although regenerative braking is possible, it would require even more complex electronics. Because electric and hybrid cars already have complicated electronics and tremendous weight due to the batteries, the power-to-weight ratio of brushless DC motors makes them suitable for these vehicles, although they are not used in railroad and other commercial vehicles. Induction motors are alternating current (AC) electric motors that are based on electromagnetic induction (Electric Motor, 2011) and Lenz's law, which states that an induced current is always in such a direction as to oppose the motion or change causing it. In these motors, a rotating electromagnetic field is created by the coils around a rotor that has no permanent magnets. Because the field rotates relative to the rotor, an electric current is induced in the rotor, and by Lenz's law, the rotor starts rotating in the same direction as the field. However,

because the torque on the rotor would be zero if it rotated at the same speed as the field, the speed of the rotor is always less than the speed of the field. That is why these motors are also known as asynchronous motors. Although the peak efficiency of induction motors is lower than that of brushless DC motors, the simplicity of induction motors has allowed them to succeed where brushless motors have not. They are used in railroad vehicles, trolleybuses, and the cars of Tesla Motors (Rippel, 2007). Induction motors also have higher average efficiency in high power applications, which makes them suitable for vehicles with only electrical traction, and their efficiency can be further improved using smart controllers (Rippel, 2007). The lack of magnets also allows for lower cost and easier handling. Drivetrain Design The major differences between the designs of electric and gas-powered automobiles result from the profound differences between the principles of electric and engine propulsion. The main foci of the design of an electric vehicle are the motor, power electronic converter, controller, battery or capacitor, and energy management system (Chan, 2007). These cars can connect the motors directly to the wheels, which increases the amount of available power as well as the traction because the motors both brake and accelerate.
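The slip relationship for the induction motors described above can be sketched numerically. The frequency, pole count, and rotor speed below are illustrative values, not the specifications of any particular motor.

```python
# Slip of an induction motor: the rotor always turns a little slower than the
# rotating field, as described above. Example values are illustrative only.

def synchronous_speed_rpm(frequency_hz, poles):
    """Speed of the rotating magnetic field in rpm."""
    return 120 * frequency_hz / poles

def slip(sync_rpm, rotor_rpm):
    """Fractional difference between field speed and rotor speed."""
    return (sync_rpm - rotor_rpm) / sync_rpm

n_sync = synchronous_speed_rpm(60, 4)   # 1800 rpm for a 4-pole motor on 60 Hz
print(n_sync, slip(n_sync, 1750))       # a rotor at 1750 rpm has ~2.8% slip
```

A slip of zero would mean zero induced current and zero torque, which is why the rotor can never quite catch up to the field.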

This diagram depicts the differences between the fuel tanks and batteries and the motors and engines of each type of hybrid. There are three categories of electric cars: battery, hybrid, and fuel cell. Battery electric vehicles (BEVs) run solely on battery power, while hybrid electric vehicles (HEVs) use both batteries and gasoline for propulsion. Fuel cell electric vehicles (FCVs) use hydrogen as a power source. BEVs and FCVs both have zero emissions and high energy efficiency, and they are independent of crude oil. However, they are much more expensive than HEVs and cannot match the range of gas-powered cars. HEVs and BEVs are commercially available; FCVs are under development. Hybrids are the most common of the three due to their lower initial cost and

longer range (Chan, 2007). Conventional hybrids are vehicles which mostly consume gasoline and use only some battery power. They have large fuel tanks and small batteries. Also, their internal combustion engines are larger, and their electric motors are smaller. Plug-in hybrids, however, have smaller fuel tanks and engines but larger batteries and electric motors. The major difference between the two is that plug-in hybrids must be charged from an external electrical source, whereas conventional hybrids can be fully charged by their generator-like motors. Hybrids For traction, hybrid cars use two separate power sources, most commonly an internal combustion engine and an electric motor. Although they are all based on this concept, there are several different ways to build a hybrid drivetrain (Wouk, 1995). Hybrid vehicles achieve better fuel mileage than similar traditional cars, which can be attributed to more than just the drivetrain. Manufacturers often use other techniques to increase efficiency, sacrificing performance for better gas mileage, but because of the characteristics of electric motors, hybrid cars remain feasible for everyday driving conditions.

This is a series hybrid drivetrain. There is no mechanical connection between the wheels and the engine. Series hybrid vehicles are also often called extended-range vehicles. In this design, only the electric motor moves the car while the other power source charges the batteries to extend the range of the vehicle (Wouk, 1995). The internal combustion engine is connected to a generator and only starts when the batteries are low or when the electric motor needs more power than the batteries are capable of providing. Because the engine runs at a constant speed, it can be very efficient compared to the engines of other cars. For example, the Audi A1 e-tron concept car has only a 254 cc, 60 HP, highly optimized Wankel engine. The simplified drivetrain also results in better efficiency and lower cost. Series hybrids are the link between purely electric cars and other hybrid cars. They combine the efficiency of electric vehicles with the convenience of traditional cars. Although there are no series hybrid cars on the market, the design is not recent: diesel-electric locomotives are based on the same principle, and trolleybuses with batteries and range extenders are also available.

This diagram models a series-parallel hybrid drivetrain. The motor and the engine are both connected to the wheels. Most hybrid cars are series-parallel hybrids. In this configuration, the electric motor and the internal combustion engine can both move the car. When the car is moving slowly and there is enough charge available, the electric motor is used; when more power is needed, as is the case during acceleration, the engine starts automatically to accommodate this. When maximum power is needed, both power sources can be used simultaneously. This design is more efficient than traditional cars because the engine does not have to run constantly, and regenerative braking can be used with the electric motor.
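The regenerative braking mentioned above can be quantified with a rough sketch. The vehicle mass, speed, and 20% recovery fraction below are assumed values for illustration, not measurements of any specific hybrid.

```python
# Rough estimate of energy regenerative braking might return to the battery
# when a hybrid slows from city speed. All inputs are assumed values.

def kinetic_energy_j(mass_kg, speed_mps):
    """Kinetic energy of the vehicle in joules."""
    return 0.5 * mass_kg * speed_mps ** 2

mass = 1500.0                      # kg, roughly a mid-size hybrid
v_start, v_end = 13.9, 0.0         # m/s (about 50 km/h to a stop)
recovery_fraction = 0.20           # assumed portion of braking energy recovered

braking_energy = kinetic_energy_j(mass, v_start) - kinetic_energy_j(mass, v_end)
recovered_wh = braking_energy * recovery_fraction / 3600  # joules -> watt-hours
print(f"{recovered_wh:.1f} Wh returned to the battery per stop")
```

A few watt-hours per stop is small on its own, but over thousands of stop-and-go cycles in city driving it adds up to a measurable mileage improvement.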

This diagram models a parallel hybrid drivetrain. The motor and the engine are permanently connected. This configuration is used mainly by Honda. The electric motor and the internal combustion engine can still both move the car, but the drivetrain is less complicated than the drivetrain of the series-parallel hybrids. There is a permanent connection between the engine and the motor, so they always run at the same speed. The electric motor can still power the car without turning on the engine, but this motor is usually less powerful than the motor of other hybrids, and it is designed to mostly reuse the energy captured by regenerative braking. The main advantage of this design is its simplicity, which makes it an inexpensive way to include regenerative braking. Energy Storage Lithium-ion batteries, popular for their effectiveness and durability, comprise cathodes of lithium oxides combined with other metals and anodes of modified graphite (Types, 2011).

The cathode, which gives each battery its unique properties, is submerged in a liquid electrolyte of lithium salt, limiting the shape of the cell to rectangular or cylindrical (Lithium-Ion Batteries, 2004). The most common cathode compounds for electric vehicle batteries are lithium manganese oxide (LiMn2O4), lithium iron phosphate (LiFePO4), and lithium nickel manganese cobalt oxide (LiNiMnCoO2). They are the safest cathodes and have high specific powers and long life spans, although they do have lower capacities than lithium-cobalt batteries. Lithium nickel manganese cobalt oxide, or NMC, combines the best attributes of nickel, which is known for high specific energy but low stability, and manganese, which forms a spinel structure and lowers internal resistance but has low specific energy. NMC cells may be optimized for high energy density or high load capability, but not both at once. These are the most common batteries in electric cars and do not generate a large amount of heat (Types, 2011). Lithium manganese oxide, also called spinel, is known for its quick charging and its ability to discharge high currents. It has a high ion flow due to its 3D structure, causing lower internal resistance, improving current handling, raising thermal stability, and increasing safety; however, its cycle life is limited. Lithium iron phosphate batteries are relatively new and possess low resistance and high electrochemical performance. Their nano-scale cathode material enhances safety, raises thermal stability, increases tolerance to abuse, raises current, and increases cycle life. However, the voltage of 3.3 V/cell is significantly lower than that of spinel, extreme temperatures shorten its lifetime, and its high self-discharge quickens aging. Lithium nickel cobalt aluminum oxide (LiNiCoAlO2) is also used in electric vehicles, but it is less common because it has a high cost and low safety.
Nonetheless, all of these batteries have higher energy densities than those of lead- and nickel-based ones (Types, 2011).
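To illustrate why the energy-density comparison above matters in a car, the sketch below estimates the pack mass needed to store the same energy with different chemistries. The specific-energy figures are rough ballpark assumptions, not manufacturer specifications.

```python
# Approximate pack mass needed to store the same energy with different
# battery chemistries. Specific energies are rough assumed ballpark values.

specific_energy_wh_per_kg = {
    "lead-acid": 35,
    "nickel-metal hydride": 70,
    "lithium-ion": 150,
}

pack_energy_wh = 24_000  # an assumed 24 kWh pack

for chemistry, wh_per_kg in specific_energy_wh_per_kg.items():
    mass_kg = pack_energy_wh / wh_per_kg
    print(f"{chemistry}: ~{mass_kg:.0f} kg")
```

Even with these coarse numbers, a lead-acid pack comes out several times heavier than a lithium-ion pack of equal energy, which is why lithium chemistries dominate modern electric vehicles.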

This diagram depicts the attributes of lithium manganese (spinel) batteries.

This depicts the attributes of lithium iron phosphate batteries.

This depicts the attributes of lithium nickel manganese cobalt oxide (NMC) batteries. The range of cars with lead-acid batteries is approximately fifty or sixty miles, roughly one-quarter of the range of an internal combustion car (Eberhard & Tarpenning, 2007). This is not a severe issue because the average person living in a suburban or urban area drives only thirty to forty miles daily, but it causes drivers concern about when the battery will run out, which is called range anxiety (Electric Cars, 2002). However, all electric cars using lithium-ion batteries can drive at least one hundred miles on a charge (Gur & Danielson, 2011). There even exist models like the Tesla Roadster, which is capable of travelling over two hundred miles and has a battery pack approximately three hundred pounds lighter than a lead-acid equivalent (Eberhard & Tarpenning, 2007). The performance of lithium-ion batteries has improved almost five-fold in the last twenty years (Gur & Danielson, 2011). Even better models continue to be developed, such as the lithium-air batteries that scientists at Argonne National Laboratory and other facilities are researching. Unlike lithium-ion batteries, this design directly attaches oxygen to lithium at the electrode without adding any other metals into the mix, which greatly reduces the weight of the battery while leaving its energy density intact. Researchers must also find an economical, porous metal that allows oxygen to effuse at a

high rate in order to secure a continual supply for the reaction (Lithium-Air Batteries, 2010). Currently, the best known compound is an alloy of platinum and carbon, although a gold and carbon compound also functions well (Lu, Gasteiger, Parent, Chiloyan, & Shao-Horn, 2010). Clearly, neither of those is inexpensive. Moreover, the lithium is highly reactive and would explode upon any contact with water, which creates the need for a stable, cheap, waterproof casing for the battery. This idea is still in its infancy and will require at least another decade to become marketable (Lithium-Air Batteries, 2010). Charging Charging systems, as the name suggests, recharge the batteries that power electric cars; they must both charge the cells as quickly as possible and prevent damage. A sophisticated system monitors the voltage, current, and temperature so that it can transfer the maximum possible electricity without overheating the battery. Less sophisticated versions simply measure voltage or current and transfer the maximum power that the cell can withstand until most of its capacity is filled, at which point the system sends a reduced level. In general, systems slightly overcharge the batteries to ensure that the most fatigued cells in the series receive sufficient charge, because the car can only function so long as the weakest unit has power (Electric Cars, 2002). In a normal household system, the charger may be built into the controller, placed under the hood, or be completely separate from the main body of the car. A charging plug takes the place of the gas filler spout, and a driver simply needs an extension cord to charge the battery. On a 120 V circuit, this usually takes ten to twelve hours, although a 240 V circuit can cut the time in half (Electric Cars, 2002). Alternative charging methods exist, such as a system which checks the functionality of the batteries in a vehicle, charges the usable ones, and removes and replaces the spent ones (Hammerslag, 1999).
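The household charging times quoted above follow from simple power arithmetic: energy divided by the power the circuit can deliver. The pack size, circuit current, and charger efficiency in this sketch are assumptions chosen for illustration.

```python
# Estimating charge time from circuit voltage and current.
# Pack size, current, and charger efficiency are assumed values.

def charge_hours(battery_kwh, volts, amps, efficiency=0.85):
    """Hours to fill an empty battery from a given circuit."""
    power_kw = volts * amps / 1000
    return battery_kwh / (power_kw * efficiency)

battery = 16.0  # kWh, a small EV pack (assumed)
print(f"120 V, 15 A: {charge_hours(battery, 120, 15):.1f} h")
print(f"240 V, 15 A: {charge_hours(battery, 240, 15):.1f} h")
```

With these assumptions the 120 V circuit lands in the ten-to-twelve-hour window described above, and doubling the voltage at the same current halves the time, matching the 240 V claim.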
Google has also been testing a charging station called the Plugless Power system, with which a driver does not need to connect the charger to the body of the car. Instead, it uses electromagnetic fields to transfer electricity from a coil inside the charging station to a coil in the car, which stores the energy in the batteries. Owners could easily install this coil in cars that were not originally made to work with the system (Google, 2011). Other, older systems include the Magna-Charge system, a type of electric vehicle supply equipment (EVSE) that uses an inductive paddle functioning as half of a transformer. The paddle is inserted into a slot containing the other half of the transformer behind the license plate, which swings open like a door. Because no conductors are exposed, the paddle is safe to touch in any condition. It is connected to the station, which can supply 240 V at 40 A and can be attached to the wall of a home. The Magna-Charge gained popularity through its use on the GM/Saturn EV-1 model, despite its inconvenience when driving long distances due to its need for the specialized station. Another EVSE is the Avcon plug, which uses copper contacts with special interconnects that keep the contacts covered unless they are in contact with the charger on the vehicle. However, it requires a ground-fault circuit interrupter (GFCI) on the outlet to prevent shocks in humid weather. This system was specifically meant for Fords, although other makers have adapted it, whereas the Magna-Charge system was not originally targeted toward any one brand (Electric Cars, 2002).

A common misconception regarding charging states that there are three levels, where level one is a household outlet, level two is EVSE, and level three is the DC Fast Charge system. Actually, there are six levels of charging decided upon by the Society of Automotive Engineers (SAE), with three levels each for AC and DC (Charging Standards, 2010). In AC, the direction of electricity periodically reverses itself, whereas electricity in DC flows in only one direction. The AC levels are lower than the DC levels in terms of voltage, current, and power, but they are more commonly used than DC systems because they are less specialized. AC level three has not been defined yet, and what people commonly misconstrue as level three is actually DC level one. Currently, many different types of connectors are in use. Although there is a standard connector that nearly all automakers worldwide have agreed to (the J1772 model), there is still some conflict over chargers for the other levels (Charging Standards, 2010). Nonetheless, according to a study by Newcastle University in England and Cenex, the drivers who found charging convenient and reliable (72% of the 264 participants) stated that they would exchange their regular vehicle for an electric car (Thumbs Up, 2010).

Performance and Efficiency

Standard internal combustion engines are extremely inefficient. A conventional engine converts only 15% of the energy stored in gasoline into mechanical energy to propel the automobile (most of the rest is lost as heat), whereas an electric motor converts upwards of 80% of the energy stored in the battery into mechanical energy. In addition, electric cars do not consume power when coasting or idling. Electric and hybrid vehicles not only have more efficient motors than gas vehicles, but also more efficient braking mechanisms, such as regenerative braking.
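A back-of-the-envelope calculation makes this efficiency gap concrete. Assuming the approximate figures above (15% for a combustion drivetrain, 80% for an electric one), one can compute how much of each stored kilowatt-hour actually reaches the wheels; the function and numbers below are a sketch, not measured data for any specific car.

```python
def energy_to_wheels(stored_kwh, drivetrain_efficiency):
    """Energy delivered to the wheels from energy stored on board."""
    return stored_kwh * drivetrain_efficiency

# Rough figures from the text: ~15% for an internal combustion
# drivetrain versus ~80% for an electric one, per stored kWh.
gas_useful = energy_to_wheels(1.0, 0.15)       # 0.15 kWh reaches the wheels
electric_useful = energy_to_wheels(1.0, 0.80)  # 0.80 kWh reaches the wheels
advantage = electric_useful / gas_useful       # electric delivers ~5.3x more
```

Under these assumptions, an electric drivetrain puts roughly five times as much of its stored energy to work as a combustion drivetrain does.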
When traditional, gas-powered cars slow down, their kinetic energy is converted to heat, which cannot be used for movement again. Regenerative braking allows a vehicle to convert some of its kinetic energy into electrical energy, and the resulting electrical current can be stored for later use. This process involves the motors being overdriven, an event which typically occurs when an automobile coasts downhill. Because of this, both braking and coasting downhill return energy to the battery (Hyer, 2008). Studies have shown that about 20% of the energy used to slow down an electric car can be recovered and returned to the battery. Besides the improvement in efficiency, this also lowers the maintenance cost of the car: it requires no additional parts, and because it reduces the use of traditional brakes, replacements are not needed as often (Cikanek, 2002). The earliest use of this system dates back to the late 1800s, when Louis Krieger developed an electric, front-wheel-drive landaulet (a car body style with a convertible top for the back seat, common in the early 1900s) that consisted of two drive motors in the front and a pair of bifilar coils in the rear. Electric motors also provide more rapid acceleration than similarly rated internal combustion engines because they accommodate higher power-to-weight ratios. A car with an internal combustion engine that weighs the same as a car with an electric motor typically has a lower power-to-weight ratio and therefore cannot accelerate as quickly. Currently, many batteries are unable to provide the instantaneous power required for an electric car to accelerate quickly, so vehicles require external devices to assist. A common approach to this problem is to include a second battery pack that is dedicated to the acceleration of the vehicle. Hybrids often have low-power but efficient engines compared to other cars. While normal cars are expected to be powerful, hybrids are expected to be efficient. Also, because hybrids have an additional electric motor, which has different characteristics from a combustion engine, the lower performance of the engine is harder to notice during normal driving conditions. The efficiency of a car can also be greatly improved by decreasing its drag coefficient. Although this concern is not specific to electric or hybrid cars, the designers of these cars tend to pay more attention to it. While cars usually have a drag coefficient between 0.3 and 0.4, this value was only 0.19 for the GM EV1 electric car (Cogan, 2008) and 0.25 for the Toyota Prius hybrid (Toyota Prius, 2011).

Environmental Effects

Serious discourse about the environmental impact of gasoline cars did not occur before the 1960s. However, the realization that oil and other fossil fuels may become scarce resulted from the 1973 oil crisis, triggered when Saudi Arabia, a major exporter of crude oil, and other exporting nations restricted shipments. During this and similar crises, countries imposed severe restrictions on oil use; a notable example is Norway, which prohibited privately owned vehicles from being driven on weekends during the crisis (Hyer, 2008). Analysts estimate that crude oil production will begin to decrease within a decade while global demand continues to increase, which will inevitably lead to the total depletion of crude oil. Not only does the scarcity of natural resources cause concern, but the carbon dioxide emissions of internal combustion vehicles are also a source of recent controversy. The greenhouse effect is the phenomenon in which various gases, when released into the atmosphere, trap heat and cause a global rise in temperature.
In the 1950s, scientists discovered that global warming, a concept which the Swedish scientist Svante Arrhenius had introduced decades earlier, was rapidly changing the climate of the Earth. Carbon dioxide emissions contribute greatly to global warming. Studies at the Mauna Loa Observatory in Hawaii show that every thirty to thirty-five years, the quantity of carbon dioxide added to the atmosphere doubles (Greenhouse effect, 2011). The tailpipe emissions of electric cars, however, are extremely low and can reduce the man-made causes of global warming, although the amount of smog generated is only slightly less (Rogers, 2011). Scientists from Carnegie Mellon University have found that electric cars produce 32% fewer greenhouse gas emissions than cars with internal combustion engines. This is because 32% of electricity in America, which is used to power the batteries, comes from nuclear energy and renewable resources such as dams, wind, and the sun (Rogers, 2011). In fact, the air emitted from the tailpipe of an electric car is cleaner than the ambient air of more polluted cities such as Los Angeles, California. Some protest that the production of batteries does generate pollutants, but the quantity is significantly less than the impact of a gas-powered car, which produces approximately five tons of carbon dioxide a year (Van Mierlo & Maggetto, 2007). Overall, therefore, electric cars are far less harmful to the environment than gas-powered cars.
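A doubling time like the one cited above can be converted into an annual growth rate using the standard relation r = ln(2)/T. The short sketch below is purely illustrative arithmetic applied to the thirty-to-thirty-five-year interval from the Mauna Loa figure.

```python
import math

def implied_annual_growth(doubling_years):
    """Continuous annual growth rate implied by a doubling time,
    from the standard relation r = ln(2) / T."""
    return math.log(2.0) / doubling_years

# A doubling every 30-35 years corresponds to roughly 2.0-2.3%
# growth per year in the quantity being measured:
r_fast = implied_annual_growth(30)  # about 0.023 (2.3% per year)
r_slow = implied_annual_growth(35)  # about 0.020 (2.0% per year)
```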

Future Technology

Electric vehicles may be usable today, but many problems remain to be solved. Electricity is far less expensive than gasoline, and the drivetrains of electric cars require almost no maintenance, which makes these cars more economical overall. Because of this, many corporations such as FedEx and General Electric have chosen to use them; they comprise a strong customer base that has encouraged more automakers to develop their own models. However, powering an electric vehicle requires a lithium-ion battery costing over $1500 and weighing 500 lbs., which makes it too costly for many average drivers to choose (Gur & Danielson, 2011). Therefore, agencies like the Department of Energy are funding researchers working to reduce the size of the batteries, raise their energy capacity, and lower their expense while leaving their safety, reliability, and lifespan intact. The Advanced Research Projects Agency-Energy (ARPA-E) aims to double the range of electric cars by adding different materials to current batteries, by using magnesium-ion chemistry, or through metal-air batteries. However, as the use of electric vehicles increases, other fields must improve as well, such as the efficiency of the motor, drivetrain, and other components of the car and the cleanliness and efficiency of the power grid (Gur & Danielson, 2011). The most recent designs for electric and hybrid vehicles are radically different from those of earlier models. The Toyota Prius, a popular hybrid electric vehicle, functions primarily through the combustion of gasoline and secondarily through battery power. Chevrolet took this concept and developed it further; their newest electric car, the Volt, uses battery power as the main source of propulsion. When the battery runs low on energy, the gasoline engine operates with the sole purpose of recharging the battery. Countries worldwide have pledged to spend billions of dollars on the electric car industry within the next decade.
Some European countries, including England, have already started constructing public battery charging and swapping stations. Although governments have only recently begun using charging stations as incentives for people to switch to electric vehicles, the concept itself has been around for over a century. As early as 1900, charging stations (then known as charging hydrants) began to appear in urban settings. These were maintained not by the government but by electrical companies. Charging hydrants had both wattmeters and voltmeters, and consumers could deposit coins and receive a corresponding number of watt-hours of energy for recharging their batteries (Hyer, 2008). Eventually, electric vehicles will be developed enough for general and possibly universal use, but until then, consumers will continue to rely on increasingly efficient gasoline-powered cars.

A public charging station is recharging the battery of an electric vehicle.

Bibliography

Berman, B. (2010, September 10). Expert: expect more than 100 hybrid and EV models in U.S. by 2015. Retrieved from http://www.hybridcars.com/news/expert-expect-more-100-hybrid-and-ev-models-us-2015-28579.html

Chan, C. C. (2007, April). The state of the art of electric, hybrid, and fuel cell vehicles. Proceedings of the IEEE, 95(4), 704-718.

Cikanek, S. R., & Bailey, K. E. (2002). Regenerative braking system for a hybrid electric vehicle. Proceedings of the American Control Conference, 2002, 4, 3129-3134.

Cogan, R. (2008). 20 truths about the GM EV1 electric car. Retrieved from http://www.greencar.com/articles/20-truths-gm-ev1-electric-car.php

Eberhard, M., & Tarpenning, M. (2007). The 21st century electric car. Retrieved March 24, 2011, from http://www.fcinfo.jp/whitepaper/687.pdf

Electric motor. (2011). In Encyclopædia Britannica. Retrieved from http://www.britannica.com/EBchecked/topic/182667/electric-motor

Electric vehicle charging standards. (2010). Retrieved March 24, 2011, from http://www.visforvoltage.org/book/9471

Electric vehicles given thumbs up. (2010). Retrieved March 24, 2011, from http://www.physorg.com/news193501752.html

Frank, A. A. (2007, March-April). Plug-in hybrid vehicles for a sustainable future: appropriately designed hybrid cars will help wean society off petroleum. American Scientist, 95(2), 158-165.

Google trials plug-free electric vehicle charging; invests in carbon negative biofuels. (2011, March 22). Retrieved April 8, 2011, from http://www.environmentalleader.com/2011/03/22/google-trials-plug-free-electric-vehicle-charging-invests-in-carbon-negative-biofuels/

Hammerslag, J. G. (1999). U.S. Patent No. 5,951,229. Washington, D.C.: U.S. Patent and Trademark Office.

How electric cars work. (2002). In HowStuffWorks.com. Retrieved March 24, 2011, from http://auto.howstuffworks.com/electric-car.htm

Hyer, K. G. (2008, June). The history of alternative fuels in transportation: the case of electric and hybrid cars. Utilities Policy, 16(2), 63-71.

Lithium-air batteries could rescue electric car drivers from "range anxiety". (2010, May 7). Retrieved March 24, 2011, from http://www.scientificamerican.com/article.cfm?id=lithium-air-electric-car-battery

Lithium-ion and lithium-ion-polymer batteries. (2004). Retrieved April 11, 2011, from http://support.radioshack.com/support_tutorials/batteries/bt-liion-main.htm

Lu, Y., Gasteiger, H. A., Parent, M. C., Chiloyan, V., & Shao-Horn, Y. (2010, April 1). The influence of catalysts on discharge and charge voltages of rechargeable Li-oxygen batteries. Electrochemical and Solid-State Letters, 13(6), 69-72.

Rippel, W. (2007). Induction versus DC brushless motors. Retrieved from http://www.teslamotors.com/blog/induction-versus-dc-brushless-motors

Rogers, P. (2011, April 11). Will buying an electric car make an environmental difference? Miami Herald.

Sorenson, B. (2007, May). On the road performance simulation of hydrogen and hybrid cars. International Journal of Hydrogen Energy, 32(6), 683-686.

Stephan, C. H., & Sullivan, J. (2008). Environmental and energy implications of plug-in hybrid-electric vehicles. Environmental Science and Technology, 42, 1185-1190.

Thounthong, P., Rael, S., & Davat, B. (2004). Supercapacitors as energy storage for fuel cell automotive hybrid electrical system. ESSCAP 2004.

Tollefson, J. (2007, November 27). Charging up the future: a new generation of lithium-ion batteries, coupled with rising oil prices and the need to address climate change, has sparked a global race to electrify transportation. Nature, 456(7221), 436-440.

Toyota Prius. (2011). Retrieved from http://www.toyota.com/prius-hybrid/specs.html

Types of lithium-ion. (2011). Retrieved April 11, 2011, from http://batteryuniversity.com/learn/article/types_of_lithium_ion

Van Mierlo, J., & Maggetto, G. (2007). Fuel cell or battery: electric cars are the future. Fuel Cells, 7, 165-173.

Weil, K. S., Xia, G., Yang, Z. G., & Kim, J. Y. (2007, November). Development of a niobium clad PEM fuel cell bipolar plate material. International Journal of Hydrogen Energy, 32(16), 3724-3733.

Wouk, V. (1995). Hybrids: then and now. IEEE Spectrum, 32(7), 16-21.

Chapter 6 Air Quality and Air Pollution


Megan Steele, Richer Leung, and Jessica Latta

Introduction

Everyone in the world is exposed to some form of air pollution daily. Outdoor air pollution affects the health and lifestyle of humans and wildlife globally. Indoor air pollution in industrialized nations, caused by gaseous and particulate matter, is also a serious concern because individuals in these countries spend the majority of their lives inside. Indoor air pollution in developing countries presents its own unique challenges and is a significant concern as well. It is essential that knowledge of air pollution issues is spread so that they can be appropriately addressed and prevented.

Outdoor Air Pollution

Roughly fifty million Americans live in areas with unhealthy air. Outdoor air pollution, a major problem caused by an excess of gases and chemicals in the atmosphere, affects the health of most Americans. Nonetheless, pollution is a universal problem that influences the people and wildlife of every country, including the developing industrial countries of China and India. Many particles and substances contribute to air pollution, as do several gases, such as ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulfur dioxide (SO2).

Pollutants

Ozone (O3) consists of three oxygen atoms and forms the ozone layer of the atmosphere. At ground level, it is harmful to the respiratory systems of animals and can burn sensitive plants. Another pollutant, perhaps the most dangerous one because of its abundance and ease of formation, is carbon monoxide (CO). It is a poisonous gas that disrupts the body's ability to transport oxygen and is particularly harmful to babies, young children, and people with heart disease. The incomplete combustion of carbon compounds in fuels yields this pollutant; common sources of CO include automobile exhaust, incinerators, and industrial factories (Oren, 2008). A third pollutant is NO2, which is commonly generated by automobiles and power plants.
It is a reddish-brown toxic gas and a major component of acid rain, chemically altered precipitation that has a negative effect on wildlife, plants, and buildings (Schnabel, 2008). In addition, NO2 is known to affect respiratory health and can cause illnesses such as bronchitis and pneumonia.

This photograph shows harmful CO gas released from a factory into the environment. A second element of acid rain is sulfur, and this pollutant is harmful in many different forms, not just SO2, because of its health and ecological impacts. Sulfur causes breathing problems and damages the heart and lungs (Oren, 2008).

This diagram illustrates the acid rain cycle from the troposphere to the stratosphere. In addition to gases, particulate matter is also a source of outdoor air pollution. Particulate pollutants are most prevalent in large developing nations such as India and China because of their monsoons. However, monsoons also bring rain, which can wash away and remove air pollutants (Bhaskar, 2010). The dual role of monsoons is similar to that of wind, which often carries and spreads an abundance of pollutants into a country. Air pollution from monsoonal countries is often transported into the stratosphere and then spread around the world. Outdoor air pollution is most severe in urban areas because these regions concentrate factories and automobiles. This can be particularly devastating in school and residential areas, as children are more susceptible to the negative health effects of smog and pollution (Mejia, 2010). Because an average school day is several hours long, children in schools located near busy streets or factories have prolonged exposure to smog and other air pollutants. An example of poor air quality in a highly populated area occurred at the 2008 Beijing Olympic Games, when the level of particulate air pollution was 160 µg/m3 (micrograms per cubic meter), or six times the accepted level set by the World Health Organization Air Quality Guidelines (Cai, 2010).

Daily smog envelops Beijing, China.

Emerging Technology and Solutions

Fortunately, even though there are numerous outdoor pollutants that generate unclean air, there are various solutions to improve air quality. Some of these strategies are easy and quick to implement, while others are more costly and time-intensive. One solution is to use alternative sources of energy, including automobiles and factories that are not fuel-reliant, and to recycle household and industrial waste. The products of coal combustion are one of the most significant contributors to low outdoor air quality. An innovative approach is clean coal, which reduces the amount of fumes produced in the burning of coal by limiting its carbon dioxide emissions: the impurities of the coal are chemically washed away, and the carbon is captured and removed. By addressing the chief disadvantage of coal, its emissions, clean coal makes coal a more acceptable alternative, and factories around the world have started to use this energy source to reduce the amount of air pollution in the atmosphere. Currently, most of the energy that Americans use originates from power plants that burn an excessive amount of fuel. Alternative energy options include solar, wind, and hydroelectric resources; however, all of these approaches are difficult to implement. Wind farms are a productive way to generate energy using a renewable source, yet they are costly and take years to construct. In the United States, plans to erect wind farms are already in place, and researchers believe that by 2030, 20% of the energy in the United States will be generated by wind. This would decrease the amount of fossil fuels being burned, with a corresponding effect on air pollution. If wind farms became a major source of energy, cars would be less dependent on fossil fuels and could rely more on natural gas, which currently generates 22% of the electricity in the country.
Moreover, if cars and power plants were less reliant on fuels that produce high amounts of pollution, levels of air pollution would drop accordingly. Hydroelectric energy is less feasible because of the scale necessary to operate a power plant using this source. Solar panels, on the other hand, use only ten percent of the space required by hydroelectric plants. In addition, solar energy is a reasonable solution because it does not need a particular natural resource or terrain to operate; solar panels can be built almost anywhere because sunlight is available in all areas.

Outdoor air pollution can also be improved through changes to automobile fuels and designs. Cars burn an abundance of fuel, which produces a plethora of air pollutants. Furthermore, the exhaust contributes to the greenhouse effect, in which gases in the atmosphere trap heat and raise the temperature of the Earth. Fortunately, scientists and engineers have already begun designing cars that use alternative power sources instead of gasoline. Common substitutes are solar energy, electricity, and even vegetable oil. These fuel-efficient automobiles produce less CO from combustion, which would lower air pollution levels in industrialized, crowded cities. The only drawback some see in hybrid cars is an occasionally unappealing exterior design. Recycling is the easiest and most inexpensive way to reduce air pollution; plastics, paper, and other waste can all be salvaged. When plastic is incinerated, it emits dioxin, a deadly toxin that can cause lung damage and asthma. Another benefit is that recycling reduces the need for fuel-powered facilities to dispose of harmful plastic and glass waste. Further innovative and efficient methods of reducing air pollution involve compounds that destroy pollutants. One such compound is uranium oxide, which purges the chlorine-containing pollutants chlorobutane and chlorobenzene by initiating combustion at relatively low temperatures. Another extremely effective way to reduce pollution is through plants. New studies have shown that trees can absorb nitrates that would otherwise generate toxins; the leaves take up the nitrates and produce amino acids from them. This discovery could lead to the exciting possibility of a new genetically modified plant species that takes up harmful pollutants as nutrients. Air pollution is detrimental to nature and human life, and although scientists have been working on numerous solutions to reduce outdoor air pollution, it is still prevalent.
New techniques need to be developed to reduce the quantity of outdoor pollutants and purify the air we breathe.

Indoor Air Pollution in Industrialized Nations

Indoor air quality is an important factor in public health, and it is one of the United States' top five environmental concerns. Moreover, levels of indoor pollutants are frequently higher than the concentrations of outdoor pollutants, yet most investigations focus on outdoor air problems. In industrialized nations, indoor air quality should be of greater concern because people in those countries spend at least 75% of their time indoors. It is often assumed that indoor environments are a sanctuary free of air quality problems; however, this is an incorrect belief. Indoor air pollution issues are often subtle and have both short- and long-term health effects. Whether in gaseous, particulate, or fibrous form, indoor chemical pollutants generally cause mild irritation; an overview of these pollutants can be seen in the figure below. On the more severe side of the spectrum, however, long-term exposure can cause cancer. One category of pollutant is combustion-generated, for example CO, SO2, and NO2. CO has been termed the so-called silent killer because it is an imperceptible yet deadly gas; when CO combines with hemoglobin in the blood, it hinders oxygen distribution. Potential sources of CO include heating appliances, paint strippers, drafty chimneys, and fireplaces. SO2 is an equally deadly colorless gas that weakens the respiratory and circulatory systems. This gas is dangerous both in high concentrations and in prolonged low-concentration exposures. Alternatively, the third combustion-generated pollutant, NO2, is only hazardous in large quantities.

Sources of indoor air pollutants in the home. A second type of indoor air pollutant is the carbon-based volatile organic compound (VOC). There is a plethora of VOC source materials, and they emit either continuously or discontinuously. One significant source of VOCs, due to the number of chemicals involved in its manufacturing, is carpeting. These chemicals, such as formaldehyde, acetone, and benzene, are known to affect the respiratory system and the brain, and some are carcinogens (Burroughs, 2008). Used in building construction for its low cost, particle board is another common source of VOCs. Although the highest concentration of VOCs is generated in the first year, later exposure to formaldehyde and the other off-gases from particle board will still cause health concerns. Other chemical-based products that are sources of VOCs include printer ink, marker pens, electronics, and cleaning supplies. Frequently used construction products that are also pollutants are asbestos-containing materials (ACMs), which are often found in buildings built before the 1980s. ACMs were used for their insulating properties and as fire retardants in several kinds of public buildings, such as schools and hospitals. Unlike other indoor pollutants, asbestos is stringently regulated, and numerous countries have required its removal and banned ACMs. These materials are dangerous because they release small fibers that irritate the lungs and cause mesothelioma, asbestosis, and other asbestos-related cancers (Griffin, 2007). Another indoor pollutant with several health implications is environmental tobacco smoke, which contains a toxic mixture of over 3,500 gaseous and particulate chemicals (Burroughs, 2008). Smoking raises the concentrations of NO2, acrolein, nicotine, CO, and other VOCs, in addition to causing lung cancer and respiratory problems. Moreover, smoking causes prenatal and postnatal developmental problems.
Currently, exposure to tobacco smoke is moderately controlled by ventilation or the use of designated smoking areas. Radon, also a gaseous pollutant, is a toxic radioactive element that comes from the natural decay of uranium ores. It has been declared a carcinogen responsible for approximately 20,000 deaths per year by the World Health Organization and the Environmental Protection Agency (Godish, 2000). Radon enters a home in several ways: through foundation cracks, well water, granite countertops, and building materials (see figure below). Testing for this pollutant is simple and inexpensive; if excessive levels of radon are detected, sealing gaps and improving ventilation will mitigate the problem.

There are several ways that radon can enter a house. In comparison to chemical pollutants, biological toxins are equally hazardous to overall health, and they have both indoor and outdoor sources. Indoor biopollutants include pollen, fungi, bacteria, viruses, inflammatory endotoxins, and allergens. While all of these are harmful to the body, most studies have focused on the effects of fungi and bacteria. Generally, those microorganisms are associated with foodstuffs, walls, and other organic matter. They fluctuate in quantity according to the season and thrive in damp, poorly ventilated areas. Recent research has elucidated the suspected connection between home dampness and asthma, rhinitis, and atopic dermatitis (Burroughs, 2004). Another category of indoor biopollutants includes allergens, endotoxins, and house dust mites. Closely linked with sick building syndrome, endotoxins are the inflammatory materials found in gram-negative bacteria that can exacerbate respiratory problems in heavy concentrations. Contaminated humidifiers and heavily flooded areas are both potential sources of endotoxins. House dust mites also thrive in damp environments, and their allergen-containing feces can trigger asthmatic reactions.

Health Concerns

There are several adverse health effects caused by poor indoor air quality (IAQ). Although it is inadequately defined, sick building syndrome (SBS) has been the focus of several indoor air studies since the 1970s. The syndrome describes a range of ailments and symptoms that people experience while in an unhealthy building. The symptoms are usually mild and include airway infections, itchy skin, and lethargy; however, in a few cases high blood pressure has been reported. Frequently, the symptoms abate after the person leaves the building. Most SBS buildings are new and energy-efficient, yet they have insufficient ventilation. A prompt health inspection by an environmental health agency can assess the problem and suggest an appropriate course of action. Sick building syndrome should not be confused with building-related illnesses (BRIs), which refer to diagnosable diseases such as Legionnaires' disease and humidifier fever (Hogg, 2010). Legionnaires' disease is a rare form of pneumonia that is contracted after the inhalation of air contaminated with Legionella bacteria. Legionella is a naturally occurring microorganism; however, if it colonizes the water in the cooling system of a building, it can be very harmful. This is primarily a concern in large public buildings such as hospitals, factories, and offices. To prevent this disease, cooling towers need to be vigilantly maintained, and the temperature and chlorination levels should be controlled.

Reducing Air Pollution

Concentrated efforts to regulate and control air pollution did not develop until the 20th century, although air quality problems have existed for centuries. Among other duties, the Environmental Protection Agency (EPA) enforces the Clean Air Act, a collection of federal laws that span several decades. This act regulates air pollution on a national level through pollution standards, the development of localized programs, and the restriction of automotive pollution. It was expanded in the 1990s to include regulations on acid rain, interstate emissions, and ozone depletion. Current analysis has quantified the environmental and health benefits of these laws at a mean benefit-cost ratio of 42 (Reyes, 2007). Unfortunately, the recent Clear Skies Initiative proposed by the George W. Bush administration would make the act less stringent by diminishing power plant regulations and increasing emission caps.
When compared to outdoor air quality, there are fewer alternatives for controlling indoor pollutants. Short- and long-term options include policies such as the Clean Air Act and actions to increase public awareness. Currently, the most widely used methods are heating, ventilation, and air-conditioning (HVAC) systems. Ventilation replaces stale air through dilution, and a proper HVAC system must maintain normal temperature and humidity levels. A major drawback to any HVAC system is the amount of energy it requires to operate, which raised significant concern during the energy crisis of the 1970s. In addition, the system should be able to quarantine specific pollutants, the building must be airtight, and the system needs to be vigilantly maintained. If the HVAC system is neglected, it can actually cause more indoor air problems, such as mold growth (Godish, 2000).

This diagram illustrates an HVAC system and air flow in a building. Commonly used alongside mechanical ventilation, filtration can be applied to both indoor and outdoor gaseous pollutants. Particulate control devices remove pollutants from a gas stream before it is released into the environment and can be utilized individually or combined into one multi-step device. For example, mechanical collectors, electrostatic precipitators, fabric filters, and wet scrubbers are all adequate methods of pollutant control. Due to modern regulations, some particulate controls, such as mechanical collectors, have almost been rendered obsolete: while mechanical collectors use simple settling chambers and achieve 90% efficiency, they cannot capture particles as small as new legislation requires (Placks & Sedman, 2008). Another concern is the power input required to run some of these devices. For instance, wet scrubbers, a generic term for any device that captures pollutants with water, need a large amount of energy to offset the pressure drop they experience in order to fulfill current requirements. The use of electrostatic attractive forces and particle condensation helps offset the power necessary to operate these devices. Two other particulate controls are electrostatic precipitators and fabric filtration. Both have industrial-size applications but can be used on smaller scales. The former gives a particle a charge, puts it in an electric field, and collects it on an electrode; two-step precipitators and models that include gas conditioning or pulse energization all improve the efficiency of these devices. The latter gathers pollutants in bags or cartridges, and some units clean themselves automatically so the user does not have to remove the dirty bags. The two types of fabric filtration are inside-out and outside-in flow; the only differences are the cleaning method and the materials used to build the mechanism (Placks & Sedman, 2008).
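When particulate controls are combined into a multi-step device, their efficiencies do not simply add: each stage removes a fraction of whatever escapes the previous one. A minimal sketch of that series calculation follows; the 90% settling-chamber figure echoes the text, while the 99% fabric-filter figure is an assumption for illustration.

```python
def combined_removal(stage_efficiencies):
    """Overall removal fraction for control stages placed in series.

    Particles that escape the system are those that pass every stage,
    so the escape fraction is the product of each stage's pass-through.
    The stage values used below are illustrative, not measured data.
    """
    escaped = 1.0
    for e in stage_efficiencies:
        escaped *= (1.0 - e)
    return 1.0 - escaped

# A hypothetical settling chamber (90%) followed by a fabric filter (99%):
overall = combined_removal([0.90, 0.99])  # 0.999, i.e. 99.9% removed
```

This is why even a low-efficiency first stage remains useful in a combined device: it cuts the load on the finer, more expensive filters downstream.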
There are three times when gaseous pollutants can be controlled: before, during, and after combustion. Each method of pollution control has situation-specific benefits and drawbacks. The major combustion-formed gases are sulfur oxides (SOx), nitrogen oxides (NOx), and hydrogen chloride (HCl), and rules to reduce their emission have only been made in recent amendments to the Clean Air Act (Griffin, 2007). The type of pollutant removal used depends on the stringency of the regulations for that gas and the process that produced it.

There are three types of pre-combustion control, and for solid fuels these methods are generally the least expensive yet least efficient option. Gases that originate from oil, bitumen, and coal contain large amounts of hydrogen sulfide; a process called gas sweetening extracts the compound and concentrates it into liquid elemental sulfur. Another treatment is oil desulfurization, in which petroleum is injected with hydrogen gas to remove nitrogen and sulfur contaminants. The third method of controlling pollutants is coal cleaning (Burroughs, 2008). This involves washing processes to eliminate unwanted ash and is fairly successful at removing sulfur from the coal; therefore, when the coal is burned, there is a considerable decrease in SOx. During the combustion process, the focus is on minimizing the quantities of NOx, and the best method depends on the combustor and fuel type. For example, in steam boilers a reduction in combustion air, which can remove 10% of the NOx, is the preferred option (Chen, Chuang, & Lin, 2008). It is interesting to note the correlation between the amount of the pollutant that can be removed and the scale of the NOx source: in industrial-size boilers, special staging burners and fuel re-burning can take away up to 70% of the gas. Unlike the other two approaches, post-combustion filtration can reduce levels of SOx, NOx, and HCl and has a greater success rate. The most prevalent option for managing sulfur oxides is wet and circulating-bed scrubbers, and in some European countries these processes have been able to remove 98% of the SOx present (Godish, 2000). The methods of post-combustion SOx control were modified for use with HCl and share a similar success rate; in some applications, 95% of the HCl is eliminated. There are two ways to remove NOx, and both put additives into the flue gas: the first uses ammonia and the latter an oxidant. The first method is preferable because it can be utilized with any source of NOx and is usually the least expensive.

Indoor Air Pollution in Developing Countries

Air pollution problems in developing countries stem from the fact that approximately 50% of the population worldwide still relies on unrefined solid fuels and/or traditional cooking methods. Roughly eighty percent of people in regions such as China, India, and Sub-Saharan Africa utilize these fuel sources. These fuels have large emission rates of particulate matter and greenhouse gases that are harmful to both the environment and human health. When these conditions are combined with little or no ventilation in domestic environments, the result is indoor air pollution that endangers and kills millions of people annually.

Traditional Methods of Cooking

Traditional cooking methods rely mainly on biomass fuels, a traditional source of energy that approximately three billion individuals depend on daily (Hedon, 2011). The most concentrated use of biomass as a fuel is in developing countries; this includes over 580 million people, or two-thirds of African households (Utria, 2004). These traditional methods, such as the three stone fires shown in the figure below, are highly inefficient and emit large amounts of particulate matter and harmful greenhouse gases. Despite these drawbacks, the number of people relying on biomass will likely increase to 2.6 billion by 2015, and 2.7 billion by 2030, because of exponential population growth. By itself biomass is not harmful, but the incomplete combustion of this fuel produces dangerous amounts of smoke and soot, particulate matter that is constantly inhaled by the individuals using these traditional methods.

A group of women use the traditional cooking method of a three stone fire.

Indoor air pollution in developing countries has a direct correlation to the type of fuel being burned; biomass combustion usually produces more organic compounds, such as benzene, formaldehyde, 1,3-butadiene, and polyaromatic hydrocarbons, while the burning of fossil fuels produces SOx (Smith & Mehta, 2003). This causes health risks for the people using these fuel types and complicates the effort to find an acceptable and feasible solution.

Solutions

Improved cook stoves are consistently the best solution for eliminating indoor air pollution while simultaneously improving the quality of life of individuals in developing countries. At this time, 220 million improved stoves have been distributed worldwide, the majority of them in China (Sagar & Kartha, 2007). Hundreds of designs of improved cooking stoves have been created, each tailored to a specific region. For example, in sub-Saharan Africa, a distinctive improved cooking stove is the Maendeleo stove, a one-pot mud and sand based stove with a combustion chamber lined with fired clay. Because of these cheap materials, the stove is easily recreated in homes. Moreover, the need for fired clay allows local artisans to be employed to produce the linings, while also allowing those living in poverty to construct their own cooking stoves cheaply and easily. An important benefit is that improved stoves present a solution to indoor air pollution in addition to providing long-term economic benefits for the community.

There are several resources that give detailed instructions on building cook stoves. They range from improved three stone stoves to metal models mass produced in a factory. Stoves can also be manufactured from clay, straw, dung, or cement. The ones made from natural sources only burn wood; advanced stoves are primarily made out of metal, sometimes with ceramic, and they have the capability to burn a range of fuels. A multi-fuel grade cooking device is portable, created from metal, and is able to burn charcoal, dung, and agricultural residues in addition to wood (Hedon, 2011).

An example of a liquefied petroleum gas (LPG) cooking stove.

The combustion chamber in improved stoves must be proportionate to the stove type and usage. In addition, it should retain heat well, combust fuel efficiently, and emit a low amount of smoke and other pollutants. Durable materials are a necessity in the development of any type of improved cooking stove. Baked clay, ceramic, or metals are ideal for this part of stove design; because the combustion chamber undergoes unique stresses and a highly corrosive environment, it should be made of durable materials in order to ensure the long life and use of the stove (Hedon, 2011). These materials also need "durability in high temperature environment, low cost, local availability, [and] easy manufacturability" (Hedon, 2011). Indoor air pollution solutions in developing countries lie not only with technological advances in cooking fuels and cooking devices, but also on a social and economic level. It is necessary to model the situations present in developing countries and to consider healthy options for those using traditional cooking methods. Education on the dangers of indoor air pollution is also vital to succeeding in its reduction. This could be accomplished with government programs and policies that discourage the use of solid fuels and promote upgrading to less dangerous fuels. It is necessary that we understand the connection among the use of solid fuels, indoor air pollution levels, and the health of adults and children (Larson & Rosen, 2002).

Production and Distribution of Stoves

A defining feature of traditionally constructed improved stoves is that they are built by the user in a developing country. Allowing users to create their own improved cooking stoves enables sustainability in the region. If improved stove companies continue to supply developing countries with improved stoves, the stoves will arguably be valued less because they are provided for free. In addition, when a stove breaks or runs out of fuel, the individual using it will be less likely to purchase a replacement, or to attempt to gather the required fuel, because the stove was given free of charge. There are advantages to factory-built improved stove models over user-created models, specifically better quality control, higher efficiency, and a longer life (Hedon, 2011). By adopting a new stove, people in developing countries receive many benefits and can avoid contracting diseases associated with indoor air pollution. A new stove produces lower emissions and pollutant concentrations, reduces burn risks and meal preparation time, and provides a more efficient cooking experience. Overall, a modified cook stove will improve the quality of life (Larson & Rosen, 2002).

This is a factory manufactured improved cooking stove.

Alternate Fuels

Alternate fuels, such as LPG and biofuels, produce less indoor air pollution and have unique benefits for developing countries. Gel fuel is a biofuel-based ethanol waste product produced by the fermentation of sugars. The final stage of the fermentation process is combined with cellulose to make this fuel source, and although it is extremely flammable, it is easy to produce (Utria, 2004). Nevertheless, the cost of gel fuel burners impedes widespread application of this solution to indoor air problems. Two challenges that promoters of gel fuel face are the establishment of energy markets and follow-up investment projects in these developing countries. Recently the effect of indoor air pollution on human mortality has been examined and quantified. Pollution in developing countries puts 400-700 million people in danger, and it is a factor in 4% of all diseases. Furthermore, it causes 2.8 million premature deaths annually, most from diseases that would have been easily preventable if air quality control measures had been implemented (Larson & Rosen, 2002). In a comprehensive paper using economic equations, Larson & Rosen (2002) calculated the cost of implementing air quality solutions in developing countries. They concluded that cost is the deciding factor in the adoption of these interventions.

Modeling Solutions

Indoor air solutions in these countries are multi-faceted and involve technological advances in cooking devices, but they also have benefits on a social and economic level. It is necessary to model the different economic situations and consider options that allow those living with traditional cooking methods to increase their general wealth. One model uses a so-called energy ladder to understand which individuals in developing countries utilize certain fuels. A family using more efficient fuels can focus on activities that increase income, while time devoted to cooking-related activities is decreased (Heltberg, 2004). A family starts by using biomass as a fuel source; as the income and education level in the household increase, it moves into a characteristic transition phase in which kerosene, coal, and charcoal are used instead of biomass. Finally, the household adopts LPG, natural gas, and electricity for its cooking needs (Heltberg, 2004). This transition through the energy ladder is a fluid process; families can continue to use different fuels throughout. However, the higher an individual's education level, the less likely he or she is to use biomass. When one family progresses through the energy ladder, it has the interesting side effect of influencing others to adopt more efficient uses of energy. Moreover, when a family follows that process and uses sustainable cook stoves, the progression is connected with monetary and scholastic success (Heltberg, 2004).

The energy ladder utilized to model the fuel transition of families in developing countries.
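The energy-ladder idea can be caricatured in a few lines of code. The three rungs follow the description in the text, but the 0-2 income and education scale and the scoring rule are invented purely for illustration; Heltberg's actual model is statistical, and real households mix fuels across rungs.

```python
# Ladder rungs as described in the text, from lowest to highest.
LADDER = (
    "biomass (wood, dung, agricultural residues)",
    "transition fuels (kerosene, coal, charcoal)",
    "modern fuels (LPG, natural gas, electricity)",
)

def fuel_tier(income_level, education_level):
    """Map household income and education (each scored 0-2) to a rung.

    Hypothetical rule: a household climbs only as far as the lower of
    its two scores, reflecting the text's point that income and
    education rise together during the transition.
    """
    score = min(income_level, education_level)
    return LADDER[score]

print(fuel_tier(0, 1))  # biomass (wood, dung, agricultural residues)
print(fuel_tier(2, 2))  # modern fuels (LPG, natural gas, electricity)
```

The toy rule makes the ladder's main qualitative claim concrete: raising income alone, without education (or vice versa), is not enough to move a household off biomass in this sketch.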

Bibliography

Air Pollution Monitoring. (2011). Retrieved March 25, 2011, from http://www.epa.gov/ebtpages/airatmosphere.html
Bhaskar, B. (2010). Atmospheric particulate pollutants. Aerosol and Air Quality Research, 10, 301-315.
Burroughs, H. E. (2008). Managing indoor air quality. Lilburn, GA: The Fairmont Press, Inc.
Cai, H. (2011). Traffic related air pollution. Science of the Total Environment, 409(10), 1935-1948.
Chen, H., Chuang, C., & Lin, H. (2008). Indoor air distribution of nitrogen dioxide and ozone in urban hospitals. Bulletin of Environmental Contamination and Toxicology, 83(2), 147-150.
Crowe, M. (2009). Plants that eat pollution. Retrieved April 29, 2011, from Bright Hub Web site: http://www.brighthub.com/environment/science-environmental/articles/37020.aspx
Godish, T. (2000). Indoor environment quality. Boca Raton, FL: CRC Press LLC.
Griffin, R. (2007). Principles of air quality management. Boca Raton, FL: CRC Press LLC.
Hansen, S. (2007). Air quality: Indoor environment and energy efficiency. Encyclopedia of Energy Engineering and Technology. Gig Harbor, WA: Hansen Associates, Inc.
Hedon. (2011). Household energy network. Retrieved from hedon.info
Heltberg, R. (2004). Fuel switching: Evidence from eight developing countries. Energy Economics, 26(5), 869-887.
Hogg, M. (2010, July 21). Sick building syndrome. Retrieved from http://www.eiresource.org/illness-information/related-conditions/sick-building-syndrome-(sbs)/
Kymisis, M. (2008). Short-term effects of air pollution levels on pulmonary function of young adults. The Internet Journal of Pulmonary Medicine, 9(2).
Larson, B. & Rosen, S. (2002). Understanding household demand for indoor air pollution control in developing countries. Social Science & Medicine, 55(4), 571-584.
Mejia, J. (2010). Assessing exposure and impacts of air pollutants. Science Direct, 45(4), 813-823.
Oren, C. (2008). Air pollution. AccessScience. Retrieved from http://www.accessscience.com/content.aspx?searchStr=Air+pollution&id=0177
Placks, N. & Sedman, C. (2008). Air pollution. AccessScience. Retrieved April 17, 2011, from http://accessscience.com
Reyes, J. (2007). Clean Air Act. Encyclopedia of Environment and Energy. Retrieved April 17, 2011, from http://www.sage-ereference.com/
Sagar, A. & Kartha, S. (2007). Bioenergy and sustainable development? Annual Review of Environment and Resources. Retrieved October 4, 2010, from http://merlin.allegheny.edu/employee/M/mmaniate/280/bioenergySD.pdf
Schnabel, R. (2008). Acid rain. AccessScience. Retrieved from http://www.accessscience.com
Smith, K. & Mehta, S. (2003). The burden of disease from indoor air pollution in developing countries: Comparison of estimates. International Journal of Hygiene and Environmental Health, 206(4-5), 279-289.
Utria, B. (2004). Ethanol and gelfuel: Clean renewable cooking fuels for poverty alleviation in Africa. Energy for Sustainable Development, 8(3), 107-111.

Chapter 7 Bioremediation
Jeffrey Copeland, Christian DiMare, and Nicholas Donnelly

Introduction

Since the beginning of time, but especially since the advent of industry, human activity has left the environment in tatters. Invisible oil sheens cover the beaches in the Gulf of Mexico. Heavy metals poison drinking water for millions of people and animals. Plastic bags lie on riverbanks, sidewalks, and landfills. These locations can contain various contaminants such as petroleum products, solvents, metals, acids, and bases. Eventually, natural degradation would eliminate some of these wastes, but modern technologies are required to shorten drastically the length of time required by this process. Politicians and businessmen have looked for man-powered methods and technologies to eliminate these pollutants; however, these plans are often inadequate and expensive. It is now known that polluted land can be a threat to human health. This realization has led to the development of methods to decontaminate soil and groundwater in an effective manner. The techniques developed included removal of the soil, quarantining of the area, and incineration of the contaminated soil. Each method had several drawbacks and, in many cases, did not solve the problem, but rather made it worse. Disposal of the soil merely caused the pollution to spread to new areas. Quarantining the location allowed the contaminant to leach further down into the soil and destroy any life present. Incineration of the soil rarely destroyed the contaminant, but rather spread it in the form of dangerous fumes. At this stage, a method was needed that would destroy the contaminant rather than displace it. Recently, researchers have begun to focus on bioremediation as a possible solution because of its permanence and economy.

Bioremediation

Bioremediation is the removal of pollutants from the environment through the metabolic processes of organisms.
There are many different methods and technologies used in bioremediation processes, and they can be classified into two categories: in situ and ex situ. In situ remediation techniques involve biodegrading pollutants and repairing contaminated materials at the damaged environment, whereas ex situ techniques involve transporting materials and treating them somewhere else. Some examples of bioremediation technologies are phytoremediation, bioventing, bioleaching, landfarming, bioreactors, composting, bioaugmentation, rhizofiltration, and biostimulation (Boopathy, 2000). Despite the recent emphasis, bioremediation is not a new process. The use of these organisms dates back to 600 B.C.; many civilizations, notably the Romans, utilized their metabolic processes to treat wastewater. In addition, these societies used bacteria and fungi for composting and fermentation practices. Approximately 100 years ago, modern uses for these microbes began. Although the technology is still used to treat wastewater, it can now treat oil, sludge, and radioactive and metallic contaminants as well. In the past 40 years, this practice has grown to a commercial scale and has become a well-developed and moderately efficient method of decontaminating soil and groundwater. Overall, bioremediation is a natural, time-tested, and practical solution to the problems that will face the populace in the future. In situ bioremediation is merely a modified version of the role that microorganisms serve every day. The goal is to disturb the contaminated location as little as possible while removing as much of the pollutant, or pollutants, as possible. Within the in situ branch are two subsections: engineered systems, which enrich the conditions of the site in favor of microorganisms, and intrinsic systems, which allow the degradation process to continue without enhancement.

In situ remediation using vacuum pumps.

The above image shows a method of in situ remediation in which nutrients are pumped downward into the soil, which forces contaminants down. The contaminants are then vacuumed from the soil. Engineered bioremediation systems accelerate the rate at which degradation occurs, minimizing the time it takes for the process to complete. These systems can be employed both above and below the water table. Bioventing provides oxygen and nutrients to soil above the water table using vacuum pumps. Another system distributes oxygen in the form of aqueous hydrogen peroxide; this solution is mixed with nutrients and pumped into the soil through a well. A third system, known as air sparging, injects oxygen directly into the groundwater. The air is forced into the ground through a compressor while nutrients are injected using vacuum pumps. Intrinsic bioremediation is appropriate when the natural degradation process proceeds quickly enough to prevent spreading of the contaminant. To ensure that the contamination does not migrate, the site is carefully monitored. As long as the contaminant does not spread, the process remains unaltered by human enhancement and continues until the entire area is returned to inhabitable conditions. Because it requires no excavation of soil or pumping of groundwater, intrinsic bioremediation is suitable for treating rocky or underground areas. The disadvantages, however, include the possibility of toxic by-products, prolonging of the remediation process, and possible spread of the contaminant. If the process is not carried out correctly, the microorganisms can make the problem worse instead of improving soil condition. Allowing the organisms to complete the process naturally takes much more time than human enhancement or ex situ solutions. Finally, in some cases the contamination can spread even under close supervision because of the amount of time the cleanup process takes to complete.

Vacuum pump in situ remediation below the water table.

Nutrients are pumped below the water table, which forces contaminants upward and out of the water table. In situ bioremediation has several advantages over other technologies. Traditional methods require the soil or groundwater to be excavated or pumped to treatment centers. This disturbs the surrounding environment, which, more often than not, cannot be returned to its original condition. In some cases, the soil is incinerated or disposed of and never returns to the original site. In any case, the excavation, pumping, incineration, and disposal all require large energy inputs and are costly alternatives. In situ bioremediation removes the need for incineration, disposal, and excavation and greatly minimizes the need for pumping. Furthermore, alternative treatments may leave the soil sterile and, therefore, uninhabitable.

In situ remediation with peroxide. The in situ process displayed above does not use a vacuum pump. Along with nutrients, hydrogen peroxide is also injected into contaminated soil.

Ex situ bioremediation involves excavation of the contaminated area and its transportation to a treatment center. Upon arrival, the polluted substance is mixed with water, microorganisms, surface-active agents, dispersants, and nutrients in an agitator that mechanically aerates the substance. The agitation allows quicker breakdown of the contaminant, increased contact between the waste and the microorganisms, and increased biodegradation capability. The concentration of biomass, temperature of the chamber, and nutrient level of the soil are all monitored to ensure efficiency of the process. In addition, the polluted substance is treated in sections to decrease the volume of soil or groundwater in the chamber, thus increasing the rate of biodegradation of the contaminant per section. Although the ex situ method is faster and more thorough than the in situ alternative, it costs far more. The entire toxic region must be excavated and transported in a safe manner to avoid further contamination. Moreover, much more energy input is required to agitate the mixture constantly than to aerate the area in situ.

Landfarming is an example of an ex situ bioremediation process.

While all kingdoms of organisms are actively involved in bioremediation, species of Monera are arguably the most significant, and the genus Sphingomonas stands out among the others. The name of the group comes from the sphingolipids in the membranes of the bacteria. While some of the species are opportunistic pathogens, sphingomonads are typically capable of adapting to the food sources that are available in their habitat. Because they are Gram-negative bacteria, they have a periplasmic space, which allows them to contain polymers up to 20,000 units long within their membranes (Kawai, 1999). The advantage of this niche is that it allows access to more powerful oxidation enzymes. If the bacteria can overcome the initial step of oxidizing the compound, the material becomes much more reactive. Along with structural similarities between natural compounds and synthetic polymers, this unique design makes biodegradation possible. S. macrogoltabidus is a perfect example of the ideal sphingomonad; it is capable of breaking down polyethylene, polycyclic aromatic hydrocarbons (PAHs), crude oil, and dioxins (Kawai, 1999).

Another significant bacterial genus is Pseudomonas. Like Sphingomonas, the bacteria in this group have a periplasmic space and great versatility when it comes to biodegradation; however, their processes are a little slower because their enzymes cannot proceed through the limiting step as quickly as those of the sphingomonads. Regardless, the species P. putida is significant because of its ability to degrade a variety of materials, including some of the nearly recalcitrant ones. After an FDA study linked BPA (bisphenol A) to a myriad of health conditions, many researchers became concerned about the common substance. In an earlier 2002 study, J-H Kang and her colleagues showed that two species were able to degrade more than 90% of the BPA from a contaminated water sample, one of which was P. putida. Other important compounds that this microorganism can biodegrade include trichloroethylene (Heald & Jenkins, 1993), crude oil (Raghavan & Vivekanandan, 1999), polyaromatic hydrocarbons, and polyethylene (Burd, 2008).

Mycoremediation

Mycoremediation, a form of bioremediation, uses fungi to decontaminate toxic areas. This process can only be performed in situ, but it is extremely effective, especially when used in combination with standard bioremediation practices. The fungi and microbial communities cooperate to break down contaminants into carbon dioxide and water. Some fungi are known as hyperaccumulators: organisms capable of absorbing, concentrating, and storing many types of heavy metals. The success of mycoremediation, however, depends on the fungal species; the correct strain is required to target and destroy a specific pollutant. For example, morels (genus Morchella) and puffballs (division Basidiomycota) will rapidly absorb heavy metals, but will not affect any coliform bacteria. King Stropharia (Stropharia rugosoannulata) and turkey tails (Trametes versicolor) will, however, destroy all forms of coliform bacteria.
Several other species of fungi exist that do not affect certain chemicals, yet in almost every case there exists a species that will fill that void. The most potent bioremediation species in this kingdom are the white-rot fungi. This is a physiological group, named after the color of degraded wood, rather than a taxonomic group. The primary feature of this group is lignin mineralization. Lignin is an aromatic polymer that binds wood together and makes it chemically inert. In order to attack the wood for nutrients, white-rot fungi excrete three LMEs (lignin-modifying enzymes): one that contains copper and two others that contain iron. These proteins oxidize the lignin, which then changes into mineralized aliphatic structures without further enzymatic activity (Pointing, 2001). The structure of lignin and its breakdown products share many similarities with other pollutants and xenobiotics, which means this group possesses vast bioremediation abilities. The white-rot fungi can disintegrate PAHs with little difficulty, and this metabolism can be extended further to incorporate resistant pesticides like DDT and Agent Orange (Pointing, 2001). The even more intractable PCBs (polychlorinated biphenyls) deteriorate regardless of the number of chlorine atoms, which is a typical limiting factor for PCB-degrading organisms. Dangerous munitions, namely TNT, reduce to DNT and lose carbon atoms during mycoremediation. Finally, white-rot fungi degrade plastics to a certain degree. Although only four species in the group are practical for bioremediation, they handle PVC and polyvinyl alcohol (PVA) very efficiently (Pointing, 2001).

Mycoremediation has several noteworthy advantages. Many fungi produce enzymes and acids that naturally decompose oils, petroleum-based products, and pollutants. These contaminants are broken down into their most simple, non-toxic forms in a time span of mere weeks. Unlike standard bioremediation, several species of fungi used in mycoremediation are capable of eliminating chlorinated compounds. This vastly increases the scope of contaminants that can be removed using these natural processes. In addition, cleaning up soil with fungal metabolism can be completed in a matter of a few weeks. Although this may seem like a long time, it is comparable to the amount of time required by standard decontamination processes. Furthermore, when mycoremediation and standard bioremediation practices are combined in situ at a common location, the time required to eliminate the toxins is reduced greatly. Unlike the microorganisms used in standard bioremediation, the fungi used in mycoremediation cannot grow and survive directly in soil or water; therefore, the fungi must be mixed into the toxic region in a manner that will allow their development. The most common and effective modern method is to inoculate straw or woodchips with the fungi, then spread the substrate evenly across the contaminated area. After this process has been completed, the fungi must be fed constantly. The fungi secrete enzymes to break down and digest the substrate surrounding them, then absorb the nutrients they need. Once the substrate has been depleted of necessary nutrients, the fungi will begin their fruiting stage and will cease to decontaminate the area. Substrate must be constantly added to prevent the organisms from reaching this stage. The current applications of microbial bioremediation are limited, but varied. For instance, the BP oil spill prompted much controversy over the use of dispersants, and bacteria were proposed as alternatives to the chemicals. In fact, a contemporary study showed that natural anaerobic bacteria had started the breakdown process in the Gulf of Mexico. Scientists debated which species to use for remediation; options included S. macrogoltabidus, P. putida, and many native species from the nearby mangroves, but action on this never materialized. Because many of the digestive enzymes used by bacteria are encoded on plasmids, the bacteria can share the genes for these enzymes with other species of bacteria.

Phytoremediation

Phytoremediation is a word constructed by combining the Greek prefix phyto-, meaning plant, with the Latin root remedium, meaning to correct or remove evil. The evil can be manmade pollutants of an area, such as organic solvents, heavy metals, pesticides, or radionuclides. Many plants have the ability to remove pollutants from groundwater and store, metabolize, or volatilize them, and deep root structures serve multiple purposes, one of which is supporting many different microorganisms in the sub-surface of the earth. Many of these microorganisms help to degrade pollutants in the soil alongside plants; however, phytoremediation is only particularly useful in areas where soil contamination is not deep (Latiff, et al., 2010). Heavy metals such as cadmium and lead are not readily absorbed or captured by organisms; therefore, they are not as effectively absorbed through bioremediation. The assimilation of metals such as mercury into the food chain may worsen matters. Using deep-rooted plants, researchers have found phytoremediation to be a method of bioremediation that is extremely well suited for use in areas where other remediation methods are not cost-effective, in low-level contaminated sites, or in combination with other techniques for remediating a location (Latiff, et al., 2010). There are numerous techniques for remediating a polluted area that fall into the category of phytoremediation. The list includes phytoextraction, phytofiltration, phytostabilization, phytovolatilization, phytomining, phytodegradation, and rhizodegradation.

This image shows a simplified diagram of the transfer of pollutants from the rhizosphere into the plants and into the atmosphere as harmless compounds through phytoremediation.

Phytoextraction consists of using the water uptake of plants to extract heavy metals from contaminated soils. The plants are harvested, and the heavy metals are removed from the site and dealt with elsewhere (Latiff, et al., 2010). Phytofiltration is a technique that utilizes plant roots to concentrate and precipitate heavy metals, allowing plants to extract the heavy metals without taking other pollutants into the stalks (Latiff, et al., 2010). Phytostabilization refers to using plants to stabilize contaminants in soils so that the concentrations are harmless to other environments. Plants are used to hold heavy metals in the soil, allowing other plants to perform the job of removing the metals. Plants with low metal-accumulating properties are best suited for this phytoremediation technique; without phytostabilization, many areas contaminated by heavy metals would have the pollutants dispersed into other environments by run-off and wind erosion (Latiff, et al., 2010). Phytovolatilization is used to extract volatile metals; plants remove them from the soil and volatilize them aboveground (Latiff, et al., 2010). Phytomining is extremely similar to phytoextraction in implementation. Many plant species can accumulate heavy metals in their shoots, and by harvesting the aboveground biomass and smelting the harvested organic material, it is possible to recover the metals (Latiff, et al., 2010). Phytodegradation, also known as phytotransformation, refers to the breakdown of external contaminants through metabolic processes in the plant. This occurs through enzymes produced by the plants, and the pollutants are used as nutrients and become part of the plant tissue (Truong, et al., 2010). Rhizodegradation has a plethora of other names: phytostimulation, rhizosphere biodegradation, enhanced rhizosphere biodegradation, or plant-assisted bioremediation. Rhizodegradation is the breakdown of inorganic contaminants through microbial metabolism in the rhizosphere, or root zone. Many microorganisms can use heavy metals and other pollutants as nutrients (Truong, et al., 2010). It is extremely important to establish plant cover in areas that need to be remediated because numerous health problems come from heavy metals in the soil; however, it is also important to maintain control in an area that is being remediated using a method of phytoremediation. Plants transfer trace metals from the soil aboveground into the plant stalks via the roots, and this is potentially dangerous if not controlled.
This introduces the metals into the diets of herbivore populations, which transfers the metals up the food chain. Despite the potential health dangers that come with applying it to an area, phytoremediation is a very economical and easily implemented system for remediation (Latiff, et al., 2010).

Researchers currently use phytoremediation in many more situations than other forms of bioremediation. After the 1986 nuclear meltdown at Chernobyl, the surrounding waterways contained hazardous levels of radioactive strontium-90 and cesium-137. In response, scientists planted sunflowers (Helianthus annuus) on Styrofoam rafts and left the plants to grow. They returned several days later to discover that the stems of the plants held several thousand times more radioactive metal than the water (Revkin, 2001). Near military firing ranges, excess TNT pollutes the nearby waterways. Conservationists created artificial wetlands of parrot feather (Myriophyllum aquaticum) plants, which eliminate munitions waste from the environment entirely. The success of these experiments has led to the construction of more such wetlands in the Hudson River Valley and in Ecuador in an effort to eliminate the waste from public water access. In Detroit, a plot of land that used to house a Chrysler car plant was saturated with lead. Newly planted sunflower and Indian mustard (Brassica juncea) plants removed nearly 50% of the lead from the topsoil and saved Chrysler an estimated $1 million (Revkin, 2001). Meanwhile, a Florida lumberyard contained unprecedented amounts of arsenic in the soil. Scientists decided to plant brake ferns (genus Pteris) because of their absorbent properties and their notable tolerance of arsenic. Within several weeks, the ferns had lowered the arsenic levels to well below the legal limit.

This table compares the benefits, limitations, and factors to consider in both in situ and ex situ remediation.

In a time when society focuses on many of the earth's ecological problems, bioremediation and other green technologies are of the utmost importance. Very few methods work better when it comes to removing waste from an ecosystem. With proven techniques, competent organisms, and experimental modification, bioremediation will continue to grow and solve many world problems along the way.

Bibliography
A citizen's guide to bioremediation (2001, April). United States Environmental Protection Agency, Office of Solid Waste and Emergency Response.
Bioremediation (2006, February 13). Arizona State University College of Liberal Arts and Sciences. Retrieved March 23, 2011, from http://bioenergy.asu.edu/photosyn/courses/bio_343/lecture/bioremed.html
Bioremediation (2008, July 9). BioBasics. Retrieved March 24, 2011, from http://www.biobasics.gc.ca/english/View.asp?x=741
Bollag, J., Mertz, T., & Otjen, L. (1994). Role of microorganisms in soil bioremediation. In Anderson, T., et al., Bioremediation through rhizosphere technology. Retrieved from http://gordonlibrary.wpi.edu
Boopathy, R. (2000). Factors limiting bioremediation technologies. Bioresource Technology, 74, 63-67.
Burd, D. (2008). Plastic not fantastic. Retrieved April 16, 2011, from http://www.mchwebsites.com/wwsef/archives/2008/08Burd
Chapter 8: Bioremediation (2011). Retrieved March 29, 2011, from www.montana.edu/wwwmb/coursehome/mb105/Lectures/Chapter%209.ppt
Guo, C., Dang, Z., Wong, Y., & Tam, N. F. (2010). Biodegradation ability and dioxygenase genes of PAH-degrading Sphingomonas and Mycobacterium strains isolated from mangrove sediments. International Biodeterioration & Biodegradation, 64(6), 419-427. Retrieved March 29, 2011, from Academic OneFile database (A234745531).
Hawumba, J. F., Sseruwagi, P., Hung, Y., & Wang, L. K. (2010). Bioremediation. In L. K. Wang (Ed.), Handbook of environmental engineering (pp. 277-316). Springer Science.
Heald, S., & Jenkins, R. O. (1994). Trichloroethylene removal and oxidation toxicity mediated by toluene dioxygenase of Pseudomonas putida. Applied Environmental Microbiology, 60(12), 4634-4637.
Kang, J.-H., & Kondo, F. (2002). Bisphenol A degradation by bacteria isolated from river water. Environmental Contamination and Toxicology, 43, 265-269.
Kawai, F. (1999). Sphingomonads involved in the biodegradation of xenobiotic polymers. Journal of Industrial Microbiology & Biotechnology, 23, 400-407.
Latiff, A. A., Karim, A. T. A., Ridzuan, M. B. B., Yeoh, D. E. C., & Hung, Y. T. (2010). Heavy metal removal by crops from land sludge application. In L. K. Wang (Ed.), Handbook of environmental engineering (pp. 211-232). Springer Science.
Loehr, R. C., & Webster, M. T. (1996). Performance of long-term, field-scale bioremediation processes. Journal of Hazardous Materials, 50, 105-128.
MacDonald, J. A., & Rittmann, B. E. (1993). Performance standards for in situ bioremediation. Environmental Science & Technology, 27(10), 1974-1979.
Pointing, S. B. (2001). Feasibility of bioremediation by white-rot fungi. Applied Microbiology and Biotechnology, 57, 20-33.
Raghavan, P. U. M., & Vivekanandan, M. (1999). Bioremediation of oil-spilled sites through seeding of naturally adapted Pseudomonas putida. International Biodeterioration & Biodegradation, 44(1), 29-32.
Revkin, A. (2001). New pollution tool: Toxic avengers with leaves. Retrieved March 29, 2011, from www.nytimes.com
Santos, H., Carmo, F., Paes, J., Rosado, A., & Peixoto, R. (2010). Bioremediation of mangroves impacted by petroleum. Water, Air, and Soil Pollution, 216, 329-350.
Truong, P. N. V., Foong, Y. K., Guthrie, M., & Hung, Y. T. (2010). Phytoremediation of heavy metal contaminated soils and water using vetiver grass. In L. K. Wang (Ed.), Handbook of environmental engineering (pp. 233-275). Springer Science.
Vidali, M. (2001). Bioremediation, an overview. Pure and Applied Chemistry, 17(2), 1163-1172.
Watanabe, M. E. (2001). Can bioremediation bounce back? [Electronic version]. Nature Biotechnology, 19(12), 1111.
Williams, J. (2006). Bioremediation of contaminated soils: A comparison of in situ and ex situ techniques.

Chapter 8 Recycling and Waste Management


Christopher Starbard, Megan McHugh, and Stefan Cepko

Introduction
The waste management system encompasses the set of procedures and processes that relate to the collection, containment, and treatment of human-generated refuse. This multifaceted system focuses on storage and energy recovery through the most environmentally benign means available. The primary sectors of solid waste management include municipal waste, industrial byproducts, and hazardous refuse, each of which features a plethora of constituents and employs diverse technologies.

The discovery and implementation of plastics have revolutionized the commercial, scientific, and technological worlds. From purchasing a drink in a restaurant to typing on a keyboard, plastics influence the lives of people all over the world. Contributing 10% of the weight of municipal garbage, plastic waste still comprises much of the landfill volume in the United States. Regardless of current recycling standards and practices, plastic contributes heavily to the pollution of the earth's land and sea. Polyvinyl chloride, commonly abbreviated as PVC, is a mass-produced plastic used widely throughout the world. The second most produced plastic, PVC is utilized in construction, piping, and even clothing; however, PVC is also the single most harmful plastic for the environment (Recycling, 2011). Over 30 million tons of PVC plastics are produced annually, yet only 18 million pounds of the 7 million tons of plastic discarded are recycled (Wills, 2010). With less than a quarter of a percent of discarded PVC recycled, the lightweight and widely used plastic engorges municipal waste areas.
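The quarter-of-a-percent claim above can be checked with a quick back-of-the-envelope calculation. The following is a minimal Python sketch using only the figures quoted in the text; the 2,000-pound U.S. short ton is an assumption on our part:

```python
# Rough check of the PVC recycling rate quoted above (Wills, 2010).
# Assumption: "tons" here means U.S. short tons of 2,000 pounds.
POUNDS_PER_TON = 2000

discarded_lb = 7_000_000 * POUNDS_PER_TON  # ~7 million tons of PVC discarded
recycled_lb = 18_000_000                   # ~18 million pounds recycled

rate_percent = 100 * recycled_lb / discarded_lb
print(f"PVC recycling rate: {rate_percent:.2f}%")  # about 0.13%, under a quarter percent
```

The result, roughly 0.13%, is consistent with the "less than a quarter of a percent" figure in the text.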

The chemical structure of a polyvinyl chloride plastic.

Many companies that make PVC plastics treat them chemically to either resist aging or retain a vinyl texture that is beneficial to the products being produced. When burned, PVC plastics emit chemicals known as polychlorinated dibenzodioxins. These chemicals have distinctly negative effects on the air and on the environment as a whole. Unfortunately, PVC plastics burned in municipal landfills and medical waste incinerators are large contributors to the high dioxin levels in the air. Much of the discarded PVC is not recycled because of the scarcity of PVC recycling centers and the general difficulty of finding a center that accepts PVC plastic. Because of the harmful chemical additives applied to the plastic during manufacturing, a safe, cheap, and commercially viable method of PVC recycling has not been widely adopted. Though a recycling process using thermal depolymerization has been developed and is in use, the vinyl stripped from the PVC plastic costs more than virgin vinyl purchased by large plastic manufacturers; consequently, the recycled vinyl is not purchased readily, and the detrimental process of PVC manufacturing continues with cheap vinyl that has never been used before. Abbreviated PET, the plastic polyethylene terephthalate is used primarily for the production of synthetic polyester fibers and plastic bottles.

A polyethylene terephthalate molecule.

In 2009, 5.1 billion pounds of bottles constructed from PET plastic were discarded; however, a record-breaking 2.4 billion pounds of PET plastic were sent to recycling treatment plants. Where not even 1% of PVC plastic is recycled, PET plastic has an almost 50% rate of recycling (Fun Facts, n.d.). The high rate of recycling stems from the ease of plastic bottle recycling and the commercial viability of the resulting post-consumer PET plastic.

Professionals define wastewater as fresh water that has been contaminated. Municipally produced wastewater is on average 99.94% water (BAMF, 2004) but must be decontaminated prior to release. Raw sewage is its primary element, and although it is not listed by the EPA as hazardous waste, it is toxic to humans and requires special treatment. The typical American produces approximately 100 gallons of wastewater daily (BAMF, 2004), a considerable portion of which is excrement. However, sewage retains a vast amount of organic compounds from which energy can be recovered. Industry asserts that sludge contains ten times the energy that its treatment requires (WERF, n.d.). Management experts estimate that 45% of this resource is either incinerated or deposited in landfills following sanitation (WERF, n.d.), but an increasing number of facilities employ microbes to extract the trapped energy of fecal matter. An estimated 16,000 wastewater treatment plants operate in the United States, approximately 3,500 of which currently utilize anaerobic digestion to recapture energy (BAMF, 2004). Water treatment facilities exploit the feeding process of microorganisms that ingest excrement and release fuels such as methane as byproducts. On-site combustion engines burn the biogas to produce electricity. Most of the resultant power provides heat for digestion or powers the facility directly, thus greatly reducing the overall energy input. Superfluous electricity may be sold back to the grid. Waste heat generated by combustion engine operation is also reclaimed and reapplied to facilitate the digestion process. These applications display promising results; operations in San Diego, California cut costs by $3 million and regained approximately $1.4 million from electricity sales to the grid (BAMF, 2004).
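The paragraph above mixes several figures, so a short sketch makes the proportions explicit. All numbers come from the text (BAMF, 2004); treating the two San Diego figures as additive is an assumption on our part:

```python
# Share of U.S. wastewater treatment plants recapturing energy
# via anaerobic digestion, using the figures quoted above.
total_plants = 16_000
digester_plants = 3_500

share_percent = 100 * digester_plants / total_plants
print(f"Plants with anaerobic digestion: {share_percent:.1f}%")  # about 21.9%

# San Diego example: cost reduction plus electricity sold back to the grid.
cost_cut_usd = 3_000_000
electricity_sales_usd = 1_400_000
print(f"Combined benefit: ${cost_cut_usd + electricity_sales_usd:,}")  # $4,400,000
```

Roughly one in five plants, then, currently recaptures energy this way, which suggests substantial room for the practice to grow.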

The total MSW generation (by material) was 243 million tons in 2009 (before recycling).

Household garbage subsumes a wide variety of materials, and the diverse composition and sheer mass of domestic trash make it difficult to dispose of efficiently. The U.S. alone yielded approximately 243 million tons of domestic refuse in 2009 (Municipal Solid Waste, 2010), which on average constituted 14.1% food scraps, 28.2% paper, 4.8% glass, 8.6% metals, 12.3% plastics, 8.3% rubber, leather, and textiles, 6.5% wood, 13.7% yard trimmings, and 3.5% other assorted materials (Municipal Solid Waste, 2010). Approximately 33.8% of that total is reclaimed through recycling or composting, 11.9% burns in incineration processes, and landfills receive the remaining 54.3% (Municipal Solid Waste, 2010). Traditionally, domestic refuse management entailed deposition of material into landfills. However, a rapidly increasing population with a rapidly increasing waste output renders that option increasingly less favorable. Landfills present many opportunities for environmental pollution, whether by leachate flow or greenhouse gas production, and are also filling quickly. Open burning was formerly employed as a means to eliminate household garbage but has been outlawed in most locations due to the inherent danger; low burning temperatures result in inefficient combustion and the release of potentially carcinogenic compounds (Open Burning, 2008). Advanced incineration reduces the mass of landfill-bound solid waste by 80-85% and the volume by approximately 95%, but that reduction often does not suffice (Evaluation of Emissions, 1997).

Plasma gasification technology (referred to as waste-to-energy technology) provides solutions to the problems of landfill usage and traditional incineration. Augers receive raw garbage and disintegrate the waste into smaller fragments by physical means. Following fragmentation, the garbage enters the plasma chamber. In plasma chambers, a high-voltage current pulses between two electrodes and tears electrons away from the surrounding air or gas. The constant, resultant plasma breaks the garbage down into basic components and several useful byproducts. The solid, glass-like substance which gasification produces from its own waste sludge (Moustakas, 2005) serves to harden asphalt, and the ambient gas can be converted to ethanol, hydrogen, or natural gas with relative ease. Industry regards waste gasification as renewable because professionals agree that municipalities will continue to generate waste indefinitely. Additionally, gasification appears well suited for the disposal of medical and biohazardous wastes, which formerly were arduous to dispose of in a benign manner. The extreme heat employed in plasma gasification processes accounts for its efficiency and versatility; heat is provided not by the materials directly but by the plasma itself, which is an important distinction (Moustakas, 2008). Inorganic solids constitute a large percentage of municipal waste and are generally referred to as recyclables. They include, but are not necessarily limited to, metals, plastics, rubber, textiles, and glass.
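The municipal-waste composition figures quoted earlier in this section can be tabulated and sanity-checked in a few lines of Python; the percentages and the 243-million-ton total are from the text (Municipal Solid Waste, 2010), and the sketch simply converts shares into tonnages:

```python
# 2009 U.S. municipal solid waste, by material (Municipal Solid Waste, 2010).
TOTAL_TONS = 243e6  # 243 million tons generated before recycling

composition_pct = {
    "food scraps": 14.1,
    "paper": 28.2,
    "glass": 4.8,
    "metals": 8.6,
    "plastics": 12.3,
    "rubber, leather, and textiles": 8.3,
    "wood": 6.5,
    "yard trimmings": 13.7,
    "other": 3.5,
}

# The quoted shares should cover the entire waste stream.
assert abs(sum(composition_pct.values()) - 100.0) < 1e-9

for material, pct in composition_pct.items():
    tons = pct / 100 * TOTAL_TONS
    print(f"{material}: {tons / 1e6:.1f} million tons")
```

The shares do sum to 100%, and the conversion shows, for example, that paper alone accounted for roughly 68.5 million tons that year.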
Such inorganic solids are not usually destroyed during use and are relatively easy to repurpose. Moreover, these resources tend to retain value. For those reasons, recycling exists as the primary method of reprocessing these solids. E-waste is an increasing sector of inorganic solid waste management and is especially problematic because technological devices combine a number of diverse materials in a way that makes them difficult to extract. Electronics are not listed as hazardous wastes, though they do contain a number of toxic constituents. An estimated 70% of the heavy metals residing in landfills derive from discarded technology (Scientific American, 2010). Printed circuit boards in particular comprise a number of inorganic constituents in a single unit and present certain difficulty for management. An experimental circuit board recycling method entails the crushing of large quantities of units and the extraction of metallic substances from the resultant pulp by means of an electric field. The process of recycling polymer plastics is generally referred to as tertiary recycling and employs the conversion of waste plastics to fuel as well as more basic, physical forms of reprocessing. In general, tertiary recycling entails the decomposition of waste polymers to hydrocarbons, at which point other materials such as metals are separated with ease.

Automotive tires account for a large portion of solid debris; nearly 1.3 billion tires are sold annually (Hanlon, 2006) and have been largely impossible to recycle by traditional methods. Additionally, tires generate high amounts of airborne pollution if burned, and even advanced incineration is an ineffective management outlet. Due to the durable nature of petroleum-derived rubber, natural degradation occurs only over extensive periods of time. New methods of chemically enhanced combustion within a vacuum generate minimal amounts of pollution while isolating large amounts of the resources which compose spent tires. This method extracts approximately eight pounds of carbon, one gallon of oil, two pounds of steel, and roughly 30 cubic feet of flammable gas from the average 20-pound tire (Hanlon, 2006). Facilities that exercise this practice can process nearly 12 million tires, or 120,000 tons of material, annually (Hanlon, 2006). Re-treading extends tire lifetime prior to disposal and thus also reduces overall pollution. Worn treads can be removed and replaced multiple times before the core complex of the tire degrades to a hazardous state. Less optimal uses of discarded tires vary from the use of tire fragments as a base for asphalt and construction materials to structural landscape applications. Additionally, vulcanized rubber scraps can be reformed into objects such as brake pads, rubber mats, and containers to prevent the use of virgin material. Some diverse civil engineering applications such as dams also rely on tire pulp (Allbritton, n.d.).

Textile recycling begins with sorting. Milling machinery then tears cotton-based fabric of a specific color into strands; these strands are mixed with base fibers and spun into useable thread. The resulting yarn is cleaned by various processes before redistribution. Facilities initiate the polyester reclamation process by removing buttons, zippers, and other non-fabric materials from the discarded cloth. Specialized machines shred fibers multiple times before they melt under intense heat. The material eventually congeals into small pellets that various mechanical processes decompose to simpler chunks of base polymers. Isolated polymers serve as fodder for the production of energy-conserving, second-generation material. Glass is another highly reformable solid. Re-forming facilities segregate containers according to color and pulverize used material into minuscule beads referred to as cullet.
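The tire figures quoted above are internally consistent, which a brief mass-balance sketch can verify. The per-tire yields and the 12-million-tire capacity are from the text (Hanlon, 2006); the 2,000-pound short ton is an assumption on our part:

```python
# Mass balance for vacuum tire processing, per the figures quoted above.
TIRE_WEIGHT_LB = 20       # "average 20 pound tire"
POUNDS_PER_TON = 2000     # assumes U.S. short tons

tires_per_year = 12_000_000
throughput_tons = tires_per_year * TIRE_WEIGHT_LB / POUNDS_PER_TON
print(f"Annual throughput: {throughput_tons:,.0f} tons")  # 120,000 tons, as quoted

# Solid materials recovered per tire (oil and gas are quoted by volume instead).
carbon_lb, steel_lb = 8, 2
solids_tons = tires_per_year * (carbon_lb + steel_lb) / POUNDS_PER_TON
print(f"Recovered carbon and steel: {solids_tons:,.0f} tons per year")
```

Twelve million 20-pound tires do indeed amount to the quoted 120,000 tons, with roughly half of each tire's mass recoverable as solid carbon and steel alone.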
Cullet is liquefied and remolded to create recycled glass containers of all types and colors. Producing glass containers from second-generation material minimizes virgin production and thus saves the energy that would otherwise be squandered during that process.

Industry generates a number of waste products, the most common of which are construction and demolition debris, cement kiln dust, waste oil, and mining wastes. Increasing numbers of contractors practice deconstruction as a means to avoid accumulation of debris; buildings are dismantled rather than destroyed, and old materials are purchased and reused for new construction. Cement kiln dust, a particulate-matter byproduct of cement production, stabilizes loose soil and raw wastes. Additionally, it serves as a filler in asphalt pavement. Mining wastes are particularly plentiful, and treatment facilities employ vitrification to produce glass from waste mineral sludge. That conversion is a highly efficient, viable practice because glass is 100% recyclable and does not harm the environment if discarded. Thus, dangerous liquid sludges are replaced by a fully recyclable, benign material that can be used to construct a variety of containers which would otherwise require virgin material. Moreover, vitrification-produced glass serves as a base in asphalt and concrete and reduces the overall waste volume by 87-93% (Jantzen, Pickett, & Schumacher, n.d.).

Hazardous wastes account for a considerable portion of the wastes that require management. The ordinary American discards approximately 15.5 lbs of hazardous waste into the trash annually (Household Hazardous Waste, 2009). The EPA designed a multi-list system in which F-listed wastes are non-source-specific byproducts, K-listed wastes are strictly source-specific, characteristic wastes are hazardous substances which exhibit ignitability, corrosivity, reactivity, or toxicity, and universal wastes are not hazardous according to the previous definitions but still contain materials which must be prevented from entering the environment. Most non-source-specific wastes are spent solvents. Source-specific K-listed elements are chemical byproducts. The U.S. disposes of approximately half of the listed hazardous solid waste produced annually in underground injection wells. Professionals employ deep-well deposition to contain hazardous wastes within the ground, but also generally agree that the practice is potentially dangerous because toxic wastes may flow to unknown whereabouts. Incineration exhibits potential to volatilize hazardous wastes and is thus also relatively inefficient, but it does suit certain hazardous waste types. Waste management experts consider lime a viable, versatile solution to a plethora of management problems. Many of the listed chemical compound wastes neutralize in reactions with lime. That method of treatment is widely welcomed because lime is relatively inexpensive and environmentally benign. Metals neutralized in lime reactions become more stable and exhibit less leakage potential; soil also stabilizes with the introduction of lime, thus further inhibiting leakage. The management system recognizes used motor oil as a major type of hazardous waste. However, it never completely degrades and supports various secondary uses. Dehydration processes remove water, which is treated and released, and refinement processes purify the oil of impurities and contaminants so that it may be reconditioned for secondary use in combustion engines. The purification processes retain a portion of the oil for the manufacture of repurposed lubricants. Hydrotreating removes unnecessary polymers and compounds from that stock material and also enhances carbon chains with additional hydrogen. The process concludes with oil fractioning, which isolates three grades of lubricant oils and augments each with specific additives.
The resultant substances are nearly impossible to discern from those of virgin production. Though somewhat inefficient, waste oil burners dispose of oil in a less dangerous manner than landfill deposition and often produce electricity.

Most batteries are defined by the EPA as so-called universal waste and are thus a sub-type of hazardous waste. Modern batteries contain a number of metals, including mercury, cadmium, lead, zinc, manganese, nickel, silver, and lithium, which are dangerous upon release to the environment. Though incineration proves a viable and efficient method of disposal for many wastes, it is an inefficient and dangerous means for the destruction of batteries because combustion converts mercury and cadmium into volatile compounds and byproducts. Additionally, batteries in landfills present danger by means of leakage. Industry deems recycling a far more effective management method. On average, new lead-acid batteries incorporate between 60-80% recycled lead and plastic components (Battery Recycling, 2010). Nearly 97% of lead-acid batteries are recycled through methods which extract those environmentally harmful resources. The actual recycling operations employ hammer mills extensively. These machines separate constituents by shredding or striking large quantities of batteries. Copper, aluminum, and steel can be withdrawn from the resultant pulpous substance with relative ease (Battery Recycling, 2010); steel production utilizes a portion of these reclaimed constituents. To prevent damage to materials or processing equipment, facilities cryogenically freeze cells with remnant charge prior to processing. The liquid-nitrogen-induced temperature of -325°F eliminates the charge and prepares the units for deconstruction. The inner cavities of the batteries are flushed in a chemical bath to decompose integral lithium salts; the constituents provide fodder for lithium carbonate production (Hamilton, 2009). Cobalt may also be reclaimed and utilized for the production of battery electrodes (Hamilton, 2009). Other facilities exploit the physical characteristics of the materials to separate them in liquid-filled vats. Heavy metals sink toward the bottom of the mixture whereas plastics emerge at the top; once plastics are removed from the surface and the liquid is drained, only lead and other metals remain (Battery Recycling, 2010). Those materials are separated by physical means with minimal input. Following separation, plastic scraps are melted, reformed, and then extruded into pellets of consistent size. Lead smelting furnaces convert lead fragments into uniform ingots which are repurposed for new battery construction. Sulfuric acid receives one of two possible treatments; the acid is either neutralized with baking soda, yielding water that is released, or converted chemically into sodium sulfate. Laundry detergent, glass, and textile manufacturing facilities all utilize that chemical ingredient at some point during production (Battery Recycling, 2010).

Paper and paperboard make up about 35% of all municipal solid waste, so it is very important that paper is recycled rather than collected in landfills. Some paper, known as mill broke, consists of the leftover trimmings and scraps from paper manufacturers; this is internally recycled in a paper mill so that the scraps can be reused. Another type of paper waste is known as pre-consumer waste: material that left the mill but for varying reasons was discarded before it was ready for use by consumers. The most common type of paper waste, though, is post-consumer waste. This is material that is disposed of after the consumer has used it.
Typically, cardboard containers, newspapers, magazines, official papers, telephone directories, and residential mixed paper (RMP) make up most paper waste. Paper that is capable of being recycled is known as scrap paper. During recycling, a deinking process is applied to the paper fibers in order to remove the printed ink and produce deinked pulp. This is an important step because both the ink and the paper get recycled this way. The main limitation of recycling paper, though, is that after being reused four to six times the fibers become weaker and are harder to use in remaking paper products (Jorgensen, 2010).

Food scraps that are not edible are considered waste. After paper, food scraps make up the largest share of solid wastes that are disposed. Leftovers typically comprise bones and the skins of certain fruits and vegetables. Edible scraps from home-cooked meals are usually saved and eaten later by the family. Leftovers at other places, such as restaurants, may be taken home for later consumption by the purchaser of the food or left behind to be discarded by the workers. Non-edible leftovers are thrown away and dumped into landfills because they are not easily recyclable. Composting is another good method for disposing of excess food, but the best method, of course, would be to avoid making any waste at all. This ideal is not fully attainable, because invariably something will always be left over that needs to be discarded, but attempting to prevent waste buildup remains very important. If everyone consciously tried to avoid generating unnecessary waste, the world would ultimately benefit (Solid Waste Management, 2007). Yard trimmings also pose a significant source of waste. Typically they include floral and tree trimmings, leaves, grass, brush, and weeds. Because prevention is not entirely possible, the next best thing is to minimize waste. If generating some waste cannot be avoided, the most environmentally friendly option would once again be composting. It really is not necessary to dump yard trimmings in landfills when they could so easily be composted (Hettiaratchi, 2010).

Metal is the next most commonly disposed-of waste. Recycling metal is valuable because metals can be recycled without losing any of their properties. This ability to be indefinitely recycled is a unique advantage that metal has over many other commonly recycled materials, such as paper. The unfortunate fact is that so much metal is thrown away instead of recycled. Every month, the entire American commercial air fleet could be rebuilt with the amount of aluminum thrown away by Americans. Recycling even a single aluminum can saves enough energy to run a television for three hours. Every ton of recycled steel saves forty pounds of limestone, 1,000 pounds of coal, and 2,500 pounds of iron ore, yet Americans throw out enough iron and steel to supply U.S. automakers continuously. With metal, nothing is more important than making sure it is recycled and not collected in landfills. If a steel mill recycled all of its scrap metal, all related water and air pollution would be reduced by approximately 70% (Fricke, 2010).

Textiles also make up a large part of commonly disposed waste. Clothing is the biggest category of textiles that needs to be recycled and comprises different materials such as cotton, wool, burlap, polyester, foam, nylon, and leather. There are other types of waste in the textile category besides clothing, though, such as carpets, rags, wipers, footwear, bags, and other synthetic fibers. The most widespread way to reuse textiles is to donate used clothing to charities. Although a good method, reusing clothing is not considered recycling because the waste eventually reenters the system and usually ends up in landfills.
Damaged textiles are usually converted into industrial wiping cloths. To recycle textiles it is necessary to have them shredded into shoddy fibers and blended with certain other fibers depending on what it will be recycled to later. This new mixture is then carded and cleaned so it is ready to be spun or woven (Jorgensen, 2010). Wood recycling is the greenest form of timber production. It is extremely common for timber to be reused because it is a popular image of being environmental friendliness. Glass is also a popular image of being green because it is so commonly recycled along with paper and plastic. Bottles and light bulbs are the two most abundant types of glass that are recycled. Every metric ton of waste glass that is recycled saves 315 kilograms of carbon dioxide from being released into the atmosphere. To recycle glass it is crushed and then remelted. Glass in this form is known as cullet (Solid Waste Management, 2007). Currently many different methods of disposing waste are employed. The most common method, yet least environmentally friendly is the use of landfills. Burying waste is such a wide spread way to manage it because it is so cheap and easy. An adverse effect of this is the development of gas generally comprising methane and carbon dioxide that is produced by organic waste when it breaks down anaerobically. The gas emitted by landfills generates odor problems, kills vegetation on the surface, and is a greenhouse gas. Some waste may also be blown by the wind out of the landfill and litter the surrounding area. Landfills sometimes also secrete liquid leachate, which is when water percolates the waste pile and encourages the decomposition of bacteria and fungi that use the available oxygen in their environment. This
leachate can potentially seep into any nearby body of water, where it encourages an overgrowth of sewage fungus and can lead to an accumulation of toxins that harms many organisms. Leaks are actually quite common: 82% of landfills leak, and 41% of those leaks are larger than one square foot. So despite being the most common waste management tactic, landfills are not good for the environment (Solid Waste Management, 2007). Another common form of disposal is incineration, in which solid organic wastes are combusted to convert them to gaseous products and residue. This process does not eliminate the waste, but it reduces its volume to 20 or 30% of the original. Incineration is useful because it is a practical way to dispose of solid, liquid, gaseous, or even hazardous wastes, and it can be carried out both by individuals and on a larger scale. Countries that do not have the space for landfills tend to rely on incineration, sometimes in waste-to-energy facilities that burn waste to generate heat, steam, or electricity. Problems can arise because incinerator stacks occasionally release gaseous emissions, and incineration can produce dioxins, furans, and polycyclic aromatic hydrocarbons (PAHs), organic compounds with serious consequences for the environment and human health. Exposure to these compounds may cause children to be born with problems including asthma, low IQ, and heart malformations, and there is some speculation that it is linked to cancer (Rubio, 2011). In recycling, waste is collected and reused as opposed to being collected and buried. The materials that make up the recycled items are reprocessed into new products. Items for recycling are collected separately from other refuse in specified bins, generally separated into glass, plastic, or paper. 
However, other materials, such as electronics, can also be recycled. These are more difficult to recycle because they are complex objects and must be dismantled before their different parts can be processed. Not all materials can be recycled in every city or country; it depends on the recycling programs established there (Taylor, 2002). Another form of recycling is known as biological reprocessing. Any organic material found in nature can be composted: the matter is simply allowed to decompose into organic material that can be reused as compost or mulch for many purposes, mainly agricultural or landscaping. Gas is released during this process and can be captured. Methane is the most common gas released during decomposition and can be used to generate electricity and heat in a power station through a process known as cogeneration. The main reason to compost is that it speeds up the natural decomposition rate of the organic matter and overall makes waste easier to manage. Biological reprocessing can be either aerobic or anaerobic; anaerobic digestion is often preferred because the methane it yields can be captured and put to use. This method of waste management is considered more environmentally effective than landfilling or incineration because it produces a useful gas that can substitute for fossil fuels (Rapport, 2011).

This chart illustrates the management of MSW in the United States in 2009.

There are many ways to recover energy besides biological reprocessing. Most of these are thermal technologies that involve heating waste in sealed, oxygen-free chambers under high pressure. The process of pyrolysis converts the original material into solids, liquids, and gases. The solid residue is char, which can be refined into products such as activated carbon, an excellent adsorbent because of its large surface area and high porosity. The liquid and gas byproducts of pyrolysis can also be refined into chemical products, but they are more likely to be burned to produce energy (Fricke, 2010). Gasification and plasma gasification are also becoming more common ways to convert waste to energy. In these processes, organic materials are converted directly into synthetic gas, composed mostly of hydrogen and carbon monoxide. The synthetic gas is burned to produce electricity and steam, and the byproduct heat from these thermal technologies can be used in cogeneration as well. In both gasification processes, the solid byproduct, slag, can be converted to mineral wool for use as insulation. All of these energy recovery technologies have a promising position in a future where waste is recycled back into energy instead of filling holes in the ground (Dave, 2010). Over 251 million tons of solid waste were generated in 2009 alone, and only about 33% of the waste gathered was recycled in any way; most of the recycled material, moreover, was paper. As more plastic engulfs landfills, less area can be devoted to productive projects, and dumped plastic begins to spill into the oceans, drastically harming their ecosystems. 
Already in the middle of the Atlantic Ocean there exists an enormous accumulation of plastics and other refuse trapped in the powerful ocean currents. This amalgam of trash contains millions upon millions of pounds of refuse that penetrates deep into the body of the ocean, damaging its ecosystem. Thermal depolymerization (TDP) was developed not only to dispose of plastic waste but also to convert it back into useful post-consumer crude oil. Because of the heating used in the process, TDP completely breaks down the organic toxins inherent in the materials being processed.

The process used in TDP imitates the processes that produce fossil fuels within the earth. Material to be processed is ground into small particles and dampened with water. Next, the material is heated to 250 degrees Celsius in a pressure cooker for approximately fifteen minutes, and then the water is boiled off. The heating separates the material into hydrocarbons and disposable minerals; following this separation, the hydrocarbons are heated to roughly double the original temperature and then distilled into usable fuels (Lemley, 2003).
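The staged heating described above lends itself to a small bookkeeping sketch. The temperatures and the 15% plant-energy figure come from this chapter's account of TDP; the function and constant names below are illustrative, not part of any published process specification.

```python
def tdp_net_fuel_energy(fuel_energy_mj, parasitic_fraction=0.15):
    """Energy left over after roughly 15% of the fuel output is
    fed back to run the recycling plant itself."""
    return fuel_energy_mj * (1 - parasitic_fraction)

# Stage temperatures from the chapter: a first heating at 250 C for
# about fifteen minutes, then the separated hydrocarbons are heated
# to roughly double that temperature before distillation.
FIRST_STAGE_C = 250
SECOND_STAGE_C = 2 * FIRST_STAGE_C  # about 500 C

net = tdp_net_fuel_energy(1000.0)  # about 850 MJ usable per 1000 MJ produced
```

The point of the sketch is simply that the process is a net energy producer: the parasitic load is small relative to the fuel output.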

In the process of TDP conversion, plastic or biomass is fed into a grinder (a), heated in the first reactor (b), rapidly dried in a flash vessel (c), heated again in the second reactor (d), distilled into usable fuel in the distiller (e), and then stored in tanks (f).

Only about 15% of the energy in the fuel output of the TDP process needs to be used to power the recycling plant itself; the remaining 85% of the fuel can be used in other fields (Lemley, 2003). Instead of outputting fuel for use elsewhere, monomer recycling decomposes recycled plastics into their base components. The process begins by separating the plastics into their respective groups, such as PET, PVC, or high-density polyethylene (HDPE). Using chemicals specific to each category, the plastics then undergo a process that is the inverse of the method by which they were produced. Once the plastics are decomposed into their most basic components, the post-consumer materials are shipped to companies that repurpose them into new plastics. Though the process is an excellent example of reuse, disposal of the chemicals used to break down the plastics can pose a real danger to the environment as well as to humans. Whereas thermal depolymerization relies on nothing more than heating, cooling, and pressure, the chemicals used to process plastics in monomer recycling create challenges in properly discarding excess waste. The environmental friendliness of monomer recycling therefore rests entirely on the chemicals involved and how they are handled. As traditional forms of energy peter out, energy recovery becomes increasingly important. Multifarious technologies are utilized to harness the residual energy of waste. The solid waste management industry, which formerly focused on containment, now attempts to recycle and reclaim resources and energy alike. 
Increasing energy demands bolster the technological advances that support energy reclamation.

Bibliography

Allbritton, M. (n.d.). Scrap tire recycling technology. Retrieved April 15, 2011, from http://www.ehow.com/about_5628399_scrap-tire-recycling-technology.html

Analyses of the recycling potential of medical plastic wastes. (2002). Waste Management, 461-470.

Battery Council International. (2010). Battery recycling, lead acid recycling safety flyer. Retrieved April 15, 2011, from http://www.batterycouncil.org/LeadAcidBatteries/BatteryRecycling/tabid/71/Default.aspx

Best practices in PET recycling. (1997). Retrieved April 16, 2011, from NAPCOR.com

Dave, P.N. (2010). Plasma pyrolysis and gasification of waste: a review. Journal of Scientific & Industrial Research, 69(3), 177.

Fricke, K. (2010). Energy efficiency of substance and energy recovery of selected waste fractions. Waste Management, 31(4).

Fun facts about PET. (n.d.). Retrieved April 16, 2011, from NAPCOR.com

Guffey, D. F., & Barbour, A. F. (1994). Patent No. 5753086. USA.

Hamilton, T. (2009). Lithium battery recycling gets a boost. Retrieved April 16, 2011, from http://www.technologyreview.com

Hanlon, M. (2006). Earth first - promising new tire recycling technology. Retrieved April 14, 2011, from http://www.gizmag.com/go/5989/

Hettiaratchi, J.P.A. (2010). Sustainable management of household solid waste. International Journal of Environment and Waste Management, 6(1), 96.

Jantzen, C.M., Pickett, J.B., & Schumacher, R.F. (n.d.). Mining industry waste remediated for recycle by vitrification. Retrieved April 15, 2011, from http://sti.srs.gov/fulltext/ms2000195/ms2000195.html

Jorgensen, S. (2010). A dynamic game of waste management. Journal of Economic Dynamics and Control, 34(2), 258-265.

Lemley, B. (2003). Anything into oil. Retrieved April 16, 2011.

Lots of action in plastics recycling. (2011). Plastics Technology, 4.

Moustakas, K. (2005). Demonstration plasma gasification/vitrification system for effective hazardous waste treatment. Journal of Hazardous Materials, 123, 120-126.

Moustakas, K. (2008). Analysis of results from the operation of a pilot plasma gasification/vitrification unit for optimizing its performance. Journal of Hazardous Materials, 151, 473-480.

New Hampshire Department of Environmental Services. (2008). Environmental fact sheet: open burning of residential trash. Retrieved April 13, 2011, from http://des.nh.gov/organization/commissioner/pip/factsheets/ard/documents/ard-33.pdf

New Hampshire Department of Environmental Services. (2009). Household hazardous waste. Retrieved April 13, 2011, from http://des.nh.gov/organization/commissioner/pip/factsheets/hw/documents/hw-3.pdf

Oak Ridge National Laboratory. (2004). Biomass and alternative methane fuels (BAMF) Super ESPC: program fact sheet. Retrieved April 16, 2011, from http://www1.eere.energy.gov/femp/pdfs/bamf_wastewater.pdf

Plastic incineration versus recycling: a comparison of energy and landfill cost savings. (1996). Journal of Hazardous Materials, 295-302.

Rapport, J. (2011). Anaerobic phased solids digester technology innovation for waste to energy conversion. Energy Special Issue, 35(3).

Recycling of non-bottle rigid plastics. (2011, February 16). States News Service.

Recycling. (2011). In Encyclopedia Britannica. Retrieved March 24, 2011, from Encyclopedia Britannica Online.

Rubio, S.J. (2011). Application of incineration in the destruction of organic wastes. Journal of Hazardous Materials, 186(1).

Scientific American. (2010). One person's trash is another's technology: recycling or donating discarded electronic equipment helps reduce e-waste pollution [electronic version]. Scientific American.

Solid waste management. (2011). In Encyclopaedia Britannica. Retrieved March 24, 2011, from http://www.britannica.com/

Taylor, B. (2002). Two shades of green recycling. Recycling Today, 48(4).

U.S. Environmental Protection Agency. (1997). Evaluation of emissions from the open burning of household waste in barrels. Retrieved April 17, 2011, from http://www.p2pays.org/ref/10/09845.pdf

U.S. Environmental Protection Agency. (2010). Municipal solid waste generation, recycling, and disposal in the United States: facts and figures for 2009. Retrieved April 14, 2011, from http://www.epa.gov/osw/nonhaz/municipal/pubs/msw2009-fs.pdf

Water Environment Research Foundation. (n.d.). Wastewater sludge: a new resource for alternative energy and resource recovery [fact sheet]. Retrieved April 16, 2011, from http://www.werf.org/am/template.cfm?section=Search_Research_and_Knowledge_Areas&template=/cm/ContentDisplay.cfm&ContentID=7008

Wills, A. (2010, May 24). The numbers on plastics. Earth911.com. Retrieved June 2010, from http://earth911.com/news/2010/05/24/the-numbers-on-plastics/

Chapter 9 Biofuels
Alexander Witt, Kris Skende, and Andrei Cimpan

Introduction
Biofuel is any fuel that is derived directly from biomass, which includes plant material and animal waste. There are several types of biofuels; some, such as wood, can be used directly in their raw forms to generate heat through combustion. This thermal energy can then be applied to turning generator turbines in power plants to yield electricity for domestic, commercial, and industrial use. A multitude of existing power facilities implement this same technique, combusting organic material such as grasses, wood, and other types of biomass. Currently, research and development leading to viable sources of liquid biofuel are of particular interest, a direct consequence of much existing infrastructure being readily compatible with ethanol in the consumer market; most significantly, both contemporary private and commercial modes of transportation already operate successfully on petroleum-ethanol blends. The biofuel most frequently manufactured commercially is ethanol (ethyl alcohol), a common byproduct of the fermentation of starches or sugars. Brazil and the United States dominate this industry of ethanol bioconversion and are among the leading annual producers of this liquid fuel. In the United States, ethanol biofuel is produced primarily from crops such as corn (maize) grain, and it is typically combined with gasoline to yield a blend that is 10% ethanol, known colloquially as gasohol; in Brazil, ethanol is derived primarily from sugarcane feedstock, and it is often used as 100%-ethanol fuel or in gasoline blends containing 85% ethanol. 
Other than ethanol, the second most common liquid biofuel is biodiesel, which is extracted primarily from oily plants (such as soybean or oil palm) and, to a lesser extent, from other oily sources (such as waste cooking fat from restaurant deep-frying). Biodiesel, which has found great acceptance in Europe, is generally combusted in diesel engines, usually blended in varying proportions with petroleum-based diesel fuel. Other widely manufactured biofuel products include methane gas (which can be derived from the decomposition of biomass in anaerobic environments), methanol, butanol, and dimethyl ether. At present, intensified efforts are being made to develop methods capable of yielding ethanol from biomass containing high concentrations of cellulose. This cellulosic ethanol could provide an abundant fuel supply by utilizing low-value materials including wood chips, grasses, crop residues, and municipal waste. Once these procedures are
discovered, the blends of commercially used biofuels will undoubtedly be adjusted as the technology develops; in the meantime, the range of options already known could furnish power for transportation, heating, cooling, and electricity. (Scott Hess)

Economic and Environmental Considerations
In evaluating the economic benefits of biofuels, the energy required to produce them has to be considered. For example, the process of cultivating corn to produce ethanol consumes fossil fuels in farming equipment, fertilizer manufacturing, transportation, and ethanol distillation. In this respect, ethanol manufactured from corn represents a relatively inefficient means of fuel production; the net energy output from sugarcane is greater, and that from cellulosic ethanol could be greater still.
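The idea of net energy can be made concrete with a toy calculation. The ratios below are commonly cited approximate figures for energy returned on energy invested, not values taken from this chapter, and the function name is illustrative.

```python
def energy_return(energy_out, energy_in):
    """Energy returned on energy invested (EROI) and the net surplus
    left after paying the fossil-energy production costs."""
    return energy_out / energy_in, energy_out - energy_in

# Commonly cited approximations (illustrative only): each unit of fossil
# energy invested yields roughly 1.3 units of corn ethanol but roughly
# 8 units of sugarcane ethanol.
corn_eroi, corn_surplus = energy_return(1.3, 1.0)  # thin net margin
cane_eroi, cane_surplus = energy_return(8.0, 1.0)  # much larger payoff
```

The contrast explains the chapter's claim: corn ethanol barely pays back the fossil energy spent growing and distilling it, while sugarcane returns several times the investment.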

These charts compare biofuel use among the top biofuel-consuming nations.

Biofuels can provide environmental benefits, but depending on how they are manufactured, they can also have serious environmental drawbacks. As a renewable energy source, plant-based biofuels in principle make little net contribution to global warming and climate change: the carbon dioxide (a major greenhouse gas) that enters the air during combustion was removed from the atmosphere earlier as the growing plants engaged in photosynthesis. Such a material is said to be carbon neutral. In practice, however, the industrial production of agricultural biofuels can result in additional emissions of greenhouse gases that may offset the benefits of using a renewable fuel. These emissions include carbon dioxide from the burning of fossil fuels during the production process and nitrous oxide from soil that has been treated with nitrogen fertilizer. In this regard, cellulosic biomass is considered more beneficial. Land use is also a major factor in evaluating the benefits of biofuels. Corn and soybeans are important food crops as well as feedstocks, and their use in fuel production can therefore affect the economics
governing the price and availability of produce. By 2007 about one-fifth of the corn output in the United States was allocated to the production of biofuel, and one study showed that even if all U.S. land dedicated to corn cultivation were used to produce ethanol, it could replace just 12% of gasoline consumption. In addition, crops grown for biofuel can compete with natural habitats. For instance, emphasis on ethanol derived from corn is shifting grasslands and brushlands to corn monocultures, and emphasis on biodiesel is driving the destruction of ancient tropical forests to make way for palm plantations. This loss of natural habitat can change the hydrology, increase erosion, and generally reduce the biodiversity of wildlife areas. The clearing of land can also result in the sudden release of large amounts of carbon dioxide as the plant matter is burned or allowed to decay. Some of the disadvantages of biofuels apply mainly to low-diversity biofuel sources, including corn, soybeans, sugarcane, and oil palms, which are traditional agricultural crops. One alternative involves the use of highly diverse mixtures of species, with the North American tallgrass prairie as a specific example. Converting degraded agricultural land that is out of production to such high-diversity biofuel sources could theoretically increase wildlife area, reduce erosion, cleanse waterborne pollutants, store carbon dioxide from the air as carbon compounds in the soil, and ultimately restore fertility to degraded lands. These biofuels could be either burned directly to generate electricity or converted to liquid fuels as technologies develop. The proper way to grow biofuels so that they serve all needs simultaneously will continue to be a matter of much experimentation and debate, but the rapid growth in biofuel production is likely to continue. 
In the European Union, biofuels were planned to account for 5.75% of transport fuels by 2010 and 10% by 2020. In the United States, the Energy Independence and Security Act of 2007 mandated the use of 136 billion liters (36 billion gallons) of biofuels annually by 2022, more than a six-fold increase over 2006 production levels. The legislation also requires, with certain stipulations, that 79 billion liters (21 billion gallons) of the total be biofuels other than corn-derived ethanol, and it continued certain government subsidies and tax incentives for biofuel production. Additionally, the technology for producing cellulosic ethanol is being developed at a number of pioneering plants in the United States. In combination with an emerging technology called carbon capture and storage, the process of generating and using biofuels may be capable of perpetually removing carbon dioxide from the atmosphere. Under this vision, biofuel crops would remove carbon dioxide from the air as they grow, and energy facilities would capture the carbon dioxide emitted as biofuels are burned to yield power. Captured carbon dioxide could be sequestered (stored) in long-term repositories such as geologic formations beneath the land, in sediments of the deep ocean, or conceivably as solids such as carbonates (Biofuels, 2008).
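The paired gallon and liter figures in the mandate are straightforward unit conversions at roughly 3.785 liters per U.S. gallon; a quick check of the chapter's numbers (the function name is illustrative):

```python
LITERS_PER_GALLON = 3.785  # U.S. liquid gallon, approximate

def billion_gallons_to_billion_liters(billion_gallons):
    """Convert the mandate's gallon figures to liters."""
    return billion_gallons * LITERS_PER_GALLON

total_mandate = billion_gallons_to_billion_liters(36)   # about 136 billion liters
advanced_share = billion_gallons_to_billion_liters(21)  # about 79 billion liters
```

Both results round to the liter figures quoted in the legislation summary above.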

The diagram above illustrates the lifecycle of typically used biofuels.

History
The concept of biofuels is surprisingly old. Rudolf Diesel, whose invention now bears his name, envisioned vegetable oil as a fuel source for his engine, and much of his early work revolved around the use of biofuel. In 1900, for example, at the World Exhibition in Paris, France, Diesel demonstrated his engine by running it on peanut oil. Similarly, Henry Ford expected his Model T to run on ethanol, a corn product. Eventually, in both Diesel's and Ford's cases, petroleum proved to be the most practical fuel source, based on supply, price, and efficiency, among other factors. Though it was not common practice, vegetable oils were also used for diesel fuel during the 1930s and 1940s. In the 1970s and 1980s, the idea of using biofuels was revisited in the United States. One of the most important events was the passage of the Clean Air Act in 1970, which allowed the newly formed Environmental Protection Agency (EPA) to more closely regulate emissions standards for pollutants like sulfur dioxides, carbon monoxide, ozone, and nitrogen oxides (NOx). (Scott Hess)

Solid Biofuels
Solid biofuels are the most commonly used alternative energy sources around the world. They are derived from biomass, such as wood and hay, and are typically burned to extract energy. Raw materials including firewood, tree bark, wood chips, biomass from demolition waste, straw, and biomass from other crops are processed into briquettes or pellets and sold commercially for fuel. Wood can also be dried and broken down into a powder. Once processed, these products undergo combustion and release energy in the form of heat. Another important solid biofuel is charcoal, which is derived by thermochemically transforming biomass using pyrolysis.

Charcoal has much less energy content than the original biomass, but its cleaner combustion makes it a beneficial product, and many commercial facilities utilize it as an effective biofuel. Developing countries benefit greatly from the cost efficiency of solid biofuels while reducing the pollution caused by fossil fuels. Overall, solid biomass is an effective biofuel that offers a renewable energy source; although its energy is extracted by combustion, the processed biomaterials burn much cleaner than fossil fuels (Johnsen, n.d.).

Bioalcohol and Bioethanol
The most utilized biofuels are bioalcohols, which are derived from crop material known as feedstock. Bioethanol, the most popular bioalcohol, is produced by the fermentation of farm products including corn, wheat, and sugar cane, crops whose feedstocks are high in starch and cellulose. The biomass undergoes pretreatment and enzymatic saccharification; the enzyme-rich biomass is then fermented with yeast or bacteria that catalyze ethanol production and is finally distilled to yield the bioethanol product. This liquid fuel is renewable and has reduced emissions compared to fossil fuels. Bioethanol is commonly used as a fuel additive, but some vehicles with modified spark-ignition engines are able to run on it in pure form. A car operating on this fuel emits much lower levels of carbon monoxide, hydrocarbons, carcinogens, and sulfur oxides. The energy capacity and performance of ethanol are comparable to standard fossil fuels: although it contains only about 66 percent of the energy of gasoline, the bioalcohol benefits from a high octane rating (Nigam & Singh, 2010). Because it is an environmentally conscious alternative to standard automotive fuel, countries such as Brazil, the United States, China, and India all utilize bioethanol. The Brazilian variety, which is manufactured from sugar cane, has a favorable energy balance and produces fewer emissions than bioethanol from other sources; ethanol use in Brazil is increasing at staggering rates and accounts for roughly 40% of the country's automotive fuel consumption. The United States has enacted a law mandating the production of 36 billion gallons of bioethanol by 2022; of that total, 15 billion gallons will come from corn and 21 billion gallons from cellulosic feedstock. Globally, 51 billion liters of the bioalcohol have been produced and used to fuel vehicles in a single year. 
Nations are currently discussing new policies and are planning expansions that would increase ethanol consumption in the years to come (Nigam & Singh, 2010).
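The 66-percent energy figure above implies a simple volume penalty: more ethanol must be burned to deliver the same energy as gasoline. A minimal sketch of that arithmetic (the 0.66 ratio is the chapter's figure; the function name is illustrative):

```python
def ethanol_gallons_for_same_energy(gasoline_gallons, energy_ratio=0.66):
    """Gallons of ethanol needed to deliver the energy of the given
    gasoline volume, assuming ethanol carries 66% as much energy."""
    return gasoline_gallons / energy_ratio

needed = ethanol_gallons_for_same_energy(10)  # about 15.2 gallons of ethanol
```

This is why vehicles running on high-ethanol blends see lower fuel economy per gallon even though engine performance is comparable.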

A commercial biodiesel pump.

Biodiesel
Biodiesel is another important biofuel, most commonly utilized in Europe, where a majority of vehicles operate with diesel engines. The fuel is produced by processing vegetable oil and blending it into diesel fuel, and most diesel engines can operate on the blend normally. The vegetable oil is first run through a transesterification process, which converts its triglycerides into diglycerides, then monoglycerides, and finally glycerol together with alkyl esters, the esters being the biodiesel itself (Leung et al., 2009). Biodiesel can be made from many varieties of vegetable oil, including palm oil, soybean oil, sunflower oil, coconut oil, rapeseed oil, and tung oil. Recent research has determined that waste products, such as used vegetable fats and cooking oil waste, can be used to produce a lower-quality diesel additive. Generally, vegetable oil that has undergone transesterification is mixed with diesel fuel at a 4:1 ratio and used to operate most diesel-powered engines (Nigam & Singh, 2010). This biofuel is renewable and can withstand high temperatures, but its high viscosity reduces its efficiency. The addition of biodiesel to standard diesel fuel reduces the emissions associated with diesel, though some tests have shown that the vegetable oil can be detrimental to a vehicle's engine. Overall, this alternative fuel has spurred beneficial research on diesel as a cleaner fuel (Nigam & Singh, 2010).
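The 4:1 mixing ratio mentioned above works out to a 20% biodiesel fraction by volume, the blend commonly sold as B20. A small sketch of the ratio arithmetic (the function name is illustrative):

```python
def biodiesel_fraction(diesel_parts, biodiesel_parts):
    """Volume fraction of biodiesel in a diesel/biodiesel blend."""
    return biodiesel_parts / (diesel_parts + biodiesel_parts)

# The chapter's 4:1 diesel-to-biodiesel mix is 1/(4+1) = 20% biodiesel.
fraction = biodiesel_fraction(4, 1)  # 0.2
```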

Lipid Derived Biofuels
Biofuels can also be generated from lipid sources. There are two lipid-based fuels, pure plant oil (PPO) and biodiesel, and many feedstock options exist for producing both. Besides dedicated oilseed crops such as rapeseed and soybean, microalgae, animal fats, and waste oil provide viable feedstocks for fuel production. Lipid feedstock sources can be subdivided into palm fruits, algae, seeds, and waste oil. Although the productivity of palm fruit is the highest, the most common feedstock sources for PPO and biodiesel production are seeds from various plants (Rutz & Jannsen, 2007). The selection of a dedicated feedstock is predetermined by agricultural, geographical, and climatic conditions, but it must also be considered that different feedstock types are characterized by different properties. For example, the oil saturation and the fatty acid content of different oilseed species vary considerably. Biodiesel from highly saturated oils is characterized by superior oxidative stability and a high cetane number, but performs poorly at low temperatures. Therefore, pure plant oil (PPO) with a high degree of saturation is more suitable as a feedstock in warmer climates (Rutz & Jannsen, 2007).

Biogas
Biogas, a methane-based alternative fuel, is gaining popularity as a legitimate biofuel. Biogas is produced by anaerobic digestion, a process in which biomass is broken down in the absence of oxygen. Anaerobic digestion involves bacterial treatment that produces methane and carbon dioxide. First, the organic materials are subjected to bacterial hydrolysis to break down insoluble polymers. Then, acidogenic and acetogenic bacteria convert
the products of bacterial hydrolysis into acetic acid, ammonia, hydrogen, and carbon dioxide. Methanogens process the resulting materials into a mix of approximately 60% methane and 40% carbon dioxide. Scientists have developed a method to purify the gas and increase methane levels to over 95%: the raw biogas is passed through a water scrubber and compressed. Biomass such as animal excreta, food industry waste, sewage sludge, and municipal green waste has been used to produce the biofuel. Biogas is currently being used as an energy source in the United States and Europe, but many question its efficiency and environmental effects in comparison with fossil fuels (Jury et al., 2009).

Biogas pipes are used to store the fuel once it has been processed.

A group of scientists from Luxembourg designed a study that examined the effectiveness of biogas produced by monofermentation. The nation currently consumes more energy per capita than any other in the world; thus, to confront energy demand, it is turning to alternative energy to reduce emissions and increase environmental consciousness. Specifically, the scientists involved in the study wanted to curtail the energy and cost requirements of each step of biofuel production. The analysis spanned the entire life cycle: farming, digestion, purification and upgrading, and transportation. The agricultural assessment included seed production and transport, reduced tillage, harvesting, chemical fertilizer use, and soil emissions. In addition, the crops were treated with phosphorus and potassium fertilizers to grow properly. The harvested biomass was prepared as silage and digested by monofermentation. The digestate produced could be reused to treat the farmland on which the crops were grown, and the methane produced during digestion was then purified and compressed. Using current technology, approximately one to four percent of the pure methane fuel is lost during production. Transportation and the energy used during farming and production were all considered in the analysis. The overall study revealed that the production of biogas was more expensive because of the energy required to manufacture the product. The researchers concluded that biogas has the potential to be an efficient fuel, but new production methods must be developed. The current benefits of the biofuel are its lower greenhouse gas emissions and its effectiveness as an alternative to nonrenewable fuels (Jury et al., 2009).
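The composition figures in this section (roughly 60% methane in raw biogas, upgraded to over 95%, with one to four percent of the methane lost in processing) can be combined into a simple yield estimate. The methane heating value used below, about 35.8 MJ per cubic meter, is a commonly cited approximation and not a figure from the text; the function names are illustrative.

```python
METHANE_LHV_MJ_PER_M3 = 35.8  # approximate lower heating value of methane

def biogas_energy_mj(volume_m3, methane_fraction=0.60):
    """Energy content of raw biogas, counting only its methane share."""
    return volume_m3 * methane_fraction * METHANE_LHV_MJ_PER_M3

def methane_recovered_m3(raw_volume_m3, methane_fraction=0.60, loss=0.04):
    """Methane left after upgrading, assuming the worst-case 4% loss."""
    return raw_volume_m3 * methane_fraction * (1 - loss)

raw_energy = biogas_energy_mj(100)     # roughly 2150 MJ in 100 m3 of raw biogas
recovered = methane_recovered_m3(100)  # roughly 57.6 m3 of methane retained
```

The point is that roughly 40% of the raw gas volume carries no fuel value, which is why scrubbing and compression are worthwhile despite the small methane loss.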

Syngas
Syngas, or synthetic gas, is a biomass-derived fuel alternative produced by fermenting gasified lignocellulosic feedstock. Because syngas is derived from biomass, it is a renewable fuel, and it offers a competitive route to ethanol production (Zerbe, 2009). It is produced by a thermochemical pathway in which the biomass undergoes gasification followed by fermentation. Because syngas is produced from non-food sources, it is more environmentally sound than bioethanol produced from corn starch, cassava starch, or sugar cane; however, cost analysis has determined that bioethanol engineered using enzymatic hydrolysis is more cost efficient than the syngas-based variant (Khanal & Munasinghe, 2010). The methodology of engineering biofuels from syngas was first studied in the 1920s, and improvements are still being made today. In 1987, Clostridium ljungdahlii, a bacterium with the ability to ferment carbon dioxide and hydrogen into ethanol and acetic acid, was discovered. This finding initiated research that led to the current process of producing bioethanol from syngas. To produce syngas, the lignocellulosic feedstock is first endothermically gasified at high temperatures of 750-800 °C. Before it can be fermented, the excessively hot gaseous product is cooled in a heat recovery system, which collects the released heat energy. The syngas produced by gasification often contains debilitating pollutants such as tar and solid particles. To remove the particulate matter, the gasified biomass is passed through a filtration process; if undesired materials are not properly filtered, microbial catalysts can be inhibited and the efficiency of the fermentation process reduced. The gaseous mix is then catalyzed by microbacteria to produce the final ethanol product. Mesophilic microorganisms such as Clostridium aceticum, Acetobacterium woodii, C. carboxidivorans, and C. ljungdahlii are used to catalyze the fermentation. 
In an anaerobic environment, these organisms use the acetyl-CoA pathway to convert the syngas into ethanol, acetic acid, and butanol. The efficiency of the produced fuel can be affected by the type of reactor used to ferment the syngas. Fermentation in batch reactors involves a closed liquid system into which the gaseous substrate is supplied continuously. In continuous stirred-tank reactors, a liquid nutrient flow is supplied as the gaseous substrate is continuously injected. Other machines used to produce ethanol include bubble column reactors, monolithic biofilm reactors, trickle-bed reactors, microbubble dispersion stirred-tank reactors, and membrane-based systems. Fermentation devices differ in their energy consumption and syngas production methods. Temperature is critical to the successful production of ethanol; the ideal range for mesophilic microorganisms is 37 to 40 °C. Other critical factors include the growth media used, the types of microbes, and the pH of the microbial culture. Syngas is currently a viable alternative fuel, but it is not the most economical route to solving the world's impending energy crisis (Khanal & Munasinghe, 2010).
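As a small illustration of the process conditions described above, the sketch below encodes the two temperature windows (gasification at roughly 750-800 °C, mesophilic fermentation at 37-40 °C) as simple range checks. The ranges come from the text; the checking logic itself is a hypothetical illustration, not part of any real control system.

```python
# Toy sketch of the two temperature windows described above:
# gasification at roughly 750-800 C and mesophilic fermentation at
# 37-40 C. The ranges come from the text; the checking logic itself
# is purely illustrative.

GASIFICATION_RANGE_C = (750.0, 800.0)
FERMENTATION_RANGE_C = (37.0, 40.0)

def in_window(temp_c: float, window: tuple) -> bool:
    """True if a temperature falls inside an inclusive window."""
    low, high = window
    return low <= temp_c <= high

print(in_window(775.0, GASIFICATION_RANGE_C))  # True
print(in_window(30.0, FERMENTATION_RANGE_C))   # False: too cool for mesophiles
```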

Advanced Biofuels

Advanced biofuels are liquid fuels derived from any biomass that is cultivated specifically to provide a renewable, alternative energy source. Examples of this biomass include feedstocks such as switchgrass, sorghum, straw, alfalfa, algae, and available forms of lignocellulose. The most notable difference between current biofuels and advanced biofuels is that rather than fermenting starches from corn grain, sugar cane, and beets, the plant fibers and structural polysaccharides are deconstructed to produce simple sugars (Himmel, Ding, Johnson, Adney, & Nimlos, 2007). Once produced, the biomass-derived saccharides of glucose, cellobiose, or other plant sugars are fermented by microbes into ethanol, biodiesel, hydrogen, methane, and other combustible compounds chemically similar to petroleum (Wang, Gao, Ren, Xu, & Liu, 2011).

Recalcitrant Biomass

In general, the decomposition of cellulose, an essential structural compound found in plant cell walls, yields more energy per kilogram of biomass than the more dilute, starch-filled cellular tissues of corn, soy, vegetables, and the wide variety of feedstocks currently employed in the mass production of alternative energy. In addition, lignocellulosic biomass (plant tissue consisting of hemicellulose, lignin, and cellulose) is abundant and can be converted into ethanol by fermentation; however, recalcitrance, inherent to the chemical structure of this biomass, has limited the commercialization of ethanol generated from dedicated bioenergy crops. In this light, the failure to establish a system of commercially competitive liquid hydrocarbon production stems from the use of cost-ineffective techniques to overcome the plant tissue's defenses against enzymatic and chemical attack (Fu, Mielenz, Xiao, Ge, & Hamilton, 2011).

Lignocellulose

This image illustrates the structural components (cellulose, lignin, and hemicellulose) of lignocellulosic biomass.

Cellulase

This image illustrates the chemical structure of cellulase, an enzyme typically used to deconstruct cellulose into fermentable sugars (http://sci.waikato.ac.nz).

A factor contributing to the expense of conventional methods of ethanol bioconversion is the thermochemical pretreatment of lignocellulosic biomass, a process conducted to reduce the level of recalcitrance by disassembling hemicellulose into simpler sugars and increasing the accessibility of cellulase enzymes to the appropriate substrate (Himmel, Ding, Johnson, Adney, & Nimlos, 2007). In fact, these treatments with heat, toxic chemicals, peroxides, and ammonia often represent 30% of the cost of advanced biofuel production (Lionetti, Francocci, Ferrari, Volpi, & Bellincampi, 2010).

Genetic Engineering and Biofuel Production

Recently investigated measures intended to bolster the efficiency of ethanol conversion involve the genetic modification of switchgrass, sorghum, canarygrass, and alfalfa. For instance, decreasing the lignin content of plant structures has been shown to increase the ability of saccharifying enzymes to degrade cellulose into fermentable sugars (Fu, Mielenz, Xiao, Ge, & Hamilton, 2011). Because lignin constitutes approximately 20% of switchgrass and most other cellular plant tissues, efficiently degrading this compound to produce energy has become a major priority (Yang, Kataeva, Hamilton-Brehm, Engle, & Tschaplinski, 2009). Furthermore, the negligible effect of decreased lignin content on cellulose structure suggests that genetic alteration of lignocellulosic biomass is a feasible way to facilitate ethanol bioconversion without negatively impacting theoretical energy yield. Genetic traits that have been tested and utilized in biofuel production often increase sugar release and ethanol yield while decreasing the recalcitrance of plant materials beyond the lignin found in plant cell walls. Aside from decreasing lignin in stem and leaf tissues, modifying plant cell walls to reduce de-methyl-esterified homogalacturonan (a chemical compound contributing to polymer strength in plant cell walls) increases the efficiency of converting lignocellulosic biomass into a viable energy source by enabling the enzymes that decompose cell walls and cellulose. Plant cell walls comprise a considerable percentage of lignocellulosic biomass and represent a relatively abundant material for conversion to ethanol and other liquid fuels.

Other than genetic modification, the use of bacteria such as Anaerocellum thermophilum and Shigella flexneri has been investigated as a more efficient replacement for the expensive process of chemical pretreatment. The use of these bacteria is recent and presents a viable replacement for, or supplement to, the white-rot Basidiomycetes fungi and Actinomycetes used in the saccharification of lignocellulose. This is an important advancement, considering that very few cultivated microorganisms can successfully degrade this biomass without chemical pretreatment (Yang et al., 2009). Given the high cost of converting biomass to sugar, the use of thermophilic bacteria to degrade biomass represents a practical option. Cellulose-degrading bacteria such as Clostridium thermocellum, along with hemicellulose-deconstructing bacteria, could potentially be applied as an alternative to expensive cellulase enzymes (Shaw, Podkaminer, Desai, Bardsley, & Rogers, 2008). In addition, anaerobic bacteria have demonstrated the ability to degrade untreated plant biomass along with crystalline cellulose and xylan. The genetic modification of improved strains of anaerobic bacteria through metabolic engineering has also become a priority of advanced biofuel research. Particular alterations of thermophilic bacteria introduce genes designed to produce ethanol at high yields while excluding others that produce organic acids. Enhancing the microorganisms involved in fermentation has been demonstrated as a means of furthering progress toward commercializing lignocellulosic biofuels. Thermoanaerobacterium saccharolyticum is one well-studied thermophilic anaerobic bacterium capable of fermenting xylan and cellulose (Shaw et al., 2008).
Algae and Biofuels

Aside from ethanol bioconversion of lignocellulosic biomass, algae have been considered for their extreme fuel efficiency and innate ability to exude natural oils (Francis, 2010). Recent results have indicated an algal yield of approximately 2,000 gallons of biofuel per acre, demonstrating greater efficiency than other biofuels currently in production (Francis, 2010). Appropriate species of algae are selected as cultivars for biofuel production by comparing growth rate, composition, and recalcitrance. Favorable composition in algae often includes an increased presence of lipids (triglycerides, the storage molecules for natural oils), carbohydrates (polysaccharides, or polymer chains that are deconstructed during fermentation), and proteins. The ability to thrive in specific climatic conditions is also considered (Francis, 2010).

The photograph above shows algae growing in a cultivation setup.

The high diversity and abundance of algal species make algal biofuel production an efficient and reliable means of cultivating advanced biofuel biomass, which can then be converted from cellulose to ethanol or extracted directly from the cellular tissue. Additionally, the fuel end products are diverse because more than one processing method can be applied; these products include biodiesel, ethanol, methane, hydrogen, jet fuel, biocrude, and more (Francis, 2010). Industrial methods currently in wide use include decreasing the osmotic pressure within microalgae or macroalgae cells to extract the oil, and pressing, which both extracts oils and degrades the recalcitrance of the remaining cellulose. Various species of algae have been determined to be viable options for biofuel production: macroalgae for their ability to yield high quantities of oil, and microalgae for their diminished production costs and cultivation requirements (Francis, 2010). The efficiency of algal biofuel cultivation is directly observed in the methods used to reduce its high water content. For instance, centrifugation, flocculation (the clumping together of suspended algae particles), microscreening (the use of fine filters to separate algae), pressing, and the use of chemical solvents in oil extraction do not depend on expensive enzymes or chemicals. Additionally, new methods such as cellulosic fermentation, gasification (for deriving biodiesel), and anaerobic digestion have been investigated as ways to exploit the potential of macroalgae (Francis, 2010).

The image above illustrates the process of growing algae and processing it into biofuel.
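The roughly 2,000 gallons-per-acre yield cited earlier (Francis, 2010) allows a quick back-of-the-envelope land-use estimate. The following sketch is illustrative only; the demand figure is a made-up example, not a value from the text.

```python
# Back-of-the-envelope land-use sketch using the ~2,000 gallons-per-acre
# algal yield cited in the text (Francis, 2010). The demand figure below
# is a made-up example.

ALGAL_YIELD_GAL_PER_ACRE = 2000.0  # approximate figure from the text

def acres_needed(demand_gal: float,
                 yield_gal_per_acre: float = ALGAL_YIELD_GAL_PER_ACRE) -> float:
    """Acres of algae cultivation needed to meet a fuel demand."""
    return demand_gal / yield_gal_per_acre

# Hypothetical demand of one million gallons of biofuel per year:
print(acres_needed(1_000_000))  # 500.0 acres
```

Estimates like this are why algae compare favorably with terrestrial feedstocks, which typically yield far fewer gallons per acre.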

Purifying or transformative processing of algal biofuel is largely unnecessary because of the readily apparent similarity in the molecular structures of algal biofuel, petroleum, and jet fuel (Francis, 2010). This has culminated in recent test flights conducted on passenger aircraft by aviation companies such as Boeing (Biello, 2009). Other benefits of algal biofuel and petroleum blends include an increase in efficiency over jet fuel alone (able to save millions of dollars a year in the aviation industry) and a decrease in freezing point (Biello, 2009).

Conclusion

In the current environmentally conscious society, renewable energy alternatives are crucial to the environment and the economy. Researchers are working to improve the efficiency of current biofuels, and new methods for extracting energy from varying biomasses would reduce the cost of current biofuels and curtail greenhouse gas emissions. Adding to currently utilized alternative fuel sources, scientists are developing methods that rely on algae as biomass. These second-generation biofuels have the potential to solve the international energy predicament. Another important requirement for the future is the development of a method that could provide developing countries with a relatively inexpensive fuel source. The world has ample potential to expand the biofuel market, and the correct legislation and technology might one day secure renewable energy for the global population (Biofuel, 2011).

Bibliography

Biello, D. (2009, January 7). Air algae: U.S. biofuel flight relies on weeds and pond scum. Scientific American. Retrieved from http://www.scientificamerican.com

Biofuel. (2011). In Encyclopædia Britannica. Retrieved March 23, 2011, from Encyclopædia Britannica Online: http://www.britannica.com/EBchecked/topic/967492/biofuel

Biofuels: The next great source of energy?: Year in review 2007. (2008). In Britannica Book of the Year, 2008. Retrieved from http://www.britannica.com/EBchecked/topic/1391035/Biofuels-The-Next-Great-Source-of-Energy

Cohen, K. (2010). The next phase of algae biofuels. ExxonMobil. Retrieved from http://www.exxonmobilperspectives.com/2010/07/14/the-next-phase-of-algae-biofuels/

Demirbas, A. (2008). Biodiesel: A realistic fuel alternative for diesel engines. Retrieved from http://gordonlibrary.wpi.edu/

Demirbas, A. (2008, April 10). Biofuels sources, biofuel policy, biofuel economy, and global biofuel projections. Energy Conversion and Management, 49, 2106-2116. Retrieved April 10, 2011, from Science Direct.

Francis, S. (2010). Algal biofuels. In U. Aswathanarayana (Ed.), Green Energy (pp. 137-147). CRC Press.

Fu, C., Mielenz, J. R., Xiao, X., Ge, Y., & Hamilton, C. Y. (2011). Genetic manipulation of lignin reduces recalcitrance and improves ethanol production from switchgrass. Proceedings of the National Academy of Sciences of the United States of America, 108(9). doi: 10.1073/pnas.1100310108

Hess, S. M. (2010). How biodiesel works. HowStuffWorks. Retrieved from http://auto.howstuffworks.com

Himmel, M. E., Ding, S. Y., Johnson, D. K., Adney, W. S., & Nimlos, M. R. (2007). Biomass recalcitrance: Engineering plants and enzymes for biofuels production. Science, 315(5813). doi: 10.1126/science.1137016

Howell, K. (2009). Is algae the biofuel of the future? Scientific American. Retrieved from http://www.scientificamerican.com

Hsu, J. (2010). U.S. government invests $78 million in algae biofuels research. Popular Science. Retrieved from http://www.popsci.com/

Johnsen, A. (n.d.). Solid biofuels. Retrieved April 7, 2011, from http://www.renewable.no/sitepageview.aspx?articleID=177

Jury, C., Benetto, E., Koster, D., Schmitt, B., & Welfring, J. (2009, October 29). Life cycle assessment of biogas production by the monofermentation of energy crops and injection into the natural gas grid. Biomass and Bioenergy, 34, 54-66. Retrieved April 10, 2011, from Science Direct.

Khanal, S. K., & Munasinghe, P. C. (2010, January). Biomass-derived syngas fermentation into biofuels: Opportunities and challenges. Bioresource Technology, 101, 5013-5022. Retrieved March 24, 2011, from Science Direct.

Leung, D. Y., Wu, X., & Leung, M. (2009, November 7). A review on biodiesel production using catalyzed transesterification. Applied Energy, 87, 1083-1095. Retrieved April 10, 2011, from Science Direct.

Lionetti, V., Francocci, F., Ferrari, S., Volpi, C., & Bellincampi, D. (2010). Engineering the cell wall by reducing de-methyl-esterified homogalacturonan improves saccharification of plant tissues for bioconversion. Proceedings of the National Academy of Sciences of the United States of America, 107(2). doi: 10.1073/pnas.0907549107

Low-level biodiesel blends. (2010). U.S. Department of Energy. Retrieved from http://www.afdc.energy.gov/afdc/fuels/biodiesel_blends.html

Mousdale, D. M. (2008). Chemistry, biochemistry, and microbiology of lignocellulosic biomass. In Biofuels: Biotechnology, Chemistry, and Sustainable Development (pp. 49-53). CRC Press.

Nigam, P. S., & Singh, A. (2010). Production of liquid biofuels from renewable resources. Progress in Energy and Combustion Science, 37(1), 52-68. Retrieved March 24, 2011, from Science Direct.

Packard, A. (2010). Biofuels. The New York Times. Retrieved from http://topics.nytimes.com/top/news/business/energy-environment/biofuels/index.html

Ragauskas, A. J., et al. (2006). The path forward for biofuels and biomaterials. Science, 484-489.

Rutz, D., & Jannsen, R. (2007). Biofuel Technology Handbook. Retrieved from http://libguides.wpi.edu

Shaw, A. J., Podkaminer, K. K., Desai, S. G., Bardsley, J. S., & Rogers, S. R. (2008). Metabolic engineering of a thermophilic bacterium to produce ethanol at high yield. Proceedings of the National Academy of Sciences of the United States of America, 105(37). doi: 10.1073/pnas.0801266105

Wang, A., Gao, L., Ren, N., Xu, J., & Liu, C. (2011). Isolation and characterization of Shigella flexneri G3, capable of effective cellulosic saccharification under mesophilic conditions. Applied and Environmental Microbiology, 77(2), 517-523.

Yang, S. J., Kataeva, I., Hamilton-Brehm, S. D., Engle, N. L., & Tschaplinski, T. J. (2009). Efficient degradation of lignocellulosic plant biomass, without pretreatment, by the thermophilic anaerobe Anaerocellum thermophilum DSM 6725. Applied and Environmental Microbiology, 75(14), 4762-4769.

Zerbe, J. I. (2009). Syngas from biomass. In AccessScience. McGraw-Hill. Retrieved March 23, 2011, from http://www.accessscience.com/content.aspx?searchStr=biofuel&id=YB090009

Chapter 10 Water Quality and Pollution


Conor Pappas, Luke Derry, and Ram Nambi

Introduction

Water is the source of life, and every living organism on our planet depends on this essential liquid. While water can be found in abundance all over the earth, only a small percentage is usable by living creatures. With only a limited amount of this resource, it is essential to sanitize our water and maintain its purity. Pollution in water is a matter of concern; however, water treatment plants all over the country are working on ways to clean our water and make it safe to drink. With the population increasing, it becomes more imperative every day to maintain a steady supply of clean water. Although the pollutants are disconcerting, the solutions are promising.

Water Quality Control

To understand the state of our water, we must first examine how water quality is tested and measured. Certain parameters are tested, usually once per month, to determine whether the water is drinkable. Samples are gathered from reservoirs and community water lines (Monthly water quality test results, 2011). One of the first things to look for in water is microbial pollution; this test reports total bacteriological activity. Turbidity is also analyzed in water treatment: examining turbidity means checking the water for particles that include clay, silt, organic and inorganic matter, algae, and microorganisms. Another attribute that water treatment facilities and scientists look for is corrosiveness, a measure of the amount of lead and copper in water samples. Lead and copper can enter the water through rusting pipes or old, toxic lead pipes that have not been replaced. Water pollution is a serious environmental issue, and there is ample support for this statement. A three-year study by the Environmental Working Group found 316 chemicals in the drinking water supplies of 45 states. The study included the testing of over 20 million tap water samples from across the country.
Much of this contaminated water is perfectly legal according to EPA standards, yet it is unhealthy to consume (Luntz, 2009).

The photograph depicts one of the water purifying mechanisms at a water treatment plant.

Common pollutants are categorized into two major groups: point and non-point sources. The former occur when pollutants are put directly into the water supply, for instance when factories have waste pipes that run straight into a body of water. Non-point source pollution occurs when runoff pollution seeps into a waterway, as when fertilizer from farms washes down a hill and into a body of water (Water pollution, 2006). Many other types of contaminants are making our water supply unsafe, and the sources of these toxins vary greatly. Pharmaceuticals, one of the main pollutants present in water, enter the water when people take them as medication and excrete them into the ever-flowing water system. In addition, many people dispose of unused medications by flushing them down the toilet or dropping them down the drain, putting the medications in direct contact with the water (Dean, 2007). Residential consumers are not the only culprits; hospitals, nursing homes, and drug stores also dump their leftover medications into the water along with the rest of their waste. In 1999, the United States Geological Survey performed a study in which investigators tested 139 streams in the U.S. They found that 80% of the samples they collected had traces of drugs such as painkillers, hormones, and antibiotics (Dean, 2007). These results revealed the high amounts of medications present in our drinking water and showed that water pollution and improper disposal of medication have become a large issue in our country.

Sources of Water Pollution

Most water pollution is ultimately made possible by water's excellence as a solvent. A variety of pollutants will dissolve into a body of water, quickly spreading the contamination throughout it.
The most common sources of pollution are dissolved sediments, particulate matter, and microbial life. There is another type of water pollution, however, that may not be as detrimental to human life but certainly affects the ecosystems of lakes, rivers, and other bodies of water. Thermal pollution is the increase in the temperature of a body of water due to human influence. Power plants that use water as a coolant are the predominant source of this type of pollution: as these plants release water back to the environment, its temperature rises by a significant amount. This change in temperature modifies the composition and density of the water itself, which can disrupt the buoyancy of fish and unsettle the ecological balance. All of these sources significantly reduce water quality, but even modest technological advances against each of these contaminants could significantly increase the amount of usable freshwater (Schaefer, 2008).

This image illustrates many of the sources of water pollution.

Industrialized countries have developed technology that targets not only the dissolved matter and bacterial contamination that plague the waters of developing countries, but other types of pollution as well. Water contaminants can come from a variety of sources and may taint both surface waters and ground waters. The pollution of surface water is categorized into two types: point source and non-point source. Waste materials leaking from a single source, such as a broken pipe or the discharge from a sewage treatment plant, are classified as point source pollution. Non-point pollution, however, comes from diffuse sources; for example, pesticides and nutrients from farmland dissolving into runoff water are classified as non-point pollution. While these types of contamination afflict our surface water, ground water is also strongly affected by pollution. Contaminants in the soil, or residue from human activity such as radioactive material from toxic spills, may dissolve into the water of underground aquifers. The contaminants spread easily throughout the entirety of these vast aquifers, leaving a large water supply unusable (Leon, Soulis, Kouwen, & Farquhar, 2001).

Water Quality in Third World Countries

While developed countries may not witness this, developing countries often have water that is full of silt and other fine sediments; there is no clean drinking water and there are no facilities to produce it. The population of a developing country is often forced to drink directly from a saline body of water that has been out in the sun for extended periods of time. During the warmer summer months, the growth of parasites and of insects like mosquitoes is promoted, spreading even more disease than was already present in the water. The suffering does not stop there: with diseases running rampant, people die, productivity decreases, and the economy worsens. Such a country needs some way to purify its water, but without one it can only struggle as its citizens die from a lack of clean water (New technology could make desalination accessible, 2011). Technology has helped distressed nations, but it has not solved their problems. Developed in Bangladesh, cloth filters are used to reduce the amount of particulate matter and pathogens in water. While a cloth filter does not completely purify the water, it does clean it, making it safer. This simple filter is nothing more than a cloth folded into about six to eight layers and placed over a jar. When tainted water is run through the filter, particles and microorganisms are caught on the fibers and removed from the water. Smaller particles can still pass through, but the water is left much cleaner, and the method has been demonstrated to greatly reduce cholera infections in poor villages. This simple technology has been beneficial to Third World countries, and with the development of new technologies these nations can begin to improve their water quality (Different water filtration methods, n.d.).

This is the unsanitary condition of water sources in Third World countries.

Water Treatment Plants

As cities expanded and the urban population increased, larger scale water purification became necessary. In addition, a shortage of water supply meant that waste water had to be filtered and reused. To purify the polluted water and remove its contaminants, it is carried by municipal sewer systems to a water treatment plant. These facilities use cutting-edge technology to filter the dirty water and make it safe to drink. As the water travels into the facility, filters are first used to remove large solid contaminants. However, suspended substances still remain; therefore, the non-potable water is placed in a gravity filtration tank and allowed to rest. Because the water is not agitated, the suspended solids settle at the bottom of the tank and are removed shortly thereafter. Although this physical method successfully removes solid particles, it is not very effective on dissolved contaminants; therefore, the water undergoes a secondary purification process. To remove organic substances, the contaminated water is dripped over a surface covered with microbial growth. This process aerates the water, filling it with oxygen, which is then used by aerobic microorganisms to digest the carbon-based pollutants. The water is then once again placed in the gravitational purification tank to remove remnant bacterial material, finishing that stage of filtration. Because inorganic pollutants still remain, additional treatment is needed to make the water drinkable. More advanced treatment plants use techniques such as carbon filtration, ultraviolet radiation, ozone purification, and chlorination to purify the water further (Nicholson, n.d.).

Active Water Purification

Although the problems with U.S. water supplies are numerous, there are also many methods that can be used to remove the pollutants from our water. One of these techniques is distillation, one of the oldest and most basic forms of water purification. The process involves heating water until it begins to boil. The water vapor that rises from the boiling liquid is drawn into a condenser located above the boiling water, where it is condensed back into water and collected. Most of the contaminants stay behind because they are not transferred with the water vapor; only the pure water molecules are evaporated and collected, yielding clean, purified water. There are, however, disadvantages to this method. Volatile organic pollutants such as herbicides and pesticides can be carried over with the water vapor during the procedure, re-polluting the supposedly pure water. Distillation is also a very expensive process that requires an immense amount of energy and water to operate.
Also, distilled water has a notably flat taste; it can be somewhat acidic and lacks the oxygen and minerals present in water in its natural state. For these reasons, distillation is usually reserved for industrial and scientific purposes (Different water filtration methods explained, n.d.). Ozone is also used in water filtration. While oxygen is usually found in its stable diatomic form, ozone (three oxygen atoms bonded together) is formed when a strong electric discharge passes through air; it can also be made industrially using an ozone generator. To filter water with ozone, bubbles of the gas are diffused into the polluted water. Ozone molecules are not as stable as their diatomic counterparts, so they separate into an O2 molecule and a single oxygen atom. This single atom of oxygen travels through the water and reacts with metals present in the water. Magnesium, manganese, iron, and other metals are oxidized by this oxygen and undergo a change in chemical composition. The oxidized metals precipitate into solid material, which can be filtered easily with a carbon filter or through reverse osmosis, greatly reducing the turbidity of the water. While this method does not filter the water directly, it facilitates filtration so effectively and so cheaply that it is one of the most successful methods of water purification. Ozone purification is also exceptionally safe; for a chemical process, it is one of the safest methods available. Its only downfall is that generating the ozone uses a large amount of electrical energy, and producing that energy typically requires burning fossil fuels, which carries an environmental impact (Natural gas and clean water, 2011).
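The "immense amount of energy" attributed to distillation above can be roughly quantified with standard textbook constants: water must be heated to its boiling point and then vaporized. The sketch below is a hedged back-of-the-envelope estimate under ideal conditions (no heat losses, no heat recovery), not a model of any particular still; the batch size is a made-up example.

```python
# Back-of-the-envelope estimate of why distillation is energy intensive:
# heat the water to 100 C, then supply the latent heat of vaporization.
# Constants are standard textbook values; the batch size is hypothetical,
# and real stills lose additional heat to the surroundings.

SPECIFIC_HEAT = 4186.0          # J/(kg*K), liquid water
HEAT_OF_VAPORIZATION = 2.26e6   # J/kg, water at 100 C

def distillation_energy_joules(mass_kg: float, start_temp_c: float) -> float:
    """Minimum heat needed to raise water to 100 C and fully vaporize it."""
    heating = mass_kg * SPECIFIC_HEAT * (100.0 - start_temp_c)
    vaporizing = mass_kg * HEAT_OF_VAPORIZATION
    return heating + vaporizing

# Distilling 1 kg (about one liter) starting from 20 C:
joules = distillation_energy_joules(1.0, 20.0)
print(joules, joules / 3.6e6)  # ~2.59e6 J, or roughly 0.72 kWh per liter
```

Even under these ideal assumptions, nearly a kilowatt-hour per liter explains why distillation is rarely economical for municipal-scale drinking water.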

This illustrates how an ozone molecule is produced.

Another popular method of water purification is ultraviolet disinfection, which uses UV light to remove pathogens from water. In a typical setup, the UV lamp is placed in a transparent sleeve and aligned so that water passing through a chamber comes into contact with the light. The remarkable capability of UV light is that when the bacteria in the water absorb it, their genetic material is damaged, leaving them unable to reproduce. Another advantage of UV disinfection is that it purifies water without adding unnecessary chemicals; it also does not change the taste, smell, or mineral content of the water being treated. Ultraviolet light is also all-natural, inexpensive, environmentally friendly, and easy to use at scales ranging from public utilities to the home. UV light has an array of applications that cover most water-related facets of life: it is used to rid well water and surface water of natural pathogens, and it is also used to clean public water, bottled water, and process water in industries that require clean water to function. Ultraviolet rays are capable of destroying 99.9% of all microorganisms in water (Ultraviolet (UV) Disinfection, 2007). Although the prior methods are relatively effective, the need for better purification systems led to the development of the pulsed streamer corona discharger. A corona, in general, is a region of space in which fluids are ionized to such an extent that a field of plasma is generated. A corona discharge, however, is a momentary reaction that produces short blasts of non-thermal plasma. The pulsed streamer system utilizes two asymmetric electrodes, one of high and one of low curvature, to accumulate large amounts of charge, producing plasma around the surface of higher curvature. As electrical energy accumulates on an electrode, it imparts some of its energy to the surrounding molecules.
This energy raises the electrons into an excited state, allowing them to break free of their atoms. The electric field caused by the charge on the electrodes then accelerates the electrons and prevents them from re-forming their bonds. As these free subatomic particles collide with other atoms, more and more electrons are released, resulting in an exponential electron avalanche. These dissociated electrons greatly increase the conductivity of the fluid, allowing a high-voltage spark to flow between the electrodes and producing both a strong magnetic field and considerable amounts of non-thermal plasma. A study by scientists at Drexel University showed that this form of energy discharge can actually be used effectively to kill bacteria and viruses. The researchers found that a plasma dosage of about 7.8 J/cm2 increased the number of apoptotic cells by approximately fifty percent; therefore, they were able to conclude that this type of treatment effectively inhibited the growth of pathogenic organisms. When high dosages of plasma are formed, the particles in the medium ionize, producing reactive oxygen species (ROS) such as O3, NO, HO2, and H2O2. These ions react chemically with the microorganisms and lead to cellular death through DNA modification, purifying the water of bacterial contaminants. Although this process has potential, it faces considerable opposition because of its unfamiliarity. The general public views it as an extremely dangerous purification method and is hesitant to see it incorporated into water treatment plants; however, if this misconception can be corrected, the technology is a very viable option for commercial and domestic water purification (Chang, 1991).
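To put the cited dosage in perspective, a back-of-envelope sketch can relate dose to exposure time. The power density used below is a purely hypothetical assumption for illustration; only the 7.8 J/cm2 figure comes from the study cited above.

```python
# Illustrative arithmetic only: relate the plasma dose cited above
# (7.8 J/cm^2) to exposure time for an ASSUMED discharge power density.
target_dose_j_per_cm2 = 7.8    # from the Drexel study cited in the text
power_density_w_per_cm2 = 0.5  # hypothetical value, for illustration only

# dose = power density x time, so time = dose / power density
exposure_time_s = target_dose_j_per_cm2 / power_density_w_per_cm2
print(f"{exposure_time_s:.1f} s")  # 15.6 s at the assumed power density
```

A higher-power discharge would deliver the same dose proportionally faster.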

This is the plasma produced by a pulsed streamer corona discharge.

Methods of Filtration Carbon filtration and adsorption is a common method of water purification that is geared more toward home use. It not only removes chemicals, gases, and microorganisms, but also improves the taste of the water and removes pungent odors. However, it does not remove solid materials, hardness, or metallic residue from the water. The effectiveness of carbon adsorption depends on a few variables. First of all, the size of the pores is a large factor: the smaller the pores, the longer the water remains in contact with the carbon, and the more pollutants the carbon is able to capture and remove from the water. Although it is effective, carbon filtration is usually paired with other water treatment methods (Different water filtration methods explained, n.d.). Granular activated carbon, another type of carbon filter, is produced by combining many carbon-based materials in a high-temperature process. This filtration method is effective because the pores in the material catch not only large organic materials but also microscopic particles of pollutant. In addition, the large surface area of activated carbon adsorbs small organic molecules. Carbon can also remove microorganisms and organic chemicals from water. However, the effectiveness of carbon

depends largely on the type and amount of carbon and the design of the carbon filter. These variables affect the flow time, the filter lifespan, and the amount of pollutant the filter can remove in each pass. Microporous filtration is another process used to filter water, and it comes in three types: depth, screen, and surface filtration. Depth filters are made of tightly compressed fibers that form a matrix that captures particles; the tightness of the fibers creates smaller openings for particles to travel through. Screen filters are simple in makeup and have a set pore size, but they can only trap particles larger than that pore size, which makes them largely ineffective unless the pore size is as small as possible. Finally, surface filters are made from layers of filtration material. Most particles are gathered on the top layer, while the layers below capture the remaining particles that make it through (Different water filtration methods explained, n.d.). One of the most effective methods of filtration is reverse osmosis. Osmosis occurs between two solutions of different concentrations: if they are separated by a semipermeable membrane, an osmotic pressure forms between them that forces water through the membrane toward the higher concentration, equalizing the concentrations and bringing the system to equilibrium. In reverse osmosis, however, an external hydraulic pressure is applied to the high-concentration water. This pressure counteracts the normal osmotic pressure and forces the solution through the semipermeable membrane opposite the usual flow. The membrane removes about 95-99% of the dissolved matter, leaving almost completely pure water. This method even removes radioactive materials from water, which can be useful for locations near nuclear power plants. 
The only drawback to this system is that the flow of water through the semipermeable membrane is quite small, which means that only a limited amount of water can be produced per unit time (Different water filtration methods, n.d.). A company in Singapore is using reverse osmosis in an innovative way to improve efficiency: it uses several semipermeable membranes and collects the heat energy lost as water passes through them. This energy is reused to power the hydraulic pumps that drive the osmosis process (New technology could make, 2011).
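The pressure the pump must overcome scales with solute concentration. For dilute solutions, the standard van't Hoff relation (a textbook formula, not given in this chapter) estimates the osmotic pressure:

```latex
\Pi = i\,M\,R\,T
```

where \(\Pi\) is the osmotic pressure, \(i\) the van't Hoff factor (number of dissolved particles per formula unit), \(M\) the molar concentration of solute, \(R\) the gas constant, and \(T\) the absolute temperature. Reverse osmosis proceeds only while the applied pressure exceeds \(\Pi\); for seawater \(\Pi\) is roughly 25-30 atm, which is why desalination plants need high-pressure pumps.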

This diagram depicts the process of reverse osmosis.

Oil Spill Clean Up In the past, only the quality of drinking water was of concern; now, however, the purity of marine habitats is also of importance. Oil spills, the most harmful side effect of the oil industry, accidentally release about 2000 barrels of oil per year, contaminating large areas of the ocean. These spills are much more difficult to remediate than on-shore water quality problems because of the remoteness of their locations and the large volumes of water involved. If a cleaning crew can reach the site of an oil spill within two hours, containment and skimming, a relatively simple method, can be used to restore the water to its prior condition. Long buoyant booms are used in this process to contain the spill and allow boats to separate the oil from the water. If the spill is far enough from coastal settlements, in situ burning, a technique in which the oil is removed through burning, can be used. If the spill is not contained quickly, or if businesses and wildlife are at risk, chemical dispersants can be used to control and decompose large spills. By separating the oil into particulates, the chemicals allow the oil to mix with the water, producing granular pieces called tar balls and preventing the spread of the spill. However, recent studies, such as one done in Israel, suggest that these granular pieces of oil can actually be more dangerous to marine life than the spill itself (Clark, 2001).

This is a photograph of the 2010 oil spill. The only method that is truly environmentally safe is the use of bacteria to help the ecosystem naturally purify the water. Rod-shaped bacteria such as Alcanivorax borkumensis and Thalassolituus oleivorans are often used for this purpose because of their ability to utilize oil in their metabolic processes. Although these organisms thrive in laboratory tests, a lack of nitrogen and phosphorus leaves them unable to survive at an actual spill site. To supplement this deficiency, scientists are using special fertilizers containing large amounts of the necessary nutrients, but the right temperature and pH must also be maintained. Although the acidity of seawater is usually ideal, warm temperatures are much harder to maintain in northern waters, so researchers are attempting to bio-engineer the microorganisms to withstand lower temperatures (Biello, 2010). If this bacterial method can be improved, it has great potential because it is a purely natural process; however, the use of these bacteria does carry risks. Because the organisms are bio-engineered to thrive at low temperatures, it is possible they may become invasive and threaten

native species. In addition, the use of fertilizer will cause a dramatic increase in nutrients, which were once scarce, leading to algal blooms that reduce the amount of oxygen in the water and potentially harm fish and other marine animals; however, if administered properly, this method might become the future of oil spill bioremediation (Roling, 2002).

These bacteria purify water by metabolizing oil. Our world is filled with many potential dangers to our water, but we have been working hard to make our water safer. Through these efforts, we have developed many methods to purify our water and restore it to its original natural form. Now that we have the cure, we must stop the disease. The people of our country must be more mindful of the dangers they cause by pollution; they must control their waste production and dispose of waste in a proper way. The government must crack down on polluters, offer more recycling programs, and provide safe ways for people to dispose of their medications and toxic materials. It is a multi-part project, and everyone must work together to ensure its success. When everyone cares for our shared world, we can work jointly to restore the environment. The sacrifices people make now will lead to a better life afterwards, and government support will save public funds as time goes on. We can and must restore the cleanliness of our water supply, and with all of the purification methods available, this is more than possible.

Bibliography
Activated Carbon Filtration. (2011). Retrieved March 24, 2011, from www.apswater.com/article.asp?id=24&About_Activated_Carbon_filtration
Alekseeva, V. (1965). Use of ozone in the purification of waste water. Chemistry and Technology of Fuels and Oils, 1(8), 616-619.
Al-Shammaa, I., Pandithas, I., & Lucas, J. (2001). Pulsed power plasma UV lamp for water purification and ozone production. IEEE, 301.
Biello, D. (2010, August 24). How fast can microbes clean up oil spills? Scientific American, 303, 2.

Brower, J. (2005). Water pollution. Retrieved from http://www.mbgnet.net/fresh/pollute.htm
Chang, J., Lawless, P., & Yamamoto, T. (1991, December). Corona discharge process. IEEE, 1152-1166.
Clark, J. (2001). How do you clean up an oil spill? Retrieved March 24, 2011, from www.science.howstuffworks.com
Dean, C. (2007, April 3). Drugs are in the water. Does it matter? The New York Times. Retrieved from http://www.nytimes.com/2007/04/03/science/earth/03water.html?pagewanted=1&_r=1
Different water filtration methods explained. (n.d.). Retrieved from water-education/qualitywater-filtration-method.htm
Fountain, H. (2007, February 6). Oil's lasting presence. The New York Times. Retrieved from www.nytimes.com
Frayne, C. (2002). Boiler water treatment: Principles and practices. New York: Chemical Pub. Co.
How to clean up our water. (2001, April 11). Retrieved from http://www.nrdc.org/water/pollution/gsteps.asp
Janssens, J., Meheus, J., & Diricks, J. (1984). Ozone enhanced biological activated carbon filtration and its effect on organic matter removal, and in particular on AOC reduction. Water Science Technology, 1055-1068.
Karpuzov, B., Remnev, E., & Uynda, T. (2002). Radiation water purification for humates. IEEE, 193.
Kolesnikov-Jessop, S. (2011, March 21). New technology could make desalination more accessible. The New York Times. Retrieved from www.nytimes.com
Leon, L.F., Soulis, E.D., Kouwen, N., & Farquhar, G.J. (2001). Nonpoint source pollution: A distributed water quality modeling approach. Water Research, 35(4); Environmental Modeling & Software, 15(3), 249-255.
Luntz, T. (2009, December 14). U.S. drinking water widely contaminated. Scientific American.

Magnetic water treatment. (2005, February). Pollution Engineering, 37(2), p. 26(3). Retrieved from http://find.galegroup.com/gps/infomark.do?&contentSet=IAC-Documents&type=retrieve&tabID=T003&prodId=IPS&docId=A128869227&source=gal&srcprod=ITOF&userGroupName=mlin_c_worpoly&version=1.0
Monthly water quality test results. (2011, March 24). Retrieved from http://www.mwra.state.ma.us/monthly/wqupdate/qual3wq.htm
Namihira, T., Wang, D., Takashima, T., Katsuki, S., & Akiyama, H. (2004, March). Water purification using pulsed streamer discharges in micro-bubbled water. IEEE, 1266.
Natural gas and clean water [Editorial]. (2011, March 22). The New York Times. Retrieved from www.nytimes.com
Nicholson, N. (n.d.). Municipal water treatment systems. Retrieved March 24, 2011, from www.ehow.com
Okamoto, K., & Hotta, K. (2005). Experiment on purification of the water quality by using of cohesion power. IEEE, 1427.
Ozone drinking water. (2010). Retrieved March 24, 2011, from biozone.com/ozone_drinking_water.htm
Roling, W., Milner, M., Jones, M., Lee, K., Daniel, F., Swannell, R., & Head, I. (2002). Robust hydrocarbon degradation and dynamics of bacterial communities during nutrient-enhanced oil spill bioremediation. American Society for Microbiology, 68(11), 5537-5548.
Schaefer, M. (2008). Water technologies and the environment: Ramping up by scaling down. Technology in Society, 30(3-4), 415-422.
Singh, Rajindar. (2006). Hybrid membrane systems for water purification: Technology, systems design and operation. Boston: Elsevier.
Ultraviolet (UV) disinfection. (2007). Retrieved from http://www.excelwater.com/eng/b2c/water_tech_3.php
UV purification systems. (2011). Retrieved March 24, 2011, from water-research.net/Waterlibrary/privatewell/UVradiation.pdf
Water pollution. (2006). Retrieved from http://www.mbgnet.net/fresh/pollute.htm
What is water purification? Purify water. (n.d.). Retrieved from http://www.waterpurificationmethods.com/

Chapter 11 Fuel Cells


Gregory Granito, Arjun Grama, Vincent Capone, and Derek Bain

Introduction A fuel cell is most simply described as an electrochemical cell that converts energy from a chemical form to an electric form. Chemical energy, by definition, is the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. Breaking or joining chemical bonds requires energy, which can be released or absorbed from the chemical system depending on the reaction. Electric energy (also known as electric potential energy or electrostatic potential energy) is the potential energy of a defined system of point charges; it is defined by the total work that must be done by an external agent to bring the charges from infinite separation into the system. An electrochemical cell is a device that can extract electrical energy from chemical reactions, or can drive a chemical reaction with the use of electrical energy. The electricity comes from the chemical reaction between the fuel supply and the oxidizing agent. The reactants meet within the cell and the resulting product flows out of it, while the electrolyte remains inside. An electrolyte is any substance that contains free ions, making the substance electrically conductive. Fuel cells differ from ordinary electrochemical batteries. While batteries store electric energy internally through chemical means, fuel cells use reactants from an outside source. This outside source must be refilled, which makes a fuel cell a thermodynamically open system: one that is not sealed off from its surroundings, so matter can move in and out of its boundaries. This gives fuel cells the ability to operate continuously as long as the required reactant and oxidant levels are maintained. Oxidants (oxidizing agents) are chemical compounds that easily transfer oxygen atoms, or substances that gain electrons in a redox chemical reaction. 
Redox (short for reduction-oxidation) is a type of reaction in which one reactant gains electrons and another loses them. The substance that gains electrons is reduced (counter-intuitively), and the substance that loses electrons is oxidized. There are multiple combinations of fuels and oxidants possible, but the best-known pairing is probably hydrogen fuel with oxygen as its oxidant. This is what is commonly known as a hydrogen cell.

Background The German physicist Christian Friedrich Schönbein made the scientific discovery that made fuel cells possible in 1838. He saw that standard methods of converting a fuel (often wood or coal) into electricity were very inefficient because of the extra step from fuel to heat to electricity. He proposed a much simpler model that could convert fuel directly into electrical energy, in a manner similar to that already in use in batteries, and the next year, 1839, the first operational fuel cell was completed by the Welsh scientist Sir William Robert Grove. In 1955, W. Thomas Grubb, a General Electric employee, refined the rudimentary designs of the nineteenth century with a membrane that aided the chemical reaction. In the next few years General Electric, NASA, and McDonnell Aircraft collaborated to produce the first commercial fuel cell (Carrette et al., 2001).

Christian Friedrich Schönbein discovered the principle of the fuel cell in 1838.

There are numerous types of fuel cells, but the best known is probably the hydrogen fuel cell. It can be used to meet almost any energy need and is a clean source of power. Fuel cells can serve as backup, distributed, or portable power sources. They can power almost any portable application that would normally use batteries, from small electronics like iPods to mobile generators. Even more importantly, they can power our transportation, which can greatly reduce our reliance on fossil fuels and decrease our carbon footprint and impact on the environment. Fuel cells allow for a wider range of uses than batteries. The military utilizes fuel cells in situations where normal batteries are too large or inconvenient to carry; soldiers carry methanol fuel cells to charge their electronic equipment. Although methanol cells are not as efficient as hydrogen cells, hydrogen fuel cells carry the risk of explosion. In addition, methanol cells are easier to produce and carry because they need less protection, and they still provide more energy than batteries.

Chemistry Although fuel cells are chemically similar to batteries in many ways, there are a few major differences between the two. The majority of materials used in batteries are solids, such as metals or oxides (Brain & Bryant, 2000). The products of the chemical reaction are also solid, which makes them very difficult to remove from the battery. In a typical fuel cell this problem is not present, because all materials and products are either liquid or gas; the excess product can be removed from the cell and expelled as waste with relative ease. For a battery, once the chemical reaction is complete, the cell must be fully recharged (if possible) or discarded. With a fuel cell, the reactants can simply be replaced, and the cell is ready to continue converting chemical energy into electrical energy. A fuel cell can even supply a continuous stream of electrical energy if the product is removed and more reactants are added while the reaction is taking place.

Types and Designs

Hydrogen-based fuel cell. Hydrogen ions are separated from electrons, which generate electrical current. The ions regain electrons and react with oxygen to produce water as exhaust. In the standard hydrogen fuel cell, hydrogen is oxidized by oxygen so that the former loses electrons and the latter gains them. When this process is performed electrochemically, within the confines of a fuel cell, the electron flow is organized in such a manner that a useful current is produced (Bagotsky, 2009). The overall chemical reaction for this model of fuel cell is 2H2 + O2 ==> 2H2O. That can be expanded into the oxidation and reduction half-reactions. The oxidation of hydrogen is H2 ==> 2H+ + 2e-, where each hydrogen molecule is ionized at the anode of the fuel cell into two protons by the removal of two electrons. These electrons are directed to the cathode of the cell to form a current, where they chemically combine with oxygen and the hydrogen ions to form water as the product (O2 + 4H+ + 4e- ==> 2H2O). The hydrogen ions are transferred from the anode to the cathode through a membrane that allows the positively charged ions to pass but blocks the electrons. As already mentioned, this forces the electrons to take a different route to the cathode, which forms the current. Although this process is described for hydrogen-based fuel cells, the same process is used for cells based on other fuels; the only differences are the physical materials, reactants, and products used or produced in the reaction.
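The thermodynamics behind these reactions set a ceiling on the cell's voltage. As a sketch, using standard textbook values for the formation of liquid water (not figures from this chapter), the maximum reversible voltage follows from the Gibbs free energy:

```python
# Maximum (reversible) voltage of a hydrogen fuel cell: E = -dG / (n * F).
# Constants below are standard textbook values, not taken from this chapter.
FARADAY = 96485.0          # C per mole of electrons
n_electrons = 2            # electrons transferred per H2 molecule oxidized
dG_reaction = -237_130.0   # J/mol, Gibbs energy of H2 + 1/2 O2 -> H2O(l)

e_cell = -dG_reaction / (n_electrons * FARADAY)
print(f"{e_cell:.2f} V")  # about 1.23 V per cell
```

Because a single cell tops out near 1.23 V, practical devices stack many cells in series to reach useful voltages.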

Microbial Fuel Cells Microbial fuel cells generate electricity by modeling chemical reactions found in nature. The process is driven by an energy-converting reaction that uses microorganisms as the catalyst for the exchange. A typical microbial fuel cell is composed of an anode compartment and a cathode compartment separated by a membrane. Bacteria decompose the organic fuel in the anode compartment, and the decomposition yields excess protons and electrons. The protons are transferred through the membrane into the cathode compartment, while the electrons exit the cell through the anode, traverse an external circuit that they power, and arrive back at the cathode. Once inside the cathode compartment, the protons and electrons combine with oxygen to form water. In practice, transferring the electrons to the anode is difficult and requires either a mediator or a more recent mediator-less design (Srinivasan, 2006).
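As a concrete example of the chemistry described above, the two half-reactions can be written out. Acetate is a common laboratory fuel for microbial cells; the text does not name a specific substrate, so this choice is illustrative:

```latex
\text{Anode: }\quad \mathrm{CH_3COO^-} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{CO_2} + 7\,\mathrm{H^+} + 8\,e^- \\
\text{Cathode: }\quad \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \longrightarrow 2\,\mathrm{H_2O}
```

The protons cross the membrane while the eight electrons released per acetate ion travel through the external circuit, matching the flow described above.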

This is a depiction of the construction of a typical microbial fuel cell. Microbial Electrolysis Cells Similar technology can be used to reverse the process that occurs in fuel cells. The result is a hydrogen producer with low energy consumption. A small amount of electricity is required to move the protons to the cathode. In a microbial fuel cell the protons would then meet the electrons that have just completed a cycle through the circuit and combine with oxygen to form water; in an electrolysis cell, however, oxygen is excluded, so the protons and electrons combine to form hydrogen gas rather than electricity and water. The applications for this process are numerous: hydrogen can be used to store and transport energy and can power conventional fuel cells. The process is also very efficient and can yield large quantities of hydrogen (Fuel cell & hydrogen energy association, n.d.).

Mediator Fuel Cells Mediator fuel cells were the first microbial cells designed. They employ a chemical mediator to transfer the electrons to the anode terminal. However, mediators are both very toxic and very expensive; common mediators include thionine, methyl viologen, methylene blue, humic acid, and neutral red. Their cost and toxicity make the mediator the most impractical part of the entire fuel cell; however, recent breakthroughs have enabled the development of a microbial fuel cell that does not require a mediator (Srinivasan, 2006).

Mediator-less Fuel Cells Fuel cells that do not require mediators use electrochemically active bacteria to move the electrons from the production site to the anode. In this system a chemical mediator is no longer required because the bacteria fill the niche the chemical once occupied. This breakthrough enables electrical energy to be produced from certain plants, including tomatoes, rice, and algae; the use of plants as the fuel source in a fuel cell is referred to as a plant microbial fuel cell. Plants have proved to be as effective as other fuel sources while being much more ecologically friendly (Minh, 2005).

This diagram illustrates how a standard fuel cell functions. Power Generation Microbial fuel cells have great potential as an ecologically friendly source of electricity (Minh, 2005). The bacteria can function when almost any organic substance is provided as a fuel source, which leads to tremendous possibilities. Wastewater at treatment plants could be used to generate electricity: the bacteria would consume the waste and produce both electricity

and cleaner water. The controlled oxidation that occurs in a microbial fuel cell allows the reaction to be much more efficient than combustion engines and other standard methods of energy production. Scientists hope to eventually see energy efficiencies of 50% or greater.

Efficiency The military makes use of fuel cells where standard batteries would be too cumbersome to carry into the field. To charge their electronic equipment, soldiers carry methanol fuel cells. Although hydrogen fuel cells are more efficient than methanol cells, hydrogen is explosive, a major disadvantage in a combat environment. Methanol cells are lighter and cheaper to produce because they do not require a heavy protective casing. Like hydrogen fuel cells, methanol fuel cells produce no greenhouse-gas exhaust and offer more power than heavy batteries (Atteberry, 2010). Fuel cells are also more environmentally friendly than conventional combustion-based engines, producing much less greenhouse gas. The only exhaust of a fuel cell that consumes pure hydrogen is heat and water. In addition to being less environmentally friendly than fuel cells, conventional engines are also less efficient. The Carnot cycle restricts the efficiency of traditional engines that rely on converting thermal energy to mechanical energy: it shows that heat engines inevitably lose some of the thermal energy supplied to them, resulting in poor efficiency (Baumeister, 2008). Fuel cells do not rely on the conversion of thermal energy, so the Carnot limit does not apply to them (Larminie & Dicks, 2003). A fuel cell can drive the wheels of a car at roughly forty percent efficiency: hydrogen fuel cells reach mean voltage efficiencies of about fifty percent, and further limitations, including a ten percent loss in the power output and a ten percent loss between the fuel cell and the wheels, bring the total efficiency to about forty percent (Bossel, 2003).
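The forty percent figure quoted above is simply the product of the individual efficiencies. A quick check of the arithmetic (the Carnot comparison uses assumed reservoir temperatures, which are not from this chapter):

```python
# Chain of efficiencies quoted in the text: ~50% stack voltage efficiency,
# then ~10% lost in power conditioning and ~10% between cell and wheels.
stack_eff = 0.50
after_power_loss = 1 - 0.10
after_drivetrain_loss = 1 - 0.10

wheel_eff = stack_eff * after_power_loss * after_drivetrain_loss
print(round(wheel_eff, 3))  # 0.405 -- roughly the forty percent quoted

# Carnot ceiling for a heat engine between a hot and a cold reservoir
# (temperatures in kelvin; 800 K / 300 K are assumed illustrative values).
def carnot_limit(t_hot, t_cold):
    return 1 - t_cold / t_hot

print(f"{carnot_limit(800.0, 300.0):.1%}")  # 62.5%
```

Note that the Carnot value is a theoretical ceiling that real heat engines fall well short of, whereas the fuel cell chain is an estimate of delivered efficiency; the comparison illustrates why escaping the Carnot limit matters.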

Environmental Concerns Microbial fuel cells do release a small amount of carbon dioxide. The bacteria prefer to release carbon dioxide in place of protons and electrons; however, in the absence of oxygen they are forced to release the proton and electron mixture. While there is oxygen content present in the water, the fuel cell or electrolysis cell will release carbon dioxide. The emissions are minimal; however, they could potentially pose a concern.

Applications Fuel cells are highly portable and require nothing more than fuel. In addition, the fuel weighs much less than traditional oil-based fuels: hydrogen gas can be compressed into tanks, and by mass the compressed gas has a much higher chemical energy density than gasoline and other

combustible materials (Dyer, 2002). In situations where space and weight are very limited, fuel cells can be more effective than other traditional electric generators.

This bicycle was modified for fuel cell propulsion. The combination of the fuel cell and the pedal system allows for a very efficient and effective design.

Transportation Air pollution from greenhouse gases is a major problem for modern transportation. Car engines produce large amounts of carbon dioxide from burning fossil fuels. Although fuel cells cannot yet completely replace traditional engines, they can be used as a supplement to provide auxiliary power or propulsion. Commercial trucks would also benefit from the addition of fuel cells to their power supply: many cargoes require air conditioning, refrigeration, or heat, which is normally powered by leaving the engine running while the truck is idle. Fuel cells could provide power to these units without the truck's engine, saving fuel and reducing emissions (Cleveland, 2006). Space In outer space, generating electricity is challenging. Solar panels provide a large portion of the necessary power, but when that is not enough, fuel cells are useful. Hydrogen and oxygen, the two required fuel sources for a fuel cell, are both used for other applications in space (Dyer, 2002): oxygen is needed for the life support of the crew, and hydrogen is used as a rocket fuel. A fuel cell consumes some of each gas and produces both electricity and water (Weston, 2002), both of which are useful on a spacecraft. Furthermore, the relatively light fuels are easier to transport into space. The combination of these benefits and the usefulness of both products of the reaction makes fuel cells not only practical but ideal for space use (Bagotsky, 2009).

Submarines Submarines have a very limited volume of usable space. The use of fuel cells in submarines has proven effective because they have a fuel source with very high energy density and they do not output any gaseous waste products. Traditional fuel engines require heavy fuel and oxygen and output large quantities of carbon dioxide (Weston, 2002). Releasing the carbon dioxide into the environment in the form of gas bubbles is not ideal because it can result in detection by enemies (Dyer, 2002). Fuel cells, however, simply output water and are more practical than other forms of power. Furthermore, fuel cells and electric motors are very quiet; when attempting to go unnoticed, any noise can be a detriment (Adams, 1960).

Bibliography
Adams, D. (1960). Fuel cells: Power for the future. Retrieved from http://books.google.com
Appleby, A. John. (2007). Fuel cell. AccessScience. Retrieved March 29, 2011, from http://www.accessscience.com/content.aspx?searchStr=fuel+cells&id=274100
Atteberry, J. (2010, February 4). What's the military (smart) fuel cell? Retrieved from http://www.howstuffworks.com/military-smart-fuel-cell.htm
Bagotsky, Vladimir (2009). Fuel cells: Problems and solutions. Retrieved from http://electrochem.cwru.edu/encycl/art-f03-fuel-cells.htm

Baumeister, Theodore. (2008). Carnot cycle. AccessScience. Retrieved April 14, 2011, from http://www.accessscience.com
Bellis, M. (2011). Hydrogen fuel cells. Retrieved from http://inventors.about.com/
Bossel, U. (2003). Efficiency of hydrogen fuel cell, diesel-SOFC-hybrid and battery electric vehicles. European Fuel Cell Forum. Retrieved from http://www.efcf.com
Brain, Marshall, & Bryant, Charles W. (2000). How batteries work. Retrieved March 29, 2011, from http://electronics.howstuffworks.com/battery3.htm
Brooe Shumm, J. (2010). Fuel cells. Retrieved March 23, 2011, from Encyclopaedia Britannica: http://www.britannica.com/
Capehart, B.L. (2007). Fuel cells: Low temperature. Encyclopedia of energy engineering and technology. CRC Press. Retrieved March 28, 2011, from http://www.crcnetbase.com/
Carrette, L., Friedrich, K.A., & Stimming, U. (2001). Fuel cells - fundamentals and applications. Fuel Cells, 1(1), 5-39. Retrieved from http://onlinelibrary.wiley.com/
Cleveland, C. (2006). Fuel cells. The encyclopedia of earth. Retrieved April 14, 2011, from http://www.eoearth.org/

Dyer, C. (2002). Fuel cells for portable applications. Fuel Cells Bulletin, 2002(3). Retrieved from http://www.sciencedirect.com
Fuel cell & hydrogen energy association. (n.d.). Retrieved from http://www.fchea.org/
Fuel cell applications. (2000). Retrieved March 28, 2011, from http://www.fuelcells.org/basics/apps.html
Fuel cells. Retrieved March 23, 2011, from Gale Group: http://find.galegroup.com
Investment Weekly News. (2011, March 19). Woodbridge and University High Schools to be powered by fuel cells.
Kattke, K. (2011). High-fidelity stack and system modeling for tubular solid oxide fuel cell system design and thermal management. Journal of Power Sources, 196(8), 3790-3802.
Keese, J. R. (2007). Renewable energy. Retrieved March 23, 2011, from Sage Reference Online: http://www.sage-ereference.com/
Larminie, J. (2003). Fuel cell systems explained. Retrieved March 22, 2011, from Knovel: knovel.com
Lee, C.-H. (2011). Modeling of the Ballard-Mark-V proton exchange membrane fuel cell with power converters for applications in autonomous underwater vehicles. Journal of Power Sources, 196(8), 3810-3823.
Minh, N. (2005). Ceramic fuel cells. Journal of the American Ceramic Society, 76(3). Retrieved from http://onlinelibrary.wiley.com
Srinivasan, S. (2006). Fuel cells from fundamentals to applications. Retrieved from http://books.google.com/
Thanasilp, S. (2010). Effect of MEA fabrication techniques on the cell performance of Pt-Pd/C electrocatalyst for oxygen reduction in PEM fuel cell. Fuel, 89(12), 3847-3852.
Types of fuel cells. (2000). Retrieved March 28, 2011, from http://www.fuelcells.org/basics/types.html
Weston, M. (2002). Portable power applications of fuel cells. DTI Pub, 2. Retrieved from http://webarchive.nationalarchives.gov.uk/
What is a fuel cell? (2000). Retrieved March 28, 2011, from http://www.fuelcells.org/basics/how.html

Chapter 12 Agriculture
Adan Rivas, George Slavin, and Ryan Kutzko

Introduction

Agricultural practices have changed vastly over the past 100 years. The decades following World War II saw an explosion in the use of technology in the agro-business sector. Developments such as synthetic pesticides and fertilizers, larger farm machinery, and better irrigation techniques have all contributed to increased productivity. Unfortunately, overreliance on these agricultural technologies has negatively impacted the environment by reducing soil quality and endangering local flora and fauna. Modern advances in pesticides, fertilizers, and genetically modified organisms, as well as diverse agricultural techniques such as urban farming, hydroponics, and aquaculture, seek to keep productivity high while minimizing damage to the environment.

History of Agriculture

Agriculture helped to develop civilization: man could now produce his own food and abandon the nomadic lifestyle. Farmers began to domesticate animals and livestock around 7000 BC (Lambert, n.d.). Our ancestors relied heavily on their environment, settling in close proximity to bodies of water to allow the development of agriculture. To this day, we depend on agricultural products, including the growth of crops and the cultivation of livestock, for food. Agriculture remained the backbone of the economy until the 19th century, when farming developed drastically amidst the Industrial Revolution. Because of technologies including Jethro Tull's seed drill, Cyrus McCormick's reaper, and John Deere's steel plough, cultivation became far more efficient. A foreign market for agricultural products established the importance of agriculture in the global economy. Agriculture plays a pivotal role in the economy of nearly every country: a nation with a stable agricultural industry ensures the food security of its people (Macon, n.d.). Many small developing nations depend on agricultural yields as a significant sector of their economy. 
In addition to exports and trade opportunities, agriculture-related industries create employment for people.

Sustainable Agriculture

The 1970s energy crisis in the United States made evident concerns about resource scarcity, particularly of petroleum, in many American industries (Allen & Sachs, 1991). A focus on agricultural sustainability took hold, and during the 1980s farm crisis the movement gained support as awareness of agriculturally related environmental issues increased. Other concerns that sustainable agriculture attempts to redress are the contamination of the environment by pesticides, plant nutrients, and sediments; loss of soil and degradation of soil quality; dependence on nonrenewable resources; and, more recently, low farm income. Eventually this form of agriculture came to be described as the cultivation of crops and livestock in a way that is economically profitable, socially rewarding, and environmentally conservative (Preston, 2003).

The diagram above illustrates the delicate balance between the three aspects of sustainable agriculture.

Farmscaping

Farmscaping is an ecological approach to pest management. It involves the use of hedgerows, insectary plants, cover crops, and water reservoirs to attract and support populations of organisms beneficial to farming. In some regards, beneficial organisms should be managed as mini-livestock: just as the larger varieties of livestock are healthier and reproduce more readily when provided an adequate and nutritious diet, these organisms require adequate supplies of nectar, pollen, and herbivorous insects and mites as food to sustain and increase their populations. The best source of these foods is flowering plants. Farmscaping reduces the risk of chemical residues on farm products and promotes a safe environment for wildlife.

A major component of sustainable practice is soil management and yield production. Keeping the soil covered regulates its temperature, increases the penetration and storage of water, and improves aeration. Cover crops are commonly used to improve soil structure and limit soil erosion. Crop rotations allow a farmer to manage a polyculture of crops: rotation breaks weed and pest life cycles and provides complementary fertilization to crops grown in sequence with each other. It is important to consider that planted row crops are commonly soil-degrading; because the soil between rows is open and cultivated, microbes rapidly break down its organic matter. Unfortunately, row crops do not have large root systems and contribute only small amounts of new organic matter to the ground.
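The rotation logic described above can be sketched as a small planner. The crop sequence, field count, and function names below are hypothetical illustrations rather than anything drawn from the chapter's sources; the point is only that offsetting one shared sequence across fields breaks pest cycles while keeping every crop in production somewhere on the farm each year.

```python
# Illustrative sketch of a multi-year crop rotation plan. The crop
# sequence and field count are hypothetical. Rotating each field
# through the sequence breaks weed and pest life cycles, and the
# legume and sod years replenish what the row-crop years remove.

ROTATION = ["corn", "soybeans (legume)", "winter wheat", "clover sod (soil-building)"]

def crop_for(field_index: int, year: int) -> str:
    """Return the crop grown on a field in a given year of the plan.

    Offsetting by field index staggers the rotation so that every
    crop is in production somewhere on the farm every year.
    """
    return ROTATION[(year + field_index) % len(ROTATION)]

def plan(n_fields: int, n_years: int) -> dict[int, list[str]]:
    """Build a year-by-year table of which field grows which crop."""
    return {
        year: [crop_for(f, year) for f in range(n_fields)]
        for year in range(n_years)
    }

if __name__ == "__main__":
    for year, crops in plan(n_fields=4, n_years=4).items():
        print(f"Year {year + 1}: {crops}")
```

With four fields and a four-crop sequence, each field sees every crop exactly once over the cycle, so no field is row-cropped two years in a row.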

One solution is the integration of green manures such as forage or cover crops. When these crops are planted with a grain drill or broadcast-seeded, the seeds are more closely spaced and develop more extensive root systems than row crops, greatly reducing the amount of soil exposed to degradation. In addition, green manures are low maintenance, which further reduces organic-matter loss. As a result, green manures can be considered neutral crops, because the amount of organic matter they replenish in the soil is roughly proportional to the nutrients they extract from the system. A select group of crops, including grasses, clovers, and alfalfa, form a perennial sod cover that keeps the soil entirely covered; these crops have massive root systems that produce far more organic matter than is lost. Sod crops are best suited as soil-building crops because they can heal the damage done to soil by row cropping.

Multiple field trials in Sweden examined the subsequent yield of winter wheat (Triticum aestivum) as a function of nitrogen dose and a companion grass. The experiments revealed that as the initial nitrogen content increased, the biomass of the clover, a legume crop, decreased; the inverse occurred when there was little fertilization (Bergkvist, Stenberg, Wetterlind, & Elfstrand, 2010). The use of non-legume cover crops decreased the grain yield, while perennial cover crops and legumes had a minor impact on the yield of the wheat. The study provides evidence that legumes are self-sustaining, but that when a soil is well fertilized their growth is greatly diminished. Non-legume cover crops scavenged nitrogen efficiently from the soil, and the resulting plant competition did not allow the winter wheat to germinate very well.

Legumes and Nitrogen Fixation

In addition to organic matter, fertilizers can improve the nitrogen content of soil. 
Legumes, plants that can fix nitrogen through a symbiotic relationship with bacteria of the genus Rhizobium, are a cost-effective alternative to potentially dangerous fertilizers. Rhizobium has the unique capability to extract nitrogen gas from the atmosphere and convert it into a form accessible to plants.

Companion Planting

An additional method of enhancing crop production is companion planting, which is based on the idea that certain plants can benefit others when planted in close proximity (Keupper & Dodson, 2001). Beyond nitrogen fixation, other forms of companion planting include trap cropping and nurse cropping. In trap cropping, a neighboring crop is selected because it is more attractive to pests and serves to distract them from the main crop; an excellent example is the use of collards to draw the diamondback moth away from cabbage. In nurse cropping, tall or dense-canopied plants are used to protect more vulnerable species through shading or by providing a windbreak. Oats have long been used as a nurse crop to help establish alfalfa and other forages by suppressing the more competitive weeds.

The illustration above depicts a simple explanation of nitrogen fixation: Rhizobium bacteria in legume roots extract nitrogen from the atmosphere and convert it into nutrients usable by plants.

Carbon Sequestration

In response to the increase in greenhouse gas emissions, there has been a push to remove carbon from the atmosphere and store it elsewhere. Carbon (C) sequestration is the process of storing carbon dioxide, or other forms of carbon, to mitigate global warming. Simply increasing crop yields and efficiency helps to reduce emissions: practices such as no-till farming depend less on machinery and burn less fuel per acre than conventional practices. To improve sequestration in no-till plots, there is a need to develop crop rotations. A recent study assessed the contributions of cover- and forage-based rotations to the accumulation of carbon in a subtropical Ferralsol. It concluded that forage-based rotations contributed the most to C sequestration and that legumes accumulated more carbon than grass cover crops in no-till settings (Zendonad dos Santosi et al., 2010).

Interplanting

Intercropping is the growth of two or more crops simultaneously in the same field during a given growing season. Nitrogen-fixing legumes can be incorporated into arable cropping systems via intercrops, where they contribute to maintaining soil fertility through nitrogen fixation (Bellostas et al., n.d.). Intercropping of grain legumes and cereals therefore offers an opportunity to increase the input of fixed nitrogen into agro-ecosystems without compromising cereal nitrogen use, yield level, or stability. The introduction of a living cover crop during a cash crop's growth cycle, and maintenance of the cover crop after the cash crop harvest, may help to preserve biodiversity, increase soil organic matter and C sequestration, and provide other ecosystem benefits such as natural pest

control and/or nutrient recycling. A study in France examined the impact of various management systems on a red fescue cover crop in a winter wheat culture in terms of light, water, and nitrogen competition, using STICS, a French crop simulation model adapted for intercropping. Through the simulation of unmeasured variables, such as transpiration, the model enhances the analysis of the performance of the intercrop in the field (Shilli-Touzi, De Tourdonnet, Launay, & Dore, 2010). It was shown that the intercropping system was more efficient than wheat grown as a sole crop in terms of nitrogen accumulation and decreasing soil nitrogen levels; however, it also resulted in lower wheat yields due to plant competition.

Coffee and banana are major cash and food crops, respectively, for many smallholders in the East African highlands. Uganda is the largest banana producer and second-largest coffee producer in Africa; both crops are predominantly grown as monocultures, although coffee-banana intercropping is commonly practiced in densely populated areas. One recent study assessed the profitability of intercropped coffee-banana systems compared to mono-cropped systems in regions growing Arabica and Robusta coffee in Uganda. Coffee yields did not vary significantly between mono-crops and intercrops, but banana yields were notably higher in intercrops than in mono-crops in the Arabica-growing region. In the Robusta-growing region, banana yields were lower in intercrops than in mono-crops; further geographical research suggested that Robusta has a more developed root system than Arabica. The marginal rate of return (an analysis of revenue) of adding banana to mono-cropped coffee was 911% and 200% in the Arabica- and Robusta-growing regions, respectively (Van Asten, Wairegi, Mukasa, & Uringi, 2010). 
This study revealed that coffee-banana intercropping is more beneficial than banana or coffee monocropping, and that agricultural development of food and cash crops in African smallholder systems should not rely solely on mono-crop practices.

Cover crops and mulches have been shown to be suitable choices for sustainable agriculture because they improve weed control and crop performance. Italian researchers investigated weed control and nitrogen supply using different winter cover crop species that were converted into mulches in spring (similar to the pepper and nitrogen leaching study discussed later). They carried out a two-year field experiment in which a tomato crop was interplanted into four different types of mulches derived from winter cover crops: hairy vetch (Vicia villosa), subclover (Trifolium subterraneum), oat (Avena sativa), and a mixture of hairy vetch and oat, alongside a conventional treatment (tilled soil without mulch). The hairy vetch/oat mixture produced the highest aboveground biomass, while the hairy vetch accumulated the most N in the aboveground biomass (Campiglia, Mancinelli, Radicetti, & Marianri, 2009). After mowing, the cover crop aboveground biomass was placed in strips as dead mulch into which the tomato was transplanted in paired rows. Weed density and total weed aboveground biomass were assessed 15 and 30 days after tomato transplanting to evaluate the effect of the mulches on weed control. All mulches suppressed weed density and aboveground biomass compared to the conventional system; oat was the most effective mulch for weed control but also had a negative effect on the marketable tomato yield. Legume mulches, in particular hairy vetch, gave the best marketable tomato yield, 28% higher than the conventional system, both with and without nitrogen fertilization. 
Overall, the research shows that winter cover crops used as dead mulch in spring can be successfully integrated into weed management to reduce weed infestation in tomato crops.

Mulch

Mulching enriches and protects soil; mulch is simply a protective layer placed on top of the soil. Mulches can be composed of organic materials such as grass clippings, straw, bark chips, and similar materials, or of inorganic materials such as stones, brick chips, or plastic (Mulching, n.d.). Organic mulches improve the condition of the soil because they slowly decompose, providing organic matter that helps keep the soil loose. This improves root growth, increases water infiltration, and improves the water storage capacity of the soil. Organic matter is a source of plant nutrients and provides an ideal environment for earthworms and other beneficial soil organisms (Mulching, n.d.). A layer of mulch helps prevent the germination of weeds, reducing the need for cultivation or the use of herbicides. Mulches also moderate soil temperature and retain moisture during dry weather, abating the need for watering. Mulches protect the soil from the impact of raindrops that can cause crusting, which can prevent the germination of seeds (Mulching, n.d.). By reducing the evaporation of water from the soil, mulch greatly reduces the need to water crops, and it acts as an insulating layer that keeps the soil cooler in summer (Benefits of Mulching, n.d.).

Biointensive Farming

The biointensive method is an organic agricultural system that focuses on maximizing crop production while minimizing the use of land, all while improving soil quality. Common practices include deep soil cultivation, which allows plant roots to penetrate further into the soil; the use of compost, which returns nutrients to the soil; and close plant spacing, which allows for high yields because it crowds out weeds and reduces the loss of water to evaporation (Biointensive farming, n.d.).

Nitrogen Leaching

Soil can only retain so much water; due to gravity, excess water moves down through the soil, commonly carrying nitrogen with it. 
Nitrogen leaching has great potential to contaminate local water supplies and create other health risks. The use of synthetic fertilizers in conventional farming is largely responsible for nitrogen pollution in water systems and the atmosphere (Kish & Martin, 2006). Excess nutrients (nitrogen) produce a dense growth of algae and other organisms; this causes a reduction in water oxygen levels, which is harmful to aquatic plants and animals. A study funded by the USDA Cooperative State Research, Education and Extension Service examined nitrogen leaching under organic, conventional, and integrated management practices. Each system was given an equivalent amount of nitrogen, whether derived from chicken manure or alfalfa meal, from calcium nitrate (a synthetic fertilizer), or from a mixture of the two. The study found that annual nitrate leaching was four to five times higher in the conventional treatment than in the two organic treatments, with the integrated treatment in between. In comparison to conventional farming, organic and integrated farming resulted in lower levels of nitrogen leaching (Kish & Martin, 2006).
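The study's qualitative result can be illustrated with a toy nitrogen budget. All quantities below are hypothetical (kg N per hectare), chosen only to mirror the finding that a readily soluble synthetic source leaves more surplus nitrogen to leach than slow-release organic sources; they are not the study's measured values.

```python
# Simplified, illustrative nitrogen budget for one growing season.
# Leaching is estimated as the surplus left after crop uptake, soil
# retention, and gaseous losses. All numbers are hypothetical.

def n_leached(applied: float, crop_uptake: float,
              soil_retained: float, gaseous_loss: float) -> float:
    """Surplus nitrogen available to leach (kg N/ha), never negative."""
    return max(0.0, applied - crop_uptake - soil_retained - gaseous_loss)

systems = {
    # system: (applied, crop uptake, soil retained, gaseous loss)
    "conventional (calcium nitrate)": (150, 90, 10, 10),
    "organic (manure/alfalfa meal)":  (150, 85, 50, 5),
}

for name, args in systems.items():
    print(f"{name}: ~{n_leached(*args):.0f} kg N/ha leached")
```

With these illustrative inputs the conventional system leaches four times as much as the organic one, because the slow-release organic source is retained in the soil rather than left in a soluble surplus.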

Experimental practices being examined include the use of legumes as a mulch and the effects this has on nitrogen leaching and nitrogen input. One study conducted by the University of Tuscia (Italy) carried out a two-year field experiment on a pepper crop (Capsicum annuum) planted into three different mulches of winter crops: hairy vetch, subclover, and a mixture of hairy vetch and oat (Campiglia, Mancinelli, Radicetti, & Marianri, 2010). In addition to the mulches, there was a control of pepper grown in an unmulched plot. The pepper yield was higher in the mulched plots than in the conventional one. A positive correlation was found between the nitrogen content in the cover crop biomass and the nitrogen leached during pepper cultivation. The research reveals that winter legume cover crops used as mulch in spring can successfully increase the nitrogen content of the soil and the biomass of the following pepper crop, but such mulches may result in increased leaching, so efficient management practices should be incorporated to minimize nitrogen leaching.

Vertical Farming and Urban Agriculture

Another interesting concept is described in a Scientific American article, "The Rise of Vertical Farms," which discusses the idea of farming inside city skyscrapers. This idea would solve a multitude of issues, greatly improving the efficiency of irrigation, pesticide use, and transport, and would also allow crops to be grown 24 hours a day, year round, increasing the number of harvests possible in the same time period. On top of these benefits, vertical farming would optimize the area in which farming can occur and increase the efficiency of agriculture in that area. This would, of course, result in an increased agricultural yield for a decreased economic or land input (Despommier, 2009). 
The greatest potential of vertical farming is the possibility of these farms existing in the center of urban areas, providing fresh, nutritious produce to the people of that area almost immediately. This would vastly reduce the time taken to transport the food, as it would already be in the population center. Even if it were to be transported across the country, or even the world, it would already be in an urban center with almost instant access to markets around the globe. This can help resolve not only the issue of growing enough food to support the population, but also that of distributing the food more equally (Organic Farming and Climate Change, 2008). Vertical farming would also prove extremely beneficial in areas such as Japan, where flat arable land is scarce. Japan is notorious for being forced to import a vast quantity of resources due to its mountainous terrain: farmland is almost nonexistent in many parts of the country, and cities cover much of the nation. Vertical farming would provide more space to produce food inside these urban areas despite the scarcity of arable land. In essence, this could conceivably help nations such as Japan become more self-sufficient and develop more efficient agricultural systems.

Hydroponics

Two methods that are radically different from conventional agriculture also exist. Known as hydroponics and aeroponics, these methods involve producing plants in a growing medium other than regular soil. Hydroponics is the concept of growing plants in a

nutrient bath, while aeroponics involves producing plants from a nutrient-rich mist that is circulated past the roots of the plant. In the nineteenth century, researchers discovered that although plants used soil as a type of nutrient reservoir, soil itself was not essential for plant growth. Plants actually absorb mineral nutrients as inorganic ions dissolved in water. In natural habitats the nutrients come from the soil; however, it is possible, and far more efficient, to grow plants without soil, suspended in a nutrient bath or some other type of growing medium. Aeroponics, although it involves no growing medium at all, is considered a subdivision of hydroponics, because it generally uses a nutrient-rich water mist as the method of providing plants with growing material (Jensen, 1997). Although the concept of medium-less growth has been around since the 1600s, it was not until 1929 that William Frederick Gericke of the University of California, Berkeley proposed using hydroponics for agricultural production. Although Gericke was proved wrong about the potential of this agricultural technique during that period, the idea lingered on. In the Second World War, hydroponics was used to some extent to grow food on Pacific islands occupied by the U.S. Army, due to the scarcity of arable soil on these islands; Wake Island is the best-known example. NASA has been interested in the possibility of using hydroponics in space, particularly for a potential Mars expedition. This method of agriculture would be exceptionally effective in space, simply due to its reduced spatial demands and lack of need for rich soil. Water can also be recycled, as it remains in the system (Jensen, 1997). Another benefit is that nutrition also remains in the system, reducing both economic and environmental costs. Pollution from fertilizers does not occur because hydroponics operates as a closed system. 
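The closed-loop advantage can be sketched with simple water accounting. The daily demand and recovery fraction below are hypothetical figures, not values from Jensen (1997); the sketch only shows how recirculating most of each day's water shrinks the fresh-water input compared with open-field irrigation, where nothing is recovered.

```python
# Illustrative fresh-water demand over a growing period for an open
# irrigation system versus a recirculating (closed) hydroponic
# system. All figures are hypothetical liters per day.

def water_used(days: int, daily_demand: float, recycled_fraction: float) -> float:
    """Total fresh water drawn when a fraction of each day's water
    is recovered and reused the next day. Day one always needs a
    full charge of the system."""
    fresh_per_day = daily_demand * (1.0 - recycled_fraction)
    return daily_demand + fresh_per_day * (days - 1)

open_field = water_used(days=30, daily_demand=1000, recycled_fraction=0.0)
hydroponic = water_used(days=30, daily_demand=1000, recycled_fraction=0.9)

print(f"open field : {open_field:,.0f} L")
print(f"hydroponic : {hydroponic:,.0f} L")
```

Under these assumed numbers the closed system draws only about an eighth of the fresh water, which is the property that makes hydroponics attractive for space missions and arid regions.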
Yields are stable and relatively high compared to conventional growing techniques. However, at present, hydroponic systems produce an environment that is very susceptible to Salmonella growth. Despite this, the benefits far outweigh the risks associated with this technology (Jensen, 1997).

Aquaculture

Some future agriculture may not be conducted on land. Until now, most seafood has been harvested from wild fisheries throughout the world. Although humanity has always consumed fish and other marine species, very little of this food has ever come from an ocean farm; it has not been necessary to farm the oceans because humans have historically been able to harvest all that they need. This may be changing, however, because ocean populations are declining worldwide due to overfishing, climate change, and acidification of the oceans. In the future, it may be necessary to culture various aquatic organisms if we intend to maintain our present supply of seafood. Fortunately, an agricultural technique known as aquaculture exists for exactly that purpose (NOAA Aquaculture Program, 2011). Aquaculture was first practiced as early as 6000 B.C. by the Gunditjmara people in Victoria, Australia, who developed a system of canals, ditches, gates, and dams to grow eels for consumption. It is suspected that this system allowed them to survive off the eels year round. Several thousand years later, around 2500 B.C., Chinese farmers trapped fish in small ponds after large floods from the local rivers. By feeding these fish nymphs and silkworm feces, the farmers were able to breed more fish for consumption.

Eventually, as a result of mutations, a few of these fish evolved into goldfish, now a domesticated species recognized worldwide. The Japanese also cultivated seaweed around this time, and later in history the Romans bred fish in ponds (Aquaculture, 2010). During the Middle Ages, Europeans practiced Roman-style aquaculture, but to a very limited degree. Presently, aquaculture is practiced around the world with a large variety of plant and animal species, and it is estimated to account for a fourth of the seafood produced each year (NOAA Aquaculture Program, 2011). Aquaculture has experienced a meteoric rise: over the past thirty years it has averaged annual growth of around 8% (Aquaculture, 2010), and in 2009 the aquaculture market surpassed $86,000,000,000. Fish are by far the most commonly cultivated organisms in aquaculture. Grown in tanks, nets, or ocean enclosures, these organisms are normally produced simply for consumption, although in a few cases they are grown for release into the wild. An example is the salmon fisheries of the U.S. and Canada, where aquaculture has been used to regenerate a diminishing population by releasing captive-bred fish into the wild (Aquaculture, 2010). This helps to restore the ecosystem to its natural balance, among other things. Other fish raised through aquaculture include tuna, catfish, carp, tilapia, and cod (NOAA Aquaculture Program, 2011). Although fish are the most commonly raised organisms in aquaculture, crustaceans are becoming more common as well. Overall, aquaculture is already a major force in the agricultural industry, and it will only continue to rise in importance and influence (NOAA Aquaculture Program, 2011).
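As a rough check on that growth figure, compounding 8% per year over thirty years implies roughly a tenfold expansion of output:

```python
# Compound-growth check: ~8% average annual growth sustained for
# thirty years multiplies output by about ten.

def compound_growth(rate: float, years: int) -> float:
    """Multiplicative growth factor after `years` of annual `rate`."""
    return (1.0 + rate) ** years

factor = compound_growth(rate=0.08, years=30)
print(f"30 years at 8%/yr -> roughly x{factor:.1f}")
```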

Fertilizers

Fertilizers are compounds that increase plant growth because they contain phosphorus, nitrogen, or potassium. The two major groups are organic fertilizers, which contain natural compounds from local sources such as manure, and synthetic fertilizers, which are specifically manufactured to maximize plant growth. These manmade varieties are far more concentrated and effective than their organic counterparts, and they are categorized by three distinct criteria: absorption method, longevity, and solubility. The two main absorption routes are through the roots or the leaves of the plants; while synthetic fertilizers can be applied using either method, organic varieties are limited to the former. Longevity is determined by the intended use of the product: fertilizers used for agricultural purposes are designed to provide nutrients for a much longer period than similar compounds designed for horticultural applications (Longacre, 2008). The maximum solubility of a fertilizer is the amount of the compound that can be dissolved in water at a certain temperature, and therefore applies only to products sprayed onto crops in liquid form. Fertilizers with a high potency have a lower solubility than products with a low potency (Walsh, 2007). Synthetic fertilizers are essential to modern farming, but they have many harmful effects. These products can alter soil quality, making poor land arable and rendering local knowledge of soil conditions obsolete. Because of this, large commercial farming

operations now focus their efforts on growing a single type of crop, such as corn or wheat, the continued planting of which robs the soil of its nutrients. Because of this system, current yields of corn, soybeans, and other major crops cannot be sustained without massive amounts of nitrogen-, potassium-, and phosphorus-based fertilizers (Longacre, 2008). Although these fertilizers have few detrimental effects if applied correctly, their overuse can harm the environment. Runoff into nearby waterways can harm aquatic organisms and cause algal blooms, which deplete the oxygen present in the waterway; these blooms are caused by an excess of phosphorus, which promotes uncontrolled algae growth (Longacre, 2008). The overuse of synthetic fertilizers also reduces soil quality because current farming practices forgo crop rotation: the continued growth of one crop quickly robs the soil of its nutrients, making the application of artificial fertilizers a necessity. Because of these unfavorable effects, many farmers are attempting to use organic fertilizers as a more sustainable substitute. These fertilizers come from local sources and, most importantly, have no negative effects on the environment. Unlike synthetic fertilizers, these natural products improve aspects of soil quality including water retention, soil aeration, and pH level. Because of the improved soil quality that organic fertilizers provide, crop yields are superior to those of farming methods that use synthetic fertilizers (Walsh, 2007). Organic fertilizers are also longer lasting because they release their nutrients more slowly. Despite these advantages, these products contain much lower levels of nutrients than their synthetic counterparts; because of this lower concentration, organic fertilizers are much bulkier and therefore more expensive to apply. 
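The bulk penalty follows directly from nutrient arithmetic. Fertilizer grades are conventionally stated as percentages of nutrient by weight; the specific grades below (a 30% nitrogen synthetic product versus roughly 2% nitrogen composted manure) are hypothetical examples chosen for illustration, not values from Walsh (2007).

```python
# Illustrative sketch: mass of fertilizer needed to deliver a given
# amount of nitrogen, for a concentrated synthetic product versus a
# low-analysis organic one. Grades are hypothetical percentages of
# nitrogen by weight.

def mass_for_nitrogen(target_n_kg: float, n_percent: float) -> float:
    """Fertilizer mass (kg) needed to supply target_n_kg of nitrogen."""
    return target_n_kg / (n_percent / 100.0)

synthetic = mass_for_nitrogen(100, 30)  # e.g. a 30% N synthetic product
composted = mass_for_nitrogen(100, 2)   # e.g. ~2% N composted manure

print(f"synthetic: {synthetic:,.0f} kg to supply 100 kg N")
print(f"composted: {composted:,.0f} kg to supply 100 kg N")
```

Supplying the same nitrogen from a 2% source takes fifteen times the mass of a 30% source, which is why organic fertilizers cost more to haul and spread even when the nutrients themselves are cheap.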
Although their short-term costs exceed those of currently preferred farming methods, organic fertilizers are much more sustainable and provide much greater long-term advantages (Walsh, 2007).

No-till Agriculture

Another method of maintaining soil quality is no-till farming. Because traditional plowing causes the soil to lose valuable nutrients, this farming method eliminates the practice outright (Obach, 2007). Instead, using specialized equipment, seeds are inserted into narrow slots in the ground. Normally, a portion of soil nutrients is brought to the surface by plowing and then carried away by wind and water; because the no-till method keeps these nutrients in the soil, crop yields are significantly higher than with traditional farming methods. Besides higher yields, this method drastically reduces the amount of fertilizer necessary to grow crops by inserting small amounts of it into the same slots as the seeds (Obach, 2007). Because fertilizers and pesticides are applied in this controlled manner, there is limited runoff into nearby waterways. This high-yield, low-labor farming method translates into better profits, which explains the spread of this farming technique from 7 million acres in 1990 to 50 million acres in 2002 (Obach, 2007).

Pesticides

The use of pesticides has been commonplace in agriculture for over 100 years. The types of pesticides have varied from decade to decade, but most pesticides are highly toxic. For instance, arsenical pesticides were used in New Jersey starting in 1888 and extending into the 1960s. They were then phased out in favor of even more lethal synthetic pesticides such as DDT starting in 1967

(Murphy & Aucott, 1997). Recently there has been a shift away from farming methods involving large amounts of pesticides in favor of other approaches such as integrated pest management. According to David Pimentel, pesticide use each year comprises 500,000 metric tons of 600 different kinds of insecticides and fungicides. Despite this massive effort to protect crops, 37% of the U.S. harvest is destroyed by insects, disease, and weeds. Since the end of World War II, crop loss has increased by 6% despite the use of ten times the amount of pesticides, demonstrating a clear decline in the effectiveness of pesticides (Pimentel, 2003). One explanation for this drop in effectiveness is that excess application of pesticides causes insects to evolve resistance to the chemicals used. Farmers respond to the increase in insects, weeds, and fungi caused by pesticide resistance by applying more pesticides, which amplifies the problem; Pimentel estimates that 10% of pesticides are used just to combat increased resistance (Pimentel, 2003). Along with their dwindling potency, pesticides cost the U.S. 10 billion dollars in damage to the environment annually (Pimentel, 2003). A large portion of this loss is attributed to the death of beneficial predators, because pest populations skyrocket as their natural predators are killed; fungicides also kill organisms that are critical for controlling insect populations. Because natural predators control 50% of the pest population, $520,000,000 is spent on crop loss and additional pesticide use (Pimentel, 2003). High levels of pesticides in the environment do considerable damage to wildlife. Because of streams heavily laced with pesticides, $100,000,000 worth of fish are killed annually, and pesticides in the soil destroy decomposers such as worms and microbes, causing a buildup of dead organic matter. 
Pesticides also destroy plants that birds and mammals use as shelter, reducing the populations of both. These chemicals also harm the animals directly: seventy-two million birds are killed each year by exposure to pesticides, costing $2,100,000,000 in damage (Pimental, 2003). The use of pesticides can also adversely affect the target plants themselves. Even in moderate doses, pesticides can affect the growth of plants, and plants that carry particularly high levels of pesticides cannot be sold and must be destroyed (Pimental, 2003). Often pesticides are accidentally applied to crops near the target crops because the process of spreading pesticides is inaccurate and inefficient. When spread by plane, only 25%-50% of the pesticides fall on the intended crop; when spread using ground methods, only 65%-90% do. Once applied, pesticides remain in the soil, making the farming technique of crop rotation impossible because past pesticides could poison future crops. The total economic toll of direct pesticide damage to crops is $1,391,000,000 (Pimental, 2003).

Because of these massive disadvantages, many farmers have adopted integrated pest management (IPM) systems. Without the use of synthetic pesticides, these systems protect crops against pests while minimizing long-term damage to the environment. Instead of attempting to eliminate organisms that damage crops, IPM merely limits these pests to an acceptable level (Obach, 2007). This is accomplished using a variety of methods. First, crops are only grown in areas where they are resistant to the local predators and weather conditions. Second, emphasis on crop health reduces the impact of pests and predators. Third, sterile predators are released and crop rotation is implemented to disrupt the reproductive cycles of pests, slowly reducing their numbers. Mechanical solutions such as crop-covering nets and manual pest removal may also be used (Obach, 2007).

Genetically Modified Organisms

Genetically modified crops and animals are organisms that have been altered using genetic engineering, a process that introduces genes from one species into the DNA of another organism in order to give that organism favorable characteristics. Although farmers have crossbred plants to achieve desirable traits since the beginnings of agriculture, direct manipulation of DNA was not possible until the 1970s. During the first part of this decade, Stanley Cohen and Herbert Boyer created the first genetically modified organism, an E. coli bacterium. Practical applications of this new technology were not realized until years later. In 1982, a bacterium was modified to produce insulin; a few years later, a genetically modified yeast that made chymosin was used in cheese production. In 1988, the first genetically modified animal, a mouse known as the oncomouse, was announced. Because these genetically modified bacteria and animals contain genes from different species, they are known as transgenic organisms (Davis, 2007).

There are two methods used to engineer transgenic organisms. The first involves the bacterium Agrobacterium tumefaciens, which can be used to modify plants because it naturally transfers genes to its host plant. Taking advantage of this natural process, scientists insert useful genes into Agrobacterium tumefaciens and then use the bacterium to infect the plants they wish to modify, transferring the desired genes (Davis, 2007).
The second method involves a device that fires genetic material directly into the nuclei of cells, causing some cells to incorporate the new genetic material into their DNA. Because this process does not modify all of the injected cells, numerous cells must be injected and only the ones that actually incorporate the new DNA are kept (Davis, 2007).

Genetically modified plants are a major component of the modern agricultural sector. As of 2005, there were 90 million acres of genetically modified crops worldwide, the majority of which were soybeans, cotton, corn, and canola. The most common modification applied to these crops is herbicide resistance, which accounts for 71% of all genetically modified crops (Davis, 2007). For example, the company Monsanto makes Roundup-resistant soybeans. Because of the success of this and other modifications, such as a gene from Bacillus thuringiensis that allows the plant to produce pesticides, soybeans make up 61% of all genetically modified crops. After soybeans, the next most common genetically modified crop is Bt cotton (Davis, 2007); Bt stands for Bacillus thuringiensis. When the insecticide-producing bacterial gene is inserted into the DNA of cotton, the plant is able to produce the exact same pesticide as the bacterium. Other crops that have been modified with the Bt gene include potatoes and corn. Crops that naturally produce pesticides spare farmers the work and cost of applying sprayed pesticides, which are also more harmful to the environment. These Bt crops compose one in five of all genetically modified crops (Robbins, 2007).

Bt cotton is the second most widely grown genetically engineered crop.

Unlike genetically modified plants, genetically modified animals do not play a major role in the agriculture of today. The first commercially available modified animal, a fluorescent fish with no agricultural value, was released in 2003. Commercially valuable genetically modified animals, such as faster-growing trout, are currently being researched, but none of these organisms has made it through the research stage.

Parallel to the development of genetically modified organisms is the rise of genetically engineered seed patents. Plants that reproduce asexually have been eligible for private ownership since the 1930s because of the Plant Patent Act of 1930. Later, the Plant Variety Protection Act extended these ownership rights to plants that reproduce sexually. There was no protection for genetically engineered plants until 1985, when the Hibberd verdict established that they too could be privately owned (Klepek, 2007). During the 1990s, international organizations such as the Union for the Protection of New Varieties formed in response to the need for more oversight of the ownership rights of genetically modified plants. The power of this organization was bolstered by the General Agreement on Trade-Related Intellectual Property Rights passed by the World Trade Organization in 1994 (Klepek, 2007).

Despite these advances in patents and protections, many question the ethics of private ownership of genetically modified organisms. Because these plants are owned by private for-profit corporations, most genetically modified seeds go to large farming operations that can pay the extra premium rather than to local farmers who cannot. Because these larger agricultural operations have access to more resilient genetically modified seeds, they may displace smaller farms that use natural seeds (Klepek, 2007).
Local farmers can also be prosecuted for patent infringement if genetically modified crops accidentally cross-breed with their crops. The concern that genetically engineered crops only benefit developed countries is not unfounded, because almost no genetically modified crops are designed to help developing or undeveloped areas. The prime example of genetically engineered crops intended to help developing areas is the cultivated variety known as Golden Rice, but this product has so far been of little actual value. Designed by Ingo Potrykus and Peter Beyer to produce vitamin A, this modified rice was meant to supplement vitamin-deficient, rice-based diets in Asian countries, but it was not released for use. In addition, the rice contains only minute amounts of vitamin A, too little to affect health and well-being (Stone, 2007). Because of this bias against small farms, the International Treaty on Plant Genetic Resources for Food and Agriculture was enacted to protect local farmers from the detrimental effects of the spread of genetically engineered crops.

There are also concerns about the effects of genetically modified plants on the environment. Modified plants have the potential to cross-pollinate with native plants, which causes problems due to a special gene found in modified plants: in order to protect genetically modified plants from theft, companies add a gene that disables the reproductive capabilities of the crop. If this trait spreads to native crops, farmers will be faced with widespread crop failures that can lead to preventable famines and massive economic damage (Klepek, 2007).

Conclusion

Humans depend on a wide range of agricultural products, and agriculture is the basis of the economic stability of developed countries. In the past few decades, new technology and techniques have contributed to improving this necessary practice. Sustainable agriculture focuses on low-input, organic, natural farming methods that reform conventional practices and prove to be beneficial socially and economically. Farmers are minimizing tilling and water use while promoting healthy soils and avoiding the use of pesticides and fertilizers. Urban agriculture offers to increase local food supply and improve its distribution. Modern architecture has encouraged the construction of infrastructure designed to provide means of agriculture in urban communities.
Transgenic crops increase yields in high-stress growing areas and expand the total amount of arable land worldwide, though their environmental side effects have yet to be minimized. Sustainable farming technologies may one day prove the superior alternative to conventional and industrial agriculture.

Bibliography

Allen, P., & Sachs, C. (1991). What do we want to sustain? Retrieved from University of California Santa Cruz, Center for Agroecology and Sustainable Agriculture website: http://casfs.ucsc.edu/publications/sustainability-in-the-balance/what-do-we-want-tosustain

Aquaculture. (2010). WorldWildlife.org. Retrieved April 17, 2011, from http://www.worldwildlife.org/what/globalmarkets/aquaculture/index.html

Bellostas, N., Dahlmann, C., Dibet, A., Gooding, M., Hauggard-Nielsen, H., Launay, M., & Monti, M. (n.d.). Retrieved from http://www.groupe-esa.com/IMG/pdf/Poster_Marseille_A4.pdf

Benefits of mulch. (n.d.). Retrieved from Stanford University website: http://lbre.stanford.edu

Bergkvist, G., Stenberg, M., Wetterlind, J., Bath, B., & Elfstrand, S. (2010). Clover cover crops under-sown in winter wheat increases yield of subsequent barley: effect of N dose and companion grass. Field Crops Research, 120, 292-298.

Biointensive farming. (n.d.). Retrieved from http://watershed.allegheny.edu

Biotech's plans to sustain agriculture. (2009, October). [Electronic version]. Scientific American.

Campiglia, E., Mancinelli, R., Radicetti, E., & Marianri, S. (2009). Effect of cover crops and mulches on weed control and nitrogen fertilization in tomato. Crop Protection, 29, 354-363.

Campiglia, E., Mancinelli, R., Radicetti, E., & Marianri, S. (2010). Legume cover crops and mulches: effects on nitrate leaching and nitrogen input in a pepper crop (Capsicum annuum). Nutrient Cycling in Agroecosystems, 89, 399-412.

Charles, M. T., Arul, J., & Stevens, C. (2008). Emerging approaches and technologies to improve the disease resistance of horticultural crops. Retrieved March 24, 2011, from cabdirect.org

Despommier, D. (2009, November). The rise of vertical farms. [Electronic version]. Scientific American.

Dufour, R. (2000). Farmscaping to enhance biological control. Retrieved from http://www.attra.org/attra-pub/farmscape.html

Jensen, M. (1997). Hydroponics. University of Arizona. Retrieved April 15, 2011, from http://ag.arizona.edu/pls/faculty/MERLE.html

Klepek, J. (2007). Genetic patents and seeds. Encyclopedia of Environment and Society. SAGE Publications. Retrieved April 7, 2011, from http://www.sage-ereference.com/environment/Article_n456.html

Kish, S., & Martin, J. (2006). Organic and integrated farming key to lowering nitrogen leaching. Retrieved from United States Department of Agriculture, National Institute of Food and Agriculture website: http://www.csrees.usda.gov/newsroom/news/2006news/nitrogen_organic.html

Kuepper, G., & Dodson, M. (2001). Companion planting: basic concepts & resources. Retrieved from http://attra.ncat.org/attra-pub/PDF/complant.pdf

Lambert, T. (n.d.). Brief history of farming. Retrieved from http://www.localhistories.org/farming.html

Longacre, A. (2008). Fertilizer. In AccessScience. Retrieved April 10, 2011, from www.accessscience.com

Macon, G. (n.d.). Importance of agriculture. Retrieved from http://ezinearticles.com

Meng, S., Xu, H., & Wang, F. (2010, June). Research advances of antimicrobial peptides and applications in food industry and agriculture. Retrieved March 24, 2011, from ingentaconnect.com

Murphy, E. A., & Aucott, M. (1997). An assessment of the amounts of arsenical pesticides used. The Science of the Total Environment, 218, 89-101.

NOAA Aquaculture Program. (2011). What is aquaculture? National Oceanic and Atmospheric Administration.

Obach, B. (2007). No-till agriculture. Encyclopedia of Environment and Society. SAGE Publications. Retrieved April 17, 2011, from http://www.sage-ereference.com/environment/Article_n783.html

Obach, B. (2007). Integrated pest management (IPM). Encyclopedia of Environment and Society. SAGE Publications. Retrieved April 17, 2011, from http://www.sage-ereference.com/environment/Article_n565.html

Organic farming and climate change. (2008). Retrieved March 22, 2011, from orgprints.org

Pimental, D. (2003). Environmental and economic costs of the application of pesticides primarily in the United States. Environment, Development and Sustainability, 7, 229-252.

Preston, S. (2003). Applying the principles of sustainability farming. Retrieved from http://www.attra.org/attra-pub/trans.html

Robbins, P. (2007). Bt (Bacillus thuringiensis). Encyclopedia of Environment and Society. SAGE Publications. Retrieved April 7, 2011, from http://www.sage-ereference.com/environment/Article_n118.html

Screening Arabidopsis activation tagged lines based on tolerance to low Zn in hydroponics. (2006). University of California.

Shilli-Touzi, I., De Tourdonnet, S., Launay, M., & Dore, T. (2010). Does intercropping winter wheat (Triticum aestivum) with red fescue (Festuca rubra) as a cover crop improve agronomic and environmental performance? A modeling approach. Field Crops Research, 116, 218-229.

Stone, G. (2007). Golden Rice. Encyclopedia of Environment and Society. SAGE Publications. Retrieved April 4, 2011, from http://www.sage-ereference.com/environment/Article_n482.html

Stone, G. (2007). Genetically modified organisms. Encyclopedia of Environment and Society. SAGE Publications. Retrieved April 7, 2011, from http://www.sage-ereference.com/environment/Article_n454.html

Sustainable agriculture. (n.d.). Retrieved from http://ezinearticles.com/?Sustainable-Agriculture&id=2422317

United States Department of Agriculture, Natural Resources Conservation Service. (n.d.). Mulching. Retrieved from http://www.nrcs.usda.gov/feature/backyard/mulching.html

United States Department of Agriculture, Natural Resources Conservation Service. (n.d.). Benefits of mulch. Retrieved from http://www.nrcs.usda.gov/feature/backyard/benmulch.html

Van Asten, P. J. A., Wairegi, L. W. I., Mukasa, D., & Uringi, N. O. (2010). Agronomic and economic benefits of coffee-banana intercropping in Uganda's smallholder farming systems. Agricultural Systems, 104, 326-334.

Walsh, J. (2007). Fertilizer. Encyclopedia of Environment and Society. SAGE Publications. Retrieved April 17, 2011, from http://www.sage-ereference.com/environment/Article_n408.html

Zendonad dos Santosi, N., Dieckow, J., Bayer, C., Molin, R., Favaretto, N., Pauletti, V., & Piva, J. T. (2010). Forages, cover crops and related shoot and root additions in no-till rotations to C sequestration in a subtropical Ferralsol. Soil & Tillage Research, 111, 208-218.

Chapter 13 Endangered Plants and Animals


Haylee Caravalho, Hunter Lindquist, and Ryan Nutile

Introduction

An endangered animal is a species at risk of becoming extinct in the immediate future because of a decrease in its total population. In 2011, the IUCN (International Union for Conservation of Nature) Red List of Threatened Species determined that 17,315 species were considered vulnerable, endangered, or critically endangered. While this number seems relatively small in comparison to the total number of described species, close to 1,740,330, only 47,978 species were evaluated that year, meaning that roughly 36% of those evaluated were threatened (Conservation of biodiversity, 2011).

The crisis of rare and endangered species is not a new problem. There have always been a multitude of causes for extinction or severe endangerment. These causes can be the fault of man, wildlife, or habitat, but the reason can also be as simple as a limited species population. Although extinction is a natural process, people have greatly increased the rate at which plants and animals are disappearing. For centuries, the human population has been the underlying root of this issue, and people are now realizing that plans must be put into action to protect these threatened species. All living organisms have a weakness or an obstacle which they cannot overcome; it is when this obstacle is imposed on them that a serious threat is made upon their entire way of life. It was once true that extinction occurred naturally due to physical and biological conditions; however, many more species are now at risk due to human activity. The impact that people have had on these plants and animals has been great enough that extinction can no longer be called a purely natural process. More than 500 species of plant and animal life have become extinct in North America since 1620, and that number is expected to rise greatly in the next 100 years. This projected increase is based on human advancements and population growth.
Both of these areas of development require increased amounts of space and technology, which could greatly impact the environment and well-being of countless species throughout the world (Why Saving Endangered Species Matters, n.d.). The entire ecosystem is affected as various organisms become extinct. All species coexist in ways that are vital to their survival. This interconnection benefits not only specific plants and animals but environments all over the world, much like food webs in which individuals depend on those at lower trophic levels for survival. When a species becomes rare or extinct, other species may be affected and may follow the same path to endangerment. This is referred to as the domino effect of extinction. When a species of plant or animal dies out, many organisms that are uniquely adapted to that species will
also cease to exist. These so-called affiliate species are completely dependent on their hosts. It has been estimated that if the approximately 12,000 plants and animals currently on the endangered species list were to become extinct today, another 6,000 species would disappear as well due to the loss of a depended-upon species. This co-extinction seems to be most common among insects, fungi, and other small organisms that live on the surfaces of plants or the skin of animals (Extinctions Could Have Domino Effect, 2004).

Animals

In the early nineteenth century, the passenger pigeon accounted for over 40% of the total bird population in North America, with numbers reaching close to five billion. The accumulation of birds was so immense that the nesting grounds stretched across one billion acres of forestland. A single flock could reach up to a mile wide and 300 miles in length, taking hours to pass over a given area and casting the land below into complete darkness (Passenger pigeon, 2009). On September 1, 1914, Martha, the very last passenger pigeon, died alone in captivity.
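The headline figures in this introduction are easy to sanity-check. The short script below recomputes, from the counts quoted above, what share of evaluated species was classed as threatened and the size of the estimated co-extinction "domino" effect; the numbers are the ones cited in this chapter, while the rounding and variable names are our own:

```python
# Figures as cited in this chapter (IUCN Red List, 2011).
threatened = 17_315      # species listed as vulnerable, endangered, or critically endangered
evaluated = 47_978       # species actually evaluated that year
described = 1_740_330    # approximate total number of described species

# Fraction of *evaluated* species that were threatened. Note this is
# very different from the fraction of *all described* species.
frac_evaluated = threatened / evaluated
frac_described = threatened / described

print(f"Threatened share of evaluated species: {frac_evaluated:.1%}")  # ~36.1%
print(f"Threatened share of described species: {frac_described:.1%}")  # ~1.0%

# Co-extinction ("domino") estimate cited above: if the ~12,000 listed
# species vanished, an estimated 6,000 affiliate species would follow.
listed, affiliates = 12_000, 6_000
print(f"Co-extinction amplification: {affiliates / listed:.0%}")  # 50%
```

Recomputing the evaluated-species share shows it is nearer 36% than 40%, a useful reminder to check headline percentages against the raw counts they come from.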

The passenger pigeon, now extinct, was once the most common bird on the planet.

What Causes Endangerment?

The decline of different species is considered a natural part of evolution. Since the earliest life appeared on earth, changes in environmental factors have diminished populations; the endangerment and extinction of animals were caused by multiple natural occurrences. Natural selection, the process by which organisms better adapted to their environment survive and reproduce, was an essential factor in survival. Some animals simply were not fit for their environment. When an animal cannot adapt, the species declines until its eventual extinction occurs (Natural Selection, 2011). For those animals that are able to adjust to the surrounding environment, other natural factors such as predation and disease may still reduce the population.

Climate change is a current, widely debated phenomenon that is causing some species to become endangered. As the environment around an animal changes, the species must adapt; however, as the climate rapidly shifts, some animals are not able to adjust in time. A primary example is the polar bear. Rising temperatures have been melting away the ice that polar bears depend on for their primary habitat, breeding, and hunting grounds. Without this ice, the bears must swim longer distances to reach a food source and must find other grounds for living (Polar bear, 2011). These same issues are affecting the bears' main source of food, seals. Many seal species are approaching endangerment as temperatures rise, pushing not only the polar bears but also the seals further toward extinction. Without food and shelter, it is likely that the polar bear will become extinct in the near future.

Example of a habitat destroyed by deforestation.

While many natural changes have had a negative effect on animals, the primary complication comes directly from human involvement. Over the past 10,000 years, the number of endangered species has increased so rapidly that scientists are not able to determine exactly how many animals have been affected (Wildlife conservation, 2011). Approximately 99% of the threatened species today are at risk because of human involvement, an exceptionally high figure in comparison to statistics from earlier eras (Endangered species, 2011). As the human population has increased significantly, animal populations have decreased. The need to expand land for development has led to the demolition of many natural habitats of endangered species. Without homes, the animals are forced to move elsewhere, sometimes to locations where they simply cannot adapt. The destruction of habitats is the leading human-caused contributor to the increasing number of endangered species. Each time a new town is built, a forest is often demolished in the process. The forest provides a habitat for an entire ecosystem of species that are then forced to relocate and start anew (Endangered species, 2011). Pollution by humans is another factor leading to the decline of species. Each time a human drives a car or a company burns waste material, the air becomes polluted, much as when companies dump hazardous material into the ocean. Each time pollution occurs, species are at risk of chemical poisoning. Not only do the chemicals affect the habitats of some animals, but the aftereffects can be devastating for many months past the initial problem (Dublin, 2011). This was seen with the BP oil spill: the water was polluted by vast amounts of oil that spread across miles of ocean, and aquatic organisms were poisoned
by dangerous toxins in the oil. Multiple populations of fish were left without clean water and were killed by the spill. While several months have passed since the oil spill, the damage done to the wildlife is an ongoing problem. While habitat loss and pollution are the larger problems that human involvement has caused for endangered species, multiple smaller factors still hold great significance. The increased role of hunting and poaching has led to the extinction of many endangered species. Whether hunting for personal pleasure or to gather specimens for medical testing, people are driving multiple species to extinction. Some even hunt merely to collect the carcasses of the deceased animals (Dublin, 2011). While unpreventable natural factors leading to the endangerment of species exist, human activity causes many controllable problems that can be prevented. By reducing the impact that humans make on wildlife, the number of animals becoming extinct will lessen significantly.

Protecting Animals

A large number of animal habitats are destroyed through human interference. The growing human population is causing a need for expansion; as more houses are built, more natural habitats are destroyed. It is because of this that protected wildlife sanctuaries are forming throughout the country. These habitats allow endangered species to thrive and promote captive breeding to increase the number of animals in a given population. A wide variety of programs aid in the collaborative effort to prevent the extinction of unique plants and animals. Legislation such as the Endangered Species Act of 1973 has helped to establish programs that encourage the protection of species in danger of extinction. These programs include land conservation and tracking systems that allow groups of animals to be monitored in their natural habitat.
Wildlife refuges and nature reserves are well known in the fight to save plant and animal species from becoming endangered or extinct. They allow species to live in an environment protected from outside threats such as predators, excessive hunting, or deforestation. President Theodore Roosevelt established the first wildlife refuge in 1903, and there are now over 550 refuges in the country (National Wildlife Refuge, n.d.). Some reserves are vital to the survival of particular plants and animals. The Wolong Nature Reserve in China houses over 6,000 species, which makes up 17% of the biodiversity found in the nation. It is home to severely endangered species such as the giant panda (Ailuropoda melanoleuca), snow leopard (Panthera uncia), and dove tree (Davidia involucrata) (Wolong Nature Reserve, n.d.). GPS tracking systems allow species to live in their natural habitat while humans track them with specially designed collars or tags. These markers can locate an animal at any given time, and logs are often kept to record animal activity. By keeping an eye on these creatures, researchers are able to better estimate survival and breeding rates. For example, whooping cranes (Grus americana) were tracked in order to study their migration patterns, allowing professionals to observe the number of birds that did not survive. GPS tracking also allows researchers to answer questions that they could not before. The bighorn sheep (Ovis canadensis) is an example of an animal which once had a population in the millions but is now faced with the possibility of extinction. Tracking systems have allowed these
animals to be observed in order to identify how a specific herd survives the most brutal weather conditions in the harsh mountains of Wyoming. This knowledge could potentially help other herds of the same species and save the group from extinction (GPS Tracking Bighorn Sheep In Wyoming, 2010). This is just one example of how GPS tracking systems can greatly help in the struggle to save endangered species.

Researchers use GPS tracking to follow the newly endangered bighorn sheep.
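Biologists typically turn a sequence of GPS fixes into movement summaries such as daily distance traveled. As a toy illustration of that step (the coordinates below are invented, not taken from the Wyoming study), the sketch computes the great-circle distance between successive fixes with the haversine formula:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius, ~6371 km

# Hypothetical sequence of daily fixes from one collared animal
# (latitude, longitude); these coordinates are illustrative only.
fixes = [
    (43.10, -109.60),
    (43.14, -109.55),
    (43.20, -109.50),
]

# Sum the leg-by-leg distances between consecutive fixes.
total = sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))
print(f"Distance covered across fixes: {total:.1f} km")
```

Summaries like this, accumulated over a winter, are what let researchers compare how different herds move and survive under different conditions.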

Despite the increasing rate at which species are declining, new advances in technology are assisting in the fight to save endangered species. In order to increase the population of a given species, scientists must determine the factors causing it to dwindle. This means gathering a wealth of knowledge on animal behavior, breeding patterns, eating patterns, and habitat structure. The scientists involved in these studies must collect this data in the least invasive way possible so that further damage is not done to the species. Currently, the least invasive method used is animal tracking. Scientists use multiple varieties of tracking in order to collect information. In most cases, location-tracking tags are permanently affixed to an animal so that researchers can track its location throughout its life. While this method has been helpful in the past, it is costly and has proven to cause discomfort for the animals involved (Automatic image recognition, 2011). It is because of this that alternative tracking methods are beginning to arise. Researchers are currently developing what is known as automatic image recognition. This method is based solely on image recognition and does not require any invasive procedures. Researchers at Duke University use automatic image recognition with leatherback sea turtles that nest on nearby beaches (Automatic image recognition, 2011). The leatherback is one of the largest sea turtle species on earth; currently, only about 34,000 adults are left. The sea turtles are distinguished by a cloudy pink spot that rests on top of their heads. Much like a fingerprint, each spot is different for every sea turtle. When the leatherbacks come to the shore to nest, researchers are able to gather information on the health of each turtle and capture photographs of the pink spots.
When the turtles return to the shore to nest again, automatic image recognition enables the scientists to identify each turtle individually (Automatic image recognition, 2011).
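The matching idea described above, comparing a fresh photograph of a turtle's pink spot against a library of earlier photographs, can be caricatured as nearest-neighbour comparison of image descriptors. The sketch below is a deliberately simplified toy under that assumption: real photo-identification systems use far more robust image features, and the tiny pixel "descriptors", the turtle names, and the similarity threshold here are all invented for illustration:

```python
# Toy sketch of photo-identification by nearest-neighbour matching.
# Each "image" is reduced to a small grayscale descriptor (here just a
# flat tuple of pixel intensities); real systems use robust features.

def similarity(a, b):
    """Simple score: 1 / (1 + mean absolute pixel difference)."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 / (1.0 + diff)

def identify(new_image, library, threshold=0.2):
    """Return the best-matching known individual, or None if no match."""
    best_id, best_score = None, 0.0
    for turtle_id, stored in library.items():
        score = similarity(new_image, stored)
        if score > best_score:
            best_id, best_score = turtle_id, score
    return best_id if best_score >= threshold else None

# Invented descriptors for two previously photographed turtles.
library = {
    "turtle-A": (10, 200, 40, 180),
    "turtle-B": (90, 30, 220, 60),
}

print(identify((12, 198, 38, 182), library))    # close to turtle-A
print(identify((120, 120, 120, 120), library))  # matches nothing well
```

The threshold is what separates "re-sighting of a known turtle" from "new individual to add to the library", which is exactly the decision the researchers' system has to make each nesting season.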

Researchers at Duke University use automatic image recognition to track leatherback sea turtles.

With image recognition, a high-quality camera analyzes the pink spot on each turtle's head. The analysis starts at the center of the marking and studies its overall shape; the resulting image is then compared to those taken in the past. Once a match is made, scientists are able to gather more statistics on the turtle and compare its health over time. With automatic image recognition, helpful information is collected regarding the growth of the leatherback sea turtle. By looking at the health of a particular turtle over the years, scientists are better equipped to determine the key factors leading to the decline of the species.

Another problem that results in the endangerment and extinction of several species is the medicinal use of animal parts. In many cultures, the practice of using animals in medical applications has been around for centuries. Zootherapy, as it is most commonly called, is the use of animals and products derived from them in healing (Cavaliere, 2010). Currently, over 1,500 different animal species have therapeutic uses in Traditional Chinese Medicine (TCM). In some applications of zootherapy, the animals involved are not killed or harmed in any way. These medicines only require animal fur, urine, feathers, excrement, or other by-products. With some deer species, even antlers can be harvested without any damage to the animal (Cavaliere, 2010). More often than not, however, animals must be killed in order to obtain the resources needed for medicines. When intestines, skin, and other organs are required, there is no non-invasive way to gather materials without slaughtering the animal. It is because of this that most trade of animals for medicinal use has been banned in multiple countries.
Even with trade prohibited, illegal animal trafficking continues to take place, with a yearly income of over $6,000,000,000 (Cavaliere, 2010). Hundreds of animal populations are slowly declining as trade continues. In China, the use of tiger bone in medical applications has pushed the wild tiger close to extinction. Once as numerous as 100,000, the tiger population

has fallen to under 3,200 in little over a century. It is because of this that scientists are desperately trying to promote alternatives to animal medicines.

Tigers are on the verge of extinction due to hunting.

For several applications, bones from other animals, such as pigs and dogs, can be used in place of tiger bone. In some cases, however, this is not enough. For instance, the herbal company Mayway is currently promoting alternatives to medicines made with tiger products. Once a marketplace for such goods, the company is now dedicated to the fight to save endangered species. The company published a list of tiger-friendly products to show that other medicines were available that could work just as well. These products are made entirely of plants and herbs, allowing the animal population to remain unharmed (Cavaliere, 2010). Other companies share the same goal as Mayway, and the search for plant alternatives has been growing over the past few years. It is likely that as more plant substitutes are introduced, the number of endangered species will decline as well.

While hands-on technology has become the main focus in saving wildlife, other efforts have been made to spread knowledge to those who remain unaware of the growing problem. Bans have been placed on a variety of harmful activities, including animal testing, poaching and hunting, and the destruction of natural habitats. Many forests are now under legal protection and cannot be touched; however, there are several more issues that must be accounted for. Pollution is steadily becoming a larger problem as time continues. In the mid-1900s, the pesticide DDT was causing widespread damage to the bird population. Not only were adult birds dying from the harsh chemicals ingested when they ate contaminated insects, but the pesticide caused further damage to their offspring. DDT deteriorated the calcium in the shells of bird eggs, causing the weakened eggs to shatter when the mothers sat on them.
After this knowledge was released to the public, a ban was placed on the pesticide in hopes of minimizing further damage. Scientists believe that this ban is what brought the bald eagle, peregrine falcon, and brown pelican back from the verge of extinction (Bailey, 2004).

In the past decade or so, more scientifically advanced options have been explored to protect species that have the potential to one day become extinct. Although it has had a small success rate and is not used often, cloning has become a realistic option in the fight against extinction. In 1996, the sheep known as Dolly became the first mammal to be successfully cloned from an adult differentiated cell, in experiments conducted by Dr. Ian Wilmut. It was the first completely successful case since John Gurdon cloned frogs in 1958; Gurdon's frogs, however, did not develop past the tadpole stage. The cloning process takes an unfertilized egg, an oocyte, and replaces its nucleus with that of a mature, differentiated cell from a fully developed specimen of the same species (Pecorino, 2000). A differentiated cell is one that has adapted for a specific purpose; as cells multiply, they are predetermined and separated into different parts of the organism to perform specific functions. Many believed cloning to be impossible because differentiation was thought to be irreversible, meaning a differentiated cell could no longer produce all the specific, vital cells necessary for another organism. Gurdon demonstrated that differentiation is reversible, but scientists tried and failed for decades to clone other species successfully. Keith Campbell, one of the scientists working with Wilmut, had the idea to use a quiescent cell, one that has temporarily ceased to divide, rather than the constantly dividing cells that had been used in all other cloning trials. This was the breakthrough that produced the first surviving mammal clone.

Image of the first cloned house pet, CopyCat.

Since that time, many more animals have been cloned, and cloning is now becoming a much better known area of study. In 2001, the first household pet was cloned in Texas: a domesticated cat named CopyCat. This success story was a great triumph for science. In fact, some families have paid to save the DNA of a family pet in hopes that one day they may be able to replicate the animal. Even if this process eventually becomes available to consumers, it will not be possible to replicate the beloved pet identically; the personality and appearance of a clone may differ significantly from those of its single parent.

Cloning remains controversial; however, future experimentation may aid the struggle to protect endangered species. As the process becomes more familiar, it may play a much larger part in the fight against extinction, for example in breeding programs. It would allow greatly threatened species to reproduce at a much faster rate, since such species often have great difficulty producing offspring.

Plants and Their Habitats

As scientists discover more new organisms, their estimate of the total number of species increases as well, with current estimates ranging anywhere from 5 to 30 million (Vie et al., 2009). Over 10% of the known species on Earth reside in the tropical rainforests of South America, making this region the most biologically diverse on the planet ("Largest tropical rainforest," n.d.). Specifically, scientists believe that over 40,000 unique plant species exist within the Amazon rainforest, representing a wealth of inimitable wildlife (Silva, Rylands, & Fonseca, 2005). While this biome was once an enormous and thriving ecosystem, the impact of man has recently begun to redefine the area. The most significant impact is that of deforestation, which didn't truly arise until the 1960s (Kirby, 2006) but in recent years has caused the Amazon to lose 17% of its forest cover.

The Amazon Rainforest, the most biologically diverse ecosystem in the world.

Thus, in just 40 years, this problem has resulted in the loss of a significant amount of plant life and perhaps in various extinctions. If action is not taken to limit these threats, the Amazon rainforest and the wildlife it supports could soon be destroyed. Unfortunately, the plight of the rainforest is not unique; similar problems exist all over the world. Deforestation and pollution occur all over the Earth, as do global warming and climate change. As a result, scientists estimate that one-fifth of the world's plants are endangered species (Threat of extinction, 2010). Of these endangered plants, researchers approximate that two to three species become extinct each day (Cryogenic seed bank, 2003). Thus, if nothing is done to alleviate these problems, the world could soon lose 20% of its plant species.

The Venus flytrap (Dionaea muscipula), perhaps one of the most famous plants in the world, is not only carnivorous but also an endangered species endemic to the southeastern U.S. Scientists recently estimated that fewer than 150,000 of these plants remain in the wild. Because the flytrap grows only in these small populations, even the smallest threat could cause it to become extinct in the wild. Fortunately, however, due to the interest and popularity of insect-eating plants, the Venus flytrap is a common household plant. As a result, the total extinction of this species is incredibly unlikely.

A Venus flytrap, now in danger of extinction.

Although the widespread cultivation of the flytrap was certainly not an anti-extinction tactic, it seems that its popularity could prove to save the species. Organizations such as the World Wildlife Fund seem to have caught on to this notion, and a common effort to save an endangered species is now to raise awareness of its plight. While the effectiveness of this method is debatable, technology allows for the rapid and efficient spread of information, such as the plight of endangered species, and without it, this method of saving wildlife could not exist.

Plant Collection

Scientists now approximate that 50% of all endangered plants are found in Hawaii (Kerns, n.d.). Due to their geographic isolation, these species are unable to spread beyond the Pacific islands. However, as non-indigenous peoples have inhabited Hawaii, foreign plant species have been introduced to its habitats. These foreign species interfere with the natural balance of Hawaiian ecosystems; the non-indigenous plants grow so well in Hawaii that they interrupt the growth of its native species. Thus, the native species are unable to compete, and their numbers dwindle. Such has been the case over the past century, and as a result, thousands of native Hawaiian plant species have already become extinct, with many more currently at risk. Efforts are being made to save these plants, however; the National Tropical Botanical Garden (NTBG) strives to collect these endangered species and preserve them in a controlled environment (Conservation, n.d.). Despite these efforts, reports indicate that only 39% of the endangered plants in North America are preserved in such collections, leaving a large margin for improvement (39 percent protected, 2011). While this method may not increase the population of endangered plants, it nearly guarantees their survival, even if only in a laboratory. Additionally, if multiple specimens are collected, this preservation allows for the breeding of plants, and thus the indefinite continuation of the species.

Wood's cycad, now found only in conservatories.

There are quite a few species around the world today that can only be found in captivity. These groups are so greatly endangered that they are not capable of surviving in the wild. They must be kept in zoos or conservatories where they are not threatened. One of these rare species is Wood's cycad (Encephalartos woodii), which has not existed in nature since the explorer J. Medley Wood found it growing in its native habitat of South Africa in 1895. Only a very small number can be observed in botanical gardens today (Karl, 2010).

Seed Banks

While the methods of endangered species preservation mentioned above are widely used, they simply remove the plants from their natural environment and ensure the survival of individual specimens. However, in recent years, the revival of endangered species has become a possibility, largely due to technological breakthroughs. One such breakthrough came in 1983, with the establishment of the first seed bank (Seed bank, 2011). Now popular all over the world, seed banks are secure locations in which the seeds of endangered or threatened plants are gathered and kept in cryogenic isolation, a chilled protective state in which the seeds are maintained. These banks allow for the preservation and repopulation of plants. While this establishment was an important first step, new technologies have since been developed, allowing for greater aid to endangered plants.

Plant Propagation

One such technology, plant propagation, arose in the late 20th century after scientists had established a sufficient understanding of genetics and cellular biology. This technology essentially allows for the cloning of plants, albeit through a complex process. Because each cell of a plant contains all the material required for its reproduction, a diminutive sample from a parent plant can result in the formation of a new organism (Asexual propagation, n.d.). However, this outcome requires significant assistance from scientists and technology. First, propagators must remove a sample from the parent plant in a sterile and precise manner. Next, the sample must be maintained in an environment that will stimulate its growth. Additionally, all disease and sickness should be prevented from affecting the sample, and hormones and other supplements should be used to encourage its rapid growth. If all of these conditions are met, a sample such as a piece of a plant's stem can be used to create an entirely new organism. This technology could be incredibly useful for maintaining a critically endangered species, as it is not a difficult process and is generally successful. Also, the process is not cost- or labor-intensive, making it even more desirable. However, it probably could not be used to repopulate an entire species; it is inherently slow, and only one organism can be produced at a time. Additionally, this process produces genetically identical offspring. This could reduce the genetic diversity of a species, which could be problematic if the species inhabits a changing environment or needs to adapt quickly. Despite these problems, plant propagation is a viable and inexpensive method for maintaining an endangered species.

Micropropagation

Perhaps the most modern technology employed in plant preservation is micropropagation, a more advanced and efficient form of plant propagation. Essentially, this technology allows for the rapid replication of plant material and the formation of hundreds or thousands of new plants from a single parent plant; it is almost a form of rapid cloning. The method is nearly identical to plant propagation; there are only minor differences between the two technologies. In micropropagation, however, after a sample is chosen and isolated, it is allowed to grow until there is a sufficient amount of plant tissue. At this point, the original sample is divided (in some cases with a blender or other methods of separation) into hundreds or thousands of new samples (Micropropagation, 2011). Because each cell of the original sample is capable of regenerating into a new adult plant, this separation allows for more offspring from one original parent organism. Thus, this technology is arguably the most suitable for endangered plants; it can be used to produce hundreds of new organisms from just one parent. Among all strategies of species preservation, micropropagation is the only one that could feasibly be used to repopulate an entire species. However, this technology also has significant drawbacks. It is the most expensive and labor-intensive technology and requires highly specialized scientists.

An example of plants created via micropropagation.

Additionally, although plant propagation may sacrifice some genetic diversity of the target species, micropropagation could certainly eliminate any such diversity; hundreds or thousands of new organisms are cloned from one parent plant, and each offspring will have a genome identical to the parent's. Despite these drawbacks, micropropagation is a growing technology that shows much promise for the revival of endangered species.

Conclusion

Currently, over one-fifth of known plants are classified as endangered species (Threat of extinction, 2010). With numerous species becoming extinct each day, it has become evident that action must be taken to save the remaining organisms. The flora of our planet acts as a colossal carbon sink, absorbing enormous quantities of pollutants from the atmosphere. Without this filter, global warming would accelerate dramatically, and the world would undergo immense climate change, further endangering other species, perhaps even humans. Additionally, over 80% of modern medicine comes from plants (Patsalides, 2010). Thus, if nothing is done to save plants from extinction, we could lose many potential drugs that have not yet been identified or used. Finally, the global ecosystem could be damaged irreversibly if a significant number of plants were to reach extinction; the animals that feed on these plants would soon die out, and this could continue throughout the food chain until eventually humans felt the effect. Clearly, the flora of Earth is too valuable to ignore, and something must be done to save it.

People and organizations can spread awareness of this dilemma through the use of technology, although this will not directly help the plants. Additionally, scientists can remove organisms from their native environments and ensure their survival in a controlled laboratory setting. While this will not help a species to repopulate, it will at least prevent its extinction. Likewise, biologists can collect the seeds of endangered plants, which will both prevent extinction and provide a possibility for repopulation. A greater application of technology is found in plant propagation, which allows for the cloning of plants. This method allows only a very slow repopulation of genetically identical plants, however, and is thus less desirable. Genetic engineering also provides a possible solution, but this technology is better suited to saving plants that are incapable of adapting to their changing environment. The most applicable technology by far is micropropagation, which allows for the rapid cloning of a parent plant and results in hundreds or even thousands of offspring. This method could surely be used to repopulate endangered species, at the cost of their genetic diversity. However, this technology is rather expensive and labor-intensive, and thus has its drawbacks as well. Nonetheless, all of these technologies could, and arguably should, be employed to help save endangered plants. Without these species, the world will never be the same.

In addition to new advances to help save these species, there are many simple methods to help prevent the serious endangerment of future plants and animals. People should be aware of the products they use every day and stay away from harmful chemicals or pollutants. Recycling and promoting green habits can also make a difference. Deforestation and water pollution are examples of practices that destroy the habitats of countless organisms, and once these environments are gone, it may be difficult for those organisms to continue living. Only a small percentage of the organisms on Earth have been identified; there are still millions unknown to science. Countless species could become extinct before they have even been discovered, and this loss not only impacts the biodiversity of the world but also eliminates organisms that may have been useful to people.
Many plants are the key to important medical research, and without the opportunity to observe these species, a life-changing cure could be lost. Only a small percentage of plants have been tested for medicinal value, and thousands are already being used in various forms of medication. All species on the verge of extinction have some ecological role in the world and should be protected.

The problem of endangered species will never disappear completely. Whether the steady decline of natural habitats continues or the amount of pollution intensifies, there will always be animals that cannot thrive naturally in the wild. Although not every species can survive forever, the number of endangered and extinct animals can be brought down to a tolerable level. The passenger pigeon, for example, was driven to extinction by human interference in the forests it inhabited. While endangered species will always exist, reducing the impact made by humans will drastically reduce the number of animals that die each year.

Bibliography

A cryogenic seed bank for preservation of endangered plant species. (2003, April 28). Retrieved from http://www.solutions-site.org/artman/publish/article_35.shtml
Andrabi, A. M. H., & Maxwell, W. M. C. (2007). A review on reproductive biotechnologies for conservation of endangered mammalian species. Animal Reproduction Science, 99(3-4), 223-243.
Asexual propagation. (n.d.). Retrieved from http://aggiehorticulture.tamu.edu/archives/parsons/misc/asexualpropagation.html
Automatic image recognition technology spots endangered leatherback sea turtle. (2011, February 2). Retrieved March 22, 2011, from http://www.cwi.nl/news
Bowie, N., & Chae, E. (2011). Gorillas. Retrieved from http://www.endangeredspeciesinternational.org
Cavaliere, C. (2010). Medicinal use of threatened animal species and the search for botanical alternatives. HerbalGram, (86). Retrieved from http://cms.herbalgram.org
Conservation. (n.d.). Retrieved from http://ntbg.org/programs/conservation.php
Conservation of biodiversity. (2011). Retrieved from http://curiosity.discovery.com
Data Organization. (2011). The IUCN red list of threatened species. Retrieved from http://www.iucnredlist.org
Dodds, J. (1985). Plant genetic engineering. Cambridge University Press.
Dublin, H. (2011). Endangered species. Encyclopædia Britannica. Retrieved April 5, 2011, from http://www.britannica.com/EBchecked/topic/186738/endangered-species
Endangered animals. (2011). Retrieved March 24, 2011, from http://www.britannica.com
Endangered species. (2011). Encyclopædia Britannica. Retrieved from http://www.britannica.com
Fay, M. R. (1991). In vitro cellular & developmental biology - plant. Biomedical and Life Sciences, 1(28), 1-4.
Ferraro, P., McIntosh, C., & Ospina, M. (2007). The effectiveness of the U.S. Endangered Species Act: An econometric analysis using matching methods. Journal of Environmental Economics and Management, (54). Retrieved from http://www.sciencedirect.com
GPS tracking bighorn sheep in Wyoming. (2011). Helping endangered sheep with GPS tracking. Retrieved from http://www.tracking-system.com
Hogan, M. C. (2010). Endangered species. Encyclopedia of Earth. Retrieved April 5, 2011, from http://www.eoearth.org
Karl. (2010). The King of Our Conservatory. Retrieved from http://longwoodgardens.wordpress.com
Kerns, M. (n.d.). About endangered flowers of Hawaii. Retrieved from http://www.ehow.com/about_4728549_endangered-flowers-hawaii.html
Kirby, K., et al. (2006). The future of deforestation in the Brazilian Amazon. Futures, 38(4). Retrieved from http://www.sciencedirect.com/science doi:10.1016/j.futures.2005.07.011
Maran, T., Põdra, M., Põlma, M., & Macdonald, D. W. (2009). The survival of captive-born animals in restoration programmes: Case study of the endangered European mink Mustela lutreola. Biological Conservation, 142. Retrieved from http://www.sciencedirect.com/
Natural selection. (2011). Encyclopædia Britannica. Retrieved April 5, 2011, from http://www.britannica.com
New study shows one fifth of the world's plants are under threat of extinction. (2010, September 29). Royal Botanic Gardens, Kew News. Retrieved from http://www.kew.org
Only 39 percent of North American endangered plant species are protected in collections. (2011, February 7). Retrieved from http://www.bgci.org
Owen, J. (2010). Extinctions could have domino effect, study says. National Geographic.
Passenger pigeon. (2009). Retrieved from http://www.wbu.com
Patsalides, L. (2010, August 15). Medicinal plants are endangered. Retrieved from http://www.brighthub.com/environment/science-environmental/articles/52899.aspx
Pecorino, L. (2000). Animal cloning: Ole MacDonald's farm is not what it used to be. Retrieved March 24, 2011, from ActionBioscience.org
Polar bear. (2011). Retrieved from http://www.nwf.org
Reading, R. P., & Miller, B. (2000). Endangered animals: A reference guide to conflicting issues. Greenwood Press.
Sanderson, J., & Moulton, M. P. (1999). Wildlife issues in a changing world [Vol. 2]. Retrieved from http://www.crcnetbase.com
Santos, R. R., et al. (2007). Cryopreservation of ovarian tissue: An emerging technology for female germline preservation of endangered species and breeds. Animal Reproductive Science, 122(3-4), 151-163.
Seed bank & conservation program needs your help [Web log message]. (2011, February 14). Retrieved from http://randomgardening.blogspot.com
Silva, J., Rylands, A., & Fonseca, G. (2005). The fate of the Amazonian areas of endemism. Conservation Biology, 19(3). Retrieved from http://onlinelibrary.wiley.com
Simberloff, D. (2000). Introduced species: The threat to biodiversity & what can be done. Retrieved March 24, 2011, from ActionBioscience.org
Tucker, A. (2010, February). The Venus flytrap's lethal allure. Smithsonian Magazine. Retrieved from http://www.smithsonianmag.com
Vie, J.-C., Hilton-Taylor, C., & Stuart, S. (2009). Wildlife in a changing world: An analysis of the 2008 IUCN Red List of Threatened Species. Retrieved from http://data.iucn.org
Why save endangered species? (2005). Retrieved March 24, 2011, from http://www.fws.gov
Why saving endangered species matters. (2002). Retrieved March 24, 2011, from http://www.fws.gov
Wildlife conservation. (2011). Retrieved from http://www.wildlifeconservation.org/
Wolong Nature Reserve. (n.d.). Retrieved April 17, 2011, from http://www.globio.org
World's largest tropical rain forest and river basin. (n.d.). Retrieved from http://www.worldwildlife.org

Chapter 14 Invasive Species


Benjamin Wright, Joseph Gencarelli, and Zachary King

Introduction

Many ecosystems reach a biological equilibrium in which everything is sustainable. The plants grow quickly enough to support the populations of lower animals, the predators have enough prey to feed themselves, and water and shelter are plentiful. Every organic and inorganic component of an ecosystem at biological equilibrium is found in proportion. Invasive species are non-indigenous species that migrate into ecosystems in which they did not evolve, disrupting the balance of the flora and fauna native to the area they attempt to annex. Groups of animals migrate all the time, but when a species moves into an entirely new ecosystem, the indigenous plants and animals have no time to adapt to the new species occupying the same territory. In fact, some research shows that invasive species, especially the successful ones, are highly skilled in some areas while functioning sufficiently well in others (Higher phenotypic plasticity, 2011). When this happens, the invasive species become powerful and disturb the established development of the ecosystem by uprooting its balance. Habitats all over the world are experiencing the negative effects of invasive species. Many technologies and tactics have been developed and implemented to minimize the effects of specific non-native plants and animals. These include detection and prevention systems as well as control and elimination techniques.

Detection and Prevention

In the age of more powerful computers and more readily available technologies, ecologists are turning increasingly toward computer-generated models for assistance in stopping ecological disasters. One team of computer scientists, ecologists, field experts, and environmentalists is developing systems that do not focus on one particular species.
Instead, this group designed a computer model that can not only map where a species is likely to become an ecological hazard but also provide a degree of guidance on where field experts should be deployed. This allows local governments and concerned citizens to set up defenses in areas susceptible to biological invasion, and it provides scientists and researchers with locations where they can make a difference. This model and the development of models like it are the future of conservation ecology, and research in this area has the potential to protect some of the world's most fragile ecosystems (Mengersen, Jarrad, Barrett, Murray, Stoklosa, & Whittle, 2010).

Recently, a new variety of model has appeared in scientific circles: one that can predict what type of organism will become a dangerous invader if it is introduced into a given environment. Originally developed to predict shifts in the grassland ecosystems of the Australian wilderness, this system has been refined and improved by scientists working with the original developers and ecologists from the University of Florida, and it can now be used to test whether a large variety of plant species can outperform or replace their native counterparts. This is a great asset for farmers and entrepreneurs alike, because both groups will now have the knowledge to use the plants deemed unlikely to disrupt the balance in ecosystems across the United States. Pioneers of green technology will also be able to use this model either to refrain from using species likely to take over local environments or to target those same species already growing locally and prevent them from filling their destructive potential. This type of model has a great deal of potential, and the researchers at the University of Florida are already working on a similar system for new animal species (Gordon, Tancig, Onderdonk, & Gantz, 2011).

Even though most invasions of foreign species are dangerous to biodiversity, some introduced species provide more assistance than trouble. One such animal is the rainbow smelt (Osmerus mordax), a fish native to the waterways of Maine that has found its way throughout Canada and the rest of North America. When dealing with these fish, scientists are beginning to use technology to aid in tracking (and thus preventing) population explosions. In 2006, a team of scientists began researching the ideal conditions for a drastic increase in the number of smelt, a common occurrence in New England fresh waters. After taking into account variables such as salinity, pH, lake area, lake depth, and coastline perimeter, researchers designed a mathematical model that would use their findings to flag lakes across Ontario, Michigan, Wisconsin, and other areas of Maine as potential sites for invasion.
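The general idea of such a flagging model can be sketched in a few lines of code. The example below is purely illustrative and is not the fitted model of the smelt researchers: the logistic form, the choice of a subset of the variables named above, the coefficient values, and the 0.5 cutoff are all assumptions made for the sake of the sketch.

```python
# A hypothetical habitat-suitability sketch: score each lake on a 0-1 scale
# from a few environmental variables, then flag high-scoring lakes as
# potential invasion sites. All coefficients below are invented.
import math

COEFFICIENTS = {
    "intercept": -5.0,
    "ph": 0.35,            # tolerance across a fairly wide pH range
    "log_area_ha": 0.60,   # larger lakes offer more habitat
    "max_depth_m": 0.05,   # deeper lakes hold colder water
    "shoreline_km": 0.02,  # longer shorelines mean more spawning habitat
}

def invasion_probability(ph, area_ha, max_depth_m, shoreline_km):
    """Return a 0-1 score for how suitable a lake is for invasion."""
    z = (COEFFICIENTS["intercept"]
         + COEFFICIENTS["ph"] * ph
         + COEFFICIENTS["log_area_ha"] * math.log(area_ha)
         + COEFFICIENTS["max_depth_m"] * max_depth_m
         + COEFFICIENTS["shoreline_km"] * shoreline_km)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps z onto (0, 1)

# Score two hypothetical lakes and flag those above a chosen threshold.
lakes = {
    "Lake A": invasion_probability(ph=7.2, area_ha=1500,
                                   max_depth_m=30, shoreline_km=40),
    "Lake B": invasion_probability(ph=6.1, area_ha=40,
                                   max_depth_m=5, shoreline_km=3),
}
flagged = [name for name, p in lakes.items() if p > 0.5]
```

In a real study the coefficients would be fitted to survey data from lakes with and without established smelt populations; the sketch only shows how, once fitted, such a model turns a handful of lake measurements into a ranked watch list.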
This model allows landowners, public officials, and recreational fishers to work together and devise better methods for the control of the rainbow smelt (Mercado-Silva, Olden, Maxted, Hrabik, & Zanden, 2006).

Avoiding the introduction of invasive species is the best way to avert competition between native and non-native plants in natural habitats. Researchers at the Chicago Botanic Garden thoroughly assess plants collected from foreign countries before adding them to their collection. The Collection Policy of the Garden states that any plant with the potential to threaten the genetic diversity of local native populations, display overly aggressive behavior (weediness), or introduce pests or diseases will be screened and evaluated before being accepted into the collection (Invasive Policy, 2011). Aware of the increasing threats presented by invasive species, the Garden is making the policy more stringent. Other sections of the policy state that if a plant species is at risk of becoming invasive in its habitat, it shall be removed and destroyed. The Garden is also developing a list of non-invasive plants that have landscape uses similar to those of the plants that have been removed. This list would allow gardeners to be well informed about which species they could buy as decorative pieces that will not become harmful to other flora. Although harmful flora can spread by various means, among the most common and problematic methods of dispersion are trade, the elements, and transportation. If the spread of invasive species can be controlled, then the number of plants introduced to the U.S. and other countries will decrease as well. Once an organism is introduced to a new habitat, it can be costly to manage; therefore, preventing dispersion would also decrease the cost of invasive species management (Advisory Committee, n.d.).

By stopping sales of invasive plants and animals at local nurseries and pet stores, their introduction and spread can be further controlled. Some states are making an effort to reduce the dispersal of invasive species, but efforts are inconsistent. For instance, in Minnesota it is against the law to sell invasive aquatic flora such as the flowering rush (Butomus umbellatus); however, in Wisconsin and some other states, it is legal to purchase these plants. The rhizomes of this plant can be broken, and the fragments easily root and spread the species invasively. If sales of flowering rush continue, the efforts to stop its spread will be wasted and the situation will escalate (Flowering rush, 2009).

Travelers spread invasive plants too. People are often unaware that they could be carrying the seeds of an unwanted species; it does not take long to step on an invasive plant and have its seeds adhere to the soles of a shoe. Once seeds are removed from their natural habitat and plants become established in other ecosystems, it is almost impossible to eradicate them from their new habitat. Because the spread of invasive vegetation takes place mainly along waterways, a method of prevention called bottom plant barriers is used. This method is adapted to rivers and lakes, and the process involves placing a structure on the bed of a lake. The barriers can affect lake water levels and require permission to be placed, but if applied along water banks, they could prevent the spread of seeds along lake and river banks. For instance, purple loosestrife multiplies by seed dispersion, and water barriers would prevent its spread (Neil, 2008).

This picture shows wetlands invaded by purple loosestrife (Lythrum salicaria).

Barrow Island, located just north of the western tip of Australia, has maintained its status as a Class A nature reserve for more than four decades because of its ecological stability and diverse fauna and flora, but scientists must work to protect the indigenous species of the habitat. In the past, Barrow Island has been home to different types of rats and mice, some of which invaded ecosystems in which they did not occur naturally, though some of these invaders later died out on the island. Now, to prevent invasions and keep the Class A status, teams of invasive species experts survey the island for potential threats to the biological equilibrium of the native plants and animals (Jarrad et al., 2011). First, they observe all of the animals interacting in the ecosystem. At least one biologist surveys a specific risk zone of the island once, but possibly up to five times. The risk zones are denoted by the Barrow Island conservation team as Z1, for the highest risk of invasion; Z2, for a moderate risk; and Z3, for the lowest risk. The teams have ten varieties of traps spread over the three risk zones; each type of trap targets a specific variety of invasive species. The traps are always most abundant in Z1, usually followed by Z2. Surveyors check each of the traps and record how often each type of animal is captured. These survey data are used to analyze the growth of populations on the island. After mathematical testing and comparisons, the researchers decide which species are dangerous and in need of control. This tactic helps prevent Barrow Island from being overtaken by invasive and non-indigenous species and helps maintain its high status as a nature reserve.

Control and Elimination

One pest finding its way through international waterways is the Chinese mitten crab (Eriocheir sinensis), a voracious predator known for devouring native species and damaging the balance of healthy ecosystems. These creatures are entrenched on coastlines throughout the world, and it appears impossible to eradicate them completely. Instead, the state of California has taken on the issue of minimizing the impact of the mitten crab with new legislation.
As of 2003, legislators enacted sweeping new laws that help restrict the import of such pests by requiring greater oversight of the international shipping traffic that passes through the state, and these new rules also inflict harsher punishments on those seeking to bring this invader into the United States for their own gain. Possibly the most innovative part of the laws is the information management and sharing system they create. Citizens, shareholders in fishing or seafood companies, and fishermen themselves are now more connected to one another, which allows the three parties to work together to eliminate the threats that the mitten crab poses (National management, 2003). In the past, lawmakers have considered similar bills, but these discussions were usually limited to ballast water ejected from ships traversing international waters. In 1999, the Clinton administration attempted to impose strict guidelines on ballast water and its disposal. Ballast water is an infamous vector for invasive aquatic species, and many scientists hold it responsible for many of the worst infestations in history. When these new laws were considered, there were few ways to destroy harmful organisms in transit, but since then, a large amount of time and effort has been expended on various projects directed at limiting this vector (Saccomano, 1999). One method developed in Sweden combines ultraviolet radiation with flash boiling and churning to eliminate nonnative algae, shellfish larvae, and harmful microorganisms. Extensive research has shown that none of these three methods by itself effectively eliminates potential invaders, but this Swedish system manages the task efficiently, and what makes this

system particularly exciting is that it yields no harmful by-products and requires no chemicals. Organisms that pass through it are broken down into carbon dioxide, water, and other simple chemical compounds that the environment can absorb. These chain reactions show great promise for the elimination of microbes, and, according to new research, expansion of these methods is not far in the future (Hull, 2007). Possibly the best known invasive animal is the zebra mussel (Dreissena polymorpha), a tough and hardy shellfish known for spawning in huge numbers and clogging water intakes at industrial sites. For as long as they have known about the dangers of invasive species, people have attempted to discover the best way to remove these mollusks from their local environments. In the past, people simply coated their docks and pipes with hazardous chemicals to prevent zebra mussels from taking up residence, but toxic paints are now banned worldwide. Preventing the spread of these marine pests has since been very difficult, but several scientists have found a new method for removing these nuisances from industrial waterways (Zebra mussel, 2011). Dr. David Aldridge from the University of Cambridge in the UK recently developed a set of anti-shellfish toxins that are ideal for fighting the zebra mussel; they are coated in a protein that makes the mussels mistake them for food. These mussel-fighting particles are designed to break down in about an hour, so that if any of the chemical agent escapes from the pipes into which it is introduced, it will not contaminate the rest of the water in the area. Not only does this eliminate local infestations of mussels, but it also helps industrial and power companies do more to protect the environment (Thames, 2009). Across Asia and South America, rice is a key food staple, and it is grown most effectively in marshy wetlands.
However, the golden apple snail (Pomacea canaliculata) is known for its devastating effect on these crops, and scientists are racing to devise a way to prevent this damaging animal from destroying local economies. One group of engineers and ecologists is developing a system that will use molluscicide to deter the snails. This method will augment the most effective method of rice planting, direct seeding, and prevent the ecological damage associated with the unguided use of chemicals. The most important part of this system, still under development, is the adhesion of the molluscicide to the rice sprouts. This would allow users to coat their seedlings for direct seeding, and as the plants grow, the molluscicide will remain on the submerged part of the stalk. Because the apple snail is an aquatic organism, the process will allow farmers to eliminate this threat to their crops completely, and it will also encourage the use and spread of the most efficient and eco-friendly approach to growing rice (Joshi, et al., 2007). Another aquatic animal known for its tendency to decimate natives is the lionfish (Pterois antennata). This fish is a popular inhabitant of aquariums and the occasional fish tank, which is part of the problem surrounding it. Groups and individuals attempting to sell lionfish are constantly introducing new populations to the delicate reef environment in which they thrive, and this makes the prevention of lionfish invasions extremely difficult (Padilla & Williams, 2004). Recently, models showed that the distribution of these predatory fish follows oceanic currents, and this allows local governments to plan when and where they will need to deploy officials or field experts to assess and control new populations. Furthermore, economic research

(using the lionfish as an example) suggests that monetary incentives, like paying bounties for killing individual animals, are very poor choices when considering removal tactics. These economic models are just as important as distribution models because if government agencies design incentives incorrectly, people driven by the allure of money may begin to introduce more harmful animals deliberately (Morris & Whitfield, 2009).
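The perverse-incentive argument can be made concrete with a toy calculation. Everything below, including the function name and parameter values, is a hypothetical illustration rather than a model from the cited studies; it simply shows that when the bounty paid per animal exceeds the cost of catching one, a large enough expected catch can make deliberate introduction profitable.

```python
def introduction_pays(bounty, harvest_cost, intro_cost, expected_catchable):
    """Return True if a purely profit-driven actor would gain by secretly
    introducing animals and later collecting bounties on their descendants.

    bounty             -- payment received per animal turned in
    harvest_cost       -- cost of catching one animal
    intro_cost         -- one-time cost of introducing breeding stock
    expected_catchable -- number of animals the actor expects to catch later
    """
    expected_profit = expected_catchable * (bounty - harvest_cost) - intro_cost
    return expected_profit > 0

# A generous bounty makes introduction profitable in this toy scenario:
print(introduction_pays(bounty=10.0, harvest_cost=2.0,
                        intro_cost=100.0, expected_catchable=50))  # True
```

A bounty program therefore has to keep the margin between bounty and harvest cost small, or restrict who may be paid, so that this inequality can never be satisfied by introducing new animals.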

This photograph shows a lionfish (Pterois antennata) in the habitat it has invaded, the coral reef.

Pest insects, such as those that attack food crops, are some of the most abundant invasive species in the world. This is due to their ability to reproduce quickly and to their small size. Small pests can lay eggs on almost anything, and if that substrate is en route to a new ecosystem, so is the species of pest. Once the eggs hatch, the pests begin growing exponentially, and because of their quick life cycles, they adapt rapidly to their new environment. Before any fruits or goods are shipped across borders, the companies involved in the importation must show that there is an incredibly low chance that any pests could survive in the fruit long enough to reach their destination. As the fruits are stored for transport, they are treated to kill most of the pests that may reside within them. First, scientists calculate the maximum number of pests that could possibly live together in a fruit. Then, they subject the fruit to harsh conditions that the pests cannot survive, including extreme cold and chemical treatment, guaranteeing that over 99.98% of the pests are killed and statistically leaving less than one mating pair. Although all these measures can be taken to control pests as invasive species, the pests may adapt to survive these very conditions; by multiplying so rapidly, pest species can collectively maintain themselves even in the face of aggressive control efforts (Heather & Hallman, 2007).

One of the most destructive invasive species, especially in the U.S., is the wild hog, a hybrid of domestic pigs and wild boar. These animals attack other wildlife and damage land, at an economic cost of $400,000,000 annually. The hogs are widespread, too: they reside in 39 U.S. states and four Canadian provinces. The wild pigs have no natural predators, so humans must target them to try to control the population of this invasive species. They dominate ecosystems because they multiply quickly, and as omnivores, wild pigs have little difficulty finding food. By any measure, they qualify as an invasive species.
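As a rough illustration of why the hog population grows so quickly, and why birth control is attractive, the sketch below projects female numbers with a simple geometric model. The chapter reports only breeding age, breeding interval, and lifespan; the litter size and survival rate used here are hypothetical placeholders, so the output illustrates the dynamic rather than any real census.

```python
def project_population(start_females, years, litters_per_year=0.8,
                       piglets_per_litter=6, survival=0.5):
    """Very simplified female-only geometric growth model.

    litters_per_year and piglets_per_litter are assumed values, not
    figures from this chapter; half of the surviving piglets are
    assumed to be female.
    """
    females = start_females
    for _ in range(years):
        new_females = females * litters_per_year * piglets_per_litter * survival * 0.5
        females += new_females
    return females

print(round(project_population(100, 2)))  # 484: the herd more than quadruples
```

Halving litters_per_year, the effect a birth-control substance aims for, cuts the two-year projection from 484 to 256 females under the same assumptions, which is why fertility control can scale better than hunting alone.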

Hybrids of domestic pigs and wild boar routinely destroy farmland.

The federal government is making an effort to combat the massive population of wild hogs, which are infiltrating the southern states. These animals are hunted all year: in an effort to control the population and reach a biological equilibrium, the government puts no limit on how many of them a hunter is allowed to kill. Between 2004 and 2009, 461,000 pigs were processed for meat, and hunters killed many more than that. Another tactic that seems to have lessened the economic toll of the pigs is an educational class provided by the Texas AgriLife Extension on how to identify and prevent hog invasions. These classes have decreased the financial impact of the pigs in several regions by two-thirds. In addition to educating local people on how to combat the hogs, scientists are developing birth control substances to distribute across the wild pig population. Because pigs can begin breeding at five to eight months of age and can continue breeding every year to year and a half for their entire life span of four to eight years, preventing pig births would drastically decrease the growth rate of the population.

Many gardeners enjoy planting ornamental flowers and plants, whether or not the species are dangerous; only when the plants spread further and faster than anticipated do the gardeners realize what they have done. The removal of invasive plants that have been placed in gardens is a task that requires commitment and perseverance. Most invasive plants cannot be removed by hand after they begin to spread their roots deeper into the ground, and any attempt to extract them is often futile since the majority of invasive flora regenerate from root fragments left behind during the weeding process. Methods of removal are laborious and may cause soil disturbances.
The main removal methods are cutting and mowing; other means include manually pulling the weeds and applying herbicides, toxic substances containing glyphosate or triclopyr that eliminate unwanted vegetation. For herbicides, the disadvantages often outweigh the advantages: they are toxic, some are non-biodegradable, and if ingested by animals or humans, they can cause illnesses such as cancer as well as symptoms such as nausea, headaches, and chest pains. Several plant species such as water hyacinth (Eichhornia crassipes), water fern (Salvinia molesta), wild garlic (Allium vineale), prickly pear (Opuntia spp.), and lantana (Lantana camara) are serious threats to both terrestrial and aquatic ecosystems. The water hyacinth, introduced to Kenya in 1989, has complicated transportation along several lakes, including Lake Victoria: patrol boats can no longer operate, and transportation boats take a long time to dock. Eradication attempts have included both manual and mechanical removal. The Kenya Agricultural Research Institute (KARI) distributed hand tools and protective gear, including wheelbarrows, machetes, rakes, hoes, saws, hand gloves, and gumboots (Mailu, Ochiel, Gitonga, & Njoka, n.d.). The Kenya Industrial Research and Development Institute (KIRDI) has developed a prototype of a mechanical harvester that shreds and chops the water hyacinth. Funded partly by an American firm, this machine has to remove the plant from the water completely, or else seeds would regenerate and cause more problems for the local people. Neither the machines nor the manual attempts at removing the plants were successful. KARI has also been attempting several other eradication methods, including the use of insects and indigenous fungal pathogens. Neochetina weevils were introduced to Lake Victoria in 1997, and finally, after a decade of invasion, the water hyacinths were removed; over the course of two years, the weevils fed on the plants and diminished the weeds.
Prickly pears (Opuntia) compete with native plants and prevent animals from grazing in the vicinity. Because their natural predators are no longer present in their new habitat, they are able to thrive. Most methods of control involve the use of herbicides, but it takes about three years before about 70% of the sprayed prickly pear cacti die (Johnson, n.d.).

Conclusion

Clearly, the modern global economy has facilitated the spread of many undesirable invasive species, and these invasions pose a threat to the overall biological diversity of the planet. From the zebra mussel and the wild hog to the purple loosestrife and English ivy, there are dedicated professionals working to rectify the mistakes of the past. A combination of politicians, scientists, researchers, and concerned citizens is working to develop, refine, and distribute technology that can help limit the impact of invading plants and animals, and these men and women have learned from past mistakes and trials to bring the solutions of tomorrow to the forefront of the research of today.

Bibliography

Coates, P. (2007). American Perceptions of Immigrant and Invasive Species. New Jersey: University of California Press.

Controlling Invasive Plants at Home. (2011). In New England Wild Flower Society. Retrieved March 31, 2011, from New England Wild Flower Society: http://www.newfd.org/protect/invasive-plants/removal
Davidson, A., Jennions, M., and Nicotra, A. (2011). Do invasive species show higher phenotypic plasticity than native species and, if so, is it adaptive? A meta-analysis. Ecology Letters, 14, 419-431.
Davis, M. (2009). Invasion Biology. Oxford; New York: Oxford University Press.
Flowering Rush. (2009). Retrieved April 12, 2011, from http://www.seagrant.umn.edu/ais/floweringrush
Godoy, O., Lemos-Filho, J., and Valladares, F. (2011). Invasive species can handle higher leaf temperature under water stress than Mediterranean natives. Environmental and Experimental Botany, 71, 207-214.
Heather, N., and Hallman, G. (2007). Pest Management and Phytosanitary Trade Barriers. Wallingford, Oxon, GBR: CABI Publishing.
How do invasive species spread? (n.d.). Retrieved April 4, 2011, from http://www.fws.gov/southwest/refuges/arizona/havasu/invgethere.html
Hull, J. (2007). Ballast purification treatment technique passes initial test program. Offshore, 55.
Invasive Plants. (2008). Retrieved April 3, 2011, from http://www.usna.usda.gov/Gardens/invasives.html
Invasive Plant Science and Policy. (2011). Retrieved April 15, 2011, from the Chicago Botanic Garden, http://www.chicagobotanic.org/
Invasive species: aquatic species. (2011). Retrieved April 4, 2011, from http://www.invasivespeciesinfo.gov
Invasive species in the Thames. (2009). Retrieved April 5, 2011, from http://www.zsl.org/
Jarrad, F., Barrett, S., Murray, J., Stoklosa, R., Whittle, P., and Mengersen, K. (2011). Ecological aspects of biosecurity surveillance design for the detection of multiple invasive animal species. Biological Invasions, 222 (5), 398-406.
Johnson, G. (n.d.). Cactus Eradication.
Retrieved April 15th, from The Texas Idea Farm, http://www.txideafarm.com/cactuscontrol.htm
Joshi, R., San Martín, R., Saez-Navarrete, C., Alarcon, J., Sainz, J., Antolin, M., Martin, A., and Sebastian, L. (2007). Efficacy of quinoa (Chenopodium quinoa) saponins against golden

apple snail (Pomacea canaliculata) in the Philippines under laboratory conditions. Crop Protection, 27, 553-557.
Mailu, A., Ochiel, G., Gitonga, W., and Njoka, S. (n.d.). Water hyacinth: an environmental disaster in the Winam Gulf of Lake Victoria and its control. Proc. 1st IOBC Water Hyacinth Working Group, 101-105.
Mangla, S., Sheley, R., James, J., and Radosevich, S. (2011). Intra and interspecific competition among invasive and native species during early stages of plant growth. Science+Business Media, 212, 531-542.
Mercado-Silva, N., Olden, J., Maxted, J., Hrabik, T., and Zanden, J. (2006). Forecasting the spread of invasive Rainbow Smelt in the Laurentian Great Lakes region of North America. Duluth: University of Minnesota Duluth.
Morris, J. A., Jr., and Whitfield, P. E. (2009). Biology, ecology, control and management of the invasive Indo-Pacific lionfish: An updated integrated assessment. NOAA Technical Memorandum NOS NCCOS, 99, 57 pp.
Morthland, J. (2011). Texans are battling a shockingly destructive invasive species. Smithsonian, 1, 53-61.
National management plan for the genus Eriocheir (Mitten Crabs). (2003). Retrieved April 4, 2011, from http://www.anstaskforce.gov
Padilla, D., and Williams, S. (2004). Beyond ballast water: aquarium and ornamental trades as sources of invasive species in aquatic ecosystems. Frontiers in Ecology and the Environment, 2, 131-138.
Pimentel, D. (ed.). (2002). Economic and Environmental Costs of Alien Plant, Animal, and Microbe Species. CRC Press.
Potential establishment of alien-invasive forest insect species in the United States: where and how many? Biological Invasions, 13, 969-985.
Potuchek, J. (2010). Invasive Plants: What's a Gardener to Do? Retrieved from http://jeansgarden.wordpress.com/2010/04/08/invasive-plants-whats-a-gardener-to-do/
Saccomano, A. (1999). Ballast water law mulled. Traffic World, 34.
The expansion of invasive species through ship ballast water. (2008). In Encyclopedia of Earth.
Retrieved April 4, 2011, from Encyclopedia of Earth: http://www.eoearth.org/article/The_expansion_of_invasive_species_through_ship_ballast_water
Zebra mussel. (2011). Retrieved April 4, 2011, from http://nas.er.usgs.gov

Chapter 15 Restoration Ecology


Kathleen Ross, Grace DiFrancesco, and Susmitha Saripalli

Where there are now buildings and construction sites, there were once natural environments. Humans have demolished habitats and destroyed ecosystems to further development. Now, scientists are trying to reverse these changes with a collection of methods referred to as restoration ecology, officially defined by the Society for Ecological Restoration (SER) as "intentional activity that initiates or accelerates the recovery of an ecosystem with respect to its health, integrity, and sustainability." Restoration includes the reconstruction of ecosystems in places where roads, factories, mines, and other development have stripped the land of life and nutrients. These areas undergo soil and water replacement in hopes of bringing back a healthy environment. Restoration ecology goes hand in hand with conservation biology, which involves the protection of species, habitats, and ecosystems from excessive rates of extinction. Both fields try to preserve biodiversity, habitats, and ecosystems. Conservation biology tries to protect existing biodiversity and habitat, while restoration ecology tries to reverse degradation and population decline; it is based on the premise that humans can build a habitat and restore biodiversity. Both restoration and conservation have a temperate terrestrial bioregion bias; however, restoration has a botanical bias, and conservation has a zoological bias. Conservation studies are descriptive, comparative, and unreplicated, whereas restoration studies let researchers test hypotheses more rigorously (Clewell, Rieger, & Munro, 2005). Restoration deals with four ecological concepts, or theoretical foundations: disturbance, succession, fragmentation, and ecosystem function. Disturbance is a change in environmental conditions that interferes with a biological system. Restoration aims to bring the biological system back to the condition it was in before any human involvement.
Succession refers to the growth process of an ecosystem. Usually, restoration consists of initiating, assisting, or accelerating ecological succession to further the growth of a newly rebuilt ecosystem. Fragmentation is the emergence of spatial discontinuities in a biological system; restoration ecology attempts to reverse its effects and increase habitat connectivity. The final goal of restoration ecology is to recreate a self-sustaining, self-perpetuating ecosystem. Even when habitats are rebuilt, there are still issues with the area. First, if all the species are placed manually in an ecosystem, there will be no variety within species, and natural selection would become ineffective if and when drastic changes occur. As a second step, ecologists want to introduce different forms of each species to further the natural selection process. If all species are fairly ecologically equivalent, then random variation in colonization, migration, and extinction rates among species drives differences in species composition between sites with comparable

environmental conditions. Restoring a community involves more than manipulating the timing and structure of the initial species composition. Ecologists must also pay attention to ontogeny, the study of how ecological relations change over the lifetime of an individual, because organisms require different environmental conditions during different stages of their life cycles.
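The claim that random variation alone can drive differences in composition between comparable sites is the core of neutral community theory, and a minimal simulation can illustrate it. The code below is a generic sketch of neutral "ecological drift" (every individual equally fit), not a model taken from the sources cited in this chapter: two sites that start with identical communities drift apart purely through random births and deaths.

```python
import random

def neutral_drift(community, steps, rng):
    """Each step, one random individual dies and is replaced by the
    offspring of another random individual; all species are treated
    as ecologically equivalent (neutral dynamics)."""
    community = list(community)
    for _ in range(steps):
        dead = rng.randrange(len(community))
        parent = rng.randrange(len(community))
        community[dead] = community[parent]
    return community

start = ["A", "B", "C", "D"] * 5   # two sites begin with identical communities
site1 = neutral_drift(start, 500, random.Random(1))
site2 = neutral_drift(start, 500, random.Random(2))
# After many replacements the two sites typically differ in composition,
# and rare species are often lost entirely, with no selection involved.
```

In restoration terms, this is one reason ecologists reintroduce multiple forms of each species: compositional and genetic variety gives drift and selection something to act on.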

Freshwater Ecology Restoration

A global issue has developed from the degradation of freshwater resources throughout the world. All life needs water in order to survive, and many organisms depend on fresh water resources that have deteriorated in a variety of ways. This decline in ecology is caused by many different sources, such as pollution, contamination, climate change, social development, erosion, and sedimentation. Ecologists all over the world have been developing and employing various technologies to restore freshwater habitats before it is too late, and restoration projects have been initiated and completed in order to return these habitats to their natural state.

Principles for Aquatic Restoration Projects

In order to restore freshwater ecology, ecologists established guiding principles and techniques so that organizations attempting a restoration project would have guidelines to follow. The first step is to preserve and protect any aquatic resources that are already present, because existing, comparatively intact ecosystems are the foundation for conserving biodiversity at the restoration site; they provide the biota, or collection of organisms native to a geographic region, and other natural materials needed for the recovery of the impaired systems. Thus restoration does not replace the need to protect aquatic resources before humans degrade them; rather, restoration is a complementary activity that should be combined with preservation to improve a large percentage of lakes, streams, rivers, and wetlands. Therefore, the first objective should be to prevent further degradation of the water body (U.S. Environmental, 2000). The next step involves restoring as much of the ecological integrity of the degraded ecosystem as possible. Ecological integrity is a measurement of an ecosystem's resilience, self-sustenance, and ability to withstand stress and change.
In doing this, restoration ecologists have to take into account whether key ecosystem processes, such as nutrient cycles, succession, water levels and flow patterns, and the dynamics of sediment erosion and deposition, are functioning properly. Biologically, this entails making sure that the site's plant and animal communities resemble native ones, especially in the diversity found in the region. Structurally, the ecosystems have to contain physical features such that the dimensions of their stream channels are dynamically stable. Restoration typically strives for the greatest ecological integrity achievable within the current limits of the watershed, the geographic area of land that drains water to a shared destination (FISWRG, 1998), by using designs that favor the natural processes and communities that have sustained native ecosystems through time.

Likewise, restoration plans must account for self-sustainability. The site needs to be designed so that it requires minimal maintenance, such as supplying outside water sources, replanting vegetation, or frequently repairing flood damage. High maintenance is undesirable because it is expensive and makes the ecosystem dependent on intervention rather than self-sufficiency. In addition to minimizing intervention, incorporating self-sustainability also involves favoring designs with better ecological integrity. Restoring aquatic ecosystems to their natural structure and function is the next obstacle in this type of undertaking. Much of the degradation of aquatic resources stems from channelization, the straightening of kinks in a stream, or the installation of physical barriers such as dams, which may lead to issues such as habitat degradation, siltation, and changes in the duration and timing of flooding resulting from surface water, precipitation, and ground water inflow ("Water Regime," 2011). Stream channelization, ditching in wetlands, disconnection from adjacent ecosystems, and shoreline modifications are all examples of alterations to the structure of the habitat that may need to be addressed in a restoration project. In these cases, the original physical orientation and structure of the site may have to be restored before ecologists address other problems, such as reviving populations of native organisms and improving water quality. Reverting to the natural structure of the habitat, or as close to it as possible, has beneficial outcomes. For example, restoring the bottom elevation in a wetland, lake, or pond naturally furthers the reestablishment of the water flow regime, natural disturbance cycles, and nutrient fluxes. It is also important to make use of a reference site when attempting a restoration project.
Reference sites are exactly what they sound like: areas that can be used as a reference or model for the restoration site. Such a site should be comparable in structure and function to the projected restoration site before it was degraded and to the ideal future habitat. Because of these similarities, the reference site can be used to measure the progress of the project; the ecologists are not limited to potentially inaccurate historical records on which to base their work but have an existing, relatively healthy site to guide the project. There is a limit, however, to the usefulness of a reference site because the restoration site will undoubtedly present unique situations and conditions; in short, no two ecosystems are perfectly identical. Therefore, it is crucial that the project adapt to the circumstances at hand and to any differences between the reference site and the area being restored that are encountered throughout the process of reconstruction. To achieve the most authentic outcome, native species need to be restored to ecosystems, and non-native species need to be excluded. Non-native species have caused many problems; some become invasive and are difficult to remove from the habitat once they are introduced. Many invasive species survive better in a disturbed habitat, so they thrive and overwhelm native species. Once invasive species have established themselves at a site, they can hinder restoration efforts and lead to further spread of the undesirable species. Invasive, non-native species should not be used in a restoration project, and special care should be taken to avoid their unintentional introduction while the site is undergoing renovation. For some sites, removing the non-native species is the main goal and the reason for the restoration project. Because of this, it is better to reintroduce the native

plants that have a known effect on the particular habitat and surrounding environment rather than to create a potential problem that would have to be dealt with in the future. When developing a plan for a restoration project, ecologists must consider not only the body of water and its ecology but the entire watershed. If the project focuses on restoring the most degraded ecology and not the source of the degradation, the site will inevitably return to its prior condition; an isolated restoration project, unless it addresses the source of the problems, will most likely not solve the degradation issues of the entire watershed. This is why it is important to identify the source or sources of the harm and design the project to target them specifically. Restoration efforts are destined to fail if the original sources of degradation persist; left uncorrected, those sources will simply destroy the restored ecology again. Unfortunately, although some degradation is caused by a single direct impact, such as the filling of a wetland, much is caused by the cumulative effect of numerous indirect impacts, such as variation in surface flow caused by the gradual increase of impermeable surfaces in the watershed or by a dam. In a case such as this, where it is unrealistic to deconstruct a man-made dam, the project has to help the site adapt to the changes in water flow. When looking for the source of degradation, it is important to look beyond the site itself.
If the restoration site is on a stream or river or has streams flowing into it, it is important to look upstream and up-slope from the immediate site for activities that might indirectly harm the ecology, as well as for the sources that have a direct impact on the site. In addition, in some situations it may also be necessary to consider downstream modifications such as dams and channelization. If the factors causing the degradation cannot be addressed in some way or another, the project has to be redesigned to help the ecology cope with them better. In order for the restoration to be as authentic as possible, natural solutions and bioengineering techniques should be used wherever possible. Bioengineering is a method of construction combining live plants with dead plants or inorganic materials to produce living, working systems that resist erosion, control sediment, and provide habitats. In many cases, it is a more natural and less invasive choice for soil stabilization than any synthetic method available. These techniques have been used successfully in the past to stabilize steep upland slopes, riverbanks, drainage channels, lake and pond shores, irrigation ponds, and coastal banks. Finally, it is best to use restoration techniques in moderation and to employ passive restoration whenever it is appropriate. Rather than rushing into techniques for altering a restoration site, ecologists need to establish whether passive restoration is a better approach. If the restoration project does not have specific time constraints, then passive restoration, simply reducing or eliminating the sources of degradation and allowing the site time to recover, may be enough to let the site regenerate naturally. For some aquatic ecosystems, this method can promote the natural reestablishment of stable channels and floodplains, re-grow vegetation, and improve freshwater habitats.

These are some of the basic principles for aquatic restoration according to the United States Environmental Protection Agency, but how are the ecosystems actually restored, and why are they important? There are many different types of freshwater ecosystems, such as wetlands, streams, rivers, and lakes, and all of them are significant. Within the United States, wetlands occupy only a small portion of the landscape, 11% in 1780 and 5% in 1980; nonetheless, these habitats are extremely important to wildlife. Between the 1930s and 1950, many acres of wetlands were drained, but during the past 20 years, policies and programs that encourage altering, draining, or filling of wetlands have become less prevalent, and policies that promote wetland conservation and restoration have increased (Stewart, 2007).

Wetlands

The term wetland refers to bogs, swamps, fens, ponds, and marshes, habitats that have essential roles in nature. One of their functions is to serve as crucial habitat for wildlife, because they provide suitable breeding and nesting grounds for fish and birds alike. They are also essential transitional zones between terrestrial and aquatic ecosystems because wetlands can sustain species from both. Finally, these areas of soggy ground help to regulate water flow, which aids in protecting shorelines from erosion. Although these terrains are vital, humans continue to destroy them by using them as waste disposal sites, filling them in order to construct buildings, and draining and damming them. These activities have severely impaired wetland ecosystems and made it so that the wetlands are no longer the refuge for wildlife that they once were. For these reasons, it became necessary to invent various methods and technologies for reconstructing and reviving the ecosystems of wetland terrain.

A wetland site before restoration and after restoration:

To restore the wetlands, polluted soil and water have to be removed from the site. In some cases, such as a dried-up or drained wetland, a stream channel has to be made connecting the wetlands with a source of water. Then good, untainted soil typical of wetlands has to be put in so that wetland grasses and other plants can be installed. The stream channels have to be protected from erosion by some bioengineering technique, such as coir fascines or a heavy erosion-control blanket, together with native seed mixes and native plant material. The aquatic planting design has to be based on the varied hydrological conditions, such as water flow, flooding tendencies, and water levels, observed at the restoration site. The plants that are installed, like water hemlock (Cicuta maculata), are usually native species specially selected to thrive in the water levels in which they are planted. After the native vegetation is installed, native species of wetland animals can be introduced, and some, such as birds, come on their own (New England Environmental [NEE], n.d.a).

Many restoration projects deal with removing industrial waste from wetlands. One project in particular required the removal of coal deposits from a pond and the restoration of the pond and adjacent uplands. The pond was drained, the contaminated soil was excavated and removed, new plants were installed, and native seed mixes were sown in the disturbed areas. Then contaminated sections of the riverbank were restored following the removal of contaminated materials. The disturbed banks were stabilized by installing biodegradable erosion-control fabric, appropriate seed mixes, and native tree and shrub plantings (NEE, n.d.b). In other cases, it is necessary to raise the water elevations. To do this, engineers construct berms, man-made mounds of earth, to elevate the water levels in the wetland. Berms have other functions as well; they are used to manage erosion and sedimentation by reducing the rate of surface runoff, by reducing the velocity of the water, or by directing the water to areas that are not vulnerable to erosion. This reduces the undesirable effects that running water has on exposed topsoil. Berms, although they seem rather simple, are extremely effective at times; for example, after the 2010 Deepwater Horizon oil spill in the Gulf of Mexico, berms were constructed to prevent the oil from reaching the Louisiana wetlands, thereby preventing massive erosion.
These berms were surprisingly effective, especially after repeated failures to stop and contain the oil leak using more advanced technologies (The Nature Conservancy, 1998).

Rivers and Streams

Wetlands are not the only important aquatic ecosystems; freshwater rivers and streams are also frequent restoration sites. These waterways are crucial to the circulatory system of terrestrial ecosystems and perform functions critical to the survival of nature, such as climate control, nutrient cycling, and providing drinking water for flora and fauna alike. Rivers, streams, and their surrounding lands supply unique and continuously changing environments for plants and animals throughout the world. At their headwaters, rivers and streams tend to be clearer, colder, and more oxygenated, which helps fish thrive. At their middle reaches, they often sustain great biodiversity, and at their mouths, they are warmer, less oxygenated, and muddy due to sediment accumulation (Global Restoration Network, 2011). Many rivers are degraded by many different sources, natural and human-induced, ranging from climate change to development and pollution. Because of all these issues, and the importance of fresh and sanitary water, restoration has become a necessity.

Erosion and Sedimentation Prevention

One major aspect of river ecology degradation that often needs to be addressed is erosion and sedimentation. In order to address problems such as these, bioengineering techniques are employed. For example, a boulder J-hook structure could be installed to deflect stream flow away from the stream bank and towards the stream's centerline to decrease erosion and sedimentation. These structures are patterns of boulders oriented in such a way that they will protect a riverbank. Also, tree and root wad structures may be installed along the stream bank, not only to protect stream banks from erosion but to provide a habitat for the resident fish community. Installing trees helps because their roots hold the soil on the banks in place. Installing root wads, tree roots and a portion of the trunk, gives a bank immediate stabilization, provides toe support for bank revegetation techniques, and collects sediment and debris that will enhance bank structure over time. The disadvantages of root wad installation are that it is labor intensive and expensive. To help restore the vegetation on the bank and the streamside riparian zone, the area between the land and water, a technique called brush layering can be used to stabilize a slope or river bank against shallow sliding or mass wasting in addition to providing erosion protection. It is more effective than live fascines in terms of mass stability (Environmental Protection Agency, n.d.). However, there are limitations to brush layering; it works better on slopes that have been completely cut out and refilled, because much longer stems can be used if the brush is installed in fill during the refilling process (Brush Layering, n.d.).

Root Wad Installation

Depending on the needs of the specific restoration site, numerous other forms of restoration may be required. For instance, some projects require dam removal, culvert replacement, or woody habitat installation. Depending on what type of dam needs to be removed, the job may be relatively easy or quite difficult. Most of the time, dam removal involves dislodging beaver dams and dams created by fallen trees, sedimentation, and driftwood. These dams are relatively simple to remove. Sometimes the project requires the removal of man-made dams, which are made of more durable materials than sticks and sediment, whereas culvert replacement and removal involves building or taking apart bridges. Obstructions such as these can occur on streams where existing culverts or bridges have collapsed or are undersized, or where no crossing has yet been built. Where a stream is blocked by such a barrier, valuable habitats upstream can be closed to fish. In these cases, reconnecting the stream would open the habitats, increasing the area for breeding grounds and working toward restoring healthy populations (Bernard, n.d.). Finally, large woody debris performs a significant function in stream channel morphology by contributing to the formation of pool habitats through increased water meandering and sediment storage capacity. The debris dissipates water flow energy, resulting in improved fish migration and channel stability (Fischenich & Morrow, 2000).

Lakes

Lakes and rivers overlap considerably because the rivers feed the lakes. Lakes have three zones: the littoral zone is the closest to the shore and, because of this, supports the most biodiversity, including plants, fish, amphibians, and birds; the limnetic zone is the surface habitat in the center of the lake, which is usually dominated by plankton and a variety of fish; and the profundal zone lies deep below the surface water of the limnetic zone. This zone is a colder and darker part of a lake where some fish survive on dead organic matter that settles on the bottom. The health and biodiversity of lake and pond ecosystems are being destroyed as a result of human influences that cause acidification, eutrophication, and toxic contamination.

Eutrophication Syndrome: The effects of eutrophication on a lake

Acidification occurs when sulphates and other acidic pollutants enter lakes in large quantities and acidify them, causing the pH levels to drop. Sometimes the decrease in pH is so drastic that the lake can no longer support aquatic life. One way of addressing this issue utilizes a process called liming. This process is the addition of limestone, otherwise known as calcite, to a lake, pond, stream, or soil in order to neutralize the acid and prevent drastic changes in pH in the future, thus restoring the important ecological value of the lake. There are advantages to using lime as a solution: it is inexpensive, non-toxic, a natural mineral, easy to distribute, and it dissolves in water. All of these qualities make this strategy ideal for counteracting acidification.

Eutrophication is a syndrome of ecosystem responses to human activities that fertilize water bodies with nitrogen and phosphorus (Cloern, 2007). This causes high levels of unwanted plant, algae, and bacteria growth and fish population deaths. Where do the phosphate and nitrogen originate? They come from the detergents used to wash clothes, dishes, and windows, and from the fertilizer farmers put on their land to make their crops grow. This fertilizer seeps into the ground with the rain and is carried into aquatic ecosystems by the runoff water. Many solutions have been found for this one problem, and one of them is termed dilution and flushing. This technique requires the addition of low-nutrient water or large volumes of water to the lake to dilute the concentrations of nitrogen and phosphorus and wash out the algae cells. This method has proven effective and cost efficient but slightly impractical because of the limited supply of low-nutrient water and the requirement that the flushing rate must equal the growth rate of the algae. Phosphorus removal, on the other hand, involves scraping and drawdown of phosphorus from the water. With this method, people have to be careful where they relocate the phosphorus. Phosphorus inactivation is another method that is used, and there are different variations of the technique. One type adds aluminum salts to the water to produce a floc, which precipitates the phosphorus in the water column and then forms a barrier that slows the release of phosphorus into the water. Another variation is phosphorus inactivation using iron, which acts the same way except that there is less potential harm to the ecosystem from the iron than from the aluminum; the same applies to calcium ions. Then there is biomanipulation, which utilizes food web management to control the algae, but this disrupts the communities of fish that already live in the habitat. Finally, periphyton management harvests the algae to remove the nutrients from the system, and so-called pretreatment uses wetlands, detention basins, or upstream reservoirs to remove nutrients from inflow to lakes.
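The requirement that the flushing rate match the algal growth rate can be sketched with a simple mass-balance model. This is an illustrative idealization, not a design formula from the sources cited above; the symbols are introduced here only to make the reasoning concrete.

```latex
% N(t) : algal concentration in the lake
% \mu  : net algal growth rate (per day)
% Q    : inflow of low-nutrient flushing water (volume per day)
% V    : lake volume, so Q/V is the flushing rate (per day)
\frac{dN}{dt} = \mu N - \frac{Q}{V}\,N
% Algae are washed out faster than they reproduce only when
\frac{Q}{V} \geq \mu
```

This idealized balance shows why the method demands such large volumes of water: for a large lake volume V, achieving a flushing rate Q/V that matches the algal growth rate requires an inflow Q that may exceed the available supply of low-nutrient water.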
Contamination occurs when harmful substances such as industrial wastes, nitrates, or pesticides from agriculture pollute the water. This causes a deficiency of oxygen in the water and insufficient self-purification. For this issue, there is a process called artificial circulation, or destratification, which is an injection of compressed air into the water that is then mixed through the lake using machinery. The problem with this method is that it depends heavily on the rate of air flow and the depth at which the air is released into the water. Hypolimnetic withdrawal and hypolimnetic aeration or oxygenation are also techniques that involve adding compressed air or pure oxygen to the water, but with different injection methods. All of these techniques raise the oxygen levels, which enables the ecology to better rid itself of the pollution (U.S. Environmental Protection Agency, 2001). The freshwater resources of the world are limited, and it is imperative that efforts be made to stop the degradation while it is still possible. It is important to have some knowledge regarding the restoration of these scarce freshwater ecosystems and the technologies that are available today to reverse the deterioration of nature. These are by no means all the technologies for restoring these natural habitats to what they once were, but they are a start.

Forests and Restoration Ecology

The goal of forest restoration is to return the structures and characteristics of a degraded forest to those of a typical natural forest. This process can focus on either species structures or dynamics; however, excluding one of these aspects would be detrimental to the success of the restoration project. Another use of forest restoration is to combine smaller regions of endangered forests that have been isolated from each other. These fragments of habitat are usually too small to return naturally to their original state, so restoration is required to rehabilitate the forests. After all of these portions have been combined, they create a more stable and robust environment that can support a larger number of organisms. The most pressing matter in reforestation is to make the habitat more favorable to the species that are at the greatest risk. In forests, this is done by performing stand treatments to enhance the intricacy of the forest framework and to increase the volume of dead wood. Another method is to return the natural amount of groundwater to the environment by constructing dams in ditches and restoring drained lake beds and peatlands. Also, certain natural processes, such as forest fires, must be reintroduced to the environment. Several species of trees are dependent on fires and can become locally extinct if fires are prevented indefinitely. Inducing, or prescribing, controlled forest fires reduces the likelihood of a serious fire because the amount of fuel available to burn is kept at a minimum (Brown, 2000). Another important aspect of reforestation is to restore the level of biodiversity in the environment. One technique is to plant a small number of fast-growing tree species with short life spans. This replicates the natural occurrence of a successful pioneer species that creates a canopy cover, which shades the forest floor. This shade impedes grass and weed growth, which reduces the risk of devastating forest fires and promotes the growth of a wider range of other tree species from adjacent forests. Another technique that can be used to promote biodiversity is direct seeding. This involves planting individual seeds of many different species and is usually done after the canopy has developed.
It can also be used to regenerate forests in open fields; however, the number of different species that can be planted is limited by the availability of seeds. Many of these species are endangered and the seeds are not very abundant so it may not be possible to use direct seeding to restore the population of all tree species (Lamb, Erskine, & Parrotta, 2005).

Forest Restoration: Before and After

Other actions that aid in reforestation include understory planting, that is, planting in shaded areas, which increases the nitrogen absorption of seedlings because these areas have decreased wind speed and increased relative humidity. The method of hot burning has been shown to remove fungistasis and free the nutrients from the older soil of grasslands. Applying mulch to soil acts as an insulator that retains moisture and reduces diurnal temperature variation. The use of tree guards can also benefit forests; they offer shelter from wind and protect seedlings from frost and extreme light. In addition, it has been shown that fortifying the soil of a recovering forest with the soil of a healthy forest benefits the early growth of seedlings (Close, Davidson, Churchill, & Grosser, 2005).

Terrestrial Ecosystems

There are many other land-based habitats, such as tundra, Mediterranean ecosystems, grasslands, and deserts. Although it is easiest to identify the need for restoration in habitats such as lakes, rivers, and forests, it is also necessary to preserve the natural resources and ecosystems that are found in other terrestrial habitats. These land areas span many different regions of the global surface and vastly different climates. Because of the different climates, the restoration process is unique for each individual terrestrial ecosystem; each will have separate environmental conditions to which it will have to adapt in order to become self-sustainable.

Tundra

The tundra, known for its extremely cold temperatures and large amounts of snow, can be divided into two different types of land: the arctic tundra and the alpine tundra. The arctic tundra, found in the Northern Hemisphere, is characterized by vast open areas and treeless landscapes. Found in Alaska, Canada, Greenland, Iceland, Norway, Finland, and Russia, this ecosystem comprises minimal vegetative cover, such as moss, lichens, and grasses, and a small number of hibernating animals.
The alpine tundra is found in high-altitude areas, usually above 3,000 meters. It is a transition zone between the tundra and the forest. Even though the climate is extremely cold and dry, the alpine tundra is still one of the world's most fragile ecosystems. The different species make up a simple food web with a delicate balance that can easily be disturbed and ruined (Revkin, 2011). Restoration of the tundra mostly involves revegetation and bioremediation. Revegetation, the replanting and rebuilding of the soil of disturbed land, is a natural process that is catalyzed by humans physically planting different species of plants that belong in the area. Unfortunately, not all the species can be accounted for, and species variety decreases. When bioremediation is used, it is most often in-situ bioremediation, which is the use of microorganism metabolism to remove pollutants from the contaminated material at the site of restoration. It is extremely useful in reducing hydrocarbon levels, in some cases by as much as 90%. Current studies on tundra restoration include the examination of long-term land rehabilitation, revegetation, and bioremediation techniques on the Alaskan North Slope. Areas of focus include gravel substrates, the most common surface disturbance on the North Slope, and small oil and brine spills. Ecologists are testing sewage sludge as an amendment to gravel substrates for revegetation improvement, and testing revegetation and bioremediation on oil and brine spills. They are applying different combinations of fertilizer and soil amendments and tracking the long-term survival of various native plants to determine relative efficacy (Ecological restoration and land rehabilitation, 2005).

Mediterranean Ecosystems

Mediterranean ecosystems can be found in North America, Chile, Australia, South Africa, and the Mediterranean Basin. These semi-arid ecosystems include forests, grasslands, and shrublands or chaparral. The vegetation of these ecosystems is hardy and drought resistant, capable of surviving in shallow soils. These ecosystems need natural fires for regeneration and nutrient cycling. Primary causes of environmental degradation and habitat loss are disrupted fire regimes and invasive species (Society for Ecological Restoration International Science & Policy Working Group, 2004).

Bauxite Mines Restoration in Western Australia, Before and After

About one third of the Mediterranean wetlands were drained for malaria control, agricultural expansion, and urban development in the twentieth century. Some of the worst cases have been in Greece, Spain, Albania, and Tunisia. Abandonment of small salinas, or saltworks, greatly increased between 1950 and 1990, even though salt is critically important to local and national economies. Human activity, such as tourism, logging, agriculture, and livestock grazing, has decreased biodiversity in these ecosystems. The serious problems in these areas are drought, soil salinization, and desertification. Agricultural impacts occur from activity along the margin of or within the salina and from a failure to use optimum management techniques throughout the watershed of the salina to minimize erosion, groundwater extraction, nutrients, and pesticides/herbicides. Tourism is becoming increasingly popular in the Mediterranean basin, resulting in the construction of single-family seasonal residences and major hotels/resorts along with the associated roads and service industries. Many idle salinas are developed, and locals regard these wetlands as untapped economic potential (Crisman, 1999). Salinas are one of the most important parts of the Mediterranean to be restored. They aid in keeping an ecosystem functioning, and they are important in pollution abatement. Watershed water courses are rerouted around saltworks and discharged directly into the Mediterranean. In this manner, nutrients, chemical contaminants, and eroded sediments enter the sea and can alter saltworks operations. Restoration ecologists have only recently observed these negative impacts and have found that simple hydrological alterations can easily fix the problem. If a sheetflow of incoming water is sent through the salina rather than bypassing it, chemicals and inorganic pollutants can be cleaned out prior to entering the Mediterranean waters. This method is inexpensive, yet highly effective. Salinas need to be conserved, and simple methods can solve the issue (Crisman, 1999).

Grasslands

Grasslands are defined by their wide-open, flat areas of wind-pollinated grasses that are constantly mowed by various grazing mammals. The height of the grass correlates with the amount of rainfall received. Fire, drought, and livestock grazing limit the growth of shrubs and trees and so play an important role in maintaining the biodiversity of the grasslands. Temperate grasslands are found in the Americas, South Africa, and central Eurasia, whereas tropical grasslands are found in South America, Australia, India, and Africa. Temperate grasslands are characterized by their hot summers, cold winters, and moderate rainfall. The steppe ecosystems, found mostly in Eurasia, are arid and dominated by shorter grasses. Many of these ecosystems have been converted to agricultural production or ranching. The tropical grasslands, also referred to as savannahs, lie close to the equator and remain warm year round, with marked dry and wet seasons. This discourages the growth of forests, and grasses and forbs remain the predominant forms of vegetation. Grasslands have well-drained soils with thin layers of humus. They are the transitional regions between rainforests and deserts (Senate resolution shines spotlight on the importance of soils, 2008).
Most large animals indigenous to grasslands are now extinct or seriously endangered because of human activities. Agriculture and livestock grazing lead to degradation. At present, large portions of grassland ecosystems are lost to salinization and desertification. Human-induced fires, water diversion, and species loss have disturbed natural regeneration cycles and substantially threaten the well-being of these ecosystems (Revkin, 2011).

Desert and Arid Land

The type of desert that comes to mind most often is the arid desert: the dry, hot desert that has warm winters and long, hot summers. There are extreme daily temperature fluctuations and short bursts of rain that evaporate quickly. The shallow, coarse, well-drained soil bears the dominant vegetation of these ecosystems: low trees, prostrate shrubs, cacti, and agave. They are found at lower altitudes in the Americas, Africa, South Asia, and Australia. Semi-arid deserts have more marked seasons than regular arid deserts and occupy the higher latitudes of North America, Europe, Russia, and North Asia. They have less severe summers and more precipitation (Harvey, 2008).

Before and after photographs of the Mission Creek Preserve desert restoration

Coastal deserts are found on the western edges of continents. They are characterized by long, cool winters and warm summers. The porous soil is able to support salt-tolerant shrubs and grasses with the low to moderate rainfall. Only insects, birds, and reptiles live in these ecosystems year round. Cold deserts do not support these same species but share many of the characteristics of coastal deserts, except that cold deserts are covered by snow fields (Senate resolution shines spotlight on the importance of soils, 2008). Human activity mainly removes plants from used areas, which decreases biodiversity in the region and harms the ecosystems. To reverse these harmful effects, ecologists focus on revegetation. First, they sample the soil and study their collections for disturbances in the soil and its chemical makeup. Then they choose seeds from native perennials and several native annuals from the surrounding areas. Sown into the soil and misted twice daily, hundreds of these seeds germinate in flats. Once the seeds grow into small plants, ecologists move these plants to the restoration site. They add plant supplements and nutrients to the soil before planting the actual vegetation. After the planting is finished, the area, monitored monthly, must have proper water patterns until it can sustain the plants with only natural precipitation. Plant protectors, which cover each plant in a plastic sheet, capture water that evaporates from the plants so that it can fall back on them and be reused. After a month or so of monitoring, the restored desert can function and prosper on its own (Restoration research at Red Rock Canyon State Park, n.d.).

Bibliography

Bernard, C. (n.d.). Culvert replacement and removal. Retrieved April 16, 2011, from http://www.fishfirst.org/howtoguides/ff_how-to-guide_culvert_replace_remove.pdf Brush layering. (n.d.). Informally published manuscript, Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, Mississippi. Retrieved from http://www.abe.msstate.edu/csd/NRCSBMPs/pdf/streams/bank/brushlayer.pdf Byers, J. E., Cuddington, K., Jones, C. G., Talley, T. S., Hastings, A., Lambrinos, J. G., Crooks, J. A., & Wilson, W. G. (2006). Using ecosystem engineers to restore ecological systems. Trends in Ecology & Evolution, 21(9), 493-500.

Clewell, A., Rieger, J., & Munro, J. (2005, December). Society for Ecological Restoration International: Guidelines for developing and managing ecological restoration projects (2nd ed.). Retrieved from http://www.ser.org/ Cloern, J. E. (2007). Eutrophication. Encyclopedia of earth. Retrieved April 15, 2011, from http://www.eoearth.org Crisman, T. L. (1999). Conservation of Mediterranean coastal saline ecosystems: The private sector role in maintaining ecological function. Retrieved April 16, 2011, from www.Gnest.org/conferences/Saltworks_post/039-047.pdf Destruction of natural habitats. (n.d.). Retrieved April 5, 2011, from www.nationalzoo.si.edu Ecosystem. (2008). In Encyclopedia of Earth. Retrieved April 5, 2011, from The Encyclopedia of Earth: www.eoearth.org Ecological restoration and land rehabilitation. (2005). Retrieved from http://www.abrinc.com/projects/ecological-restoration-and-reclamation.htm Environmental Protection Agency, Connecticut Department of Environmental Protection. (n.d.). Blackledge River habitat restoration project success stories. Hartford, Connecticut. Retrieved from http://ct.gov/dep/lib/dep/water/nps/success_stories/blackledgeriver.pdf Federal Interagency Stream Restoration Working Group (FISRWG). (1998). Stream corridor restoration: Principles, processes, and practices. GPO Item No. 0120-A; SuDocs No. A 57.6/2:EN3/PT.653. Fielder, P. L. (2008). Restoration ecology. AccessScience. Retrieved March 24, 2011, from www.accessscience.org Fischenich, J. C., & Morrow, J. V. (2000). Streambank habitat enhancement with large woody debris. Retrieved April 16, 2011, from http://www.mass.gov Global Restoration Network. (2011). Rivers/streams. Retrieved April 17, 2011, from http://www.globalrestorationnetwork.org Grismer, M. E., Schnurrenberger, C., Arst, R., & Hogan, M. P. (2008). Integrated monitoring and assessment of soil restoration treatments in the Lake Tahoe basin. Environmental Monitoring and Assessment, 150. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/18483773 Harvey, F. (2008, July 16). Soil under strain: A thinning layer of life evokes concern. Financial Times. Retrieved from http://soilrestorationnetwork.org/ Hobbs, R. J., & Harris, J. A. (2001). Repairing the earth's ecosystems in the new millennium. Restoration Ecology, 9(2), 239-246.

Holl, K. D., Crone, E. E., & Schultz, C. B. (2003). Landscape restoration: Moving from generalities to methodologies. American Institute of Biological Sciences, 53(5), 491-502. Kulluvainen, T., Aapala, K., Ahlroth, P., Kuusinen, M., Lindholm, T., Sallantaus, T., Siitonen, J., & Tukia, H. (2002). Principles of ecological restoration of boreal forested ecosystems: Finland as an example. Silva Fennica, 36(1), 409-422. Lamb, D., Erskine, P. D., & Parrotta, J. A. (2005). Restoration of degraded tropical forest landscapes. Science, 310, 1628-1632. Muragaya, Rajeswari, A. P. (2009). Effect of different ages of a rehabilitated forest on selected physico-chemical properties. American Journal of Applied Sciences, 6(6), 1043-1044. New England Environmental. (n.d.a). 10-acre wetland restoration: Sullivan's Ledge superfund site, New Bedford, MA. Retrieved April 17, 2011, from http://www.neeinc.com New England Environmental. (n.d.b). Coal tar remediation: Liberty Street, Easthampton, MA. Retrieved April 17, 2011, from http://www.neeinc.com Restoration research at Red Rock Canyon State Park. (n.d.). Retrieved from http://www.sci.sdsu.edu/SERG/restorationproj/mojave%20desert/redrock.html Restoring the gulf. (2010, August 18). New York Times. Retrieved from http://www.nytimes.com/ Revkin, A. C. (2011, February 28). The limits of laws as a conservation tool. New York Times. Retrieved from http://dotearth.blogs.nytimes.com/ Senate resolution shines spotlight on the importance of soils. (2008, July 09). Retrieved from http://www.terradaily.com/reports/ Society for Ecological Restoration International Science & Policy Working Group. (2004). The SER international primer on ecological restoration. Tucson: Society for Ecological Restoration International. Retrieved from www.ser.org Stewart, R. E., National Biological Service. (2007). National water summary on wetland resources. United States Geological Survey water supply. Retrieved April 17, 2011, from http://water.usgs.gov The Nature Conservancy. (1998). Restoration procedures manual for public lands in Florida. Retrieved April 17, 2011, from http://www.dep.state.fl.us U.S. Environmental Protection Agency. (2001). Table 11: Comparison of alternative lake restoration methods. Retrieved April 13, 2011, from http://www.waterboards.ca.gov/

U.S. Environmental Protection Agency, Office of Water (4501F). (2000). Principles for the ecological restoration of aquatic resources (EPA841-F-00003). Washington, DC: Author. 4 pp.

Water regime. (2011). GreenFacts. Retrieved April 7, 2011, from http://www.greenfacts.org

Image Credits
Chapter 1
earthembassy.org/eden-homes/passive-solar-heating/
newenergytechnologiesinc.com/solarwindow
popsci.com/gadgets/article/2010-07/power-plant
popsci.com/gadgets/article/2010-07/power-plant
nanosolar.com/technology
popsci.com/military-aviation-amp-space/article/2009-06/forever-plane

Chapter 2
science.howstuffworks.com/transport/flight/modern/airplane8.htm
news.bbc.co.uk/2/hi/science/nature/7827044.stm
dailymail.co.uk/sciencetech/article-1297713/Giant-offshore-wind-farm-mimics-sycamore-seed-joins-racedevelop-generation-turbine.html
cleantechnica.com/2009/07/06/new-design-integrates-wind-turbines-into-transmission-towers/

Chapter 3
villamimpi.com/index.php?pid=155
britannica.com/
alternative-energy-fuels.com/
our-energy.com/ocean_energy.html
oceanexplorer.noaa.gov/
miktechnology.files.wordpress.com/
sciencedirect.com/

Chapter 4
lilith-ezine.com/articles/politics/images/The-United-States-Energy-Crisis.jpg (Page 2)
questgarden.com/84/32/4/090627151032/images/original.jpg (Page 4)
ecomodder.com/blog/wp-content/uploads/2008/11/california-high-speed-train.jpg (Page 5)
inhabitat.com/files/energystarfrontloader.jpg (Page 7)
eoearth.org/files/122801_122900/122868/RL3428805_Figure_1.png (Page 9)
environmentalgraffiti.com/sites/default/files/images/http-i43.tinypic.com-u3g9c.preview.jpg (Page 10)
gogreeninitiative.org/images/GoGreenInitiative.gif (Page 12)
maximumfx.com/wp/wp-content/uploads/2010/04/light-off.jpg (Page 13)
i.treehugger.com/images/2007/10/24/numbers-cost-car-bus1.jpg (Page 14)

Chapter 5
americanscientist.org/Libraries/images/20072510429_846.jpg
batteryuniversity.com/learn/article/types_of_lithium_ion
openmarket.org/wp-content/uploads/2011/01/electric-car-frankh.jpg

Chapter 6
instablogsimages.com/images/2007/08/10/pollution_4646.jpg
volcanoes.usgs.gov/Imgs/Jpg/GasEffects/GasesChemistry.jpg
dvorak.org/blog/wp-content/uploads/2009/04/smokestackpollution.jpg
knovel.com/web/portal/browse/display?_EXT_KNOVEL_DISPLAY_bookid=1714
mustknowhow.com/index.php/air-conditioning/central-air-conditioningsystem-installation
prlog.org/10349595-world-health-organization-who-sets-radon-action-level-of-27-less-lung-cancer-riskthan-epa-40.html
charcoalproject.org/wp-content/uploads/2010/08/wsb_763x512_KEN05_OMORO_3stone+fire_800x5.jpg
sfuoutdoors.ca/Various/WindPro.jpg
celsias.com/media/uploads/admin/envirofit-cookstove_pg-3.jpg
burningissues.org/car-www/images/energyladder.gif

Chapter 7
In situ remediation using vacuum pumps. MacDonald, J. A., & Rittmann, B. E.
Vacuum pump in situ remediation below the water table. MacDonald, J. A., & Rittmann, B. E.
In situ remediation with peroxide. MacDonald, J. A., & Rittmann, B. E.
nmenv.state.nm.us/ust/images/remed-4.gif
jenniferwshepard.files.wordpress.com/2010/10/phytoremediation_diagram1.gif
Vidali, M.

Chapter 8
bp.blogspot.com/
mindfully.org/
epa.gov/epawaste/

Chapter 9
en.wikipedia.org/wiki/File:Sao_Paulo_ethanol_pump_04_2008_74_zoom.jpg
en.wikipedia.org/wiki/File:Biogas_pipes.JPG
hawaii.gov/dbedt/ert/new-fuel/files/shleser/f201.gif
sci.waikato.ac.nz/farm/images/cellulas
newscenter.lbl.gov

Chapter 10
photosmynthesis.files.wordpress.com/2010/09/water-treatment-plant.jpg
coolingtower-design.com/wp-content/uploads/2011/01/reverse-osmosis.jpg
schoolworkhelper.net/wp-content/uploads/2010/12/Water-Pollution-from-ndustry.png
i1-news.softpedia-static.com/images/news2/Bacteria-Eating-Up-Oil-Spills-and-ProducingBiodegradable-Plastic-2.jpg
bioweb.uwlax.edu/bio203/s2009/meinhard_jaso/polluted%20water.jpg
images.dpchallenge.com/images_challenge/0999/148/800/Copyrighted_Image_Reuse_Prohibited_42109.jpg
enviropure.ca/images/ozone_formation.jpg
oobybrain.com/wp-content/uploads/2010/05/BP-oil-spill-2010.jpg

Chapter 11
en.wikipedia.org
accessscience.com/loadBinary.aspx?aID=3619&filename=274100FG0010.gif
scientific-computing.com/features/feature.php?feature_id=126
biofuelswatch.com/hydrogen-fuel-cell/
alternative-energy-news.info/hydrogen-fuel-cell-bikes/

Chapter 12
api.ning.com/files/TPII6n9i1E2WGWTTDhXYjfRkRkqomXKb8UgwpPOe6QXhZslm12dH4xCabsWW9VlzuFkS1Qju9Y9cXc0GLEzsswyN3F5sYIXp/sustainbleagricIAASTD.jpg
grain.org/seedling_files/cotton_india.jpg

Chapter 13
en.wikipedia.org/wiki/File:Flytrap.jpg
aquaticscape.com/offsite/ecs/DSC_0106.jpg
toptourguide.com/worldtourism-updates/wp-content/uploads/2011/02/rainforest1.jpg
2.bp.blogspot.com/_zS2JDRBdNzk/TLBrDsEYcwI/AAAAAAAACfQ/0fekZEYCdGg/s1600/wood+cycad.jpg
solarnavigator.net/animal_kingdom/animal_images/Tiger_panthera_tigris_tigris_Bengal.jpg
cwi.nl/news/2011/automatic-image-recognition-technology-spots-endangered-leatherback-sea-turtle
abdn.ac.uk/~nhi708/images/passenger%20pigeon.jpg
news.bbc.co.uk/nol/shared/spl/hi/pop_ups/07/sci_nat_cloning_hall_of_fame/img/5.jpg
static.howstuffworks.com/gif/deforestation-2.jpg

Chapter 14
macalester.edu/environmentalstudies/threerivers/studentprojects/LakesStreamsRiversFall09/InvasiveSpeciesWeb/PurpleLoosestrife.html
negril.com/discus/messages/196641/239919.jpg
community.greencupboards.com/wp-content/uploads/2011/03/texas-feral-hog-regulations.jpeg

Chapter 15
ser.org/project_showcase/show_13.asp
main.net/rootwads.html
play-with-water.ch/d4/index.cfm?pageNo=6&systemNo=3&eksperimentNo=304&languagee
guidetolivingnaturally.wordpress.com/2010/12/21/what-is-phosphate-free-detergent-a-good-start/
nps.gov/archive/seki/snrm/gf/before_after/before_afternj.htm#2_3132
globalrestorationnetwork.org/ecosystems/mediterranean/case-studies/
justinwp.com/blog/Desert_Restoration_Corp_Training/