A2 Extinction Inevitable

Extinction from space viruses won't occur
Britt, Senior Space Writer, 2001 (Robert Roy, Survival of the Elitist: Bioterrorism May Spur Space Colonies, October 30, http://www.space.com/scienceastronomy/generalscience/)
Are we doomed? Many scientists argue that there is no need to worry about the mortality of civilization right now. Eric Croddy is an expert on chemical and biological weapons at the Monterey Institute of International Studies. Croddy said the threat of a virus wiping out the entire human species is simply not real. Even the most horrific virus outbreak in history, the 1918 Spanish Flu epidemic that killed between 20 million and 40 million people, including hundreds of thousands in the United States, eventually stopped. Experts say new strains of the influenza virus emerge every few decades and catch the human immune system unprepared, but prevention measures and ever-evolving medical treatments overcome the outbreaks. "I'd be much more concerned about an asteroid hitting the planet," Croddy said. Croddy accused Hawking of speaking more from a religious, apocalyptic view than from anything based on the facts of science. "What he said is more biblical than scientific," Croddy said. Besides, he added, "Earth's not such a bad place." Most space-colonization enthusiasts share this planet with Croddy, as well as his view of it. But whether stated or not, the desire to ensure survival has always permeated their plans.

A2 Alien Plague

No space diseases
Ghaz, 2009, ScienceRay, Do Diseases Come From Space: Comet Controversy, http://scienceray.com/astronomy/do-diseases-come-from-spacecomet-controversy/, KHaze
Other astronomers disagree with this theory. They claim that the effects of interstellar dust on starlight are not what would be expected from clouds of minute living organisms. And epidemics do not seem to result from annual meteor showers, the result of dust shed by comets that should, according to Hoyle and Wickramasinghe, be disease bearing.

A2 Anti-Gravity Weapons Impossible

Anti-gravity can't happen; mainstream physics agrees
Scotland on Sunday 96 (Geraldine Murray, Anti-gravity theory takes a tumble, September 22, L/N)
PLANS for an anti-gravity device, which would revolutionise space travel and defy Newton's established third law of motion, have been brought back down to earth with a bump. Russian scientist Dr Eugene Podkletnov has withdrawn a paper describing the machine which was due to be published in a leading physics journal next month. The device is thought to reduce the weight of any object suspended over it by up to 2% and, if Podkletnov's claims are true, could be one of the most radical scientific discoveries in history. Controversy surrounds Podkletnov's decision, which followed queries over the identity of the paper's co-author and a denial by Tampere University in Finland, where the Russian says he is based, that they knew anything about the anti-gravity research. But the paper succeeded in passing the scrutiny of three referees appointed by the Journal of Physics-D: Applied Physics to find flaws in Podkletnov's work. Tests are thought to have ruled out other possibilities for the machine's effect such as air flow or magnetic fields. Most physicists are traditionally sceptical about anti-gravity devices and doubts are already being voiced, despite the paper's acceptance by the respected journal. Podkletnov told New Scientist last week that he stood by his claims. "This is an important discovery and I don't want it to disappear," he said.

Podkletnov's experiment on gravity shielding is bunk; no one has repeated it
BBC 2 (Boeing tries to defy gravity, July 29, http://news.bbc.co.uk/1/hi/sci/tech/2157975.stm)
Dr Podkletnov claims to have countered the effects of gravity in an experiment at the Tampere University of Technology in Finland in 1992. The scientist says he found that objects above a superconducting ceramic disc rotating over powerful electromagnets lost weight. The reduction in gravity was small, about 2%, but the implications - for example, in terms of cutting the energy needed for a plane to fly - were immense. Scientists who investigated Dr Podkletnov's work, however, said the experiment was fundamentally flawed and that negating gravity was impossible.

Physicists agree anti-gravity is a delusion; experiments proving it are subject to measurement error
Business Week 97 (Otis Port, ANTIGRAVITY? WELL, IT'S ALL UP IN THE AIR, February 17, L/N)
Floating on air? It's possible. Just chill a ceramic superconductor below 90K (-300F) and place it on a magnet. The superconductor will levitate. It's called the Meissner effect, and it might one day lead to an "antigravity" machine. John H. Schnurer, director of physics engineering at Antioch College in Yellow Springs, Ohio, thinks he might have taken a first step in that direction last fall. After chilling a 1-inch-diameter superconducting disk, he threw a switch that sent an electrical current surging through a set of coils positioned around the disk. Above the disk was a plastic sample hanging from one end of a homemade balance scale containing no metal parts. The plastic sample rose ever so slightly -- corresponding to an apparent 5% loss in the weight of the sample. "Great fun," said Schnurer -- his restrained way of shouting "Eureka!" WEAK FORCE. Many physicists are sure antigravity is a delusion. Even if it does exist, it can't be more than one-millionth as strong as gravity, says Eric G. Adelberger, a professor of physics at the University of Washington who studies gravity. And because gravity itself is such a weak force, tiny magnetic fields and temperature changes can cause spurious results. Adelberger says it's crucial to control temperatures to one-thousandth of a degree -- way beyond the scope of Schnurer's setup.

A2 Anti-Gravity Weapons Authors Insane

The author of the article actually admits that Cook is insane and it's only a conspiracy theory
Rogers 2 (Adam, writer for The Slate, Oct. 18 2002, http://www.slate.com/id/2072733/ Feeling Antigravity's Pull)
Unfortunately, Cook strains his own credibility somewhat. A couple of weeks after his Jane's piece appeared, Cook's book on antigravity research, The Hunt for Zero Point, came out. In it, he claims that the Nazis built an antigravity device during World War II. Its absence from present-day science, Cook says, implies a vast "black" world of secret antigravity aircraft that might explain the UFOs people see over Area 51. He's a careful investigative reporter, but once you start talking about UFOs and Nazi antigravity you're not far from hidden tunnels under the White House full of lizard-men disguised as Freemasons. Even without Nazis, there are plenty of reasons to doubt Podkletnov. My e-mails to the account listed on his recent articles (not peer-reviewed) went unanswered. Even more problematic, I can't find the institution he lists as his affiliation in Moscow. "Eugene always expressed his worries that others could copy his work, although as far as I know he never applied for a patent," Giovanni Modanese, a collaborator of Podkletnov's at the University of Bolzano in Italy, wrote in an e-mail (using a Western version of Podkletnov's first name). "Nonetheless, at the scientific level if one wants a confirmation by others and a successful replication, one must give all the necessary elements." Well, yeah. Modanese says that the current version of the device, now called an "impulse gravity generator," is simpler and could be built "by a big-science team of people expert in superconductivity." A Boeing spokesperson didn't respond to follow-up questions. So, either there's nothing going on here, or it's an X-File. And the science? Ten years is a long time to go without replication. Combine that with Podkletnov's cagey behavior and it's enough to make even sci-fi geeks like me lose hope. But like the core of any good conspiracy, antigravity research has the ring of plausibility. One of the outstanding problems in physics and cosmology today involves the existence of so-called dark matter and dark energy. They're by far the main constituents of matter in the universe, and nobody knows what they're made of; researchers have only inferred their existence from gravitational effects. Coming up with a new theory of how gravity works might explain that, though it'd be a scientific revolution on a par with relativity. "Changing gravity is in the cards," says Paul Schechter, an astronomer at MIT. "But so far no one's been able to do better than Einstein." Still, Einstein worked in a lowly patent office. Ron Koczor works for NASA.

Podkletnov is inspired by Star Trek; he's an idiot
New Scientist 2 (Howard Medhurst, Putting spin on it. . . ., February 9, L/N)
The anti-gravity machine described in Podkletnov's 1992 paper seems to be almost identical to the gravity generators used on the starship Enterprise, as described on page 144 of "Star Trek: The Next Generation Technical Manual," by Rick Sternbach and Michael Okuda, copyright Paramount Pictures 1991 - except that the "Star Trek" devices have larger superconducting discs and spin a lot faster. Of course, that can only be a coincidence, can't it?

Cook and anti-gravity theorists are crackpots; they use no citations, no proof, and are borderline Nazi sympathizers
McClure, No Date Given (Kevin, Review of The Hunt for Zero Point, Written after 2001, http://www.forteantimes.com/review/huntzero.shtml)

The Zero Point for which Cook hunts is the point where anti-gravity technology achieves the escape of an object from gravity's effect: where it flies because there is nothing to prevent it doing so. He finds it, oddly, in the achievements of Nazi scientists during World War II, though they have never been replicated since, and Cook, a professional, and award-winning, writer on aerospace matters, never tells us what their technology was, or how it worked. Any attempt to replicate Cook's quest is bound to be frustrated. Four of his primary sources are, without explanation, given false names, including one Lawrence Cross, supposedly a former Jane's aerospace journalist, now a bureau chief for a rival publication in Australia. Cross feeds him a raw, uncritical version of the Nazi UFO material I debunked in Phoney Warfare in Fortean Studies 7, and apparently says "It's been around for decades, long enough to have been given a name. In the trade, we call it the Legend." Most of this material actually comes from former Nazis or later sympathisers, and I've never heard it called the Legend. Equally frustrating is Cook's decision to do without references or an index. There is much waffle here, and the story jumps backwards and forwards. Rumours are presented without noting their likely status, and unless a reader has spent time researching the same material from other sources, it would be impossible to make an objective assessment of his assertions. His style sometimes descends from the merely credulous to the downright odd. Without pursuing the question further, one of Cook's mysterious sources ends a chapter by saying "They were trying to build a fking time machine." More disturbingly, Cook sets out a detailed fantasy of how the (supposed) scientists working on the (supposed) anti-gravity (or time) machine would have been shot and buried by the SS, in line with a paragraph or two from the execution manual it had drawn up for the Holocaust. This tasteless passage seems inappropriate, at best. For a man who tells me he has 20 years' experience as a journalist, Nick Cook is remarkably trusting. In particular, he trusts the strange version of history put to him by one Igor Witkowski, a Pole who volunteered to assist him in his research. I spoke to Cook, who describes Witkowski as a former defence journalist, but was unaware of the evidence that Witkowski seems to be a ufologist who is interested in crop circles and the similarities between the technologies of the Nazis and the Aliens. He did not know what Witkowski's self-published tracts are about, having made no attempt to translate them from the Polish. In fact, Witkowski has put out six separate items titled something like Hitler's Supersecret Weapons, advertised along with other publications in the crashed saucer/paranormal field. It is at least possible that, in volunteering his assistance to Cook, Witkowski had an agenda of his own to publicise. Witkowski, too, has a vital, unnamable source. Supposedly, a Polish government official phoned him, inviting him to view documents and take notes about the development and concealment of extraordinary Nazi technology, as given in a record of the activities of a special unit of the Soviet secret intelligence service. Convinced of the veracity of both Witkowski and the source, Cook was driven around Eastern Europe to see evidence of The Bell, a supposed experimental device with two cylinders spinning in opposite directions, which Witkowski said glowed blue and destroyed plants, birds, animals, and sometimes humans. Dangerous tasks connected with its operation were, apparently, performed by concentration camp prisoners from Gross-Rosen. Cook seems to have been inside two constructions which had contained versions of The Bell, but publishes no photographs of this key evidence. Accepting the reality of dazzling, futuristic Nazi technology, Cook locates a scientist to match. The name he chooses, supplied in The Legend according to Lawrence Cross, is Viktor Schauberger, an Austrian forester who features regularly in pro- and neo-Nazi versions of the Nazi UFO mythos, but nowhere in mainstream scientific history. I'll be returning to the peculiar creation of the Schauberger-as-Saucer-Builder myth in detail in a forthcoming issue of FT, but suffice it to say that Cook's version of the tale probably originates in 1975, in a book by a prominent Holocaust revisionist. Although Cook visits the Schauberger archives, he does not confirm the tale told him by Cross. And while Cook concludes that via Schauberger, the Nazis had been deeply involved, no question, in what can only be described as flying saucer technology, we are not allowed to see any pages of Schauberger's diaries or letters to support this extraordinary claim. In the end it is the lack of evidence, and of traceable sources, that renders this book almost valueless as either history or science. Worse, it may unwittingly be delivering political propaganda, glorifying fictional Nazi achievements, of which I am sure neither author nor publisher would approve.

A2 Anti-Matter Impossible

No anti-matter - too costly and slow to produce
ScienceRay, 1-28-11, Should We Fund Antimatter?, http://scienceray.com/technology/should-we-fund-antimatter/, KHaze
The problem with getting antimatter is that it doesn't exist on Earth naturally, because any interaction with matter will cause them both to annihilate and release their energy. It can be produced in particle accelerators at 62.5 quadrillion dollars per gram. We can only produce less than 1 nanogram at a time, so it will take billions of years to produce it at current rates. There is the possibility of having stray antimatter in the universe, but we can never be sure.
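As a quick sanity check of the card's arithmetic (a minimal sketch in Python; the roughly one-nanogram-per-year production rate is an assumption, since the card only says "less than 1 nanogram at a time"):

# Back-of-the-envelope check of the antimatter claims above.
PRICE_PER_GRAM_USD = 62.5e15    # $62.5 quadrillion per gram (from the card)
RATE_GRAMS_PER_YEAR = 1e-9      # ~1 ng/year (assumed, not stated in the card)

years_for_one_gram = 1.0 / RATE_GRAMS_PER_YEAR
print(f"Years to produce 1 g: {years_for_one_gram:.1e}")   # ~1e9, i.e. "billions"
print(f"Cost of 1 g: ${PRICE_PER_GRAM_USD:.1e}")           # ~$6.2e16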

Anti-matter too difficult to produce - no chance it'll ever happen
Quinn, 2010, Jameson, Programmer, Quora, Will antimatter as a fuel ever be feasible?, http://www.quora.com/Will-antimatter-as-a-fuel-ever-befeasible, KHaze
Here's some quick, preliminary points to kick things off.
- For the foreseeable future, antimatter could only be a storage medium, not an energy source. The only way to get the antimatter is to make it.
- Recently, scientists at CERN managed to contain 37 atoms of antihydrogen for one second before deliberately releasing them. At the rate they're going, it would take them more than the age of the universe to create one gram.
- Also, the melting point of (anti)hydrogen is 14 K (-259.14 C), and performing fusion with antimatter (to get, say, antilithium) is unthinkably hard, so essentially you have to magnetically contain a gas. That takes insane fields or extreme cold - difficult to square with getting the energy out in a controlled manner.
- The energy density is insanely high; however, the efficiency is infinitesimal. Even if efficiency went on a Moore's-law streak of increase - which we have no reason to expect - it would take many decades for it to become remotely feasible.

Can't make anti-matter weapons - too difficult, long-term, and costly
Gefter, 2009, Amanda, NewScientist, What about antimatter bombs?, http://www.newscientist.com/article/dn17019-what-about-antimatterbombs.html, KHaze

The idea that humanity might one day harness the annihilative power of antimatter for destructive purposes has a ghastly fascination - and it's a central part of the Angels and Demons plot, in which a bomb containing just a quarter of a gram of antimatter threatens to obliterate the Vatican. Relax, says Rolf Landua, a physicist at CERN. There's a very good reason why nothing like that is going to happen any time soon. "If you add up all the antimatter we have made in more than 30 years of antimatter physics here at CERN, and if you were very generous, you might get 10 billionths of a gram," he says. "Even if that exploded on your fingertip it would be no more dangerous than lighting a match." Patients undergoing PET scans have natural radioactive atoms in their bloodstreams emitting tens of millions, if not more, positrons to no ill effect. Even if physicists could make enough antimatter to build a viable bomb, the cost would be astronomical. "A gram might cost a million billion dollars," says Landua. "That's probably more than Barack Obama wants to invest right now." Frank Close, a particle physicist at the University of Oxford, points out the time problem, too. "It would take us 10 billion years to assemble enough anti-stuff to make the bomb Dan Brown talks about," he says.
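The Gefter card's figures can be cross-checked the same way (a rough sketch; the 30-year window and total output are the card's own numbers, while the straight-line extrapolation is ours, which is presumably why it lands a couple of orders of magnitude under Frank Close's 10-billion-year estimate):

# Consistency check of the antimatter-bomb numbers above.
cern_total_grams = 10e-9     # "10 billionths of a gram" made in ~30 years
years_of_running = 30
bomb_grams = 0.25            # the quarter-gram Angels and Demons bomb

rate = cern_total_grams / years_of_running      # ~3.3e-10 g/year
years_for_bomb = bomb_grams / rate
print(f"Years to make 0.25 g at CERN's rate: {years_for_bomb:.1e}")  # ~7.5e8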

Anti-matter either was permanently destroyed or is too far away to be of consequence
Gefter, 2009, Amanda, NewScientist, Where is all the antimatter?, http://www.newscientist.com/article/dn16780-where-is-all-theantimatter.html, KHaze
According to the theory, matter and antimatter were created in equal amounts at the big bang. By rights, they should have annihilated each other totally in the first second or so of the universe's existence. The cosmos should be full of light and little else. And yet here we are. So too are planets, stars and galaxies; all, as far as we can see, made exclusively out of matter. Reality 1, theory 0. There are two plausible solutions to this existential mystery. First, there might be some subtle difference in the physics of matter and antimatter that left the early universe with a surplus of matter. While theory predicts that the antimatter world is a perfect reflection of our own, experiments have already found suspicious scratches in the mirror. In 1998, CERN experiments showed that one particular exotic particle, the kaon, turned into its antiparticle slightly more often than the reverse happened, creating a tiny imbalance between the two. That lead was followed up by experiments at accelerators in California and Japan, which in 2001 uncovered a similar, more pronounced asymmetry among heavier cousins of the kaons known as B mesons. Once the LHC at CERN is back up and running later this year, its LHCb experiment will use a 4500-tonne detector to spy out billions of B mesons and pin down their secrets more exactly. But LHCb won't necessarily provide the final word on where all that antimatter went. "The effects seem too small to explain the large-scale asymmetry," says Frank Close, a particle physicist at the University of Oxford. The second plausible answer to the matter mystery is that annihilation was not total in those first few seconds: somehow, matter and antimatter managed to escape each other's fatal grasp. Somewhere out there, in some mirror region of the cosmos, antimatter is lurking and has coalesced into anti-stars, antigalaxies and maybe even anti-life. "It's not such a daft idea," says Close. When a hot magnet cools, he points out, individual atoms can force their neighbours to align with magnetic fields, creating domains of magnetism pointing in different directions. A similar thing could have happened as the universe cooled after the big bang. "You might initially have a little extra matter over here and a little extra antimatter somewhere else," he says. Those small differences could expand into large separate regions over time. These antimatter domains, if they exist, are certainly not nearby. Annihilation at the borders between areas of stars and anti-stars would produce an unmistakable signature of high-energy gamma rays. If a whole anti-galaxy were to collide with a regular galaxy, the resulting annihilation would be of unimaginably colossal proportions. We haven't seen any such sign, but then again there's a lot of universe that we haven't looked at yet and whole regions of it that are too far away ever to see.

A2 Anti-Matter Not Dangerous

Anti-matter is impossible and it's key to energy production
Palmer, 5-30-11, Brian, Washington Post, NASA looks for antimatter. It's not just some sci-fi idea, http://www.washingtonpost.com/national/science/nasa-looks-for-antimatter-its-not-just-some-sci-fi-idea/2011/05/23/AGYAeyEH_story_1.html, KHaze
Even more perplexing is the fact that all our high-energy experiments on Earth produce matter and antimatter in equal proportions. Even if the Big Bang did produce tons of antimatter, and it was simply destroyed as it interacted with matter, why was there all this matter that's you, me, and everything around us left over? The AMS is one step toward finding the answer. It's going to sit up in space and try to trace the origins of the antimatter that is floating around the cosmos. Is it possible it's all coming from the same direction, and that there's an antimatter universe somewhere? If neither a potential revolution in every physical law we hold dear, nor insight into the birth of the universe, interests you, there's a potential practical use for antimatter: energy production. Every time an antiparticle meets a particle, energy is produced with no harmful leftovers. If we could bottle antimatter, "you wouldn't need nuclear reactors, you wouldn't need gasoline; you wouldn't need anything," according to Mike Shara, an astrophysicist at the American Museum of Natural History. "You'd have the perfect source of energy." Unfortunately, the engineering is way, way behind our imaginations right now. Fermilab manages to produce about two one-billionths of a gram of antiprotons per year. That's not enough to solve the energy problems of a small village, let alone the world.

A2 Artificial Intelligence

A-Life is ridiculous; machines can never be like humans
Zey 2k (Michael, Professor of Management at Montclair State University, The Future Factor: The Five Forces Transforming Our Lives and Shaping Human Destiny, p. 231-233)

Researchers who predict that robots and computers will achieve human capabilities base their contentions on their belief that soon these machines will not only compute but also "think." For decades science fiction novels and movies have featured smart robots with almost human-like thinking abilities. The movies 2001 and the recent Bicentennial Man predict a future of thinking machines. Can the computer, no matter how complex or massive, ever think in the sense that humans do? Such feats as Deep Blue's victory over Kasparov have cybernetic scientists and technicians murmuring that we are on the verge of creating a thinking machine to challenge the human species' monopoly on real intelligence. However, many in the cybernetic community express grave doubts over whether such machines actually perform human-like thinking. Marvin Minsky, MIT professor emeritus who is credited with initiating the Artificial Intelligence (AI) movement over 35 years ago, put such proficiency in perspective. According to Minsky, "Deep Blue might be able to win at chess, but it wouldn't know to come in from the rain." 10 Minsky's comment cut to the very heart of the thinking machine debate. Deep Blue's circuits, wiring, and program, its entire "being," if we can apply such a term to this contrivance, knows nothing except how to play chess. Concepts like "rain" do not even exist in Deep Blue's memory banks. Nor could it even imagine that the rain's overwhelming moisture could impair its circuits, or fashion a strategy to avoid such a misfortune. In addition, skeptics repeatedly remind us that human intelligence created Deep Blue. Yet, instead of celebrating Deep Blue's victory as a testimony to human intelligence, the AI community congratulates the machine for a job well done. Actually, this debate has already been settled in favor of humankind. Cambridge University physicist Roger Penrose combines information science, cognitive psychology, and physics to make a tightly constructed case against the possibility that computers can ever achieve human intelligence. In two books, The Emperor's New Mind and Shadows of the Mind, Penrose argued that the computer can never be conscious, and thus truly intelligent. 11 When our brains operate, we juggle many different thoughts and thought patterns before zeroing in on one unified pattern that becomes a conscious thought. Some physical mechanism must exist that helps us achieve, and maintain, this pattern of multiple simultaneously existing "protothoughts" before we focus in on the final thought. Penrose claims that this mechanism acts "non-locally." That is, some aspects of these thought patterns would have to act more or less at the same instant at widely separated locations of the brain, rather than spreading out relatively slowly, neuron by neuron. The genius of Penrose's theory is the way he applies quantum physics to the operation of the brain. His basic point is that before a thought, or the neural signals that constitute thought, enters our consciousness, it exists in a "quantum wave state." At the threshold of consciousness, the "wave-thoughts" then "collapse" or coagulate into a single ordinary thought. If, as Penrose claims, such quantum mechanical phenomena are the operating principles behind human consciousness, the brain functions in a way that no mechanical device, computer or otherwise, can ever replicate. Computing devices, artificial neural networks, cannot simulate quantum mechanical phenomena.12 Penrose's theory seems to prove that no matter how complex or sophisticated a computer or computer network is, it will never achieve consciousness. And if our smart machines can never reach consciousness, they will never be said to truly think!13 Donald Norman, VP of research at Apple and psychology professor at the University of California, does not believe that in the foreseeable future computers and robots will come to mimic and/or surpass people. People and computers operate along completely different principles. According to Norman, the power of biological computation, that is, the human brain, emerges from "a large number of slow, complex devices, neurons, working in parallel through intricate electrical-chemical interactions."14 All this hardwiring enables the human to think in amazingly complex, abstract ways. On the other hand, computers have no problem finding square roots instantaneously, or adding large columns of eight-digit numbers without hesitation. The computer's ability to perform math with ease and dexterity results from its multitudinous, high-speed devices following binary logic. Errors in the operation of any of the underlying components are avoided either by careful design to minimize failure rates or through error-correcting coding in critical areas. Because of the computer's speed and accuracy, Norman says, we accord computers positive traits such as precise, orderly, undistractible, unemotional, and logical and label humans vague, disorganized, distractible, emotional, and illogical. According to Norman, we have our priorities backward. To appreciate humankind's natural superiority, he says, let us label humans creative, flexible, attentive to change, and resourceful and stigmatize computers as dumb, rigid, insensitive to change, and unimaginative.

A-life impossible; language integration is based on physical experience that robots lack
Miyakawa 03 (January 1, Mikiko, writer for Daily Yomiuri, Yes, they're cute. Will they think someday?, first paragraph on ROBOTS ARCHIVE, http://www.aaai.org/aitopics//newstopics/robot3.html)
Is it possible for robots to have minds like human beings? Prof. Hiroshi Tsukimoto of Tokyo Denki University attempted to answer this controversial question by focusing on robots' capability of understanding language in his book titled 'Robotto no Kokoro' (Robot's Mind). In considering this issue, ... While many scientists claim that computers will become able to understand and use languages just like people, Tsukimoto, an expert on artificial intelligence, believes it will be impossible for computers to do so as they have no bodies. The professor claims that the comprehension of languages involves 'functional physical movement.' In other words, understanding of words is associated with images built up through one's physical experiences, he said.

Robots would revere humankind and not destroy us
Kurzweil 01 (Ray, June 18, http://www.kurzweilai.net/meme/frame.html?main=/articles/art0212.html?, developer of the omni-font OCR, the first print-to-speech reading machine, the first CCD flat-bed scanner, the first text-to-speech synthesizer, the first music synthesizer capable of creating orchestral sounds, founder of nine businesses, member of the National Inventors Hall of Fame, recipient of the 1999 National Medal of Technology (nation's highest tech honor). Basically, he knows his robots.)
What do emotions have to do with intelligence? In my view, our emotional capacity represents the most intelligent thing we do. It's the cutting edge of human intelligence, and as the film portrays, it will be the last exclusive province of biological humanity, one that machines will ultimately master as well. By the way, if David wishing to become "a real boy" sounds like a familiar fairy tale, the movie makes the allusion and metaphor of Pinocchio explicit. Even early in his development, David is sufficiently appealing that he wins the sympathies of the Flesh Fair spectators, much to the dismay of the master of ceremonies, who implores the audience to "not be fooled by the talent of this artistry." In the third conception of machines that the movie presents, we see entities that are supremely sublime. I've always maintained that we will ultimately change our notion of what it is to be a machine. We now regard a machine as something profoundly inferior to a human. But that's only because all the machines we've encountered are still a million times simpler than ourselves. But that gap is shrinking at an exponential rate, and the movie examines what I believe will be the last frontier: mastering our most noble emotions, a capability displayed by only one human in the movie and sought by at least one machine. I won't give away the movie's ending by revealing whether David is successful in his quest, but I will say that at one point he does display a decidedly inhuman degree of patience. It was also my feeling that the very advanced entities we meet later in the movie are displaying a noble character that is life-affirming in the Spielbergian sense. I have also maintained that future AI's will appreciate that they are derivative of the human-machine civilization, and will thereby revere their biological ancestors. This view is supported in Spielberg's conception of the most advanced machines that we meet in the film.

Advanced AI is impossible - we can't understand the complexities of how our own brains work, let alone how to build them
Drexler 86 (K. Eric, Engines of Creation, Thinking Machines, http://www.e-drexler.com/d/06/00/EOC/EOC_Chapter_5.html)
There seems to be only one idea that could argue for the impossibility of making thought patterns dance in new forms of matter. This is the idea of mental materialism - the concept that mind is a special substance, a magical thinking-stuff somehow beyond imitation, duplication, or technological use. Psychobiologists see no evidence for such a substance, and find no need for mental materialism to explain the mind. Because the complexity of the brain lies beyond the full grasp of human understanding, it seems complex enough to embody a mind. Indeed, if a single person could fully understand a brain, this would make the brain less complex than that person's mind. If all Earth's billions of people could cooperate in simply watching the activity of one human brain, each person would have to monitor tens of thousands of active synapses simultaneously - clearly an impossible task. For a person to try to understand the flickering patterns of the brain as a whole would be five billion times more absurd. Since our brain's mechanism so massively overwhelms our mind's ability to grasp it, that mechanism seems complex enough to embody the mind itself.

Machines can't have original intent, only derived intent - proves humanity superior
Papazian 92 (Dennis R., Ph.D. in International studies, does projects in new technology, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, May 1992, http://www.umd.umich.edu/dept/armenian/papazian/robots.html)
A fundamental question is: Can man hope (or fear) that he can create machines which will become more intelligent than he? The traditional answer of philosophy is that machines, indeed, cannot be more intelligent than people simply because man is the creator and the machine the created. They supported this view with the proposition that only humans have "original intent" while machines can only have "derived intent." Only time will settle this question; but, hopefully, man still must be the judge. One thing is now clear, that in performing specific and limited tasks, present machines are--even now--more dependable than most people; yet while in dealing with complex matters, these machines can seem rather stupid and inept.

Machine programming can't handle situations it's not programmed for - there's zero intuition
Papazian 92 (Dennis R., Ph.D. in International studies, does projects in new technology, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, May 1992, http://www.umd.umich.edu/dept/armenian/papazian/robots.html)
Furthermore, there is a difference between a machine doing the work of an intelligent person and true intelligence. Computers can now defeat all but grandmasters at chess, they can do your income tax from questions you answer, and they can deliver your mail within a building. These specialized programs and machines, however, cannot at the same time deal with life processes. The mail delivery "robot" would continue on its rounds even if an accidental release of poison gas would empty a building of all its occupants. This circumscribed "intelligence" of the robot, consequently, has been dubbed "artificial intelligence," or "specialized intelligence," to distinguish it from true human or "life intelligence," much less "creative intelligence." Those who have been on the forefront of creating machine intelligence have observed "how easy the 'hard' things were to do, and how hard the 'easy' things." The application of a machine to certain complex tasks, which often may exceed human ability to equal, can be contrasted with the inability of a machine to comprehend a nursery rhyme or to leave a building in case of fire.

Even if A-Life intelligence is good, it's not transferrable, which makes it useless
Papazian 92 (Dennis R., Ph.D. in International studies, does projects in new technology, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, May 1992, http://www.umd.umich.edu/dept/armenian/papazian/robots.html)
Computers programmed to capture human expertise, to replicate logic and experience, also have significant but limited use. A program to prescribe medicine for bacterial infection may do so better than most physicians, but they cannot distinguish between an infected woman and one in the pains of childbirth. The answer at present, of course, is to have humans work closely with machines to take advantage of the best elements of both, the machine's logic and memory, and the person's "common sense." Rather than replacing people, these expert systems make people of modest intelligence equal, in certain tasks, to those who are brilliant and have vast experience. Thus we see here the seeds of profound changes in the way people work and the potential benefit for society. Such successes, however, may conceal the fact that machine intelligence is not transferable. If a program is devised to maximize investments in the stock market by the manipulation of futures, it cannot ipso facto be "intelligent" enough to solve problems of another sort. Unlike human intelligence and learning, machine intelligence cannot be easily transferred to new and unexpected tasks.

A-Life can't choose between balanced alternatives, which is a key aspect of life - they'd fail
Chandler 02 (Keith, creator of Mental Realism, and how many people can say they created something with a name like that? Also been published in peer-reviewed journals more times than God. The Android Myth, page 64)
I cannot help but wonder what choice of behavior an artificial intelligence, programmed with the same set of background beliefs or worldview that are assumed in Hamlet, would make for the hapless Prince of Denmark. How would it compute its way out of his dilemma? Even more to the point, how would it acquire existential concerns of its own that led to the necessity of choosing between evenly balanced alternatives, something humans have to do all the time. What the first three epigraphs tell us is that human thinking proceeds from our existential condition, our aliveness. It would be funny if it were not so sad to see Ray Jackendoff offering a tentative list of what he calls affects which are intended to explain certain characteristics of visual processing, such as hallucinations, which do not follow the normal rules. These affects are described in terms of binary oppositions (op. cit.: 305-308):

A2 Asteroids
There's no imminent threat to the Earth and we would have centuries of warning in the status quo
BENNETT 2010 (James, Prof of Economics at George Mason, The Doomsday Lobby: Hype and Panic from Sputniks, Martians, and Marauding Meteors, p. 168-169)
Cooler heads intervened. Donald Yeomans of the Jet Propulsion Laboratory said, "The comet will pass no closer to the Earth than 60 lunar distances [14 million miles] on August 5, 2126. There is no evidence for a threat from Swift-Tuttle in 2126 nor from any other known comet or asteroid in the next 200 years."96 Even Brian Marsden concurred. He retracted his prediction, though he held out the possibility that in the year 3034 the comet could come within a million miles of Earth. Surveying this very false and very loud alarm, Sally Stephens, writing in the journal of the Astronomical Society of the Pacific, observed, "Marsden's prediction, and later retraction, of a possible collision between the Earth and the comet highlight the fact that we will most likely have century-long warnings of any potential collision, based on calculations of orbits of known and newly discovered asteroids and comets. Plenty of time to decide what to do."97

Ignore their impact - you're more likely to be killed by lightning than an asteroid
SIEGEL 2010 (Ethan, theoretical astrophysicist at Lewis and Clark College, How Afraid of Asteroids Should You Be?, http://scienceblogs.com/startswithabang/2010/11/how_afraid_of_asteroids_should.php)
But -- and my opinion here definitely runs against the mainstream -- I think this hysteria is absolutely ridiculous. One of the things you almost never hear about are the frequency and the odds of an asteroid strike harming you. If large asteroid strikes happened every few decades, we'd have something legitimate to prepare for and worry about. But if you've only got a one-in-a-million chance of an asteroid harming you over your lifetime -- meaning you are over 100 times more likely to be struck by lightning than harmed by an asteroid -- perhaps there are better ways to spend your resources.

No comet threat
Near-Earth Object Science Definition Team 2003 (August 22, Study to Determine the Feasibility of Extending the Search for Near-Earth Objects to Smaller Limiting Diameters, Prepared at the Request of National Aeronautics and Space Administration Office of Space Science Solar System Exploration Division, http://neo.jpl.nasa.gov/neo/neoreport030825.pdf)
The relative constancy of the long-period comet discovery rate over the past 300 years, the results from the Sekanina and Yeomans (1984) analysis, the Marsden (1992) type analysis and the above reality check all suggest that the threat of long-period comets is only about 1% the threat from NEAs. Levison et al. (2002) note that as comets evolve inward from the Oort cloud, the vast majority of them must physically disrupt rather than fade into dormant comets; otherwise, vast numbers of dormant long-period comets would have been discovered by current NEO surveys. This conclusion would strengthen the case against there being a significant number of dormant long-period or Halley-type comets that annually slip past the Earth unnoticed. While Earth impacts by long-period comets are relatively rare when compared to the NEA impact flux, the present number of Earth-crossing asteroids drops very steeply for asteroids larger than 2 kilometers in diameter, more steeply than the flux of cometary nuclei (Weissman and Lowry 2003). Hence, it is possible, perhaps even likely, that long-period comets provide most of the large craters on the Moon (diameter > 60 km) and most of the extinction level large impacts on Earth (Shoemaker et al., 1990). The conclusion is that, while a newly discovered Earth-threatening, long-period comet would have a relatively short warning time, the impact threat of these objects is only about 1% the threat from NEAs. More generally, the threat from all long-period or short-period comets, whether active or dormant, is about 1% the threat from the NEA population. The limited amount of resources available for near-Earth object searches would be better spent on finding Earth-threatening NEAs with the knowledge that these types of surveys will, in any case, find many of the Earth-crossing, long-period comets as well. Finally, it has been argued that we currently enjoy a relatively low cometary flux into the inner solar system and that some future comet shower, perhaps due to a passing star in the Oort cloud or a perturbation of our Oort cloud by the material in the galactic plane, could greatly increase this flux. The time scale for an increased cometary flux of this type is far longer than one hundred years so that current NEO searches can afford to concentrate their efforts on the more dangerous NEAs.
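The odds comparison in the Siegel card above reduces to two numbers; a minimal sketch making them explicit (the 1-in-10,000 lightning figure is implied by the card rather than stated):

# The Siegel card's lifetime-odds comparison, made explicit.
p_asteroid = 1e-6                  # "one-in-a-million chance" over a lifetime
p_lightning = 100 * p_asteroid     # "over 100 times more likely"

print(f"Asteroid harm:    1 in {1/p_asteroid:,.0f}")    # 1 in 1,000,000
print(f"Lightning strike: 1 in {1/p_lightning:,.0f}")   # 1 in 10,000 (implied)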

A2 Asteroids Bias
Asteroid calculations are bad science; journals publish the most extreme stories and the actual risk is close to zero
BENNETT 2010 (James, Prof of Economics at George Mason, The Doomsday Lobby: Hype and Panic from Sputniks, Martians, and Marauding Meteors, p. 157-158)
We should here acknowledge, without necessarily casting aspersions on any of the papers discussed in this chapter, the tendency of scientific journals to publish sexy articles. (Sexy, at least, by the decidedly unsexy standards of scientific journals.) Writing in the Public Library of Science, Neal S. Young of the National Institutes of Health, John P.A. Ioannidis of the Biomedical Research Institute in Greece, and Omar Al-Ubaydli of George Mason University applied what economists call the winner's curse of auction theory to scientific publishing. Just as the winner in, say, an auction of oil drilling rights is the firm that has made the highest estimation, often overestimation, of a reserve's size and capacity, so those papers that are selected for publication in the elite journals of science are often those with the most extreme, spectacular results.63 These papers may make headlines in the mainstream press, which leads to greater political pressure to fund projects and programs congruent with these extreme findings. As The Economist put it in an article presenting the argument of Young, Ioannidis, and Al-Ubaydli, "Hundreds of thousands of scientific researchers are hired, promoted and funded according not only to how much work they produce, but also where it gets published. Column inches in journals such as Nature and Science are coveted; authors understand full well that studies with spectacular results are more likely to be published than are those that will not lead to a wire story." The problem, though, is that these flashy papers with dramatic results often turn out to be false.64 In a 2005 paper in the Journal of the American Medical Association, Dr. Ioannidis found that of the 49 most-cited papers on the effectiveness of medical interventions, published in highly visible journals in 1990-2004, a quarter of the randomised trials and five of six nonrandomised studies had already been contradicted or found to have been exaggerated by 2005. Thus, those who pay the price of the winner's curse in scientific research are those, whether sick patients or beggared taxpayers, who are forced to either submit to or fund specious science, medical or otherwise. The trio of authors call the implications of this finding dire, pointing to a 2008 paper in the New England Journal of Medicine showing that almost all trials of anti-depressant medicines that had had positive results had been published, while almost all trials of anti-depressants that had come up with negative results remained either unpublished or were published with the results presented so that they would appear positive. Young, Ioannidis, and Al-Ubaydli conclude that "science is hard work with limited rewards and only occasional successes. Its interest and importance should speak for themselves, without hyperbole." Elite journals, conscious of the need to attract attention and stay relevant, cutting edge, and avoid the curse of stodginess, are prone to publish gross exaggeration and findings of dubious merit. When lawmakers and grant-givers take their cues from these journals, as they do, those tax dollars ostensibly devoted to the pursuit of pure science and the application of scientific research are diverted down unprofitable, even impossible channels. The charlatans make names for themselves, projects of questionable merit grow fat on the public purse, and the disconnect between what is real and what subsidy-seekers tell us is real gets ever wider.65 The matter, or manipulation, of odds in regards to a collision between a space rock and Earth would do Jimmy the Greek proud. As Michael B. Gerrard writes in Risk Analysis in an article assessing the relative allocation of public funds to hazardous waste site cleanup and protection against killer comets and asteroids, "Asteroids and comets are the ultimate example of a low-probability/high-consequence event: no one in recorded human history is confirmed to have ever died from one." Gerrard writes that several billion people will die as the result of an impact at some time in the coming half million years, although that half-million year time-frame is considerably shorter than the generally accepted extinction-event period.66 The expected deaths from a collision with an asteroid of, say, one kilometer or more in diameter are so huge that by jacking up the tiny possibility of such an event even a little bit, the annual death rate of this never-before-experienced disaster exceeds deaths in plane crashes, earthquakes, and other actual real live dangers. Death rates from outlandish or unusual causes are fairly steady across the years. About 120 Americans die in airplane crashes annually, and about 90 more die of lightning strikes. Perhaps five might die in garage-door opener accidents. The total number of deaths in any given year by asteroid or meteor impact is zero, holding constant since the dawn of recorded time.

Asteroid collision studies are exaggerated to get more money for other projects
BENNETT 2010 (James, Prof of Economics at George Mason, The Doomsday Lobby: Hype and Panic from Sputniks, Martians, and Marauding Meteors, p. 166)
The authors conclude that increasing public awareness may be the key to demands for action to deal with the impact hazard. And so for the fifteen years since the Gehrels volume was published, those who seek public funds to deal with this rather hazy hazard have done their best to raise public awareness through various means.85 Curiously, a bracing shot of skeptical clarity appeared in the toe-breakingly (if you drop it) long Hazards Due to Comets & Asteroids on page 1203, which one seriously doubts one in a hundred readers ever make it to. P.R. Weissman of the Jet Propulsion Laboratory writes: "One problem for those advocating an impact hazard defense and/or detection system is that their recommendations often appear to be self-serving. Astronomers who study small bodies have advocated an observing program that emphasizes searching for large (> 1 kilometer) Earth-crossing asteroids and comets. These are, in general, the same objects that those astronomers are currently discovering with their existing search programs. Thus, their recommendations can be viewed as motivated by a desire to obtain additional funding and instrumentation for their ongoing work."86 What a world of wisdom and insight is contained in those sentences! Astronomers and engineers whose livelihoods depend on the perception of an impact hazard develop and publish studies concluding that there is an impact hazard. The circle goes round and round, and fills, gradually, with taxpayer money. The lessons of Chicken Little, suggests Weissman, are one factor that has kept the funding of such programs from really taking off. Weissman urges his colleagues to GO SLOW. "Don't attempt to divert substantial resources to a program that, at present, is neither necessary, nor prudent."87

Asteroid threats are exaggerated to get money for space programs
BENNETT 2010 (James, Prof of Economics at George Mason, The Doomsday Lobby: Hype and Panic from Sputniks, Martians, and Marauding Meteors, p. 167-168)
That same volume contained a paper by Andrew F. Cheng and Robert W. Farquhar of the Applied Physics Laboratory, J. Veverka of Cornell University, and C. Pilcher of NASA arguing for space missions to NEOs in order to better determine their composition and structure.90 Going them one better were a sextet of researchers from NASA, the Russian Space Agency, and other institutions who wrote of manned exploration of NEOs, which would, among other things, "strengthen the integrity of any foreseeable program of human lunar and Mars exploration."91 The NEO scare has many spinoffs, it seems: its missions can even be shake-down cruises for that long-delayed manned mission to Mars. The challenges of dealing with microgravity and cosmic rays, designing effective life support, and ensuring communications across the void of space would be rehearsed in manned NEO missions in preparation for a trip to Mars. Political scientist John Mueller, author of Overblown: How Politicians and the Terrorism Industry Inflate National Security Threats, and Why We Believe Them (2006), quotes a Turkish proverb, "If your enemy be an ant, imagine him to be an elephant," which he describes as "spectacularly bad advice."92 To drastically misestimate your enemy is to badly misallocate resources. For instance, the Department of Homeland Security announces, "Today's terrorists can strike at any place, at any time, and with virtually any weapon."93 This language, preposterously untrue (can terrorists really strike in Baton Rouge with an antimatter gun, or in Yankton, South Dakota, with a secret decoder ring?), could easily be transferred into any scare-mongering story about killer comets or rogue asteroids threatening the earth. Carl Sagan said that the extraordinarily remote chance that an asteroid or comet might strike Earth justified, indeed compelled, a vigorous space program. "Since, in the long run, every planetary society will be endangered by impacts from space, every surviving civilization is obliged to become spacefaring not because of exploratory or romantic zeal, but for the most practical reason imaginable: staying alive."94 Now, given that a K-T-like disaster is expected every 50 or 100 million years, we might be excused for asking just how imperative it is that our government lead us into the brave new world of spacefaring. To ask this question, however, is to reveal oneself as unimaginative, dull, lacking in foresight, perhaps troglodytic, certainly far from au courant. This is asteroid alarmism as a trick shot, as a bulked-up NEO-detection program leads to an enhanced manned space travel program. Just as professors learned to "hustle in the regime of largesse" after Sputnik, in Walter A. McDougall's phrase, so did they prove fast on their feet in tracking down killer asteroid funds.95 Warning, in grave sepulchral tones, about the end of the world does tend to concentrate the attention of the listener. And if the Cassandra giving the warning has a Ph.D. after her name, all the better. Surely no doctor of philosophy would exaggerate in order to have a pet project funded! The press does its part, as it always has. Sensationalism sells, and if it isn't exactly grounded in truth, well, wink wink, everyone knows you can't always believe what you read or hear.

A2 Black Holes
No risk of black holes - empirics; energies from collisions are dwarfed by cosmic collisions, Hawking radiation, and models prove
Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL
The safety of collisions at the Large Hadron Collider (LHC) was studied in 2003 by the LHC Safety Study Group, who concluded that they presented no danger. Here we review their 2003 analysis in light of additional experimental results and theoretical understanding, which enable us to confirm, update and extend the conclusions of the LHC Safety Study Group. The LHC reproduces in the laboratory, under controlled conditions, collisions at centre-of-mass energies less than those reached in the atmosphere by some of the cosmic rays that have been bombarding the Earth for billions of years. We recall the rates for the collisions of cosmic rays with the Earth, Sun, neutron stars, white dwarfs and other astronomical bodies at energies higher than the LHC. The stability of astronomical bodies indicates that such collisions cannot be dangerous. Specifically, we study the possible production at the LHC of hypothetical objects such as vacuum bubbles, magnetic monopoles, microscopic black holes and strangelets, and find no associated risks. Any microscopic black holes produced at the LHC are expected to decay by Hawking radiation before they reach the detector walls. If some microscopic black holes were stable, those produced by cosmic rays would be stopped inside the Earth or other astronomical bodies. The stability of astronomical bodies constrains strongly the possible rate of accretion by any such microscopic black holes, so that they present no conceivable danger. In the case of strangelets, the good agreement of measurements of particle production at RHIC with simple thermodynamic models constrains severely the production of strangelets in heavy-ion collisions at the LHC, which also present no danger.
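The "decay by Hawking radiation before they reach the detector walls" claim can be illustrated with the standard semiclassical evaporation-time formula, t = 5120*pi*G^2*M^3/(hbar*c^4); a sketch under the loud assumption that the formula can be extrapolated down to a TeV-scale mass, where it is at best an order-of-magnitude guide:

import math

# Semiclassical Hawking evaporation time for a TeV-scale black hole.
# Extrapolating this far below astrophysical masses is illustrative only.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34    # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s
EV = 1.602e-19      # joules per electron-volt

m_bh = 14e12 * EV / C**2    # 14 TeV/c^2 expressed in kg (~2.5e-23 kg)
t_evap = 5120 * math.pi * G**2 * m_bh**3 / (HBAR * C**4)
print(f"Evaporation time: {t_evap:.1e} s")   # ~1e-84 s, versus the ~1e-8 s
                                             # needed to cross a detector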

No risk of extinction decay and empirics Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory
Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL

In the case of the hypothetical microscopic black holes, as we discuss in Section 4, if they can be produced in the collisions of elementary particles, they must also be able to decay back into them. Theoretically, it is expected that microscopic black holes would indeed decay via Hawking radiation, which is based on basic physical principles on which there is general consensus. If, nevertheless, some hypothetical microscopic black holes should be stable, we review arguments showing that they would be unable to accrete matter in a manner dangerous for the Earth [2]. If some microscopic black holes were produced by the LHC, they would also have been produced by cosmic rays and have stopped in the Earth or some other astronomical body, and the stability of these astronomical bodies means that they cannot be dangerous.

There's not enough energy or force to produce black holes
Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL
We recall that the black holes observed in the Universe have very large masses, considerably greater than that of our Sun. On the other hand, each collision of a pair of protons in the LHC will release an amount of energy comparable to that of two colliding mosquitoes, so any black hole produced would be much smaller than those known to astrophysicists. In fact,
according to the conventional gravitational theory of General Relativity proposed by Einstein, many of whose predictions have subsequently been verified,

there is no chance that any black holes could be produced at the LHC, since the conventional gravitational forces between fundamental particles are too weak.
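As a sanity check on the mosquito comparison (the 14 TeV design energy and the mosquito figures below are my assumptions, not the card's):

$$E_{\mathrm{LHC}} = 14\ \mathrm{TeV} \approx 2.2\times 10^{-6}\ \mathrm{J}, \qquad E_{\mathrm{mosquito}} = \tfrac{1}{2}mv^2 \approx \tfrac{1}{2}\,(2\times 10^{-6}\ \mathrm{kg})(0.5\ \mathrm{m/s})^2 \approx 2.5\times 10^{-7}\ \mathrm{J}$$

A single proton-proton collision thus carries roughly the kinetic energy of one flying mosquito - enormous for two protons, negligible on any macroscopic scale.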

Production of a stable black hole violates laws of nature and is empirically denied
Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL
One might nevertheless wonder what would happen if a stable microscopic black hole could be produced at the LHC [2]. However, we reiterate that this would require a violation of some of the basic principles of quantum mechanics - which is a cornerstone of the laws of Nature - in order for the black hole decay rate to be suppressed relative to its production rate, and/or of general relativity in order to suppress Hawking radiation. Most black holes produced at the LHC or in cosmic-ray collisions would have an electric charge, since they would originate from the collisions of charged quarks. A charged object interacts with matter in an experimentally well-understood way. A direct consequence of this is that charged and stable black holes produced by the interactions of cosmic rays with the Earth or the Sun would be slowed down and ultimately stopped by their electromagnetic interactions inside these bodies, in spite of their initial high velocities. The complete lack of any macroscopic effect caused by stable black holes, which would have accumulated in the billions during the lifetime of the Earth and the Sun if the LHC could produce

them, means that either they are not produced, or they are all neutral and hence none are stopped in the Earth or the Sun, or have no large-scale effects even if they are stopped.

No risk of black holes - not enough energy, and any produced would disintegrate immediately
CERN 08 (2008, European Organization for Nuclear Research, The safety of the LHC, http://public.web.cern.ch/Public/en/LHC/Safety-en.html) HL
Nature forms black holes when certain stars, much larger than our Sun, collapse on themselves at the end of their lives. They concentrate a very large amount of matter in a very small space. Speculations about microscopic black holes at the LHC refer to particles produced in the collisions of pairs of protons, each of which has an energy comparable to that of a mosquito in flight. Astronomical black holes are much heavier than anything that could be produced at the LHC. According to the well-established properties of gravity, described by Einstein's relativity, it is impossible for microscopic black holes to be produced at the LHC. There are, however, some speculative theories that predict the production of such particles at the LHC. All these theories predict that these particles would disintegrate immediately. Black holes, therefore, would have no time to start accreting matter and to cause macroscopic effects.

No impact - 1) rapid decay, 2) empirics
CERN 08 (2008, European Organization for Nuclear Research, The safety of the LHC, http://public.web.cern.ch/Public/en/LHC/Safety-en.html) HL
Although theory predicts that microscopic black holes decay rapidly, even hypothetical stable black holes can be shown to be harmless by studying the consequences of their production by cosmic rays. Whilst collisions at the LHC differ from cosmic-ray
collisions with astronomical bodies like the Earth in that new particles produced in LHC collisions tend to move more slowly than those produced by cosmic rays, one can still demonstrate their safety. The specific reasons for this depend on whether the black holes are electrically charged, or neutral. Many stable black holes would be expected to be electrically charged, since they are created by charged particles. In this case they would interact with ordinary matter and be stopped while traversing the Earth or Sun, whether produced by cosmic rays or the LHC. The fact that the Earth and Sun are still here rules out the possibility that cosmic rays or the LHC could produce dangerous charged microscopic black holes. If stable microscopic black holes had no electric charge, their interactions with the Earth would be very weak. Those produced by cosmic rays would pass harmlessly through the Earth into space, whereas those produced by the LHC could remain on Earth. However, there are much larger and denser astronomical bodies than the Earth in the Universe. Black holes produced in cosmic-ray collisions with bodies such as neutron stars and white dwarf stars would be brought to rest. The continued existence of such dense bodies, as well as the Earth, rules out the possibility of the LHC producing any dangerous black holes.

Empirically disproven by billions of years of cosmic ray collisions
Cavaglià 10 (2010, Marco Cavaglià, an assistant professor of physics at the University of Mississippi, Einstein Online, Particle accelerators as black hole factories? http://www.einstein-online.info/spotlights/accelerators_bh)
Do we need to worry? Might these mini black holes start growing and, eventually, devour the whole earth? We should not worry about this. Even if you do not trust the calculations predicting a quick demise for such minuscule black holes, there is solid data to go by. If black holes really form in high-energy particle collisions, they are also continuously created in the earth's atmosphere by the collision of Ultra High-Energy Cosmic Rays (UHECRs) with nuclei of oxygen, carbon, nitrogen and other elements present in the atmosphere. UHECRs are particles of unknown origin and identity (protons? light atomic nuclei?) reaching the earth
from outer space. In such collisions with atmospheric nuclei, a shower of new particles is produced (consisting mostly of electrons, their slightly more massive cousins called muons, and photons). These particles can be detected by specialized observatories on earth or in space. The collision energies for UHECRs can be enormous - some observations show energies of hundreds of TeV (hundreds of trillions of electron volts), which is much larger than the collision energies in particle collider experiments. And while events with very high energy are exceedingly rare, this type of collision has been going on for literally billions of years, so an inordinate number of mini black holes would have formed. Since the earth has not (yet!) disappeared into one of these black holes, the much less massive man-made mini black holes should be quite safe.
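For reference, the centre-of-mass energy of a cosmic-ray proton with lab energy E striking an atmospheric nucleon at rest follows the standard fixed-target formula; taking $E = 10^{20}$ eV (my assumption, near the top of the observed UHECR range, not a figure from the card):

$$\sqrt{s} \approx \sqrt{2\,E\,m_p c^2} = \sqrt{2 \times 10^{20}\ \mathrm{eV} \times 0.94\times 10^{9}\ \mathrm{eV}} \approx 4\times 10^{14}\ \mathrm{eV} \approx 400\ \mathrm{TeV}$$

This is consistent with the card's "hundreds of TeV" and far above the LHC's 14 TeV.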

A2 Climate Change
Coral reefs have evolved to overcome climate change NIPCC 10. Nongovernmental International Panel on Climate Change citing Grimsditch et al. Effects of Habitat on Coral Bleaching. 28 December
2010. http://nipccreport.org/articles/2010/dec/28dec2010a7.html

Writing of corals, Grimsditch et al. (2010) say "it has been shown that it is possible for colonies to acclimatize to increased temperatures and high irradiance levels so that they are able to resist bleaching events when they occur." And they note, in this regard, that "threshold temperatures that induce coral bleaching-related mortality vary worldwide -- from 27°C in Easter Island (Wellington et al., 2001) to 36°C in the Arabian Gulf (Riegl, 1999) -- according to the maximum water temperatures that are normal in the area, implying a capacity of corals and/or zooxanthellae to acclimatize to high temperatures depending on their environment." In further exploration of this phenomenon, Grimsditch et al. examined "bleaching responses of corals at four sites (Nyali, Mombasa Marine Park, Kanamai and Vipingo) representing two distinct lagoon habitats on the Kenyan coast (deeper and shallower lagoons)." This was done for the coral community as a whole, while zooxanthellae densities and chlorophyll levels were monitored for three target species (Pocillopora damicornis, Porites lutea and Porites cylindrica) during a non-bleaching year (2006) and a mild bleaching year (2007). The four researchers

report that "during the 2007 bleaching season, corals in the shallow lagoons of Kanamai and Vipingo were more resistant to bleaching stress than corals in the deeper lagoons of Mombasa Marine Park and Nyali," which suggests, in their words, that "corals in the shallower lagoons have acclimatized and/or adapted to the fluctuating environmental conditions they endure on a daily basis and have become more resistant to bleaching stress." These results bear further witness to the fact that earth's corals have the ability to evolve in such a way as to successfully adjust to changing environmental conditions that when first encountered may prove deadly to a significant percentage of their populations. Those individuals genetically blessed to better withstand various stresses weather the storm, so to speak, to grow and widely proliferate another day.

A2 Dark Matter

Dark Matter doesn't exist
Gache 7 (Gabriel is a Science News Editor, October 30, Dark Matter Does Not Exist, http://news.softpedia.com/news/Dark-Matter-Does-Not-Exist69476.shtml)
Or so two Canadian astronomers say. In August a group of astronomers from the University of Arizona at Tucson reported the collision of a cluster of galaxies situated 3 billion light-years away from Earth, known as the Bullet Cluster. Images from
NASA's Chandra X-ray telescope suggest that during the collision, two types of matter were observed, separated from each other: normal matter, colored pink in the image, and dark matter, represented in blue. There is a strong debate currently taking place,

regarding the dark matter and its role in the universe. It is well known that the calculations using current theories about the origins of
the universe, say that there is not enough visible matter in the galaxies to keep them from falling apart. So there must be something else to account for the extra gravitational pull exerted of the stars forming the galaxies from scattering. Dark matter was the candidate; however, it has never been

directly observed or detected. In the science community this was nothing new; theories that predict mysterious, as-yet-unobserved things - like elementary particles - which are only discovered years later, are common. John Moffat and Joel Brownstein at the University of Waterloo in Canada say that the announcements of the first ever detection of dark matter were premature, and the observed effect in the Bullet Cluster can be explained by a Modified Gravity theory. While studying images from NASA's Hubble Space telescope and Chandra X-ray telescope the scientists tried analyzing the light from a background galaxy, using light bent by the Bullet Cluster, an effect known as gravitational lensing. Gravitational lensing was predicted by Einstein's Relativity Theory, and successfully observed while studying supermassive galaxies and black holes. Light coming from a background source toward a massive object is bent and focused by its powerful gravity, thus amplifying the light. During the study of the images, the Canadian team's calculations

showed that there is in fact enough normal matter in the Bullet Cluster to account for the observed gravitational effect, and dark matter was not needed to explain the extra gravitational pull. As it is well known in the
physics community, the theory of General Relativity explains only part of the interactions taking place in our universe, especially the theory of gravity, thus making it imperfect, meaning it cannot be used as a Theory of Everything. Several theories are being developed, but so far none has produced observable predictions or testable experiments. Some calculations show that there is in fact more dark matter in the whole universe than normal matter, but scientists are developing new theories in which the proportion of dark matter present might be considerably reduced. Meanwhile, Douglas Clowe, the astronomer who made the claim that dark matter was finally observed, stands by his beliefs.
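The lensing physics this card leans on is standard; as a reference point (general relativity's textbook result, not anything specific to the Moffat-Brownstein analysis), light passing a point mass M at impact parameter b is deflected by

$$\hat{\alpha} = \frac{4GM}{c^2 b}$$

which is twice the naive Newtonian value. Measuring how strongly a cluster bends background light therefore weighs its total mass; the dispute in the card is over whether that weighed mass requires a dark component or a modified law of gravity.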

Dark Matter doesn't exist - new studies prove the theory is full of cracks
Vieru 9 (Tudor is a science editor with a background in physics and chemistry, October 1, Cracks May Exist in Dark-Matter Theory, http://news.softpedia.com/news/Cracks-May-Exist-in-Dark-Matter-Theory-123169.shtml)

According to a new scientific study, it may be that dark matter, the elusive stuff that binds galaxies together, but that cannot be directly observed, does not exist at all. It's either that, or it has a very unusual set of properties, astrophysicists at the University of St Andrews, in the United Kingdom, say. Scientific observations show that dark matter does not simply keep galaxies spinning, as the theory goes. Additionally, galaxies were supposed to only interact with the stuff through gravity alone, and this doesn't seem to be the case, NewScientist reports. Established knowledge on dark matter has it that the stuff helped galaxies form at first, by keeping them together. It is widely believed among astronomers
that each galaxy in the Universe is found at the core of a large, dark-matter concentration, a model that fits the substance's hypothetical properties. Gravity is supposed to be the only thing connecting galaxies and dark matter, but experts noticed that there was more to their interactions than

this. However, since the concept of dark matter emerged, there have been controversies about how it is distributed in the Universe and in galaxies. Its existence and action can only be inferred by the way galaxies and stars move, as the stuff does not emit any light, on any wavelength. Some astronomers believe that it must be distributed in the same concentrations throughout each galaxy, whereas others
say that its concentrations should be larger in galactic cores, on account of gravity's effects. In a recent study, the normal matter at the cores of 28 galaxies of various shapes and sizes was analyzed. The results threw astrophysicists off-guard: the investigation revealed that, in regions where

dark-matter density had dropped to one-quarter of its central value, nearly 500 percent more of the stuff existed, in relation to normal matter. The results were unexpected, because the theory predicted that the ratio of dark matter to normal matter should depend on a galaxy's history, as in previous collisions and interactions with black holes and other such things. "There is absolutely no rule in physics that explains these results," University of St Andrews expert Hong Sheng Zhao, a co-author of the new study, says. Details of the finds appear in the latest issue of the renowned scientific journal
Nature. Although this clearly shows much more interplay between normal and dark matter than expected, it is too early to say exactly what this means, University of Leicester expert Mark Wilkinson adds. He has not been part of the investigation, but has urged caution in interpreting the results.

Dark Matter and Dark Energy don't exist - their detection was the result of a fluke
Moskowitz 10 (Clare is a Senior writer for Space.com, June 13, Dark Energy and Dark Matter Might Not Exist, Scientists Allege, Space.com, http://www.space.com/8588-dark-energy-dark-matter-exist-scientists-allege.html)

Dark matter and dark energy are two of the most mind-boggling ingredients in the universe. Ever since these concepts
were first proposed, some astronomers have worked feverishly to figure out what each thing is, while other astronomers have tried to prove they don't exist, in hopes of restoring the universe to the more understandable place many would like it to be. A new look at the data from one of the

telescopes used to establish the existence of this strange stuff is causing some scientists to question whether they really exist at all. Yet other experts are holding firm to the idea that, whether we like it or not, the "dark side" of the universe is here to stay.
Dark matter is a proposed form of matter that could make up 22 percent of the universe's mass-energy budget, vastly outweighing all the normal matter, like stars and galaxies. Astronomers can't observe dark matter directly, but they think it's there because of the gravitational pull it seems

to exert on everything else. Without dark matter, the thinking goes, galaxies would fly apart. As if that weren't weird enough, scientists think another 74 percent of the mass-energy budget could be made of some strange quantity called dark energy. This force is thought to be responsible for the accelerating pace of the expansion of the universe. (For those keeping track, that would leave only a measly 4 percent of the universe composed of normal matter.) Some cosmic background: One of the prime ways researchers tally how much these components contribute to the overall

makeup of the universe is by measuring a dim glow of light pervading space that is thought to be left over from the Big Bang. The most detailed measurements yet taken of this radiation, which is called the cosmic microwave background (CMB), come from a spacecraft dubbed the Wilkinson Microwave Anisotropy Probe (WMAP). "It's such an important thing, the microwave background," said astrophysicist Tom Shanks of Durham University in England. "All the results in dark energy and dark matter in cosmology hang on it, and that's why I'm interested in checking the results." Recently Shanks and his graduate student Utane Sawangwit went back
to examine the WMAP data and used a different method to calibrate how much smoothing, or blurring, the telescope was causing to its images. This smoothing is an expected effect, akin to the way Earth's atmosphere blurs stars' light so they twinkle. Instead of using Jupiter as a calibration source, the way the WMAP team did, Shanks and Sawangwit used distant astronomical objects in the WMAP data itself that were

emitting radio light. "When we checked radio sources in the WMAP background, we found more smoothing than the WMAP team expected," Shanks told SPACE.com. "That would have big implications for cosmology if we were proven right." If this smoothing error is larger than thought, it could indicate that fluctuations measured in the intensity of the CMB radiation are actually smaller than they originally appeared. The size of these fluctuations is a key parameter used to support the existence of dark matter and dark energy. With smaller ripples, there would be no need to invoke exotic concepts like dark matter and dark energy to explain the CMB observations, Shanks said.
The researchers will report their findings in an upcoming issue of the journal Monthly Notices of the Royal Astronomical Society. Others not so sure: Yet other astronomers, particularly those who first analyzed the WMAP results, remain unconvinced. "The WMAP team has carried out extensive checks and we unequivocally stand by our results," said WMAP principal investigator Charles Bennett of Johns Hopkins University in Baltimore, Md. The WMAP researchers take issue with Sawangwit and Shanks' use of dim, far-away radio sources to calculate the telescope's smoothing error. "These are weak sources, so many of them must be averaged together to obtain useful measurements. None of them move with respect to the CMB," said WMAP team member Mark Halpern of the University of British Columbia. "This method is inferior to our main approach." Plus, Halpern said he and his colleagues had

identified an error the other team made in failing to account for the confusing contribution of the CMB ripples themselves. "We can obtain the Shanks result by omitting the step that properly accounts for the background confusion, but this step is necessary,"
Bennett explained. Back in this corner: Yet Shanks said he's aware of these objections and stands by his calculations. "We don't think that's an issue," he said. Ultimately, Shanks hopes future measurements of the microwave background radiation from new

telescopes will help clear up the issue. The European Space Agency's Planck spacecraft, launched into orbit in 2009, is currently taking new,
even more detailed observations of the CMB. "I'm very interested to see what Planck gets in terms of its results," Shanks said. "And of course we will be there to try and keep everybody as honest as possible. We're hoping we can use our methods in the same way to check their beam profile that they ultimately come up with."

Dark energy isn't needed to explain cosmic acceleration - modified gravity accounts for it
Dumé 6 (Belle is a science writer at PhysicsWeb, February 7, Theorists claim dark energy does not exist, http://physicsworld.com/cws/article/news/24139)
Most cosmologists believe that the universe is dominated by "dark energy" -- a mysterious form of energy that could explain why the universe is expanding and accelerating at the same time. Now, however, theoretical physicists have studied a new model of gravity that can, they

claim, account for the acceleration of the universe without any need for dark energy. Their model relies instead on modifications to the way that gravity behaves at ultra-large cosmological distances (Phys. Rev. Lett. 96 041103). The acceleration of the universe is driven by something that has repulsive rather than attractive gravitational interactions. Although this so-called "dark energy" is thought to account for around two-thirds of the universe, no one knows what it is made of. Possible explanations for dark energy include a "cosmological constant" -- first introduced by Einstein -- or something known as quintessence. However, such explanations are plagued with theoretical and phenomenological problems and scientists would like to find an alternative to dark energy as the source of the universe's acceleration. Olga Mena and José Santiago at Fermilab and Jochen Weller of University College London have now calculated that the acceleration of the universe can be explained without the need for dark energy. What they have done is to modify the laws of gravity in such a way that they look relatively unchanged at short distances but get modified only at distances on the order of the current size of the observable universe -- the only place where the effects of the acceleration are apparent. At these distances the curvature of space is so small that the universe appears flat. Although the
equations that describe the evolution of the universe in the new model are difficult to solve, Mena and co-workers were still able to do so using approximate analytical methods. This approach allowed the researchers to compare the theoretical predictions of the rate of expansion of the universe to expansion rates obtained using experimental data from type Ia Supernovae. "The agreement is extremely good," says Santiago. However, the model still requires a "dark matter" component. Dark or "invisible" matter is thought to make up 25% of the universe -- even in the model. The good news is that Einstein's theory of general relativity remains intact: "All the tests that Einstein's theory has passed to date are still valid because they were performed at shorter distances," adds Santiago. Robert Caldwell, a cosmologist at Dartmouth College in New Hampshire, thinks the results are interesting and will sustain further investigations of the model. "I'm sure the next target will be to study structure formation and the anisotropies in the cosmic microwave background in their model and check that the model predictions are consistent with observation," he says. "I look forward to their results."

A2 Death of Sun

The planet won't be consumed by the sun - it loses mass when dying
Cain, 2008, Fraser, Universe Today, Will Earth Survive When the Sun Becomes a Red Giant?, http://www.universetoday.com/12648/will-earth-survive-when-the-sun-becomes-a-red-giant/, KHaze
However, as the Sun reaches this late stage in its stellar evolution, it loses a tremendous amount of mass through powerful stellar winds. As it grows, it loses mass, causing the planets to spiral outwards. So the question is,
will the expanding Sun overtake the planets spiraling outwards, or will Earth (and maybe even Venus) escape its grasp? K.-P. Schröder and Robert Connon Smith are two researchers trying to get to the bottom of this question. They've run the calculations with the most current models of stellar evolution, and published a research paper entitled Distant Future of the Sun and Earth Revisited. It has been accepted for publication in the Monthly Notices of the Royal Astronomical Society. According to Schröder and Smith, when the Sun becomes a red giant star in 7.59 billion years, it will start to

lose mass quickly. By the time it reaches its largest radius, 256 times its current size, it will be down to only 67% of its current mass.
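A back-of-the-envelope check of the spiral-out mechanism (the adiabatic mass-loss approximation is my framing; the 67% and 256-radii figures come from the card): for mass loss slow compared with the orbital period, conservation of orbital angular momentum gives $a \propto 1/M_\odot$, so

$$\frac{a_{\mathrm{final}}}{a_{\mathrm{initial}}} \approx \frac{M_{\mathrm{initial}}}{M_{\mathrm{final}}} = \frac{1}{0.67} \approx 1.5$$

pushing Earth from 1 AU out to roughly 1.5 AU, while the Sun's maximum radius of $256\,R_\odot$ is about 1.2 AU. Whether the orbit actually outruns the expanding envelope also depends on tidal drag, which is why the question stays open in the card.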

A2 Disease
New pharmaceuticals and biotech check superdiseases
Bax et al 2k (9/16/2000, Richard Bax, Noel Mullan, and Jan Verhoef, International Journal of Antimicrobial Agents, The millennium bugs--the
need for and development of new antibacterials, http://www.ncbi.nlm.nih.gov/pubmed/11185414) HL Global antibacterial resistance is becoming an increasing public health problem. Bacteria resistant to almost all of the available antibacterials have been identified. The pharmaceutical industry and fledgling biotechnology companies are responding to the threat of

antibiotic resistance with renewed efforts to discover novel antibacterials in attempts to overcome bacterial resistance. Both short term and long term strategies are being vigorously pursued. Short-term efforts are focused on developing novel antibacterial agents with a narrow spectrum of action to combat the problem of gram-positive resistant bacteria. Long-term approaches include the use of microbial genomic sequencing techniques to discover novel agents active against potentially new bacterial targets. Better use of existing agents using pharmacodynamic data to optimise antibiotic regimens is increasingly being addressed and the hope is that such measures will prevail until the newer agents are available.

Evolution makes extinction from disease impossible
Achenbach 3 (November 2003, Joel Achenbach, Washington Post, "Our Friend, the Plague,"
http://ngm.nationalgeographic.com/ngm/0311/resources_who.html) HL Can germs keep us healthy? Whenever

a new disease appears somewhere on our planet, experts invariably pop up on TV with grave summations of the problem, usually along the lines of, "We're in a war against the microbes" - pause for dramatic effect - "and the microbes are winning." War, however, is a ridiculously overused metaphor and probably should be bombed back to the Stone Age. Paul Ewald, a biologist at the University of Louisville, advocates a different approach to lethal microbes. Forget trying to obliterate them, he says, and focus instead on how they co-evolve with humans. Make them mutate in the right direction. Get the powers of evolution on our side. Disease organisms can, in fact, become less virulent over time. When it was first recognized in Europe around 1495, syphilis killed its human hosts within months. The quick progression of the disease - from infection to death - limited the ability of syphilis to spread. So a new form evolved, one that gave carriers years to infect others. For the same reason, the common cold has become less dangerous. Milder strains of the virus - spread by people out and about, touching things, and shaking hands - have an evolutionary advantage over more debilitating strains. You can't spread a cold very easily if you're incapable of rolling out of bed. This process has already weakened all but one virulent strain of malaria: Plasmodium
falciparum succeeds in part because bedridden victims of the disease are more vulnerable to mosquitoes that carry and transmit the parasite. To mitigate malaria, the secret is to improve housing conditions. If people put screens on doors and windows, and use bed nets, it creates an evolutionary incentive for Plasmodium falciparum to become milder and self-limiting. Immobilized people protected by nets and screens can't easily spread the parasite, so evolution would favor forms that let infected people walk around and get bitten by mosquitoes. There are also a few high-tech tricks for nudging microbes in the right evolutionary direction. One company, called MedImmune, has created a flu vaccine using a modified influenza virus that thrives at 77°F instead of 98.6°F, the normal human body temperature. The vaccine can be sprayed in a person's nose, where the virus survives in the cool nasal passages but not in the hot lungs or elsewhere in the body. The immune system produces antibodies that make the

person better prepared for most normal, nasty influenza bugs. Maybe someday we'll barely notice when we get colonized by disease organisms. We'll have co-opted them. They'll be like in-laws, a little annoying but tolerable. If a friend sees us sniffling, we'll just say, Oh, it's nothing - just a touch of plague.

Diseases can either spread more or be lethal, not both
Berkeley 7 (Understanding Evolution, UC Berkeley, Evolution from a virus's view,
http://evolution.berkeley.edu/evolibrary/news/071201_adenovirus) HL Since transmission is a matter of life or death for pathogen lineages, some evolutionary biologists have focused on this as the key to understanding why some have evolved into killers and others cause no worse than the sniffles. The idea is that there may be an evolutionary trade-off

between virulence and transmission. Consider a virus that exploits its human host more than most and so produces more offspring than most. This virus does a lot of damage to the host - in other words, is highly virulent. From the virus's perspective, this would, at first, seem like a good thing; extra resources mean extra offspring, which generally means high evolutionary fitness. However, if the viral reproduction completely incapacitates the host, the whole strategy could backfire: the illness might prevent the host from going out and coming into contact with new hosts that the virus could jump to. A victim of its
own success, the viral lineage could go extinct and become an evolutionary dead end. This level of virulence is clearly not a good thing from the virus's perspective. Natural selection balances this trade-off, selecting for pathogens virulent enough to produce many

offspring (that are likely to be able to infect a new host if the opportunity arises) but not so virulent that they prevent the current host from presenting them with opportunities for transmission. Where this balance is struck depends, in part, on the virus's mode
of transmission. Sexually-transmitted pathogens, for example, will be selected against if they immobilize their host too soon, before the host has the opportunity to find a new sexual partner and unwittingly pass on the pathogen. Some biologists hypothesize that this trade-off helps explain why sexually-

transmitted infections tend to be of the lingering sort. Even if such infections eventually kill the host, they do so only after many years, during which the pathogen might be able to infect a new host.
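One standard way to formalize the trade-off this card describes (the notation below is mine, not Berkeley's) is to let virulence v add to the rate at which infections end, while transmission $\beta(v)$ rises with v but saturates:

$$R_0(v) = \frac{\beta(v)}{\gamma + \mu + v}, \qquad \beta(v) = \frac{b\,v}{c + v}$$

where $\gamma$ is the recovery rate, $\mu$ background host mortality, and b, c shape the transmission benefit. Maximizing $R_0$ gives an interior optimum $v^{*} = \sqrt{c\,(\gamma+\mu)}$: selection favors pathogens virulent enough to transmit well, but not so virulent that they immobilize or kill the host too soon - exactly the balance the card argues natural selection strikes.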

Domestication and evolution render diseases mild
PBS 1 (2001, Interview with Paul Ewald, director of the program in Evolutionary Medicine at the Biology Department of the University of Louisville,
Paul Ewald: Infectious Disease and the Evolution of Virulence, http://www.pbs.org/wgbh/evolution/library/01/6/text_pop/l_016_06.html) HL Transcript: Q: How does understanding the evolution of virulence help us to manage infectious disease? A: For most of the last two centuries people have been using interventions to knock down infectious diseases as much as possible. The idea is that we're going to use weapons like vaccines and antibiotics or hygienic interventions to reduce the frequency of infection as much as possible. My point is that there's another way of controlling these disease organisms. Instead of using these weapons -- antibiotics and vaccines and hygiene improvements -- as a way of knocking down the organism, we can use those interventions to control the evolution of the organisms instead of getting the organisms evolving around our interventions. We can get the organisms to evolve to be less harmful than they have been in the past. Essentially, what I'm saying is we can use interventions like vaccines or like hygienic improvements to domesticate these organisms. That argument may seem a little bit surprising, but we've already domesticated organisms

in many ways. One of the most obvious ways is when we make live vaccines in the laboratory. We're actually taking harmful organisms [and] changing the course of their evolution, making them evolve to be mild enough that we can then introduce them into people as a vaccine. Q: Give us a new way to look at disease organisms. A: Some people think
that disease organisms are out to get us -- any false move and we'll be damaged or even killed by them. Other people think that disease organisms really, in the best of all worlds, would be so mild that we wouldn't even know that they're there. And the truth is that both of those explanations are right, and disease organisms can be anywhere in between. I think the right way to look at them is that we're the environment for them and, in some cases, our health is important for their well-being, particularly if their transmission depends on our being healthy. You've got a lot of organisms that are living in and on us that cause us no detectable harm whatsoever. In fact, we wouldn't want to be getting rid of all of the organisms that live in and on us because many of them are protecting us against other organisms. If we can favor those mild organisms, then they can protect us against the harmful

organisms. I think that will be one of the goals of medical work in the next century, especially for those disease organisms that seem to be totally unresponsive to the attacks that we impose on them from antibiotics or vaccines. Q: Tell us about that transmission strategy -- how a disease organism tries to reproduce, how it tries to get to the next host -- and then tell us
how that relates to its virulence. A: I would say that disease organisms are selected to compete with other disease organisms, that's the bottom line. So if a disease organism is transmitted in a way that requires a healthy host, the best competitors will be those disease organisms that are mild enough to keep their host healthy to allow themselves to be transmitted. By focusing on the mode of transmission for disease organisms we can gain a lot of insight into why some disease organisms are harmful and other disease organisms are mild. For example, a disease organism like the rhino virus that causes a common cold really does depend on fairly healthy people to be transmitted. So, not surprisingly, the rhino virus is one of the mildest viruses that we know about. In fact, nobody has ever been known to die from a rhino virus, and that's not true for almost any other disease organism of humans for which we have ample information. Almost all the other disease organisms will cause enough damage so that some people might die, if they're particularly vulnerable. So, in

the case of these mild organisms like the rhino virus, if a person happened to be housing a virulent mutant -- these mutants are happening all the time, even in the mild organisms, mutations that might make it a little more harmful or a little less harmful -- then you can ask the question, "Will that organism spread?" The rhino virus is transmitted when people sneeze on other people, or maybe people sneeze in their hands and then they shake hands with other people, and those people then may touch their nose with those contaminated hands. Given that those are the main routes of transmission, it's clear that if we had somebody who is infected with a particularly harmful variant of the rhino virus, a variant that was so harmful that the person would have to stay in bed, that even though that virus might be reproducing a lot more in the short run, in the long run that organism would lose out in competition. A person who is stuck at home in bed is not going around sneezing on their friends. An immobilized person is not going to be a major source of transmission for something like the rhino virus. That explains why the rhino virus has evolved to be fairly mild.

New disease control models allow reducing diseases to extinction and predicting outbreaks, allowing control
Schwartz, American Physical Society, 2009 (Ira, Fluctuations in epidemic modeling - disease extinction and control, 2009 APS March
Meeting, March 16-20, 2009, abstract #D7.003, http://adsabs.harvard.edu/abs/2009APS..MAR.D7003S) HL The analysis

of infectious disease fluctuations has recently seen an increasing rise in the use of new tools and models from stochastic dynamics and statistical physics. Examples arise in modeling fluctuations of multi-strain diseases, in modeling adaptive social behavior and its impact on disease fluctuations, and in the analysis of disease extinction in finite population models. Proper stochastic model reduction [1] allows one to predict unobserved fluctuations from observed data in multi-strain models [2]. Degree alteration and power law behavior is predicted in adaptive network epidemic models [3,4]. And extinction rates derived from large fluctuation theory exhibit scaling with respect to distance to the bifurcation point of disease onset with an unusual exponent [5]. In addition to outbreak

prediction, another main goal of epidemic modeling is one of eliminating the disease to extinction through various control mechanisms, such as vaccine implementation or quarantine. In this talk, a description will be presented of the fluctuational
behavior of several epidemic models and their extinction rates. A general framework and analysis of the effect of non-Gaussian control actuations which enhance the rate to disease extinction will be described. In particular, it is shown that even in the presence of a small Poisson

distributed vaccination program, there is an exponentially enhanced rate to disease extinction. These ideas may lead to improved methods of controlling disease where random vaccinations are prevalent.

Initial spread will be slow, giving us a window of opportunity to contain it

Soares 05 (May 05, Christine Soares, Scientific American 2005 Volume 295 Issue 5 May, Cooping Up Avian Flu, http://www.flu.org.cn/en/news7624.html) HL Longini, who expected to have his full results published in April, thinks that in

the real event, an emerging H5N1 virus can be contained with antivirals, provided its R0 is less than 1.4 and the intervention begins within two or three weeks of the outbreak's start. He is already at work on new models to determine how the avian virus is likely to evolve as it gets better at spreading between people. "I really strongly believe that the R0 will start out low, probably a little above 1," Longini says, "and then with each generation of transmission it will increase as [the virus] adapts to the human population. It gives us a strong window of opportunity to intervene before the R0 evolves to a high enough level where it's basically unstoppable."

New tech and drugs solve pandemics
Youngerman 08 (2008, Barry Youngerman, Pandemics and Global Health, p 103-4) HL
Recent advances in diagnostic technology may also help head off a pandemic, if they can be quickly deployed. For example, the U.S. National Institutes of Health and Centers for Disease Control and Prevention announced in August 2006 that a new microchip had been developed that can take a sample from a patient and diagnose any of 72 different influenza virus strains, including H5N1, within 12 hours instead of a week or more as currently required. The tissue sample would not need to be refrigerated, and the equipment needed to read the results is available in most countries, at least in capital cities. 149 The chip should be commercially available by 2008. Antiviral drugs can make a vital difference in treating avian flu. A study at Oxford University found that H5N1 reproduces very quickly in human patients,
stimulating a cytokine storm of immune proteins that can quickly fill a victim's lungs with fluid, as happened during the 1918 pandemic. 150 Anti-virals can reduce the viral load, the number of virus particles, to a level that the body can handle, if taken within two days of the first symptoms. Several antiviral drugs are available, such as oseltamivir (Tamiflu), zanamivir (Relenza), and amantadine, but stockpiles are not adequate to handle a serious pandemic, especially in poorer countries. Developed countries are now slowly building up stockpiles; the United States plans to keep enough antiviral doses for 25 percent of the population; other countries have targets of 20 to 50 percent. 151 Treatment with antiviral drugs can also keep healthy people from being infected with the flu. However, the drugs must be taken for several days or weeks at substantial cost, and thus are not expected to replace vaccines as a preventive measure.
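The containment logic in the Longini card above lends itself to a toy simulation. Below is a minimal discrete-time SIR sketch in Python; the population size, seed cases, and recovery rate are illustrative assumptions of mine, not parameters from Longini's models:

```python
# Toy SIR model: why an outbreak with R0 barely above 1 grows slowly,
# leaving a window of opportunity for intervention.

def simulate_sir(r0, days, n=1_000_000, i0=10, recovery_rate=0.2):
    """Discrete-time SIR with transmission rate beta = r0 * recovery_rate."""
    beta = r0 * recovery_rate
    s, i, r = n - i0, float(i0), 0.0
    for _ in range(days):
        new_inf = beta * i * s / n      # new infections this step
        new_rec = recovery_rate * i     # recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return i

for r0 in (1.1, 1.4, 2.5):
    print(f"R0={r0}: infectious after 60 days ~ {simulate_sir(r0, 60):,.0f}")
# R0 just above 1 stays near the initial seed for weeks, while R0=2.5
# explodes -- the "window of opportunity" the card describes.
```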

Status quo monitoring, tech, drugs, and policies solve disease spread Youngerman 08 (2008, Barry Youngerman, Facts on File, Global Issues: Pandemics and Global Health, p 103-4) HL
Yes: given the propensity of the flu virus to mutate, and given the constant growth in international travel, trade, and migration, sooner or later a deadly and contagious influenza virus may well emerge one year and begin to spread from one person to the next and from one region to another. But: the world

community is becoming better equipped from year to year to meet such a threat. Better surveillance, international communication, diagnosis, genetic analysis, vaccine production, antiviral development, and antibiotics will probably prevent anything like the terrible toll of 1918. The longer such a threat can be postponed, the better prepared we are likely to be. With regard to the threat of avian flu, in the 10 years since the pathogenic H5N1 virus emerged to infect its first human victims it has failed to develop permanent mutations that would allow it to spread more easily to people. However, this is not entirely a matter of good luck. Several mutations have emerged that are quite worrisome to scientists, but as far as we know, every single person infected with the mutated virus has been isolated and either recovered or died without passing the mutation on. We must be grateful for the swift response of World Health Organization (WHO) teams who encouraged governments to impose quarantines and arrange for the treatment of everyone involved with antivirals. This quick response has denied the mutated viruses the opportunity to mix with human flu viruses to create a super-pathogenic strain, which probably happened sometime before 1918. 152 The world community must continue to be vigilant, however, and proactively help those
countries that lack the resources to deal with the issue on their own.

No extinction from disease - natural selection
Posner 5 Court of Appeals Judge; Professor, Chicago School of Law (Richard, Catastrophe, http://goliath.ecnext.com/coms2/gi_01994150331/Catastrophe-the-dozen-most-significant.html) HL
Yet the fact that Homo sapiens has managed to survive every disease to assail it in the 200,000 years or so of its existence is a source of genuine comfort, at least if the focus is on extinction events. There have been enormously destructive plagues, such as the Black Death, smallpox, and now AIDS, but none has come close to destroying the entire human race. There is a biological reason. Natural selection

favors germs of limited lethality; they are fitter in an evolutionary sense because their genes are more likely to be spread if the germs do not kill their hosts too quickly. The AIDS virus is an example of a lethal virus, wholly natural, that by lying dormant yet infectious in its host for years maximizes its spread. Yet there is no danger that AIDS will destroy the entire human race. The likelihood of a natural pandemic that would cause the extinction of the human race is probably even less today than in the past (except in prehistoric times, when people lived in small,
scattered bands, which would have limited the spread of disease), despite wider human contacts that make it more difficult to localize an infectious disease.

It's not in a disease's interest to kill all humans
Marx 98 AIDS Research Facility at Tulane University (Preston and Ross MacPhee, How did Hyperdisease cause extinctions?,
http://www.amnh.org/science/biodiversity/extinction/Day1/disease/Bit2.html) HL

It is well known that lethal diseases can have a profound effect on species' population size and structure. However,
it is generally accepted that the principal populational effects of disease are acute--that is, short-term. In other words, although a species may suffer substantial loss from the effects of a given highly infectious disease at a given time, the facts indicate that natural populations tend to bounce back after the period of high losses. Thus, disease as a primary cause of extinction seems implausible. However, this is the normal case, where the disease-provoking pathogen and its host have had a long relationship.

Ordinarily, it is not in the pathogen's interest to rapidly kill off large numbers of individuals in its host species, because that might imperil its own survival. Disease theorists
long ago expressed the idea that pathogens tend to evolve toward a "benign" state of affairs with their hosts, which means in practice that they continue to infect, but tend not to kill (or at least not rapidly). A very good reason for suspecting this to be an accurate view of pathogen-host

relationships is that individuals with few or no genetic defenses against a particular pathogen will be maintained within the host population, thus ensuring the pathogen's ultimate survival.

No extinction - empirics, public health, sanitation, and drugs
Easterbrook 03
(Gregg Easterbrook, The New Republic Editor, 2003 [Wired, "We're All Gonna Die!" 11/7, http://www.wired.com/wired/archive/11.07/doomsday.html]) HL
3. Germ warfare! Like chemical agents, biological weapons have never lived up to their billing in popular culture. Consider the 1995 medical thriller Outbreak, in which a highly contagious virus takes out entire towns. The reality is quite different. Weaponized smallpox escaped from a Soviet laboratory in Aralsk, Kazakhstan, in 1971; three people died, no epidemic followed. In 1979, weapons-grade anthrax got out of a Soviet facility in Sverdlovsk (now called Ekaterinburg); 68 died, no epidemic. The loss of life was tragic, but no greater than could have been caused by a single conventional bomb. In 1989,

workers at a US government facility near Washington were accidentally exposed to Ebola virus. They walked around the community and hung out with family and friends for several days before the mistake was discovered. No one died. The fact is, evolution has spent millions of years conditioning mammals to resist germs. Consider the Black Plague. It was the worst known pathogen in history, loose in a Middle Ages society of poor public health, awful sanitation, and no antibiotics. Yet it didn't kill off humanity. Most people who were caught in the epidemic survived. Any superbug introduced into today's Western world would encounter top-notch public health, excellent sanitation, and an array of medicines specifically engineered to kill bioagents. Perhaps one day some aspiring Dr. Evil will
invent a bug that bypasses the immune system. Because it is possible some novel superdisease could be invented, or that existing pathogens like smallpox could be genetically altered to make them more virulent (two-thirds of those who contract natural smallpox survive), biological agents are a legitimate concern. They may turn increasingly troublesome as time passes and knowledge of biotechnology becomes harder to control, allowing individuals or small groups to cook up nasty germs as readily as they can buy guns today. But no superplague has ever come Close to wiping out

humanity before, and it seems unlikely to happen in the future.

Traveling restrictions limit spread
Camitz and Liljeros 5 (Martin, Swedish Institute for Infectious Disease Control, Fredrik, Medical Epidemiology and Biostatistics, Karolinska
Institute, "The effect of travel restrictions on the spread of a highly contagious disease in Sweden," Oct 5, http://arxiv.org/ftp/qbio/papers/0505/0505044.pdf) HL

Our results show clearly that traveling restrictions will have a significant beneficial effect, both reducing the geographical spread and the total and local incidence. This holds true for all three levels of inter-community infectiousness simulated, g. g is influenced by many factors, most notably by total travel intensity, but also by the medium of travel, the behavior of the traveler, the model of dispersal by travel and by the infectiousness of the disease. Hufnagel calibrated g using data from the actual outbreak. As mentioned, no attempt was made on our part to find the true value of g in the new settings, as no such outbreak data is available for Sweden. This would be considered a flaw for a quantitative study on a SARS outbreak in Sweden. By simulating for different values of the parameter, however, we can be confident in the qualitative conclusion, namely, that the same general behavior can be expected in the unrestricted scenario and in response to the control measures, regardless of g. In light of the fact that inter-municipal travel heavily influences incidence even at a local level, one may justifiably be concerned about the boundary conditions. We treat Sweden as an isolated country, but quite obviously, the incidence will be underestimated for areas with frequent traffic across the borders. This includes in particular the Öresund region around Malmö, and to a lesser extent, international airports and the small towns bordering on Norway and Finland. Even though there is presently no treatment or vaccine for SARS, results show that limited quarantine as suggested here drastically decreases the risk of transmission and this may well turn out to be the most expedient form of intervention. In many countries, Sweden included, limiting freedom of travel is unconstitutional and must take the form of general recommendations. Additionally, certain professions of crucial

importance to society during a crisis situation must be exempt from travel restrictions. The study shows that even if a substantial fraction of the population breaks the restrictions, this strategy is still viable. For other types of disease for which preventive treatment (pandemic flu) or vaccine (small-pox) are available, our results show that long-distance travelers are an important group for targeted control measures.

New drugs solve superdiseases and resistance
Fumento 05 (6/23/05, Michael Fumento, Scripps Howard News Service, In Man versus Microbe, Germs Will Lose,
http://fumento.com/disease/tygacil.html) HL
On June 17 the FDA approved New Jersey-based Wyeth Pharmaceuticals' Tygacil, primarily intended for skin infections and abdominal wounds. It's especially useful against the common and drug-resistant bacteria Staphylococcus aureus, better known as "staph." But it's a true broad-spectrum drug, applicable against a host of different bacteria - so much so that doctors can feel confident in administering it as a first-line treatment even if they have no idea what germ or germs they're up against. Tygacil is the first antibiotic approved in a new class called glycylcyclines, expressly developed to bypass the mechanisms that made bacteria resistant to the tetracycline family of drugs. The major drawback of Tygacil is that it must be administered intravenously, yet that's also a plus. Perhaps the main reason bugs develop
resistance to antibiotics is that doctors overprescribe the drugs. They hand out pills like Pez candy because patients demand it. But patients don't demand IVs; therefore limiting usage of Tygacil and probably greatly forestalling the day when it too leads to resistant strains. But Tygacil is not alone in new and forthcoming antibiotics designed to tell germs: "Resistance is futile!" The injectable Cidecin from Cubist of Lexington, Mass., also targeted at nasty hospital infections, may soon receive FDA approval. Just four days before the approval of Tygacil, the FDA also gave the green light to Pfizer's Zmax, an important new formulation of an older antibiotic. Zmax is administered

with merely one dose rather than given over a period of seven to 10 days - an all guns blazing assault on bacteria that cause sinusitis and pneumonia. "An antibiotic taken just once can address compliance issues and may minimize the emergence of antibiotic resistance," Dr. Michael Niederman,
Chairman of the Department of Medicine at Winthrop University Hospital in Mineola, New York notes in a Pfizer press release. New Jersey-based

Johnson & Johnson has two different broad-spectrum drugs in late-stage testing that are designed to overcome antibiotic resistance, ceftobiprole and doripenem. The holy grail of anti-bacterial drugs is one to which bugs cannot become resistant. One step in that direction may be further development of bacteriophages (Greek for "bacteria eaters"). These are viruses that attach to the bacterial surface and inject their DNA, which replicates until the bacterium explodes. These phages evolve just as bacteria do. The same forces that select for resistant bacteria also select for viruses that overcome that resistance.

Anything virulent enough to be a threat would destroy its host too quickly
Lederberg 99 (1999, Joshua Lederberg, professor of genetics at Stanford University School of Medicine, Epidemic: The World of Infectious
Disease, p. 13) HL The toll of the fourteenth-century plague, the "Black Death," was closer to one third. If the bugs' potential to develop adaptations that could kill us off were the whole story, we would not be here. However, with very rare exceptions, our microbial adversaries have a shared interest in

our survival. Almost any pathogen comes to a dead end when we die; it first has to communicate itself to another host in order to survive. So historically, the really severe host-pathogen interactions have resulted in a wipeout of both host and pathogen. We humans are still here because, so far, the pathogens that have attacked us have willy-nilly had an interest in our survival. This is a very delicate balance, and it is easily disturbed, often in the wake of large-scale ecological
upsets.

A2 Environment threats exaggerated


The environmental movement misuses rhetorical figures to prove points that are false in the real world.
Lomborg 1 [Bjorn Lomborg, associate professor of statistics at University of Aarhus. The Skeptical Environmentalist: Measuring the Real State of the World. 2001, p. 27]

One of the main rhetorical figures of the environmental movement is to pass off a temporary truism as an important indicator of decline. Try to see what your immediate experience is of the following quote from the Worldwatch Institute: "As a fixed area of arable land is divided among ever more people, it eventually shrinks to the point where people can no longer feed themselves."215 This statement sounds like a correct prediction of problems to come. And yes, it is evidently true - there is a level (certainly a square inch or a speck of soil) below which we could not survive. However, the important piece of information is entirely lacking because we are not told what this level is, how close we are to it, and when we expect to cross it.216 Most people would probably be surprised to know that, with artificial light, each person can survive on a plot of 36 m2 (a 6 m square), and that companies produce commercially viable hydroponic food with even less space.217 Moreover, FAO finds in its newest analysis for food production to 2030 that "land for food production is seen to have become less scarce, not scarcer."218 Thus, the argument as stated is merely a rhetorical trick to make us think, "oh yes, things must be getting worse." This rhetorical figure has been used a lot by Worldwatch Institute. Talking about increasing grain yields (which we will discuss in Part III), Lester Brown tells us that "there will eventually come a point in each country, with each grain, when the farmers will not be able to sustain the rise in yields."219 Again, this is obviously true, but the question is how far away is the limit? This question remains unanswered, while Brown goes on to conclude the somewhat unimaginative rerun of the metaphor: "Eventually the rise in grain yields will level off everywhere, but exactly when this will occur in each country is difficult to anticipate."220 Likewise, Lester Brown tells us that "if environmental degradation proceeds far enough, it will translate into economic instability in the form of rising food prices, which in turn will lead to political instability."221 Again, the sequence is probably correct, but it hinges on the untold if - is environmental degradation taking place and has it actually proceeded that far? That information is never demonstrated.

Environmental doomsday literature mishandles scientific fact; the media turns it into propaganda.
Lomborg 1 [Bjorn Lomborg, associate professor of statistics at the University of Aarhus, The Skeptical Environmentalist: Measuring the Real State of the World, 2001, p. 12-13]

It is crucial to the discussion about the state of the world that we consider the fundamentals. This requires us to refer to long-term and global trends, considering their importance especially with regard to human welfare. But it is also crucial that we cite figures and trends which are true. This demand may seem glaringly obvious, but the public environment debate has unfortunately been characterized by an unpleasant tendency towards rather rash treatment of the truth. This is an expression of the fact that the litany has pervaded the debate so deeply and for so long that blatantly false claims can be made again and again, without any references, and yet still be believed. Take notice, this is not due to primary research in the environmental field; this generally appears to be professionally competent and well balanced.70 It is due, however, to the communication of environmental knowledge, which taps deeply into our doomsday beliefs. Such propaganda is presented by many environmental organizations, such as the Worldwatch Institute, Greenpeace and the World Wide Fund for Nature, and by many individual commentators, and it is readily picked up by the media. The number of examples is so overwhelming that they could fill a book of their own. I will consider many of them in the course of this book, and we will look specifically at their connection to the media in the next chapter. However, let us here look at some of the more outstanding examples of environmental mythmaking.

Environmentalists have an incentive to exaggerate harms in order to secure spending.
Lomborg 1 [Bjorn Lomborg, associate professor of statistics at the University of Aarhus, The Skeptical Environmentalist: Measuring the Real State of the World, 2001, p. 38-39]

Thus, as the industry and farming organizations have an obvious interest in portraying the environment as just-fine and no-need-to-do-anything, the environmental organizations also have a clear interest in telling us that the environment is in a bad state, and that we need to act now. And the worse they can make this state appear, the easier it is for them to convince us we need to spend more money on the environment rather than on hospitals, kindergartens, etc. Of course, if we were equally skeptical of both sorts of organization there would be less of a problem. But since we tend to treat environmental organizations with much less skepticism, this might cause a grave bias in our understanding of the state of the world.

Negative effects are overplayed: environmental concerns empirically can do more good than harm.
Lomborg 1 [Bjorn Lomborg, associate professor of statistics at the University of Aarhus, The Skeptical Environmentalist: Measuring the Real State of the World, 2001, p. 40-41]

Finally, it is the media that pass on the results of research, possibly helped along by the organizations. The media play a central role in this connection because the world has become so complex that we can no longer rely primarily on our own experiences. Instead, the mass media provide much of our understanding of reality. But their particular way of providing us with news profoundly influences our view of the world. There is of course rarely much doubt that facts reported in an article or a news report are generally true. In that sense, the media simply reflect the world as it is. What is interesting, however, is the long and winding path between an event taking place in the world and its possible appearance and placement in the media. Looking at news reporting in this way shows how the media systematically present us with a lopsided version of reality: a picture of reality which is incoherent and sporadic, though at the same time reassuringly predictable and familiar; a picture where problems fill every column, and the emphasis is on drama and conflict. As an editor-in-chief has put it: "Producing a paper is a question of distorting proportions."275 This media-based reality has numerous consequences. First, the incoherent information we are given provides us with too little knowledge of concrete problems to enable us to take part in a democratic decision-making process. Second, we feel sufficiently comfortable that we believe we actually do have sufficient knowledge to partake in the debate and to make valid decisions. Third, we will often get a far too negative and distorted impression of the problems.

The media is biased: it focuses only on negative aspects of the environment and highlights individual rather than policy action.
Lomborg 1 [Bjorn Lomborg, associate professor of statistics at the University of Aarhus, The Skeptical Environmentalist: Measuring the Real State of the World, 2001, p. 41]

In the hunt for good news, conflict is also brought into focus. A conflict has that gripping dramatic element familiar from fairy tales and other literature, a battle between good and evil which the audience must follow to the bitter end to find out what happens. Journalists are actually taught how to tailor their stories to patterns from fairy tales.297 Closely related to the story of conflict is the question of guilt.298 It is not uncommon for one of the involved parties to be given the blame for the conflict, which helps to give the news a more human touch. We have seen examples of this in the US, where efforts to do something about garbage dumps are given far higher priority compared to combating radioactive radon, even though combating radon would be far more effective. Why? Because a garbage dump provides "good pictures" and because garbage dumps are "somebody's fault."299 It is generally important to journalists that their stories are "close" to the reader. This is often a question of involving people in a story and being able to explain what is going on in simple terms. Finally, a story has to be new and exciting. A story about a new problem or new conflict is potentially far more interesting than describing an already familiar, traditional problem. The consequences: one consequence of the demand for rapid news delivery is that our view of the world becomes fragmented. Our demand for interesting and sensational news means that our picture of the world becomes distorted and negative. Coupled with the finely tuned PR units of the environmental organizations and problem-oriented research, this can provide serious bias towards a negative appraisal of the state of the world.

A2 Fusion

Fusion won't consume the earth
John Leslie, Professor Emeritus at the University of Guelph and Fellow of the Royal Society of Canada, 1997, The End of the World: The Science and Ethics of Human Extinction, p. 45-7

Of course, neither the atmosphere nor the oceans (where, Bethe adds, 'the problem is more subtle') have been ignited even by H-bombs. This presumably proves that 'objectively', out there in 'reality', the risk has been zero at the temperatures so far attained. What had seemed potentially dangerous, though, were deuterium-deuterium and proton-deuterium reactions. Now, during the recent furor over reports that 'cold fusion' had been produced in a test tube, S.E. Koonin and M. Nauenberg calculated that in some circumstances deuterium-deuterium fusion would proceed some ten billion times faster than had previously been estimated, while proton-deuterium fusion would be yet faster, by a factor of a hundred million.113 Naturally, these authors might be wrong. And supposing they were right, the reactions in question would, as they say, be nowhere near fast enough to produce interesting amounts of cold fusion. What is more, cold fusion and nuclear bombs would have little in common. Still, this tale does at least illustrate how alarmingly unreliable the calculations of experts can be. Further calculations preceded the start-up of Bevalac, an accelerator producing violent collisions between atomic nuclei. S.D. Gupta and G.D. Westfall report: In the early 1970s Tsung Dao Lee and Gian-Carlo Wick discussed the possibility that a new phase of nuclear matter might exist at high density, and might lie lower in energy than the most common type of matter in a nucleus. The Bevalac seemed to be the ideal instrument with which to make and discover this new matter. If it existed and was more stable than ordinary matter, it would accrete ordinary matter and grow. Eventually it would become so massive that it would fall to the floor of the experimental hall and be easily observed. But what would stop it from eating the Earth? [Eating it entirely, that's to say, by converting it all into matter of the new type.] Knowledge of dense nuclear matter was so poor at the time that the possibility of this disaster was taken seriously. Meetings were held behind closed doors to decide whether the proposed experiments should be aborted. Experiments were eventually performed, and fortunately no such disaster has yet occurred.114 Ruthen explains that the decision to proceed was inspired by the argument that nature had already performed the relevant experiment: the Earth, moon, and all celestial bodies are constantly bombarded with an extraordinary number of high-energy particles that are produced by stars. Some of the particles collide with atoms on the earth and create conditions that equal or surpass anything that Bevalac could do. Bevalac has since been replaced by accelerators reaching still higher energies, but it appears that any high-density matter produced in them expands and disintegrates quickly.

A2 Gamma Ray Bursts

A GRB wouldn't happen anywhere near Earth - they don't happen in metal-rich galaxies like the Milky Way
Than 6 (Ker Than is a space science and environment journalist, "Interstellar Deathray Not Likely to Hit Earth," Space.com, http://www.space.com/2332-interstellar-deathray-hit-earth.html)

Doomsayers and Chicken Little-types can now strike "deathray from a star" from their list of possible ways to die. A new study finds that the chances of a gamma ray burst going off in our galaxy and destroying life on Earth are comfortingly close to zero. Gamma ray bursts, or GRBs, are focused beams of gamma radiation emitted from the magnetic poles of black holes formed during the collapse of ancient, behemoth stars. They can also form when dead neutron stars merge with each other or with black holes. It's been speculated that if a GRB went off near our solar system, and one of the beams hit Earth, it could set off a global mass extinction. But in a new study to be published in the Astrophysical Journal, researchers found that GRBs tend to occur in small, metal-poor galaxies and estimated that the likelihood of one occurring in our own metal-rich Milky Way is less than 0.15 percent. "There are a lot of people who have wondered whether GRBs could be blamed for mass extinctions early in Earth's history, and our work suggests that is not the case," said study team member Krzysztof Stanek from Ohio State University. Destroyer of life: GRBs can last anywhere from a few milliseconds to several minutes and are one of the brightest, and potentially the most deadly, phenomena in the universe. So powerful are these events that some scientists have speculated they could help explain the so-called Fermi Paradox: if the universe is teeming with advanced alien civilizations as some theories predict, then why have we never found any traces of them? One answer could be that events like GRBs turn galaxies into giant autoclaves that sterilize life forms on planets before they can develop interstellar travel. Some scientists think that such an event might have already occurred in our own galaxy. Trigger for a mass extinction? Last year, scientists from NASA and the University of Kansas speculated that a GRB might have triggered the Ordovician-Silurian extinction 450 million years ago, one of the five worst extinction events in our planet's history. A computer model found that if a GRB were to strike Earth for even 10 seconds, it would deplete up to half of the atmosphere's protective ozone layer and blanket the planet in a thick, brown smog of nitrogen dioxide, a poisonous compound found in air pollution. The model estimated that recovery from such an event would require at least five years, during which time ultraviolet radiation from the Sun could kill off microorganisms and disrupt the food chains of animals around the world. An unlikely culprit: but in their study, Stanek and colleagues found that GRBs tend to occur in small, deformed galaxies that are poor in elements heavier than hydrogen and helium. Our Milky Way, in contrast, is a large spiral galaxy rich in heavy elements. Therefore, the chances of a GRB occurring within our galaxy are extremely unlikely, the researchers say. It's thought that stars with low metallicity are less likely to lose mass as they burn and are thus more massive and rotate faster when they die. The more massive a star is, the more likely it is to form black holes - one suspected GRB source - and rapid spin is believed to be crucial for powering the burst. "All models for gamma ray bursts these days require rapid spin," said supernova and GRB expert Stanford Woosley from the University of California, Santa Cruz, who was not involved in the study. "Rotational energy is essentially where the energy for the burst comes from," Woosley told SPACE.com. In the new study, the researchers compared the properties of four galaxies where GRBs had been detected with other galaxies recorded in the Sloan Digital Sky Survey. They found that of the four galaxies, the one with the most metals - and therefore most similar to ours - had only a 0.15 percent chance of hosting a GRB. Our situation: since the Milky Way's metal content is twice as high as that galaxy's, its odds of hosting a GRB would be even lower. Also, not only are GRBs unlikely to strike Earth, they are unlikely to strike any planet where life could develop, Stanek said in a telephone interview. Planets need metals to form, so a low-metal galaxy - while more likely to have GRBs - will have fewer planets and fewer chances for life. Woosley said that while he thinks it's unlikely a GRB will form in our galaxy, he wouldn't rule out the chances of such an event just yet. "There still may be channels of binary evolution that give the necessary rapid rotation to the star when it dies," he said.

GRBs won't happen in the Milky Way
Hubble 6 ("Hubble Finds that Earth is Safe from One Class of Gamma-ray Burst," May 10, Hubble Site, NASA, http://hubblesite.org/newscenter/archive/releases/2006/20/full/)

Homeowners may have to worry about floods, hurricanes, and tornadoes destroying their homes, but at least they can remove long-duration gamma-ray bursts (GRBs) from their list of potential natural disasters, according to recent findings by NASA's Hubble Space Telescope. Long-duration gamma-ray bursts are powerful flashes of high-energy radiation that are sometimes seen coming from certain types of supernovae (the explosions of extremely massive stars). If Earth were flashed by a nearby long-duration burst, the devastation could range from destroying the ozone in our atmosphere to triggering climate change and altering life's evolution. Now astronomers analyzing long-duration bursts (those lasting more than one to two seconds) in several Hubble telescope surveys have concluded that the Milky Way Galaxy is an unlikely place for them to pop off. They find that blasts tend to occur in small irregular galaxies where stars are deficient in the heavier elements. The Milky Way's starry population, by contrast, is rich in elements heavier than hydrogen and helium. Suspecting that knowledge of their environments might help determine what types of stars produce gamma-ray bursts, the astronomers, led by Andrew Fruchter of the Space Telescope Science Institute in Baltimore, Md., used Hubble to examine the environments of 42 long-duration bursts and 16 supernovae. They found that the small fraction of supernovae that produce the bursts live in a very different environment from the average supernova. Fruchter's results appear in the May 10 online edition of the journal Nature. Fruchter's team found that most of the long bursts in the sample were detected in small, faint, misshapen (irregular) galaxies, which are usually deficient in heavier elements. Only one of the bursts was spotted in a spiral galaxy like our Milky Way, suggesting that our galaxy is an unlikely host for long-duration bursts. By contrast, the hosts of supernovae were divided equally between spiral and irregular galaxies, those with greater or smaller concentrations of the heavier elements. Fruchter's team also found that long bursts are far more concentrated in the brightest regions of their host galaxies, where the most massive stars reside. Supernovae, on the other hand, occur throughout their host galaxies. "The discovery that long-duration gamma-ray bursts lie in the brightest regions of their host galaxies suggests that they come from the most massive stars, 20 or more times as massive as our Sun," Fruchter said. "Their occurrence in small irregulars implies that only stars that lack heavy chemical elements tend to produce long-duration GRBs." This means that long bursts happened more often in the past, when galaxies did not have a large supply of heavy elements. Galaxies build up a stockpile of heavier chemical elements through the ongoing evolution of successive generations of stars. Early generation stars formed before heavier elements were abundant in the universe. Massive stars abundant in heavy elements are unlikely to trigger bursts because they may lose too much material through stellar "winds" off their surfaces before they collapse and explode. When this happens, the stars don't have enough mass left to produce the proper conditions that would trigger the phenomenon. Astronomers think that gamma-ray bursts are produced by rotating black holes left over from stellar explosions. The energy from the collapse of a star's core escapes along a narrow jet, like a stream of water from a lawn sprinkler. The jet burns its way through the remnants of the star. The formation of directed jets, which concentrate energy along a narrow beam, would explain why the bursts are so powerful. But if a star loses too much mass, it may only leave behind a neutron star, not a black hole, and thus cannot create the jet. On the other hand, if the star loses too little mass before its collapse, the jet cannot burn its way through the dense outer layers of the star. This means that extremely high-mass stars that puff away too much material may not be candidates for long bursts. Likewise, neither are stars that give up too little material. "It's a Goldilocks scenario," Fruchter said. "Only supernovae whose progenitor stars have lost some, but not too much, mass appear to be candidates for the formation of GRBs."

Short bursts don't cause extinction
Hubble 6 ("Hubble Finds that Earth is Safe from One Class of Gamma-ray Burst," May 10, Hubble Site, NASA, http://hubblesite.org/newscenter/archive/releases/2006/20/full/)

Gamma-ray bursts can be divided into two classes: short bursts, which last between milliseconds and about two seconds and produce very high-energy radiation, and long bursts, which last between two and tens of seconds and create less energetic gamma rays. Although long bursts are unlikely to strike in galaxies like our Milky Way, short bursts could still happen. Short bursts are believed to arise from collisions between two compact objects, such as neutron stars. However, even with their higher-energy radiation, short bursts are typically 100 to 1,000 times less powerful overall than long bursts and would pose much less of a threat to life if one were to occur in our galaxy.

Gamma Ray Bursts are highly improbable
National Survival Center 9 ("GAMMA RAY BURST SURVIVAL INFORMATION," 2009, National Survival Center, http://nationalsurvivalcenter.com/gammarayburst.html)

Gamma Ray Bursts are the result of an exploding star and are the most powerful explosions in the Universe. Most of the energy is released as Gamma Rays in very short bursts lasting from micro-seconds to as much as 100 seconds, hence the name Gamma Ray Burst. The Gamma Ray Burst is released in a focused "beam" much like the light from a laser and is sometimes referred to as a "Death Ray." Since the Gamma Ray Bursts that have been observed happened millions of years ago, to stars millions of light years away, and since the chances of a focused Gamma Ray Burst from one of those explosions hitting the Earth are almost non-existent, there is little to be concerned about. If a star within our own Milky Way Galaxy were to explode, and it was close enough...within about 6,000 light years...and the focused beam of Gamma Rays were to hit the Earth, it COULD trigger a mass extinction. The absorption of the radiation of the Gamma Rays in the upper atmosphere would cause the nitrogen to generate nitric oxide that would act as a catalyst to destroy the ozone layer. With even one half of the ozone layer destroyed, the direct UV irradiation from the burst, combined with additional solar UV radiation passing through the diminished ozone layer, would have significant impact on the food chain, thereby triggering mass extinction. IE: STARVATION! NASA scientists are positive this type of mass extinction occurred on Earth hundreds of millions of years ago. Their models show that a Gamma Ray Burst originating within approximately 6,000 light years, and lasting for just 10 seconds, can cause years of devastating ozone damage. The most extreme "theory" is that a HUGE Gamma Ray Burst from within our own Galaxy would INSTANTLY kill everything on Earth, but that we wouldn't know anything about it since it would be over before we knew it began. What are the chances of a Gamma Ray Burst hitting the Earth in our life-time? ... What are the chances of winning the lottery? ... Getting hit by lightning? ... the Cubs winning the World Series? ... It COULD happen. It IS POSSIBLE. But is it probable? No one knows! Could anyone survive a Gamma Ray Burst? Yes, they could. But as is the case in any of the "extreme natural disaster events," it would require a lot of luck, being in the proper location and having the necessary supplies to be able to survive until "things" returned to normal. In addition it would probably be necessary to reside underground during that time period. For the average person ... the probability of surviving a massive Gamma Ray Burst is not very good.
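
A rough illustrative way to read the card's "within about 6,000 light years" condition (the arithmetic and the uniform-thin-disk assumption here are ours, not the card's): if the galaxy is idealized as a thin disk of radius about 50,000 light years and a burst is taken to be equally likely anywhere in it, the fraction of the disk lying within the 6,000 light-year danger radius of Earth is

\left( \frac{6{,}000}{50{,}000} \right)^{2} \approx 0.014

so even a burst that did occur somewhere in our galaxy would, on this crude model, have only about a 1.4 percent chance of being close enough to matter, before asking whether its narrow beam would even point at Earth.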

GRBs won't happen in the Milky Way: its spiral-galaxy structure proves it
Williams 6 (Christopher Williams has a master's degree in astrophysics from the University of Liverpool, "Astronomers: gamma ray death from above 'unlikely'," The Register, http://science.nasa.gov/science-news/science-at-nasa/2003/29dec_magneticfield/)

A gamma-ray burst (GRB) in our galactic neighbourhood could decimate life, destroy the ozone layer and trigger drastic climate change. A new study of Hubble data, published in Nature, has found such a cosmic deathray scenario is less likely than previous doomsday predictions. The radiation the international team investigated comes with some of the biggest star explosions in the universe: long GRBs, which occur when a core-collapse supernova triggers an even more violent explosion. Supernovae occur all over the universe and throughout galaxies. The researchers were testing the assumption that the galaxies with most supernovae would suffer the most GRBs. Instead, they found that long GRBs tend to originate from the runts of the cosmic litter: small, faint, irregularly shaped galaxies. Our own Milky Way is a regular spiral, so GRB armageddon from a nearby star death is unlikely, they reckon. Out of 42 GRBs measured, the astronomers found just one in a Milky Way-like spiral. Lead author Andrew Fruchter of the Space Telescope Science Institute said: "[GRBs'] occurrence in small irregulars implies that only stars that lack heavy chemical elements tend to produce long-duration GRBs." Heavy elements are themselves produced in supernovae and have built up in the universe over time. The implication is that long GRBs are getting less common in the universe generally. They also found that, unlike standard supernovae, long GRBs are concentrated at their host galaxy's brightest region. Study co-author Andrew Levan explained: "The discovery that long-duration GRBs lie in the brightest regions of their host galaxies suggests that they come from the most massive stars, perhaps 20 or more times as massive as our Sun." The team concludes that GRBs are "relatively rare" in the Milky Way.

GRBs are nearly impossible
Oregon State University 6 (April 19, "Deadly Astronomical Event Not Likely To Happen In Our Galaxy, Study Finds," National Science Foundation, http://www.spaceref.com/news/viewpr.nl.html?pid=19624)

COLUMBUS, Ohio -- Are you losing sleep at night because you're afraid that all life on Earth will suddenly be annihilated by a massive dose of gamma radiation from the cosmos? Well, now you can rest easy. Some scientists have wondered whether a deadly astronomical event called a gamma ray burst could happen in a galaxy like ours, but a group of astronomers at Ohio State University and their colleagues have determined that such an event would be nearly impossible. Gamma ray bursts (GRBs) are high-energy beams of radiation that shoot out from the north and south magnetic poles of a particular kind of star during a supernova explosion, explained Krzysztof Stanek, associate professor of astronomy at Ohio State. Scientists suspect that if a GRB were to occur near our solar system, and one of the beams were to hit Earth, it could cause mass extinctions all over the planet. The GRB would have to be less than 3,000 light years away to pose a danger, Stanek said. One light year is approximately 6 trillion miles, and our galaxy measures 100,000 light years across. So the event would not only have to occur in our galaxy, but relatively close by, as well. In the new study, which Stanek and his coauthors submitted to the Astrophysical Journal, they found that GRBs tend to occur in small, misshapen galaxies that lack heavy chemical elements (astronomers often refer to all elements other than the very lightest ones -- hydrogen, helium, and lithium -- as metals). Even among metal-poor galaxies, the events are rare -- astronomers only detect a GRB once every few years. But the Milky Way is different from these GRB galaxies on all counts -- it's a large spiral galaxy with lots of heavy elements. The astronomers did a statistical analysis of four GRBs that happened in nearby galaxies, explained Oleg Gnedin, a postdoctoral researcher at Ohio State. They compared the mass of the four host galaxies, the rate at which new stars were forming in them, and their metal content to other galaxies catalogued in the Sloan Digital Sky Survey. Though four may sound like a small sample compared to the number of galaxies in the universe, these four were the best choice for the study because astronomers had data on their composition, Stanek said. All four were small galaxies with high rates of star formation and low metal content. Of the four galaxies, the one with the most metals -- the one most similar to ours -- hosted the weakest GRB. The astronomers determined the odds of a GRB occurring in a galaxy like that one to be approximately 0.15 percent. And the Milky Way's metal content is twice as high as that galaxy's, so our odds of ever having a GRB would be even lower than 0.15 percent. "We didn't bother to compute the odds for our galaxy, because 0.15 percent seemed low enough," Stanek said. He figures that most people weren't losing sleep over the possibility of an Earth-annihilating GRB. "I wouldn't expect the stock market to go up as a result of this news, either," he said. "But there are a lot of people who have wondered whether GRBs could be blamed for mass extinctions early in Earth's history, and our work suggests that this is not the case."
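
The two figures in this card can be combined into a back-of-the-envelope ceiling (an illustrative multiplication under the same uniform-disk assumption used above, not a number from the study): take the 0.15 percent chance of a Milky Way-like galaxy hosting a GRB at all, and the roughly (3,000/50,000)^2, or about 0.4 percent, chance that a burst in a 50,000 light-year disk falls within the 3,000 light-year danger radius the card cites. Then

P \lesssim 0.0015 \times 0.004 \approx 6 \times 10^{-6},

on the order of one chance in two hundred thousand, and that is before requiring one of the burst's two narrow beams to be aimed at Earth.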

A2 Grey Goo
Self-replicating nano-bots are impossible
Smalley 3 [Rick Smalley, University Professor, Gene and Norman Hackerman Professor of Chemistry and Professor of Physics & Astronomy, 1996 Nobel Prize winner, Chemical & Engineering News, "Nanotechnology: Drexler and Smalley Make the Case for and Against Molecular Assemblers," December 1, Volume 81, Number 48, p. 37-42]

But where does the enzyme or ribosome entity come from in your vision of a self-replicating nanobot? Is there a living cell somewhere inside the nanobot that churns these out? There then must be liquid water present somewhere inside, and all the nutrients necessary for life. And now that we're thinking about it, how is it that the nanobot picks just the enzyme molecule it needs out of this cell, and how does it know just how to hold it and make sure it joins with the local region where the assembly is being done, in just the right fashion? How does the nanobot know when the enzyme is damaged and needs to be replaced? How does the nanobot do error detection and error correction? And what kind of chemistry can it do? Enzymes and ribosomes can only work in water, and therefore cannot build anything that is chemically unstable in water. Biology is wondrous in the vast diversity of what it can build, but it can't make a crystal of silicon, or steel, or copper, or aluminum, or titanium, or virtually any of the key materials on which modern technology is built. Without such materials, how is this self-replicating nanobot ever going to make a radio, or a laser, or an ultrafast memory, or virtually any other key component of modern technological society that isn't made of rock, wood, flesh, and bone? I can only guess that you imagine it is possible to make a molecular entity that has the superb, selective chemical-construction ability of an enzyme without the necessity of liquid water. If so, it would be helpful to all of us who take the nanobot assembler idea of "Engines of Creation" seriously if you would tell us more about this nonaqueous enzymelike chemistry. What liquid medium will you use? How are you going to replace the loss of the hydrophobic/hydrophilic, ion-solvating, hydrogen-bonding genius of water in orchestrating precise three-dimensional structures and membranes? Or do you really think it is possible to do enzymelike chemistry of arbitrary complexity with only dry surfaces and a vacuum? The central problem I see with the nanobot self-assembler then is primarily chemistry. If the nanobot is restricted to be a water-based life-form, since this is the only way its molecular assembly tools will work, then there is a long list of vulnerabilities and limitations to what it can do. If it is a non-water-based life-form, then there is a vast area of chemistry that has eluded us for centuries.

The grey goo theory is sheer fiction: none of their predictions have any scientific validity
Philip Ball 3 (a science writer and a consultant editor of Nature, 2003, "Nanotechnology: Science's Next Frontier or Just a Load of Bull?" New Statesman, June 23, 2003, www.questia.com)

Such concerns say more about human nature than about nanotechnology. These fears loom large not because we are terrified, but because we are fascinated by them. Any nanotech researcher will tell you that assessing the prospects of this field on the basis of grey goo is like basing predictions of the impact of space travel on Star Trek. No one has the faintest idea how to make a replicating nanobot. "The nearest we can get to a self-replicating machine such as a mosquito is a helicopter," says Kroto - that is, big, cumbersome and not self-replicating at all. The assembly-line approach to nanotechnology on which Drexler's grey goo idea was based, in which nanoscale robotic arms pick up and manipulate molecular fragments like so many factory components, is sheer fiction. Even Drexler no longer rates grey goo as an important concern for nanotechnology.

Grey goo is impossible: Drexler has conceded
New Scientist 4 (U-turn on goo, June 12, L/N)

GREY goo is no more. Eric Drexler, the futurist who dreamed up the vision of self-replicating nanomachines spreading across the planet, has publicly repudiated his idea. In his 1986 book Engines of Creation, Drexler suggests that microscopic machines might one day be able to manipulate individual molecules to build any desired structure. If such a machine contained its own blueprint and could scavenge raw materials, it could take over the planet in a chain reaction of self-replication, he warned. "The self-replicating machine idea has had a profound effect on nanotechnology, or rather its perception - mostly for the bad," says Mark Welland, editor-in-chief of the journal Nanotechnology. Now, in an article in the June issue of Nanotechnology, Drexler says that nanofactories will be desktop-sized machines that can manipulate large numbers of molecules. Although such a factory could be directed to build a copy of itself, says Drexler, it would not be able to do so on its own. And neither would it be able to spread around the planet, Hollywood style, eating everything in its path.

Self-replicating nanobots are a scientific impossibility: contrary thoughts ignore basic chemistry and would require a living enzyme
Smalley 3 [Rick Smalley, University Professor, Gene and Norman Hackerman Professor of Chemistry and Professor of Physics & Astronomy, 1996 Nobel Prize winner, Chemical & Engineering News, "Nanotechnology: Drexler and Smalley Make the Case for and Against Molecular Assemblers," December 1, Volume 81, Number 48, p. 37-42]

You still do not appear to understand the impact of my short piece in Scientific American. Much like you can't make a boy and a girl fall in love with each other simply by pushing them together, you cannot make precise chemistry occur as desired between two molecular objects with simple mechanical motion along a few degrees of freedom in the assembler-fixed frame of reference. Chemistry, like love, is more subtle than that. You need to guide the reactants down a particular reaction coordinate, and this coordinate treads through a many-dimensional hyperspace. I agree you will get a reaction when a robot arm pushes the molecules together, but most of the time it won't be the reaction you want. You argue that "if particular conditions will yield the wrong product, one must either choose different conditions (different positions, reactants, adjacent groups) or choose another synthetic target." But in all of your writings, I have never seen a convincing argument that this list of conditions and synthetic targets that will actually work reliably with mechanosynthesis can be anything but a very, very short list. Chemistry of the complexity, richness, and precision needed to come anywhere close to making a molecular assembler--let alone a self-replicating assembler--cannot be done simply by mushing two molecular objects together. You need more control. There are too many atoms involved to handle in such a clumsy way. To control these atoms you need some sort of molecular chaperone that can also serve as a catalyst. You need a fairly large group of other atoms arranged in a complex, articulated, three-dimensional way to activate the substrate and bring in the reactant, and massage the two until they react in just the desired way. You need something very much like an enzyme. In your open letter to me you wrote, "Like enzymes and ribosomes, proposed assemblers neither have nor need these 'Smalley fingers.'" I thought for a while that you really did get it, and you realized that on the end of your robotic assembler arm you need an enzymelike tool. That is why I led you in my reply into a room to talk about real chemistry with real enzymes, trying to get you to realize the limitations of this approach. Any such system will need a liquid medium. For the enzymes we know about, that liquid will have to be water, and the types of things that can be synthesized with water around cannot be much broader than the meat and bone of biology. But, no, you don't get it. You are still in a pretend world where atoms go where you want because your computer program directs them to go there. You assume there is a way a robotic manipulator arm can do that in a vacuum, and somehow we will work out a way to have this whole thing actually be able to make another copy of itself. I have given you reasons why such an assembler cannot be built, and will not operate, using the principles you suggest. I consider that your failure to provide a working strategy indicates that you implicitly concur--even as you explicitly deny--that the idea cannot work. A few weeks ago I gave a talk on nanotechnology and energy titled "Be a Scientist, Save the World" to about 700 middle and high school students in the Spring Branch ISD, a large public school system here in the Houston area. Leading up to my visit, the students were asked to write an essay on "Why I Am a Nanogeek." Hundreds responded, and I had the privilege of reading the top 30 essays, picking my favorite five. Of the essays I read, nearly half assumed that self-replicating nanobots were possible, and most were deeply worried about what would happen in their future as these nanobots spread around the world. I did what I could to allay their fears, but there is no question that many of these youngsters have been told a bedtime story that is deeply troubling. You and people around you have scared our children. I don't expect you to stop, but I hope others in the chemical community will join with me in turning on the light, and showing our children that, while our future in the real world will be challenging and there are real risks, there will be no such monster as the self-replicating mechanical nanobot of your dreams.

No Gray Goo: safeguards solve, and waste heat means expansion would be slow and we could develop counter-measures
Webb in 2 (Stephen Webb, physicist at the Open University of London, If the Universe Is Teeming with Aliens... Where Is Everybody? Fifty Solutions to Fermi's Paradox and the Problem of Extraterrestrial Life, p. 127)

The young boy in Woody Allen's Annie Hall becomes depressed at the thought that the Universe is going to die, since that will be the end of everything. I am becoming depressed writing this section, so to cheer up myself and any young Woodys that might be reading, I think we have to ask whether the gray goo problem is even remotely likely to arise. As Asimov was fond of pointing out, when man invented the sword he also invented the hand guard so that one's fingers did not slither down the blade when one thrust at an opponent. The engineers who develop nanotechnology are certain to develop sophisticated safeguards. Even if self-replicating nanobots were to escape or if they were released for malicious reasons, then steps could be taken to destroy them before catastrophe resulted. A population of nanobots increasing its mass exponentially at the expense of the biosphere would immediately be detected by the waste heat it generated. Defense measures could be deployed at once. A more realistic scenario, in which a population of nanobots increased its mass slowly, so the waste heat they generated was not immediately detectable, would take years to convert Earth's biomass into nanomass. That would provide plenty of time to mount an effective defense. The gray goo problem might not be such a difficult problem to overcome: it is simply one more risk that an advanced technological species will have to live with.

Self-replicating nanobots are impossible: three reasons
Science 2k [Robert F. Service, "Is Nanotechnology Dangerous?" Volume 290, Number 5496, November 24, http://www.sciencemag.org/cgi/content/full/290/5496/1526]

Richard Smalley, a Nobel Prize-winning chemist at Rice University in Houston, Texas, says that there are several good reasons to believe that nanomachines of the sort imagined by Drexler and company can never be made. "To put it bluntly, I think it's impossible," Smalley says. As he sees it, the idea of little machines that grab atoms and assemble them into desired arrangements suffers from three faults. First, he says, it's wrong to think you can just manipulate an individual atom without handling the ones around it as well. "The essence of chemistry is missing here. Chemistry is not just sticking one atom in one place and then going and grabbing another. Chemistry is the concerted motion of at least 10 atoms." That means to move that one atom where you want it, you'll need 10 nanosized appendages to handle it along with all of its neighbors. Which raises the second problem--what Smalley calls the "fat fingers" problem. A nanometer is just the width of eight oxygen atoms. So even if you're trying to build something hundreds of nanometers in size, "there's just not enough room" in that space to fit those 10 fingers along with everything they are trying to manipulate. Finally, there's the "sticky fingers" problem: Even if you could wedge all those little claspers in there with their atomic cargo, you'd have to get them to release those atoms on command. "My advice is, don't worry about self-replicating nanobots," says Smalley. "It's not real now and will never be in the future."

A2 HAARP

Empirically denied: HAARP technology has been in use since the 1970s, and the project isn't big enough to do any damage.
Busch 97 (Linda Busch, writer for the American Association for the Advancement of Science, "Ionosphere Research Lab Sparks Fears in Alaska," Science magazine, February 21, 1997, http://www.sciencemag.org/cgi/content/full/275/5303/1060?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=weather+manipulation&searchid=1&FIRSTINDEX=20&resourcetype=HWCIT)

But anti-HAARP skeptics claim that the military has even bigger plans for the project. HAARP's somewhat menacing appearance surely hasn't helped resolve its public-relations problem: 48 21-meter radio antennas now loom behind the Gakona facility's barbed-wire fence, and, when completed, the 9-hectare antenna farm will be stuffed with 180 towers. In his book, Begich, who is the informal spokesperson for the loosely knit anti-HAARP coalition, writes that all this technology is part of a DOD plan to raise a Star Wars-type missile shield and devise technologies for jamming global communications worldwide. Physical chemist Richard Williams, a consultant for the David Sarnoff Institute in Princeton, New Jersey, further argues that HAARP could irreparably damage the ionosphere: "This is basically atmospheric physicists playing with the ionosphere, which is vital to the life of this planet." Also, he asserts that "this whole concept of electromagnetic warfare" needs to be "publicly debated." The HAARP critics have asked for a public conference to discuss their concerns and hear more details about the science from the military. They have written hundreds of letters to Alaska's congressional delegation and have succeeded in getting the attention of several state legislators, who held legislative hearings on the subject last year. Many scientists who work on HAARP are dumbfounded by the charges. "We are just improving on technology that already exists," says Heckscher. He points out that the Max Planck Institute has been running a big ionospheric "heater" in Tromsø, Norway, since the late 1970s with no lasting effects. U.S. scientists don't have good access because the United States did not join the Norwegian consortium. Also, the United States already operates two other small ionospheric heaters, at the Arecibo Observatory in Puerto Rico and at HIPAS, operated by the University of California, Los Angeles, 325 kilometers down the road from HAARP in Chena Hot Springs, Alaska. The HAARP facility, with three times the power of current facilities and a vastly more flexible radio beam, will be the world's largest ionospheric heater. Still, it will not be nearly powerful enough to change Earth's climate, say scientists. "They are talking science fiction," says Syun-Ichi Akasofu, who heads the University of Alaska's Geophysical Institute in Fairbanks, the lead institution in a university consortium that made recommendations to the military about how HAARP could be used for basic research. HAARP won't be doing anything to the ionosphere that doesn't happen naturally as a result of solar radiation, says Akasofu. Indeed, the beam's effect on the ionosphere is minuscule compared to normal day-night variations. "To do what [the critics] are talking about, we would have to flatten the entire state of Alaska and put up millions of antennas, and even then, I am not sure it would work." Weather is generated, not in the ionosphere, but in the dense atmosphere close to Earth, points out University of Tulsa provost and plasma physicist Lewis Duncan, former chair of the U.S. Ionospheric Steering Committee. Because HAARP's radio beam only excites and heats ionized particles, it will slip right through the lower atmosphere, which is composed primarily of neutral gases. "If climate modifications were even conceivable using this technology, you can bet there would be a lot more funding available for it," he jokes.

Their impacts are science fiction: their authors assume a project 1,000 times more powerful than HAARP
Cole 95 (September 17, writer for the Fairbanks News-Miner and 5-time published nonfiction author, "HAARP Controversy," http://www.haarp.alaska.edu/haarp/news/fnm995.html)

Alaskan Nick Begich Jr., who recently got a doctorate in the study of alternative medicine from a school based in Sri Lanka, has written and published a new book in which he alleges that HAARP could lead to "global vandalism" and affect people's "mental functions." Syun Akasofu, director of the Geophysical Institute, said the electric power in the aurora is hundreds of thousands of times stronger than that produced by HAARP. The most outlandish charges about HAARP are that it is designed to disrupt the human brain, jam all communications systems, change weather patterns over a large area, interfere with wildlife migration, harm people's health and unnaturally impact the Earth's upper atmosphere. These and other claims appear to be based on speculation about what might happen if a project 1,000 times more powerful than HAARP is ever built. That seems to be in the realm of science fiction.

A2 HAARP Good - Ozone

HAARP is key to solving ozone depletion
Rembert 97 (Tracey C. Rembert, coordinator of Co-op America and editor of Shareholders Action Quarterly, "Discordant HAARP; High-Frequency Active Auroral Research Program," E: The Environmental Magazine, January 11, 1997, http://www.findarticles.com/p/articles/mi_m1594/is_n1_v8/ai_19192505/pg_3)

So far, proponents of HAARP have concentrated solely on its defensive and tactical military applications, but one patent speculates that the device would be able to alter "upper-atmosphere wind patterns...so that positive environmental effects can be achieved...For example, ozone, nitrogen and other concentrations in the atmosphere could be artificially increased." HAARP could also theoretically create rain in drought-ridden areas, decrease rains during flooding and redirect hurricanes, tornadoes and monsoons away from populated areas.

Ozone depletion causes extinction
Greenpeace 95 ("Full of Holes: The Montreal Protocol and the Continuing Destruction of the Ozone Layer," http://archive.greenpeace.org/ozone/holes/holebg.html)

When chemists Sherwood Rowland and Mario Molina first postulated a link between chlorofluorocarbons and ozone layer depletion in 1974, the news was greeted with scepticism, but taken seriously nonetheless. The vast majority of credible scientists have since confirmed this hypothesis. The ozone layer around the Earth shields us all from harmful ultraviolet radiation from the sun. Without the ozone layer, life on earth would not exist. Exposure to increased levels of ultraviolet radiation can cause cataracts, skin cancer, and immune system suppression in humans, as well as innumerable effects on other living systems. This is why Rowland's and Molina's theory was taken so seriously, so quickly - the stakes are literally the continuation of life on earth.

A2 HAARP Ionization

Wrong: HAARP won't lead to ionization, for two reasons
The HAARP Scientists 1 (the scientists who work on HAARP, writing on the HAARP website, "What Are the Effects of HAARP on the Ionosphere?", March 14, http://www.haarp.alaska.edu/haarp/ion4.html)

During active ionospheric research, a small, known amount of energy is added to a specific region of one of the ionospheric layers, as discussed previously. This limited interactive region directly over the facility will range in size, depending on the frequency of operation and layer height, from as little as 9 km in radius to as much as 40 km in radius, and may be as much as 10 km in thickness. The interactions occur only with ionized particles in the layer; neutral (non-ionized) particles, which outnumber ionized particles by 500:1 or greater, remain unaffected. HAARP is not able to produce artificial ionization for the following two reasons. 1. The frequencies used by the HAARP facility are in the High Frequency (HF) portion of the spectrum. Electromagnetic radiation in the HF frequency range is non-ionizing - as opposed to the sun's ultraviolet and X-ray radiation, whose photons have sufficient energy to be ionizing. 2. The intensity of the radiation from the completed HAARP facility at ionospheric heights will be too weak to produce artificial ionization through particle interactions. The power density produced by the completed facility will not exceed 2.8 microwatts per cm2, about two orders of magnitude below the level required for that process.
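
Reason 1 can be checked with a one-line photon-energy estimate (our illustrative arithmetic; the 10 MHz figure is assumed as a representative HF frequency and is not taken from the card): a photon of frequency f carries energy E = hf, so

E = hf \approx (4.14 \times 10^{-15}\ \mathrm{eV\,s}) \times (10^{7}\ \mathrm{Hz}) \approx 4 \times 10^{-8}\ \mathrm{eV},

roughly eight to nine orders of magnitude below the 12-15 eV needed to ionize atmospheric oxygen or nitrogen. Ionization requires that energy to arrive in a single photon, which is why HF radiation, photon by photon, cannot ionize the gas the way solar ultraviolet and X-rays do.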

A2 HAARP Warming

HAARP won't cause warming: its signals are a million times less powerful than government-approved safety levels
Rozell 97 (Ned Rozell, science writer at the Geophysical Institute, University of Alaska Fairbanks, "Why All the Harping About HAARP?", June 5, http://www.gi.alaska.edu/ScienceForum/ASF13/1340.html)

Is HAARP dangerous? Well, HAARP signals are one million times less dangerous than government-approved safety levels for any electrical signal. HAARP's transmitter currently has a power of 1/3 megawatt, which might be boosted to 3 megawatts in a few years, Heckscher said. He compared HAARP's effect on the vast ionosphere to the warming that would be experienced by the whole Copper River if you dipped in a small electric coil of the type used to warm one single cup of coffee. This is why Akasofu describes the rumors he's heard circulating, that HAARP is dangerous to people or the environment, as pure science fiction. HAARP could present a potential danger to electronic equipment in aircraft flying overhead when the transmitter is turned on, but there are safety precautions against that. HAARP operators notify the Federal Aviation Administration with the HAARP transmission schedule, and engineers are installing an aircraft-detection radar at HAARP to further ensure the safety of overflying aircraft. This same procedure is followed when rockets are launched from Poker Flat Research Range into the upper atmosphere.

A2 Ice Age - Adaptation Possible

No ice age for 80,000 years, and humanity can adapt
Revkin, 2003 (Andrew Revkin, New York Times, "When Will the Next Ice Age Begin?" http://www.nytimes.com/2003/11/11/science/when-will-the-next-ice-age-begin.html, KHaze)

The next ice age almost certainly will reach its peak in about 80,000 years, but debate persists about how soon it will begin, with the latest theory being that the human influence on the atmosphere may substantially delay the transition. This is no mere intellectual exercise. The equable conditions of the Holocene, which has lasted 10,000 years so far, have enabled the flowering of agriculture, technology, mobility and resulting explosive population growth that has made the human species a global force. Any substantial climate shift is likely to pose enormous, though probably surmountable, challenges.

Humans will adapt - we lived during the last ice age
Than, 2005 (Ker Than, "Hungry humans killed off Ice Age mammals," http://www.msnbc.msn.com/id/8826636/ns/technology_and_science-science/t/hungry-humans-killed-ice-age-mammals/, KHaze)

Weapon-wielding humans, and not warming temperatures, killed off the sloth and other giant mammals that roamed North America during the last Ice Age, a new study suggests. The arrival of humans onto the American continent and the great thaw that occurred near the end of the last Ice Age both occurred at roughly the same time, about 11,000 years ago. Until now, scientists were unable to tease apart the two events. To get around this problem, David Steadman, a researcher at the University of Florida, used radiocarbon to date fossils from the islands of Cuba and Hispaniola, where humans didn't set foot until more than 6,000 years after their arrival on the American continent. The West Indian ground sloth, a mammal that was the size of a modern elephant, also disappeared from the islands around this time. "If climate were the major factor driving the extinction of ground sloths, you would expect the extinctions to occur at about the same time on both the islands and the continent, since climate change is a global event," Steadman said. His findings are detailed in the Aug. 2 issue of the journal Proceedings of the National Academy of Sciences. This could also explain why more than three-fourths of the large Ice Age mammal species -- including giant wooly mammoths, mastodons, saber-toothed tigers and giant bears -- that roamed many parts of North America became extinct within the span of a few thousand years. "It was as dramatic as the extinction of dinosaurs 65 million years ago," Steadman said. If climate change were the major factor in the mass extinction, fewer animals might have been affected, since most species of plants and animals can adapt to temperature changes. Steadman said that temperature changes might have still played an important role in their demise, however, making some animal species more vulnerable to humans than they might otherwise have been.

No extinction - humans will retreat to warm climates; empirics prove
Firth, 2010 (Niall Firth, "Humans survived ice age by sheltering in 'Garden of Eden', claim scientists," http://www.dailymail.co.uk/sciencetech/article1297765/Last-humans-Earth-survived-Ice-Age-sheltering-Garden-Eden-claim-scientists.html, KHaze)

The last humans on Earth may have survived an ice age by retreating to a small patch of land nicknamed 'the garden of Eden'. The strip of land on Africa's southern coast - around 240 miles east of Cape Town - became the only place that remained habitable during the devastating ice age, scientists claim. The sudden change in temperature wiped out many species elsewhere around 195,000 years ago. Researchers believe this could account for the fact that humans have less genetic diversity than other species. Some scientists even believe that the human race's population may have fallen to just a few hundred individuals who managed to survive in one location. Professor Curtis Marean, of the Institute of Human Origins at Arizona State University, discovered ancient human artifacts in the isolated caves around an area known as Pinnacle Point, South Africa. 'Shortly after Homo sapiens first evolved, the harsh climate conditions nearly extinguished our species,' said Professor Marean. 'Recent finds suggest the small population that gave rise to all humans alive today survived by exploiting a unique combination of resources along the southern coast of Africa.'

A2 Ice Age - Won't Happen

Human-induced climate change prevents the ice age Hathaway, 2010, Jim, Technology, It's Official: The Next Ice Age Is Cancelled, http://technology.gather.com/viewArticle.action?articleId=281474978628018, KHaze The global
warming trend has accelerated melting of the Greenland ice sheet and Arctic permafrost, triggering a positive feedback loop which is warming the oceans; which, in turn, will melt more of the Earth's remaining freshwater ice sheets. The resultant warming trend will affect climates in northern mid-latitudes, including North America, as part of a cycle which is unlikely to reverse direction. These are the main findings of an international team of scientists monitoring several data points for the National Oceanic and Atmospheric Administration's (NOAA) Arctic Report Card. The report card's 2010 update observes "many indications of warming" impacting biology, ocean, and land, and "consistent evidence of warming" in Greenland, summer sea ice, and the Arctic climate. The conclusion of these findings is that a "Return to previous Arctic conditions is unlikely." While it's true that the earth's concurrent orbital eccentricity and obliquity cycles affect the Arctic's winter temperature, allowing the formation of ice sheets, it is the Arctic's summer temperature that determines the seasonal longevity of the ice sheet. As geophysical sciences professor Dr. David Archer explained in a public lecture Friday night, the increasing amount of CO2 in the atmosphere is staving off the possibility of a trigger event, historically a cold summer precipitated by a drop in CO2, for a next ice age to be realized. Ice age isn't coming anytime soon- sunspots are an inaccurate indicator Chameides, 2008, Bill, Dean and Nicholas Professor of the Environment, Earth & Ocean Sciences, PhD Yale University, MS Yale University, BA,
Global Warming and Predictions of an Impending Ice Age Predicting Future Climate, http://www.nicholas.duke.edu/thegreengrok/futureclimate, KHaze Our current concerns about climate change focus on the coming decades to the next century - the time period relevant to our children's and grandchildren's experience. But the ice age/warm period cycle operates on a time scale of tens of thousands of years. Scientists have figured out that ice ages are triggered by subtle changes in the Earth's orbit about the Sun. The next such triggering is not expected to occur any time soon - tens of thousands of years from now. Not quite soon enough to be relevant to our children's well-being. The Sun as the Deus Ex Machina Saving Us from Global Warming Others
argue that the Sun's output will suddenly shut down and stop global warming. Indeed, some now predict that a brutally cold period much like the Little Ice Age that began in the 1200s and ended in the mid-1800s has already begun. There are two intriguing aspects to this claim: The Little Ice Age was likely caused, at least in part, by a relatively quiescent Sun. Characteristic of this quiescence was the Maunder Minimum (~1645-1715) when there was an anomalous absence of observed sunspots (see my earlier post). We are currently in an unusually long and strong solar minimum. In fact, 2008 has been a year of unusually low solar activity, with more days sans sunspots (called spotless days) than any year in the past five decades and on a pace to challenge 1913 as the century's most quiescent year (see graphic). Think (and Grok) Before You Leap to Conclusions Of course, proving such a prediction wrong is impossible, but please keep a number of things in mind before you run out to buy that snow parka: Using one year's worth of sunspot data to infer a long-term climatic trend is a highly questionable practice. While the Sun's current quiescent period is unusual, it is not unprecedented. The year 1913 had more than 300 spotless days, and an ice age did not follow. Indeed, the long-term trend in global warming continued apace. Sunspots affect climate, but they are not the only factor. Don't forget that 2007, the third most spotless year of the past 50 (see graphic), was also the third warmest year on record. And while the winter of 2008 was unusually cold (possibly because of La Niña), the rest of the year has been quite warm. According to the National Oceanic and Atmospheric Administration, March 2008 was the second warmest March on record; the spring was the eighth warmest on record; and the summer was the ninth warmest. And bear in mind that these relatively high temperatures occurred while the Sun was at its minimum and a strong La Niña persisted in the South Pacific.

A2 Inertia Weapons - Good

Inertia weapons are key to deterrence, they are our only chance at avoiding complete extinction - this is their author, and we are the only ones reading the conclusion of his article Smith 03 (Wayne, Space Daily 4/14, The Ultimate Weapon, http://www.spacedaily.com/news/nuclear-blackmarket-03b.html) Nuclear bombs are arguably the most devastating military weapon ever deployed by humankind. As a consequence of their development we have ironically enjoyed generations of relative peace on this planet. Everyone is just too frightened to start another world war. However, the holiday may be coming to an end as nuclear proliferation starts to escalate uncontrollably. In the beginning only the US had access to this technology and used it to finally end the greatest war this world had ever witnessed.
Right or wrong, nobody can seriously question the total unconditional surrender of Japan as not being a direct consequence of the Hiroshima and Nagasaki bombings. Now the nuclear club is growing towards double figures although many of its new members aren't "officially" recognised. Many nations' leaders are unhappy about the way some other countries have the bomb and they don't. Even those in as close proximity to the US as Mexico have expressed grievances over this issue. It is believed that more than a couple of countries are taking matters into their own hands by developing nuclear weapons arsenals secretly. It certainly wouldn't be the first time and a nuclear strike is not so intimidating a threat when everybody has the ability to counterstrike. As the number of global arsenals increase so grows the possibility they might in fact be used. Then all hell breaks loose and you can kiss your pension goodbye. We had many close calls during the cold war and can look forward to more in the future. It might be the result of international tensions. A flock of birds mistakenly judged by radar operators to be a first strike. Perhaps a terrorist act or a meteor. One time the Russians mistook a rocket carrying a weather satellite on its way to study the aurora borealis as being a thermonuclear warhead targeted for Moscow. Accidents happen. How do you say sorry for mistakenly decimating a capital city? Is any nation on earth pussy enough not to retaliate if it has the means? Something of course needs to be done but nobody has any workable answers. Clearly everyone can't be trusted to disarm. Not in the real world. The temptation to hide some warheads would be too great and the shifts in international power would impact us in quite devastating ways. Conventional wars wouldn't be stymied by the nuclear card any more. What if all the racial tensions, political turmoil and religious zeal that has brewed and festered in its kettle for past generations proved stoppable only by the nuclear genie? China would probably invade Taiwan for a start. Nukes have made more conventional weapons pale into insignificance and countries like North Korea, India, Pakistan and Israel realise the political clout afforded to them by ballistic missiles with nuclear warheads attached. It seems to be a vicious circle we can't escape but can only watch tighten around us. Only one weapon can do to the nuclear arsenals of this world what nuclear arsenals have done to conventional arms. Yes, a bigger stick does exist although it isn't much talked about. One that makes nukes a less attractive poor cousin by comparison. Inertia weapons have that potential. What's an inertia weapon? On a smaller scale, inertia weapons known as cars kill over a million people every year. To nations wanting the ultimate weapon no matter what the cost, a space inertia weapon is the holy grail. We are no
strangers to this horror. It has visited numerous mass extinctions upon us in the past, some of them responsible for removing up to 95% of life on Earth in one swift hammer blow. Everybody now knows that the most likely cause for the demise of the dinosaurs was a comet or asteroid striking around 65 million years ago. They also know that this created an opportunity for our small furry rat-like ancestors to step in and take control. In fact it's now believed the biosphere of our planet has almost started over from scratch many, many times because of such planetary impacts. There has been much talk of late on how we might detect and even defend ourselves from such a catastrophe in the near future but nobody seems to be asking the next obvious question. Could such a weapon now be wielded by humans? The answer is a definite yes. While a nuclear explosion might destroy a maximum radius of approximately 37km due to the curvature of the earth, a large asteroid could decimate an entire continent. Asteroids require no replenishment of fissionable elements or other expensive maintenance and there are millions of them within easier reach than the moon. It's just like playing billiards. Every object in the universe in accordance with Newtonian laws travels in a straight line unless another force is applied to it. Unlike billiards there is virtually no friction in space so an object will maintain any velocity and heading indefinitely. At least until it's redirected or something stops it. A spacefaring nation would have no trouble calculating the mathematical solutions for precisely changing an asteroid's trajectory. Then it's a simple matter of nudging it. Push in the right spot and maintain the pressure until your gun is pointed at an appropriate target. This might be achieved in many ways. Reaction mass to drive your inertia weapon could be rocket propellant or the asteroid's own mass. Just attach explosives or a few mass drivers. Whoever reaches deep space first will therefore be faced with the choice of utilising these 'inertia weapons' and the temptation will be great indeed. A big space rock could wipe out any enemy and the threat alone would equate to political clout beyond human comprehension. A city can after all be evacuated if a nuclear strike is threatened, but a country? If a nation chose to conquer the high ground of space then keeping everybody else out of it would be all that's necessary to ensure world dominance. Inertia weapons cannot proliferate unless more than one nation can actually reach them. The race to space could therefore end up being a race for control of the earth and solar system. I doubt any of this has escaped our leaders, both east and west. Would this be a bad thing? No worse than the first

atomic bomb. The fact that it's unavoidable if we want space travel makes the question absurd. Why wouldn't a spacefaring nation seize a weapon ensuring it world dominance? Suppose this capability fell into the wrong hands though or was allowed to be owned by many spacefaring nations. Should that happen we might still see nuclear weapons become redundant and inertia weapons replace them as the newest threat to humanity. It would mean a new "Cold War" on a scale to dwarf the previous US and Russian one. A nuclear war despite all the bad press is in fact survivable. Not all human life would be eradicated and if all the nukes in the world were launched then we in the west might be set back a century. It would be nasty but not the end. It might seem like it but we would eventually recover. The same can't be said for a space war where mountains are directed at the earth. When the first space probe experimentally landed on the asteroid Eros recently, that celestial body's motion was imperceptibly changed by the gentle bump of a manmade spacecraft for the very first time. A herald of greater things to come, maybe. Nobody can accurately predict the future and I don't want to add my name to the long list of failed seers in history. The technology however is more than a prediction and has existed for a very long time. A new space race looks set between the US and enthusiastic newcomer China, which I would call a safe bet. Now what do we do if China one day announces Ceres to be on course for the US and they want Taiwan in exchange for "assistance"? We might truly learn what it means to see the sky falling. In the movies "Armageddon" and "Deep Impact"

we saw nukes save the earth from both a rogue asteroid and a comet. Perhaps it will ironically prove to be the other way around. The threat of genocide from space might be persuasive enough to make nations disarm, ultimately averting a future nuclear war. Expensive nuclear warheads would become second-rate weapons. Expensive and
redundant ones in the face of that firepower.
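For a sense of the scale Smith is gesturing at, the card's energy claim can be sanity-checked with simple kinetic-energy arithmetic. The sketch below is illustrative only - the diameter, density, and impact speed are assumed round numbers, not figures from the card:

# Rough kinetic-energy estimate for an asteroid impact (illustrative numbers only).
import math

diameter_m = 1_000   # assumed 1 km rocky asteroid
density = 3_000      # kg/m^3, a typical stony density (assumption)
velocity = 20_000    # m/s, a commonly assumed impact speed

radius = diameter_m / 2
mass = (4 / 3) * math.pi * radius**3 * density   # ~1.6e12 kg
energy_joules = 0.5 * mass * velocity**2         # KE = 1/2 * m * v^2

MEGATON_TNT = 4.184e15                           # joules per megaton of TNT
print(f"mass: {mass:.2e} kg")
print(f"energy: {energy_joules:.2e} J (~{energy_joules / MEGATON_TNT:,.0f} Mt TNT)")

Even a modest 1 km rock comes out on the order of tens of thousands of megatons, against roughly 50 megatons for the largest thermonuclear device ever tested - which is the comparison the card relies on.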

Directing an asteroid at earth would require too much energy - it is a small risk and is redundant with nukes Space Policy in 02 (Book Review; Target Earth, Volume 18, Issue 1, February, http://abob.libs.uga.edu/bobk/ccc/cc021502.html)
There is still the question as to what could or should be done if an impact threat is discovered. The MIT Project Icarus in 1967 calculated that six Saturn V launchers carrying 100 nuclear warheads would be needed to divert that asteroid if it became a hazard, as in its present orbit it conceivably could. Saturn V is no longer available but a similar effort could no doubt be mounted, given sufficient warning. The problem is the `Deflection Dilemma': if you can

deflect asteroids or comets away from the Earth, that raises the possibility of deflecting them towards it. Duncan
Steel's answer to that is not to build such a system until an actual threat is detected, but there's still the possibility of things sneaking up on us: one reason why we're still arguing about the nature of the Tunguska object in 1908 is that it approached from the direction of the Sun and wasn't seen until it entered the atmosphere. Watching for that would require eternal vigilance in space as well as on Earth, and we know how quickly governments tire of such things: the US administration turned off the science stations left by astronauts on the Moon only 5 years after Apollo, and cancelled the Search for Extraterrestrial Intelligence long before there was a realistic chance of success. But those of us who would like to see deflection systems

developed now can take heart from a contribution to the 2001 Charterhouse conference on British rocketry by David
Asher and Nigel Holloway. They made headlines with an outline of what it would take to bring down a 500-m asteroid on Telford and devastate England from the Scottish Borders to Devon. It was worth attending just to witness the stunned silence in which veterans of Britain's nuclear weapons programme heard details of how a single asteroid, under malevolent control, could reduce the UK to rubble. As one 80-year-old remarked, "If it takes 12 years

and 15 nuclear warheads to bring down an asteroid on us, why not just use the weapons in the first place?" On the
more serious level of preventing the impacts, another old-timer remarked that the UK share of the events wouldn't pay for a new housing estate, let alone what it would cost to rebuild the country after such an occurrence. But the study demonstrates that using asteroids as weapons

takes much more effort than simply turning them aside from Earth, so the Deflection Dilemma has lost much of its force. Gravity tractors solve the impact New Scientist in 05 (Gravity Tractor to Deflect Earth Bound Asteroids, November 9, http://www.newscientist.com/article.ns?id=dn8291&feedId=online-news_rss20) NASA scientists have come up with a surprisingly simple yet effective way to deflect an Earth-bound asteroid - park a large spacecraft close by and let gravity do the work. Previous suggestions have focused on deflecting an incoming asteroid with nuclear explosions. But NASA

experts believe a "gravity tractor" should be able to perform the same feat by creating an invisible towline to tug the rock off its deadly course.
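To see why a parked spacecraft amounts to an "invisible towline", it helps to put numbers on the tug. A minimal sketch - the spacecraft mass, hover distance, and tow duration below are illustrative assumptions, not values from the NASA study:

# Gravity-tractor back-of-the-envelope: the "towline" is just Newtonian
# gravity from the spacecraft, applied continuously for years.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
spacecraft_mass = 2.0e4  # kg (assumed ~20-tonne spacecraft)
hover_distance = 100.0   # m between spacecraft and asteroid center (assumed)
years = 10               # assumed tow duration

# The asteroid's acceleration is independent of its own mass: a = G*m_sc/r^2.
accel = G * spacecraft_mass / hover_distance**2   # ~1.3e-10 m/s^2
delta_v = accel * years * 365.25 * 24 * 3600      # ~0.04 m/s over a decade
print(f"acceleration: {accel:.2e} m/s^2, delta-v after {years} y: {delta_v:.3f} m/s")

A few centimeters per second sounds negligible, but applied a decade or more before a predicted impact it shifts the asteroid's arrival time enough for the Earth to have moved past the crossing point.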
Fighting asteroids with asteroids solves deflection Space.com in 06 (Robert Roy Britt, Science Writer, New Cosmic Defense Idea: Fight Asteroids with Asteroids, June 20, http://www.space.com/scienceastronomy/060620_science_tuesday.html) No asteroids are presently known to be on collision courses with Earth. But existing holes in the ground suggest that inevitably one will eventually be found. There is no firm plan for how to deflect or destroy an incoming asteroid, though scientists have pondered firing rockets at them, moving them gently with solar sails, or nudging them with nuclear explosions. Lock and load The

new idea is to capture a relatively small asteroid - perhaps 100 feet (30 meters) wide - by sending a robot to it. The robot would heave material from the asteroid's surface into space, and the reaction force would gradually direct the asteroid to a Lagrange point, one of a handful of nodes along Earth's orbit where the gravity of Earth and the Sun balance out. Scientists know that objects can be kept stable at a Lagrange point with little or no energy. The captured rocky weapon would be held there, traveling around the Sun ahead of or behind the Earth, held until needed. Then, if a large asteroid threatens to hit us, the small one is moved into its path, using the same heaving technique. The rocks collide, and the big one is broken into somewhat less harmful bits. The
collision disperses the fragments of the incoming asteroid, so that not all of them hit the planet.

A2 Magnetic Monopoles
Monopoles would exit the Earth without harm Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory
Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL In some grand unified theories, though not in the Standard Model of particle physics, magnetic monopoles might also catalyze nucleon decay, by transforming protons and neutrons into electrons or positrons and unstable mesons. In this case, successive collisions with large numbers of nuclei would release considerable energy. The magnetic monopoles that might have such properties are expected to weigh 10^15 GeV or more, far too heavy to be produced at the LHC. Nevertheless, here we consider the possibility of producing light proton-eating magnetic monopoles at the LHC. A quantitative discussion of the impact of such magnetic monopoles on Earth was presented in [1], where it was concluded that only a microgram of matter would be destroyed before the monopole exited the Earth. Independently of this conclusion,

if monopoles could be produced by the LHC, high-energy cosmic rays would already have created many of them when striking the Earth and other astronomical bodies. Since they would have large magnetic charges, any monopoles produced by cosmic rays would have been stopped by the material of the Earth [2]. The continued existence of the Earth and other astronomical bodies after billions of years of high-energy cosmic-ray bombardment means that any monopoles produced could not catalyze proton decay at any appreciable rate. If the collisions made in the LHC
could produce dangerous monopoles, high-energy cosmic rays would already have done so. The continued existence of the Earth and other astronomical bodies such as the Sun means that any magnetic monopoles produced by high-energy cosmic rays must be harmless. Likewise, if any monopoles are

produced at the LHC, they will be harmless. No impact - too heavy to be produced and the Earth would trap them CERN 08 (2008, European Organization for Nuclear Research, The safety of the LHC, http://public.web.cern.ch/Public/en/LHC/Safety-en.html)
HL Magnetic monopoles are hypothetical particles with a single magnetic charge, either a north pole or a south pole. Some speculative theories suggest that, if they do exist, magnetic monopoles could cause protons to decay. These theories also say that such monopoles would be too heavy to be

produced at the LHC. Nevertheless, if the magnetic monopoles were light enough to appear at the LHC, cosmic rays striking the Earth's atmosphere would already be making them, and the Earth would very effectively stop and trap them. The continued existence of the Earth and other astronomical bodies therefore rules out dangerous proton-eating magnetic monopoles light enough to be produced at the LHC.

A2 Milky Way Collision

The collision won't be violent - the galaxies will just merge and it'll take billions of years Dubinski, 2006, John, New Track Media, The Great Milky Way Andromeda Collision, http://www.galaxydynamics.org/papers/GreatMilkyWayAndromedaCollision.pdf, KHaze
As the Sun proceeds on its orbit around the galactic center, and as our galaxy approaches M31, the night sky will continually morph. Since the Sun's orbit is roughly circular, the familiar arch of the Milky Way will not change significantly. But al-Sufi's Little Cloud will grow into a Big Cloud, and any skywatchers will enjoy their first close-up view of another galaxy filling the sky (July issue, page 108). It will have the same gossamer texture as the Milky Way, but its bulge and spiral pattern will be readily apparent. As the two galaxies overlap in about 3 billion years, Andromeda will tip edge-on from our perspective and form an interstellar intersection of two apparent Milky Ways in the night sky. Andromeda will then recede, but the strong tidal interaction will produce a
two-armed spiral pattern and extended tidal tails, like those we see in the Antennae (NGC 4038 and NGC 4039 in Corvus). The pull of the Milky Way's dark halo will reduce Andromeda's orbital energy, limiting how far it can recede to a few hundred thousand light-years. M31 will then fall in for a

second pass within a few hundred million years. This time the collision will be nearly head-on, and the two galaxies will undergo a final spasm of quick, convulsive encounters over 100 million years until they finally merge into a single elliptical galaxy, surrounded by fine shells, ripples, and two extended tidal tails. Right after the first encounter, the complex gravitational interactions will strongly perturb the Sun's circular orbit, plunging us through the heart of the galaxy. The night sky will oscillate between a distant view of two interacting spiral galaxies and a sky densely filled with bright stars as we fly through the galactic bulge.

No impact to collision and it's too far off NASA, no date, Milky Way vs. Andromeda, http://www.nasa.gov/audience/forstudents/5-8/features/F_When_Gallaxies_Collide.html, KHaze Our galaxy, the Milky Way, has collided with another galaxy, called Andromeda. Although the two galaxies are passing through each other at a million miles an hour, the whole process will take many millions of years to complete. And when everything settles down, the two galaxies will have merged into one. The students fear that this may be the end of life, as they know it. But, their teacher reassures her class that there is very little chance of stars from the Andromeda galaxy hitting the Sun or the Earth. Even though the galaxies pass clear through each other, she says, stars in a galaxy are spaced very far apart. They are like grains of sand separated by the length of a football field. The
Andromeda stars simply pass by. But galaxies are more than just stars. They contain giant clouds of gas and dust. And, when galaxies collide, these clouds smash into one another. The clouds contain the raw materials needed to make new stars. It is the collision between clouds that has triggered a starry baby boom! Our story describes an event in the distant future for the Milky Way. But, galaxy collisions are common in the universe. But

don't worry about a collision during your lifetime. Right now, the Milky Way and Andromeda are so far apart that even light takes two million years to journey between them. But on the scale of galaxies, they are quite close together. Imagine
the Milky Way galaxy as a music CD (the thickness compared to its diameter is about right).
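The "grains of sand" analogy can be put in rough numbers with a crude column-density estimate. Every figure in the sketch below is an assumption chosen for illustration, not a number from the card:

# Crude estimate of the odds the Sun passes within 1 AU of an Andromeda
# star during one transit of its disk (ignores gravitational focusing).
import math

stars_in_m31 = 1.0e12        # assumed stellar count for Andromeda
disk_radius_m = 9.5e20       # assumed ~30 kpc disk radius
capture_radius_m = 1.5e11    # ~1 AU: generously call anything closer a "hit"

column_density = stars_in_m31 / (math.pi * disk_radius_m**2)    # stars per m^2
expected_hits = column_density * math.pi * capture_radius_m**2  # per pass
print(f"expected close encounters per pass: {expected_hits:.1e}")

Even with a generous 1 AU "hit" radius, the expectation is of order 10^-8 encounters per pass, which is why the teacher in the card can be so reassuring about individual stars.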

Collision won't harm Earth Dubinski, 2001, John, The Merger of the Milky Way and Andromeda Galaxies, http://www.galaxydynamics.org/tflops.html, KHaze When will this collision occur? Plausible orbits and models of the Milky Way and Andromeda galaxies suggest that the big event could occur in as little as 3 billion years. The Sun will still be burning brightly when this collision occurs and
maybe life of some sort will still be around on Earth (or at least within the solar system) at that time. So what would people see in the night sky during this billion-year galactic dance? As Andromeda approaches, it will grow in size and just before the collision the night sky will be filled by a giant spiral galaxy.

When the two galaxies intersect, our familiar Milky Way arch over the sky will be joined by a second intersecting arch of stars but this will only last for 100 million years or so and will be a very confusing state of affairs for galactic astronomers. Finally, when the two galaxies merge our view will depend on which direction the Sun is thrown. There are two possible fates for the Sun which depend closely on the details of where it is in its galactic orbit at the time of the collision. In the first case the Sun may take a ride on a tidal tail and be ejected into the darkness of intergalactic space. In this case, our star would be all alone with few stellar neighbours so the night sky would be very dark with few stars to see -- maybe like the disappointing view of the night sky from an urban centre like downtown Toronto. In the second case, the Sun may be thrown right into the centre of the merging pair where a great starburst will be underway. The huge number of stars forming will result in supernovae going off at a rate of a few per year in the new merged galaxy. While these will likely not present a direct hazard to the Earth, they will truly light up the sky letting you read at night but
probably frustrating the endeavours of backyard astronomers!

No collision- most recent data proves Softpedia, 1-31-11, Andromeda Might Avoid Collision With Milky Way, http://news.softpedia.com/news/Andromeda-Might-Avoid-Collisionwith-Milky-Way-181541.shtml, KHaze

Astronomers are currently in the process of refining their original predictions about the fate of the Andromeda-Milky Way galactic system. They say that it could be possible the two objects will not hit each other after all. The earliest predictions about a potential collision were based on the fact that our neighboring galaxy was found to be moving towards us fast. This implied that, a few billion years from now, the two would collide, giving birth to a massive cosmic
object. The resulting spiral galaxy would have been incredibly large, given that the Milky Way is already 100,000 light-years in diameter. The collision would have also triggered intense stellar formation. This process takes place as masses of hydrogen gas in both galaxies would have been stirred by the massive gravitational and tidal interactions caused by the two galaxies' large masses. But the latest data on the nearby galaxy indicate

that there is a possibility that the two will not collide. This has yet to be established for sure, but investigators in the United States are
currently in the process of confirming the potential finding, Daily Galaxy reports. The research is conducted by scientists at the Socorro, New Mexico-based National Radio Astronomy Observatory (NRAO), who are led by expert Loránt Sjouwerman. The group managed to discover a laser-like spot of light called a maser, which it plans to use for analyzing Andromeda. The maser (Microwave Amplification by Stimulated Emission of Radiation) source is being used in ongoing studies to determine the amount of sideways motion that Andromeda is displaying. If that amount is significant, then our neighbor may be

moving fast enough to avoid a full-blown collision with the Milky Way billions of years from now. Collision doesn't threaten the Earth or Sun Dubinski, no date, John, Hayden Planetarium, Milky Way-Andromeda Galaxy Collision,
http://www.haydenplanetarium.org/resources/ava/galaxies/G0601andmilwy, KHaze We live in the Milky Way Galaxy, a collection of gas, dust, and hundreds of billions of stars. About two million light years (20 billion billion kilometers) away lies the Andromeda Galaxy, a spiral galaxy similar in size and shape to our Milky Way. Current measurements suggest that, in about

three billion years, the Milky Way and Andromeda galaxies may collide. What will happen? The stars in the galaxies, our Sun included, will probably not hit each other, but the galaxies' mutual gravity will probably pull, twist, and distort them until, about a billion years later, a new elliptical-shaped galaxy is born.

A2 Nano-Tech/Nano-Tech Good
Nanotechnology would permanently solve war - makes deterrence obsolete Gubrud 97 [Mark, Superconductivity Researcher @ the University of Maryland, Nanotechnology and International Security,
http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/index.html]

The nanotechnic era will be fundamentally different from the era in which nuclear weapons were developed and came to dominate the possibilities for global violence. The bombed-out cities of the Second World War, and the nuclear holocausts of our imagination, have persuaded rational minds that there can be no expectation of a meaningful victory in total war between states armed with hundreds of deliverable nuclear weapons. From that point of view, war is obsolete, at least direct and open war between great powers. Nanotechnology will carry this evolution to the next step: deterrence will become obsolete, as it will not be possible to maintain a stable armed peace between nanotechnically-armed rivals. The implications of this statement stand in sharp contradiction to the traditions of a warrior culture and to the assumptions that currently guide policy in the United States and in its potential rivals. Nanotech would provide freedom from the necessities of life Drexler 86 (Eric, Nanotechnologist, Engines of Creation, http://www.e-drexler.com/d/06/00/EOC/EOC_Chapter_15.html)
<Nanotechnology

will open new choices. Self-replicating systems will be able to provide food, health care, shelter, and other necessities. They will accomplish this without bureaucracies or large factories. Small, self-sufficient communities can reap the benefits. One test of the freedom a technology offers is whether it frees people to return to primitive ways of life. Modern technology fails this test; molecular technology succeeds. As a test case, imagine returning to a stone-age style of life - not by simply ignoring molecular technology, but while using it. > Nanotech would create a utopia Drexler 86 (Eric, Nanotechnologist, Engines of Creation, http://www.e-drexler.com/d/06/00/EOC/EOC_Chapter_15.html)
<This, then,

is the size of the future's promise. Though limits to growth will remain, we will be able to harvest solar power a trillion times greater than all the power now put to human use. From the resources of our solar system, we will be able to create land area a million times that of Earth. With assemblers, automated engineering, and the resources of space we can rapidly gain wealth of a quantity and quality beyond past dreams. Ultimate limits to lifespan will remain, but cell repair technology will make perfect health and indefinitely long lives possible for everyone. These advances will bring new engines of destruction, but they will also make possible active shields and arms control systems able to stabilize peace. In short, we have a chance at a future with room enough for many worlds and many choices, and with time enough to explore them. A tamed technology can stretch our limits, making the shape of technology pinch the shape of humanity less. In an open future of wealth, room, and diversity, groups will be free to form almost any society they wish, free to fail or set a shining example for the world. Unless your dreams demand that you dominate everyone else, chances are that other people will wish to share them. If so, then you and those others may choose to get together to shape a new world. If a promising start fails - if it solves too many problems or too few - then you will be able to try again. Our problem today is not to plan or build utopias but to seek a chance to try.>

Nanotech solves all disease Spence 99 [Bill, 12/99, Alcor Life Extension Foundation, http://www.stii.dost.gov.ph/infoscience/jun2001/jun01_6.htm]
<Supermedicine Nanotechnology can have innumerable applications in various fields including industry, agriculture, energy, ecology, and health, among others. Supermedicine, including cryonics (roughly, freezing the terminally ill with the intention of reviving them in the future when nanotechnology already has the cure for the illness) excites more people now than anything else. Building on the atomic scale, mechanical computers with the

power of a mainframe can be manufactured so small that several hundred will fit inside the space of a biological cell. If one combines microscopic motors, gears, bearings, plates, sensors, power and communication cables, etc. with powerful microscopic computers, one has the makings of a new class of materials, programmable microscopic smart materials that can be used in medicine. Medical nanites can patrol the body, armed with a complete knowledge of a person's DNA, and dispatch any foreign invaders. Such cell sentinels will form an artificial immune system and immunity to not only the common cold, but also AIDS and any future viral or bacterial mutations. The nanites can do what the plastic surgeon does, only better. No pain, no bruising, and results overnight. People can sculpt their own bodies.
People who feel they were born with the wrong gender can really make the change, taking on the full attributes of the opposite sex. Men can bear children. Imagine having one's body and bones woven with invisible diamond fabric. Simple calculations show that a properly

engineered body reinforcement net (possibly bio-diamond composite) woven with nanites smaller than a human cell can increase tolerance to "G" forces to the point that one can fall out of a building and walk away unhurt and alive. In the event
of a fire or a chemical spill, should the air become toxic, microscopic diamond vessels just ten billionths of a meter wide, pressurized with 1,000 atmospheres of pure oxygen could sense oxygen levels in the blood and provide respiration requirements of the body for hours. Fatal accidents can be walked away from, thanks to a range of safety devices possible only with nanotechnology. Even more astounding, nano computers and manipulators will be small enough to insert into cells, without compromising cellular function, and perform a myriad of novel functions. One particularly interesting function is to take an inventory of the host cell's structures using the cell's DNA as a blueprint. Should a foreign nasty element arrive, something outside the inventory as stated by the cell's DNA, the cell sentinel will destroy the invader before it has time to cause damage. The nano computer will not need to know what disease the invader represents, it will not matter. If it is not included in the DNA code, it is destroyed. Viruses, prions (microscopic protein particles similar to viruses but lacking in nucleic acid), parasites and bacteria continue to mutate and produce new diseases which man's immune system may or may not handle. In theory, a nano cell sentinel can make the body immune to any present or future infectious diseases. Imagine a

child growing up disease-free. There will be no more painful childbirth. With mature nanotechnology capable of cellular
manipulation, there is no reason a woman should experience agonizing hours of labor at the miraculous moment of birth. Dilation can be controlled by the mother without pain. Birth will no longer be traumatic with nanotechnology. There will be nanites for cellular structural repairs (radiation damage, etc.) and genetic modifications, like disabling biological death gene clocks. >

Nanotech solves everything Joy 2k [Bill, co-founder of Sun Microsystems, April, Why the Future Doesn't Need Us, Wired Magazine]
<A subsequent book, Unbounding the Future: The Nanotechnology Revolution, which Drexler cowrote, imagines some of the changes that might take place in a world where we had molecular-level "assemblers." Assemblers could make possible incredibly low-cost solar power,

cures for cancer and the common cold by augmentation of the human immune system, essentially complete cleanup of the environment, incredibly inexpensive pocket supercomputers - in fact, any product would be manufacturable by assemblers at a cost no greater than that of wood - spaceflight more accessible than transoceanic travel today, and restoration of extinct species. I remember feeling good about nanotechnology after reading Engines of Creation. As a technologist, it gave me a sense of calm - that is, nanotechnology showed us that incredible progress was possible, and indeed perhaps inevitable. If nanotechnology was our future, then I didn't feel pressed to solve so many problems in the present. I would get to
Drexler's utopian future in due time; I might as well enjoy life more in the here and now. It didn't make sense, given his vision, to stay up all night, all the time.>

We have to develop nanotech - stopping now would just allow bad people to get it - on balance allowing free technology advancement is better than stifling it R21 2 [May 20, http://www.r21online.com/archives/000007.html]
<Nanotech

packs something for everyone to worry about. Yet, while Joy's concerns are worth considering, and his call for discussion appropriate, his quest for relinquishment is not. The opinions of brilliant technologists notwithstanding, it is not at all clear that we are headed for Armageddon. And if we are, the path to dystopia has always been at the hands of humans bent on control, using technology as their tool, not the other way around. Joy's solution requires a global unanimity impractical today and very likely impossible ever. Even if "good" scientists held back from certain areas of research, this may simply put that knowledge exclusively in the hands of the "bad" scientists who may use it for destruction, with no one capable of countering them. Knowledge among the virtuous may be the only thing to protect us from the wicked. There is the chance that the evil potential of nanotech is unleashed unwittingly by those with good intentions. But given the amount of introspection on this topic (The Foresight Institute, for example, has developed a list of self-governing principles in the development of nanotechnology), it is far more likely that

it will be evil that will beget evil. With that being the case, who knows what virtues nanotech could bring to the world in the intervening decades before it can become a danger? Joy seeks to determine for the rest of us that the costs of this technology may outweigh the benefits, but it is simply impossible for one individual to tell whether the reverse may be the case. Despite the horror stories, the development of technologies, though not without costs, has overwhelmingly proved a net positive force for the state of humanity. While human suffering continues, as a global society we live longer, healthier, and wealthier lives, thanks not exclusively, but in great part, to innovation. Thomas Malthus was wrong, and while
Joy or some other neo-Malthusian may be right in the future, history suggests that the surer path to dystopia is not to pursue knowledge, but to ban it.>

Nanotech solves world hunger, new cures, and the environment Chen in 2002 [Andrew, The Ethics of Nanotechnology, http://www.actionbioscience.org/newfrontiers/chen.html] It would not take much of a leap, then, to imagine disassemblers dismantling garbage to be recycled at the molecular level, and then given to assemblers for them to build atomically perfect engines. Stretching this vision a bit, you can imagine a Star Trek type replicator which could reassemble matter in the form of a juicy steak, given the correct blueprints and organization of these nanomachines. Nanotechnology could also benefit medicine and the environment.
Just given the basic premises of nanotechnology, you can imagine the vast potential of this technology. Some of its more prominent benefits would be:
Manufacturing > Precision Manufacturing > Material Reuse > Miniaturization
Medicine > Pharmaceutical Creation > Disease Treatment > Nanomachine-assisted Surgery
Environment > Toxin Cleanup > Recycling > Resource Consumption Reduction
Doctors could repair our bodies

microscopically with nanomachines. Along with all the obvious manufacturing benefits, there are also many potential medical and environmental benefits. With nanomachines, we could better design and synthesize pharmaceuticals; we could directly treat diseased cells like cancer; we could better monitor the life signs of a patient; or we could use nanomachines to make microscopic repairs in hard-to-operate-on areas of the body.3,12 With regard to the environment, we could use nanomachines to clean up toxins or oil spills, recycle all garbage, and eliminate landfills, thus reducing our natural resource consumption. Nanotechnology solves death and illnesses Zey in 94 (Michael, Ph.D. in sociology, executive director of the Expansionary Institute, Seizing the Future, p. 147) Nanotechnology will also play a major role in postoperative healing. Quite simply, repair machines will help the heart grow fresh muscle by resetting cellular control mechanisms. Stroke victims will be helped to regenerate fresh brain tissue even
where there has been significant damage. The ultimate goal here is not merely

to cure disease, but rather to establish lifetime health, perhaps even immortality. This will transpire when we have achieved a complete understanding of the molecular structure of healthy tissue. Then we will have the knowledge to diagram, as it were, the structure of a healthy heart cell or a healthy liver and transfer to tiny
machines the accurate information about that organs molecules, cells, and tissues.

A2 Ocean Acidification
Acidification won't kill ocean life Idso et al 8 [Keith and Craig Idso - Research Physicist with the U.S. Department of Agriculture's Agricultural Research Service, Vice President of the
Center for the Study of Carbon Dioxide and Global Change with a PhD in Botany, former Director of Environmental Science at Peabody Energy in St. Louis, Missouri and is a member of the American Association for the Advancement of Science, American Geophysical Union, American Meteorological Society, Arizona-Nevada Academy of Sciences, Association of American Geographers, Ecological Society of America, and The Honor Society of Phi Kappa Phi. Ocean Acidification and Jellyfish Abundance - Volume 11, Number 48: 26 November 2008, http://www.co2science.org/articles/V11/N48/EDIT.php, Chetan] In a paper recently published in Limnology and Oceanography, Richardson

and Gibbons (2008) say there has been a drop of 0.1 pH unit in the global ocean since the start of the Industrial Revolution, and that "such acidification of the ocean may make calcification more difficult for calcareous organisms," resulting in the "opening [of] ecological space for non-calcifying species." In line with this thinking, they report

that Attrill et al. (2007) have argued that "jellyfish may take advantage of the vacant niches made available by the negative effects of acidification on calcifying plankton," causing jellyfish to become more abundant; and they note that the latter researchers provided some evidence for this effect in the west-central North Sea over the period 1971-1995. Hence, they undertook a study to see if Attrill et al.'s findings (which were claimed to be the first of their kind) could be replicated on a much larger scale. Working with data from a larger portion of the North Sea, as well as throughout most of the much vaster Northeast Atlantic Ocean, Richardson and Gibbons used coelenterate (jellyfish) records from the Continuous Plankton Recorder (CPR)

and pH data from the International Council for the Exploration of the Sea (ICES) for the period 1946-2003 to explore the possibility of a relationship between jellyfish abundance and acidic ocean conditions. This work revealed that there were, as they describe it, "no significant relationships between jellyfish abundance and acidic conditions in any of the regions investigated." In harmony with their findings, the two researchers note that "no observed declines in the abundance of calcifiers with lowering pH have yet been reported." In addition, they write that the "larvae of sea urchins form skeletal parts comprising magnesium-bearing calcite, which is 30 times more soluble than calcite without magnesium," and, therefore, that "lower ocean pH should drastically inhibit [our italics] the formation of these soluble calcite precursors." Yet they report "there is no observable negative effect of pH." In fact, they say that echinoderm larvae in the North Sea have actually exhibited "a 10-fold increase [our italics] in recent times," which they say has been "linked predominantly to warming (Kirby et al., 2007)." Likewise, they further note that even in the most recent IPCC report, "there was no empirical evidence reported for the effect of acidification on marine biological systems (Rosenzweig et al., 2007)," in spite of all the concern that has been raised by climate alarmists claiming that such is, or should be, occurring. In light of this body of real-world evidence, or non-evidence, Richardson and Gibbons conclude (rather generously, we might add) that "the role of pH in structuring zooplankton communities in the North Sea and further afield at present is tenuous." Coral is resilient and can adapt Idso et al 8 [Keith and Craig Idso - Research Physicist with the U.S. Department of Agriculture's Agricultural Research Service, Vice President of the

Center for the Study of Carbon Dioxide and Global Change with a PhD in Botany, former Director of Environmental Science at Peabody Energy in St. Louis, Missouri and is a member of the American Association for the Advancement of Science, American Geophysical Union, American Meteorological Society, Arizona-Nevada Academy of Sciences, Association of American Geographers, Ecological Society of America, and The Honor Society of Phi Kappa Phi. High-Temperature Tolerance in Corals - Vol. 11, No. 39: September 2008, http://www.co2science.org/articles/V11/N39/EDIT.php, Chetan]

As for the significance of these and other observations, the Australian scientists say that "the range in bleaching tolerances among corals inhabiting different thermal realms suggests that at least some coral symbioses have the ability to adapt to much higher temperatures than they currently experience in the central Great Barrier Reef," citing the work of Coles and Brown (2003) and Riegl (1999, 2002). In addition, they note that "even within reefs there is a significant variability in bleaching susceptibility for many species (Edmunds, 1994; Marshall and Baird, 2000), suggesting some potential for a shift in thermal tolerance based on selective mortality (Glynn et al., 2001; Jimenez et al., 2001) and local population growth alone." Above and beyond that, however, they say that their results additionally suggest "a capacity for acclimatization or adaptation." In concluding their paper, Maynard et al. say "there is emerging evidence of high genetic structure within coral species (Ayre and Hughes, 2004)," which suggests, in their words, that "the capacity for adaptation could be greater than is currently recognized." Indeed, as we note in our Editorial of 20 February 2008, quoting Skelly et al. (2007), "on the basis of the present knowledge of genetic variation in performance traits and species' capacity for evolutionary response, it can be concluded that evolutionary change will often occur concomitantly with changes in climate as well as other environmental changes." Consequently, it can be appreciated that if global warming were to start up again (it has been in abeyance for about the last decade), it need not spell the end for earth's highly adaptable corals.

The threat of ocean acidification is greatly exaggerated - overwhelming scientific proof

Ridley 10 [Matt Ridley was educated at Eton College from 1970-1975 and then went on to Magdalen College of the University of Oxford and completed
a Bachelor of Arts with first class honours in zoology and then a Doctor of Philosophy in zoology in 1983. Threat From Ocean Acidification Greatly Exaggerated, June 15th, 2010, http://www.thegwpf.org/the-observatory/1106-matt-ridley-threat-from-ocean-acidification-greatlyexaggerated.html, Chetan] Lest my critics still accuse me of cherry-picking studies, let me refer them also to the results of Hendriks et al. (2010, Estuarine, Coastal and Shelf Science 86:157). Far from being a cherry-picked study, this is a massive meta-analysis. The authors observed that 'warnings that ocean acidification is a major threat to marine biodiversity are largely based on the analysis of predicted changes in ocean chemical fields rather than empirical data'. So they constructed a database of 372 studies in which the responses of 44 different marine species to ocean acidification induced by equilibrating seawater with CO2-enriched air had been actually measured. They found that only a minority of studies demonstrated 'significant responses to acidification' and there was no significant mean effect even in these studies. They concluded that the world's marine biota are 'more resistant to ocean acidification than suggested by pessimistic predictions identifying ocean acidification as a major threat to marine biodiversity' and that ocean acidification 'may not be
the widespread problem conjured into the 21st century... Biological processes can provide homeostasis against changes in pH in bulk waters of the range predicted during the 21st century'. This important paper alone contradicts Hoegh-Guldberg's assertion that 'the vast bulk of scientific evidence shows that calcifiers are being heavily impacted already'. Acidification is contradicted by chemistry and empirical data - don't evaluate pessimists Ridley 10 [Matt Ridley was educated at Eton College from 1970-1975 and then went on to Magdalen College of the University of Oxford and completed
a Bachelor of Arts with first class honours in zoology and then a Doctor of Philosophy in zoology in 1983. The Rational Optimist: How Prosperity Evolves 2010 (pg 340-341), Chetan]

Ocean acidification looks suspiciously like a back-up plan by the environmental pressure groups in case the climate fails to warm: another try at condemning fossil fuels. The oceans are alkaline, with an average pH of about 8.1, well above neutral (7). They are also extremely well buffered. Very high carbon dioxide levels could push that number down, perhaps to about 7.95 by 2050... still highly alkaline and still much higher than it was for most of the last 100 million years. Some argue that this tiny downward shift in average alkalinity could make it harder for animals and plants that deposit calcium carbonate in their skeletons to do so. But this flies in the face of chemistry: the reason the acidity is increasing is that the dissolved bicarbonate is increasing too... and increasing the bicarbonate concentration increases the ease with which carbonate can be precipitated out with calcium by creatures that seek to do so. Even with tripled bicarbonate concentrations, corals show a continuing increase in both photosynthesis and calcification. This is confirmed by a rash of empirical studies showing that increased carbonic acid either has no effect or actually increases the growth of calcareous plankton, cuttlefish larvae and coccolithophores. My general optimism is therefore not dented by the undoubted challenge of global warming by carbon dioxide. Even if the world warms as much as the consensus expects, the net harm still looks small alongside the real harm now being done by preventable causes; and if it does warm this much, it will be because more people are rich enough to afford to do something about it. As usual, optimism gets a bad press in this debate. Optimists are dismissed as fools, pessimists as sages, by a media that likes to be spoon-fed on scary press releases. That does not make the optimists right, but the poor track record of pessimists should at least give one pause.
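Because pH is a logarithm, the size of the shift Ridley quotes is easy to check directly. A minimal sketch using only the endpoint values from the card (the buffering chemistry Ridley describes is deliberately not modeled):

# pH is -log10 of hydrogen-ion concentration, so a pH shift maps to a
# multiplicative change in [H+]. Endpoints as quoted in the card: 8.1 -> 7.95.
ph_now, ph_2050 = 8.1, 7.95

h_now = 10 ** -ph_now
h_2050 = 10 ** -ph_2050
increase = h_2050 / h_now - 1   # ~41% more hydrogen ions
print(f"[H+] rises by ~{increase:.0%}; pH {ph_2050} is still above neutral (7).")

A 41% rise in hydrogen-ion concentration sounds dramatic stated that way, yet it leaves the ocean comfortably alkaline - both framings describe the same 0.15-unit shift.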

A2 Oxygen Crisis
Oxygen Crisis impacts are a lie - Tatchell is a dirty and expensive piece of toilet paper Motl 8 (Lubos Motl is a theoretical physicist and assistant professor at Harvard University, August 17, The Reference Frame, The Oxygen Crisis,
http://motls.blogspot.com/2008/08/oxygen-crisis.html) Most mainstream media have abandoned almost all quality control in their science reporting that is now arguably slightly below the image of science as presented in the leading pornographic magazines. The latest extreme example of this observation comes

from a Gentleman called Peter Tatchell, a political campaigner from the left wing of the Green party (a description that probably makes Karl
Marx a staunch conservative in comparison; he's also been denounced by the British Parliament as a "homosexual terrorist" in 1994): The Guardian, China Daily He argues that there exists a more serious crisis than the "CO2 crisis": the oxygen levels are dropping and the human activity has decreased them by 1/3 or 1/2, he

says. Wow. ;-) The reality is, of course, that the oxygen percentage in the atmosphere has been 20.94 or 20.95 percent for thousands of years and probably much longer than that (see the historical graph on page 2 of Dudley 1998 that covers 600 million years). The amount of oxygen in the atmosphere is so huge that the biosphere (and fossil fuels which used to belong to the biosphere as well) is completely unable to change this amount significantly. It may be useful to mention that the oxygen is only 1/5 of the atmosphere and the atmosphere is just 1/1,200,000 of the mass of the Earth. However, the
Earth is damn heavy, 6 x 10^{24} kilograms, so the mass of the oxygen in the atmosphere is something like 10^{18} kilograms - about 150,000 tons per capita. Be sure that we can't burn that much oxygen even if everyone in the world were using a private jet on a daily basis. ;-) There is a simpler way to see that man-made changes to the oxygen levels are trivial and we will look at it now. Estimating the oxygen change For a schoolboy who is not skipping his science classes at the elementary school, it shouldn't be difficult to see why we can't significantly influence the amount of oxygen in the atmosphere. How can he do it? Well, he must realize that virtually all processes related to life and human activity - breathing (by animals and plants) and burning (combustion) - exchange

the atmospheric O2 molecules by CO2 molecules or vice

versa. Sometimes, one needs two O2 molecules and only produces one CO2 molecule but this subtlety won't change our final result significantly. Virtually
all other compounds participating in the relevant chemical reactions are either liquids or solids which is why they don't influence the composition of the atmosphere and we will ignore them. When you realize what the words above mean, you will see that the man-made decrease of oxygen (O2)

is controlled by the increase of carbon dioxide: they're inseparably linked to one another. The human activity has
increased the CO2 concentration from 280 ppm two centuries ago to 385 ppm today (the schoolboy should have seen these elementary numbers during his "CO2 crisis" classes). Because many people don't know what the acronym ppm (parts per million) really means, even if they like to use it, let me tell you that it is the same thing as 0.0001%. So the carbon dioxide went from 0.028% to 0.038%: the difference is 0.01% of the volume of the atmosphere. Because O2 and CO2 molecules occupy the same volume at a given pressure and a given temperature (since pV = NkT), the decrease of O2 should be equal to the increase of CO2 if the molecules were exchanged for one another: the oxygen should drop by 0.01% of the volume of the atmosphere. As we have already mentioned, two oxygen molecules are replaced in typical "combustion" chemical reactions for one carbon dioxide molecule, so the oxygen drop might be 0.02% instead of 0.01%. However, in the long run, there exist other processes besides the combustion-like processes involving CO2 that we have considered - for example processes involving deep ocean sediments - and these processes tend to restore the oxygen levels (as well as the CO2 levels). At any rate, you see that the oxygen level couldn't have decreased by more than 0.01% or so, from 20.95% to 20.94%, which is pretty much exactly what was observed. We needed centuries or millennia to achieve this modest "goal". It is very clear that even if we burned all

It is very clear that even if we burned all forests, plants, animals, and fossil fuels in the world, we couldn't get the oxygen levels below 20% (and maybe not even 20.9%). Evaluating the impact: Does the tiny decrease of oxygen levels change some important things? It doesn't. The most "spectacular" change is that the wildfire risk decreases by something like 0.01%, too (and maybe slightly more), as the oxygen levels drop. Because wildfires are somewhat unpopular and their decrease would be good news, you won't read about it. ;-) At any rate, all these changes are negligible given the tiny change of the O2 levels. Tatchell writes "I am not a scientist, but this seems a reasonable concern." It seems reasonable to whom? To me, worries about the "oxygen crisis" seem to be a ticket for someone to be stored in a mental asylum. The point here is not whether Tatchell is a scientist: he's clearly not. The question is whether he is dangerous enough a weirdo to be isolated from society. We won't be able to change the oxygen level in any significant way. Incidentally, while the overall amount of oxygen in the atmosphere is essentially constant, the amount of oxygen in various parts of organisms varies dramatically. For example, the human body must keep the concentration of this harmful-if-abundant gas around 5% in most organs. Oxygen is not only a corrosive gas but also a metabolic poison in most cellular reactions. Its optimal percentage depends on the life form, which is why the varying percentage of oxygen in amber - a point mentioned by Tatchell - says absolutely nothing about the overall O2 volume. Men have been able to change the overall carbon dioxide (CO2) concentration measurably because it is a trace gas: there was almost none to start with, so it is easy to change its volume by relatively large amounts, proportionally speaking. But oxygen is one of the gases that the Earth's atmosphere has been made out of for 0.5 or even 2.5 billion years. You can't change that. Incidentally, if you care how the oxygen became so important, probably 500 million years ago, the Earth needed an intense period of upheavals in its crust and it still took about 2 million years for all the change to materialize: see Science Daily. This rate is very fast from a geologist's viewpoint but surely not fast enough to be considered an urgent problem for policymakers. ;-) Other errors: Tatchell writes a lot of other incredible nonsense, for example that the oxygen in cities is much (by 15%?) lower than it is in the countryside. He probably believes that the pressure drops from 1000 to 900 millibars in the cities. ;-) He also tries to pretend that some scientists support his idiotic propositions. Gimpy, who respects Tatchell's courage, explains that Tatchell has all the symptoms that define a crank. He satisfies most of my defining criteria of crackpots, too. Is there someone at the Guardian who has some common sense left? Could you please stop printing insane people like Peter Tatchell who help to transform your daily into an expensive and dirty piece of toilet paper?
Experts agree that the oxygen crisis is just scare mongering and that Tatchell's a crank
- more precise measuring techniques are responsible
- oxygen levels are regulated by deep-ocean sediments

- too much O2 would cause rampant forest fires
SPPI 8 (August 18, Science and Public Policy Institute, "Oxygen Scarcity Threatens Humankind", http://scienceandpublicpolicy.org/scarewatch/oxygen_scarcity.html?Itemid=0)
The scare: As the peer-reviewed literature is filled with a growing proportion of learned papers demolishing the imagined consensus that anthropogenic global warming will prove catastrophic, the less serious newspapers are looking for new scares to peddle to the feeble-minded. In mid-August 2008, The Guardian, Britain's silliest newspaper, printed an article by Peter Tatchell suggesting that the world's oxygen is running out because of humankind's use of fossil fuels. [Graph: Atmospheric oxygen trend from Cape Grim, Tasmania.] Tatchell says: "Little or no attention is being paid to the long-term fall in oxygen concentrations and its knock-on effects. Compared to prehistoric times, the level of oxygen in the Earth's atmosphere has declined by over a third and in polluted cities the decline may be more than 50%. Much of this recent, accelerated change is down to human activity, notably the industrial revolution and the burning of fossil fuels. This change in the makeup of the air we breathe has potentially serious implications for our health. Indeed, it could ultimately threaten the survival of human life on earth." The truth: Dr. Roy Spencer, of the University of Alabama at Huntsville, says: "The O2 concentration of the atmosphere has been measured off and on for about 100 years now, and the concentration, at 20.95%, has not varied within the accuracy of the measurements. Only in recent years have more precise measurement techniques been developed, and the tiny decrease in O2 with increasing CO2 has actually been measured. But I believe the O2 concentration is still close to 20.95%. There is so much O2 in the atmosphere, it is believed not to be substantially affected by vegetation, but it is the result of geochemistry in deep-ocean sediments. No one really knows for sure. Since too much O2 is not good for humans, the human body keeps O2 concentrations down to around 5% in our major organs. Extra O2 can give you a burst of energy, but it will harm you (or kill you) if the exposure is too long. It has been estimated that global wildfire risk would increase greatly if O2 concentrations were much more than they are now. To say that there is an impending oxygen crisis on Earth is the epitome of fear-mongering." Meteorologist Anthony Watts, of www.wattsupwiththat.com, adds: "This is the sort of story I would expect in the supermarket tabloids next to a picture of Bat Boy. For the UK Guardian to say there is an oxygen crisis is not only ignorant of the facts, but simple fearmongering riding on the coat-tails of the CO2 crisis. I really wish the media would do a better job of researching and reporting science stories. This example from the Guardian shows how bad science and bad reporting combine to create fearmongering."
Consensus is on our side - experts agree that the oxygen crisis is nonsense
Canada Free Press 8 (August 17, Inventing the Next Phony Green Crisis, http://www.canadafreepress.com/index.php/article/4508)
The August 13 article suggested that there had been a long-term fall in oxygen concentrations around the Earth. The basis for the next great crisis, an Earth with less oxygen, was being tested to see if it had any legs. Dr. Roy Spencer, a NASA scientist, summed up the reaction of his colleagues: "It doesn't get much more stupid than this." Then he provided the real science as opposed to the hodge-podge of nonsense in the Guardian article. The O2 (oxygen) concentration of the atmosphere has been measured off and on for about 100 years now and the concentration (20.95%) has not varied within the accuracy of the measurements. There is SO much O2 in the atmosphere, said Dr. Spencer, it is believed to not be substantially affected by vegetation, but is the result of geochemistry in deep-ocean sediments. No one really knows for sure. The reference to vegetation reflects the way all vegetation takes in CO2 for its growth and gives off O2 in the process. Animals breathe in oxygen and exhale carbon dioxide. It is the symmetry of all life on earth. Based on a forthcoming book, The Oxygen Crisis by Roddy Newman, the alleged loss of oxygen was causing deserts to spread and forests to decline. It posed, of course, a threat to all mankind. The real crisis for the Greens is that they and their allies in the mindless media have run out of ways to frighten huge numbers of people who are more rightly concerned about the price of a gallon of gasoline, crazed Islamic fundamentalists, and the prospect of the Russians starting World War III. With great enthusiasm, Dr. Spencer's many colleagues joined in the discussion and, no doubt, if the idiotic assertion that the Earth is running out of oxygen should pop its head up out of its little Green hole, it will be assailed at great length.

A2 Particle Accelerators
No impact the universe is making the total number of collisions made by the LHC 10 trillion times a second Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory
Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL Before discussing these two hypothetical phenomena in more detail, we

first review in Section 2 estimates of the rates of collisions of high-energy cosmic rays with different astronomical bodies, such as the Earth, Sun, and others. We estimate that the Universe is replicating the total number of collisions to be made by the LHC over 10^13 times per second, and has already done so some 10^31 times since the origin of the Universe. The fact that astronomical bodies withstand cosmic-ray bombardment imposes strong upper limits on many hypothetical sources of danger. In particular, as we discuss in Section 3, neither the creation of vacuum bubbles nor the production of magnetic monopoles at the LHC is a case for concern.
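A back-of-the-envelope cross-check of those two numbers, using the ~3x10^13-per-second rate given in the second Ellis card below and a ~13.7-billion-year universe age (an assumption added here) as the only inputs (Python):

RATE_PER_SECOND = 3e13        # complete "LHC programmes" per second, per LSAG
UNIVERSE_AGE_YEARS = 1.37e10  # assumed: ~13.7 billion years
SECONDS_PER_YEAR = 3.15e7

total_repetitions = RATE_PER_SECOND * UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR
print(f"~{total_repetitions:.0e} repetitions since the Big Bang")  # ~1e31

This reproduces the 10^31 figure quoted in the card.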
No impact - the universe has already conducted trillions of collisions
Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL
Moreover, our Milky Way galaxy contains about 10^11 stars with sizes similar to our Sun, and there are about 10^11 similar galaxies in the visible Universe. Cosmic rays have been hitting all these stars at rates similar to collisions with our own Sun. This means that Nature has already completed about 10^31 LHC experimental programmes since the beginning of the Universe. Moreover, each second, the Universe is continuing to repeat about 3x10^13 complete LHC experiments. There is no indication that any of these previous LHC experiments has ever had any large-scale consequences. The stars in our galaxy and others still exist, and conventional astrophysics can explain all the astrophysical black holes detected.
Particle Accelerators won't cause extinction - physicists agree and empirical evidence proves
Anders Sandberg, Postdoctoral research assistant for the Oxford group of the EU ENHANCE Project at the Uehiro Centre for Practical Ethics and research associate at the Future of Humanity Institute at Oxford University, 2008 (Extinction Risks and Particle Physics: When Are They Worth it?, 3/29, http://www.practicalethicsnews.com/practicalethics/2008/03/extinction-risk.html, Kunal) The Large Hadron Collider, LHC, is the world's biggest particle accelerator and is due to start investigating the structure of matter later this year. Now a lawsuit has been filed in the US calling on the U.S. Department of Energy, Fermilab, the National Science Foundation and CERN to stop preparations for starting the LHC for a reassessment of the safety of the collider. The reason is fears

that the high energy collisions could cause some form of devastating effect threatening the Earth: either the formation of miniature black holes, strangelets that absorb matter to make more strangelets or even a decay of the vacuum state of the universe. Needless to say, physicists are very certain there are no risks. But how certain should we be about safety when there could be a risk to the survival of the human species? The main reason physicists are not worried is that all of the disaster scenarios involve very speculative physics. Current theories do not seem to predict any danger and some disaster cases would require particles that have never been observed despite extensive searches. But this requires our understanding to be accurate, something the experiment itself is about to test. Perhaps the most convincing argument that we are safe is that if particle collisions could collapse planets, why is the moon (or any other heavenly body) still around after billions of years of bombardment that often involve energies far larger than what the LHC ever could produce? The solar system ought to be littered with strange matter and black holes if a measly 14 TeV could cause danger.
On balance, benefits outweigh low risk of accelerators
Anders Sandberg, Postdoctoral research assistant for the Oxford group of the EU ENHANCE Project at the Uehiro Centre for Practical Ethics and research associate at the Future of Humanity Institute at Oxford University, 2008 (Extinction Risks and Particle Physics: When Are They Worth it?, 3/29, http://www.practicalethicsnews.com/practicalethics/2008/03/extinction-risk.html, Kunal) Fortunately, Nick Bostrom and Max Tegmark found a way of estimating the risk that does not suffer this kind of bias. The trick is to consider when the lucky observers appear in the history of the universe. In a risky universe it is very unlikely for an observer to emerge late rather than early, since their planet would be likely to have been devastated. In a safe universe there would be no such bias. Plugging in estimates of when a typical planet forms and how old the Earth is, they could show that the rate of sterilization of planets for any reason is less than one in a billion per year. Good news, and it doesn't even require assuming a particular kind of disaster. However, this might not be enough to calm some people. A

small but finite chance of global destruction seems to be a risk we should not take if we can easily avoid it. Perhaps surprisingly, this may not be true if there are benefits linked to taking the risk. Imagine that there is a one in a billion chance that turning on the LHC destroys mankind, and otherwise makes every human life on average X times

better. When is the expected benefit of taking the risk larger than the value of not taking the risk? It turns out X only needs to be 1.00000000100000008 - a minuscule amount better.
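That threshold can be reconstructed with one line of expected-value algebra; the decision framing below is a sketch added by the editor, with only p = 10^-9 and X taken from the card (LaTeX):

% Value the status quo at 1 per life. Running the collider is a gamble:
% extinction (value 0) with probability p = 10^{-9}, payoff X otherwise.
% Running beats not running when
\[
(1 - p)\,X > 1
\quad\Longleftrightarrow\quad
X > \frac{1}{1 - p} = 1 + p + p^{2} + \cdots \approx 1 + 10^{-9},
\]
% which matches the card's threshold of X ~ 1.000000001.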
No threat of collider impacts CERN 03 (STUDY OF POTENTIALLY DANGEROUS EVENTS DURING HEAVY-ION COLLISIONS AT THE LHC: REPORT OF THE LHC SAFETY
STUDY GROUP," 2/28, http://doc.cern.ch/yellowrep/2003/2003-001/p1.pdf, Kunal)

We review the possibility of producing dangerous objects during heavy-ion collisions at the Large Hadron Collider. We consider all such objects that have been theoretically envisaged, such as negatively charged strangelets, gravitational black holes, and magnetic monopoles. We find no basis for any conceivable threat. No risk of black holes or strangelets
R. L. Jaffe, Professor of Physics at MIT, et al, 2000 (Review of Speculative "Disaster Scenarios" at RHIC, Rev.Mod.Phys. 72, http://arxiv.org/abs/hep-ph/9910333, Kunal) We discuss speculative disaster scenarios inspired by hypothetical new fundamental processes that might occur in high energy relativistic heavy ion collisions. We estimate the parameters relevant to black hole production; we find that they are absurdly small. We show that other accelerator and (especially) cosmic ray environments have already provided far more auspicious opportunities for transition to a new vacuum state, so that existing observations provide stringent bounds. We

discuss in most detail the possibility of producing a dangerous strangelet. We argue that four separate requirements are necessary for this to occur: existence of large stable strangelets, metastability of intermediate size strangelets, negative charge for strangelets along the stability line, and production of intermediate size strangelets in the heavy ion environment. We discuss both theoretical and experimental reasons why each of these appears unlikely; in particular, we know of no plausible suggestion for why the third or especially the fourth might be true. Given minimal physical assumptions the continued existence of the Moon, in the form we know it, despite billions of years of cosmic ray exposure, provides powerful empirical evidence against the possibility of dangerous strangelet production.

A2 Pole Shift
Experts agree - there's no threat from pole shifts - at the very least, reversals take 1000 years; current changes are mild
NASA 3 (Earth's Inconstant Magnetic Field, 2003, NASA, http://science.nasa.gov/science-news/science-at-nasa/2003/29dec_magneticfield/)
December 29, 2003: Every few years, scientist Larry Newitt of the Geological Survey of Canada goes hunting. He grabs his gloves, parka, a fancy compass, hops on a plane and flies out over the Canadian arctic. Not much stirs among the scattered islands and sea ice, but Newitt's prey is there--always moving, shifting, elusive. His quarry is Earth's north magnetic pole. At the moment it's located in northern Canada, about 600 km from the nearest town: Resolute Bay, population 300, where a popular T-shirt reads "Resolute Bay isn't the end of the world, but you can see it from here." Newitt stops there for snacks and supplies--and refuge when the weather gets bad. "Which is often," he says. [Right: The movement of Earth's north magnetic pole across the Canadian arctic, 1831--2001. Credit: Geological Survey of Canada.] Scientists have long known that the magnetic pole moves. James Ross located the pole for the first time in 1831 after an exhausting arctic journey during which his ship got stuck in the ice for four years. No one returned until the next century. In 1904, Roald Amundsen found the pole again and discovered that it had moved--at least 50 km since the days of Ross. The pole kept going during the 20th century, north at an average speed of 10 km per

year, lately accelerating "to 40 km per year," says Newitt. At this rate it will exit North America and reach Siberia in a few decades. Keeping
track of the north magnetic pole is Newitt's job. "We usually go out and check its location once every few years," he says. "We'll have to make more trips now that it is moving so quickly." Earth's magnetic field is changing in other ways, too: Compass needles in Africa, for instance, are drifting about 1 degree per decade. And globally the magnetic field has weakened 10% since the 19th century. When this was mentioned by researchers at a recent meeting of the American Geophysical Union, many newspapers carried the story. A typical headline: "Is

Earth's magnetic field collapsing?" Probably not. As remarkable as these changes sound, "they're mild compared to what Earth's magnetic field has done in the past," says University of California professor Gary Glatzmaier. Sometimes the field completely flips. The north and the south poles swap places. Such reversals, recorded in the magnetism of ancient rocks, are unpredictable. They come at irregular intervals averaging about 300,000 years; the last one was 780,000 years ago. Are we overdue for another? No one knows. According to Glatzmaier, the ongoing 10% decline doesn't mean that a reversal is imminent. "The field is increasing or decreasing all the time," he says. "We know this from studies of the paleomagnetic record." Earth's present-day magnetic field is, in fact, much stronger than normal. The dipole moment, a measure of the intensity of the magnetic field, is now 8 x 10^22 amps x m^2. That's twice the million-year average of 4 x 10^22 amps x m^2. To understand what's happening, says Glatzmaier, we have to take a trip ... to the center of the Earth where the magnetic field is produced. At the heart of our planet lies a solid iron ball, about as hot as the surface of the sun. Researchers call it "the inner core." It's really a world within a world. The inner core is 70% as wide as the moon. It spins at its own rate, as much as 0.2° of longitude per year faster than the Earth above it, and it has its own ocean: a very deep layer of liquid iron known as "the outer core." [Right: a schematic diagram of Earth's interior. The outer core is the source of the geomagnetic field.] Earth's magnetic field comes from this ocean of iron, which is an electrically conducting fluid in constant motion. Sitting atop the hot inner core, the liquid outer core seethes and roils like water in a pan on a hot stove. The outer core also has "hurricanes"--whirlpools powered by the Coriolis forces of Earth's rotation. These complex motions generate our planet's magnetism through a process called the dynamo effect. Using the equations of magnetohydrodynamics, a branch of physics dealing with conducting fluids and magnetic fields, Glatzmaier and colleague Paul Roberts have created a supercomputer model of Earth's interior. Their software heats the inner core, stirs the metallic ocean above it, then calculates the resulting magnetic field. They run their code for hundreds of thousands of simulated years and watch what happens. What they see mimics the real Earth: The magnetic field waxes and wanes, poles drift and, occasionally, flip. Change is normal, they've learned. And no wonder. The source of the field, the outer core, is itself seething, swirling, turbulent. "It's chaotic down

there," notes Glatzmaier. The changes we detect on our planet's surface are a sign of that inner chaos. They've also learned what happens during a magnetic flip. Reversals take a few thousand years to complete , and during that time--contrary to popular belief--the magnetic field does not vanish. "It just gets more complicated," says Glatzmaier. Magnetic lines of force near Earth's surface become twisted and tangled, and
magnetic poles pop up in unaccustomed places. A south magnetic pole might emerge over Africa, for instance, or a north pole over Tahiti. Weird. But it's still a planetary magnetic field, and it still protects us from space radiation and solar storms. Above: Supercomputer models of Earth's magnetic field. On the left is a normal dipolar magnetic field, typical of the long years between polarity reversals. On the right is the sort of complicated magnetic field Earth has during the upheaval of a reversal. [more] And, as a bonus, Tahiti could be a great place to see the Northern Lights. In such a time, Larry Newitt's job would be different. Instead of shivering in Resolute Bay, he could enjoy the warm South Pacific, hopping from island to island, hunting for magnetic poles while auroras danced overhead. Sometimes, maybe, a little change can be a good thing.

Pole shifts are common and have empirically not disturbed animal or plant life
Birk and Konz 4 (June 2, G. T. Birk, Institut für Astronomie und Astrophysik, L. Konz, Universität München, Scheinerstr., Max-Planck-Institute for Plasma Physics, Garching, Germany, Solar wind induced magnetic field around the unmagnetized Earth, Astronomy and Astrophysics international journal, http://www.aanda.org/index.php?option=com_article&access=standard&Itemid=129&url=/articles/aa/abs/2004/23/aagb091/aagb091.html)

Paleomagnetic records show that the magnetism of Earth has reversed itself hundreds of times over the last 400 million years (Valet & Meynardier 1993; Juarez et al. 1998; Gee et al. 2000; Selkin & Tauxe 2000; Valet 2003). In fact, geomagnetic polarity reversals represent the most dynamic feature of the Earth's magnetic field. The polarity reversals do not occur instantaneously. Rather, transition periods that span some thousand years and are characterized by unstable, varying magnetic fields with no clear shape lie between the stable dipole field states. During the transition periods the magnetic field strength can drop well below 10% of the average value (Juarez et al. 1998; Guyodo & Valet 1999; Selkin & Tauxe 2000), which signifies a drastic breakdown of the Earth's dynamo. At the present time the magnetic south pole has wandered over more than 1100 km during the last 200 years. The strength of the Earth's field has decreased by per century. This decrease is by far the fastest that has been verified since the last total field reversal, the end of the so-called Matuyama chron, 730 000 years ago. Also, by statistical estimates the Earth's dynamo is overdue for a reversal. Thus, we have to expect a transition

period characterized by a very weak Earth magnetic field in the near future. Since the Earth's dipole field provides us with a

shield against cosmic rays and solar high-energy radiation, one may wonder about the consequences for life on Earth. Also, having in mind that Mars lost its atmosphere almost completely after the final breakdown of the magnetic field (Luhmann & Bauer 1992), one may speculate that stripping by the solar wind could alter the Earth's atmosphere. Severe climate changes could result. Interestingly enough, during the last Brunhes-Matuyama reversal, no major changes in plant and animal life have been detected. This may be partly due to the fact that the atmospheric layers block some fraction of the cosmic radiation by scattering. On the other hand, the cosmogenic radionuclide production varies at least over the last 200 000 years as a function of short-term variations of the magnetic field (Frank 2000). In this contribution, we consider the interaction of the solar wind with a completely unmagnetized Earth. When the solar wind encounters unmagnetized objects, such as Venus (Luhmann 1995) and comets (Konz et al. 2004), magnetic barriers and ionopauses develop. Although the interaction of a fully ionized and a weakly ionized gas is very complex, an important characteristic can be identified - the generation of magnetic fields caused by relative plasma-neutral gas shear flows. It has been shown (Huba & Fedder 1993) that this process operates in Venus' ionosphere and is responsible for the non-dipole magnetic field measured there. The same process has been studied in detail for the generation of seed magnetic fields in emerging galaxies (Wiechen et al. 1998; Birk et al. 2002) and in circumstellar disks (Birk et al. 2003). A kinematic estimate indicates that relatively strong magnetic fields are generated in the Earth's ionosphere. So far, the only way to study the dynamical non-linear interaction of the magnetized fully ionized solar wind plasma and the partially ionized Earth's ionosphere is three-dimensional plasma-neutral gas simulations. Our numerical studies show the draping of the magnetic field of the solar wind and the self-generation of a relatively strong magnetic field in the Earth's ionosphere. We studied the interaction of the magnetized fully ionized solar wind plasma with the unmagnetized partially ionized Earth's ionosphere. When the solar wind hits the Earth the magnetic field lines carried along with it are draped around the planet. What is more important, the relative motion between the solar wind plasma and the ionosphere results in the self-generation of magnetic fields in the ionospheric layer. The strengths of these fields depend on the shear length of the relative flows, which, in contrast to the other relevant physical parameters, is not well known. For a reasonable shear length of L=10 km the maximum strength of the newly generated magnetic field is comparable to the one of the present dipole field. Consequently, even in the case of a complete breakdown of the Earth's dynamo,

the biosphere is still shielded against cosmic rays, in particular those coming from the sun, by the magnetic field induced by the solar wind.
Magnetic pole shifts wouldn't create a doomsday scenario
International Business Times 11 (January 7, What is magnetic polar shift? Does it warrant doomsday talk?,
http://www.ibtimes.com/articles/98604/20110107/polar-reversal-magnetic-polar-shift-north-pole-south-pole-earth-magnetic-pole-2012-mayancalendar-ap.htm)

According to him the pole could move to Siberia within the next 50 years, and he added that the movement of the pole was accelerating. Now, is this a worrying phenomenon worth holding one's breath and waiting for doomsday signs? No, according to scientists. The National Geographic article explains: "The north magnetic pole shifts constantly, in loops up to 80 kilometers (50 miles) wide each day. The recorded location of the pole is really an average of its daily treks, which are driven by fluctuations in solar radiation." But the Tampa airport runway closure report ominously set in motion another round of doomsday talk as people shared anxiety on internet discussion forums about the impending apocalypse in 2012 as predicted by the Mayan calendar. On YouTube, people linked it with the Mayan calendar prophecies and said they would stock up on provisions, medicines and other stuff. Obviously people have associated the northern pole's magnetic shift with the putatively catastrophic phenomenon of polar reversal, which can result in calamities such as floods and tectonic events. Here's what Wikipedia says about the implications of a polar shift: "The cataclysmic pole shift hypothesis is the conjecture that there have been rapid shifts in the relative positions of the modern-day geographic locations of the poles and the axis of rotation of a planet. For the Earth, such a dynamic change could create calamities such as floods and tectonic events. This type of event would occur if the physical poles had been or would be suddenly shifted with respect to the underlying surface over a geologically short time frame. This hypothesis is almost always discussed in the context of Earth, but other bodies in the Solar System may have experienced axial reorientation during their existences. "... in what is known as true polar wander, the solid Earth can rotate with respect to a fixed spin axis. Research shows that during the last 200 million years a total true polar wander of some 30° has occurred, but that no super-rapid shifts in the Earth's pole were found during this period. A characteristic rate of true polar wander is 1° per million years or less. Between approximately 790 and 810 million years ago, when the supercontinent Rodinia existed, two geologically-rapid phases of true polar wander may have occurred. In each of these, the Earth rotated ~55°," says Wikipedia. This pole shift hypothesis should not be confused with
geomagnetic reversal, which is a periodic reversal of the Earth's magnetic field, it says. 2012 TO WITNESS CATASTROPHIC POLAR REVERSAL? However, not everyone is willing to buy the popular scientific wisdom regarding true polar shift. Researcher and author Patrick Geryl is of the opinion that the next (physical) polar reversal will take place in 2012. He says the North Pole will be changed into the South Pole and the earth will start rotating in the opposite direction, causing calamities of unknown proportions. He paints the horror in an article on survive2012.com: "...life after a polar reversal is nothing but horror, pure unimaginable horror. All securities you presently have at hand, like - amongst others - food, transport, and medicines, will have disappeared in one big blow, dissolved into nothingness. As will our complete civilization. It cannot be more horrifying than this; worse than the worst nightmare. More destructive than a nuclear war in which the entire global arsenal of nuclear weapons has been deployed in one blow." Geryl postulates that a phenomenal change in our sun which results in a chaotic outburst that releases immense clouds of plasma into space can impact the earth. What discredits Geryl is that there is no way to predict such an outburst, but even if it would happen it would only affect the

magnetic poles, which would not cause the doomsday scenario he describes.
Indict of Terrence Aym Article: Aym's article is baloney
Plait 11 (February 9, Phil has a PhD in Astronomy from UVA, No, a pole shift won't cause global superstorms, Discover Magazine, http://blogs.discovermagazine.com/badastronomy/2011/02/09/no-a-pole-shift-wont-cause-global-superstorms/)
So the latest doomsday fearmongering I'm hearing about is global superstorms caused

by dangerous shifts in the Earth's magnetic field. Maybe you've heard: the Earth's magnetic field is wandering around, and may be about to reverse. When this happens, incoming radiation will affect our weather, causing gigantic storms the likes of which have never been seen except in Hollywood movies. Panic! Death! Higher gas prices! Cats and dogs, living together! Yeah, right. I'll be up front right away: this claim is baloney. Garbage. Nonsense. The article in question is pretty long, and as usual debunking something takes more time and effort than it does to simply say wrong things. So for the TL;DR (too long; didn't read) crowd: the article makes basic science errors, attempts to link totally unrelated phenomena, states things as facts that are pure conjecture, and generally gets almost everything wrong. Bottom line: his claim of a link between the Earth's magnetic field and superstorms is totally wrong. OK, so you want details? I got details. The source: As far as I can tell, the source for this silly claim is an article titled "Magnetic Polar Shifts Causing Massive Global Superstorms", first seen online at helium.com, but also reprinted widely (I'm getting lots of emails from people who read it at the Oregon Salem-Online site). The author, Terrence Aym, wrote at least one breathlessly overblown and grossly inaccurate doomsday article without doing the necessary basic research; that one was about Apophis hitting the Earth in 2036 and you know how I feel about that sort of thing. This one is more of the same. Aym makes scientific claims that are completely unfounded in reality, and sometimes says things that are simply dead wrong. For example, some of the basic science Aym claims is way off:
"Worse, what shields the planet from cancer-causing radiation is the magnetic field. It acts as a shield deflecting harmful ultra-violet, X-rays and other life-threatening radiation from bathing the surface of the Earth. With the field weakening and cracks emerging, the death rate from cancer could skyrocket and mutations of DNA can become rampant."
Bzzzzt! Nope.

The idea that magnetic pole reversal would be dangerous is completely false and scientifically unfounded
Plait 11 (February 9, Phil has a PhD in Astronomy from UVA, No, a pole shift won't cause global superstorms, Discover Magazine, http://blogs.discovermagazine.com/badastronomy/2011/02/09/no-a-pole-shift-wont-cause-global-superstorms/)

The Earth's magnetic field protects us from charged particles like fast electrons and protons in the solar wind. If we didn't have a magnetic field the Earth's air would stop these particles anyway. The radiation he's talking about -- UV and X-rays -- is totally unaffected by magnetic fields. That type of radiation is also absorbed by the air (including the ozone layer). Ironically, I will note that without the magnetic field protecting us, subatomic particles in the solar wind could erode the ozone layer, causing an increase in skin cancer rates from UV, but Aym doesn't say anything about the ozone layer. And it takes X-rays to affect DNA [UPDATE: I've been made aware that some forms of UV light can affect DNA], which can't get through our air no matter what. So that last statement of his is still wrong. When something as basic as that is wrong in an article, it should make you at least a little suspicious about bigger claims. As well it should. But perhaps it's an honest mistake. We all make 'em, right? But then he says this: "Magnetic polar shifts have occurred many times in Earth's history. It's happening again now to every planet in the solar system including Earth." Um. No it's not. Venus doesn't even have a global magnetic field, for example, and there's no indication I could find that any of the other planets have fields that are "shifting". Aym doesn't give a reference for that comment, so there's no way to know where he got it from, or if he just made it up. And here's another bit: "Forget about global warming--manmade or natural--what drives planetary weather patterns is the climate and what drives the climate is the sun's magnetosphere and its electromagnetic interaction with a planet's own magnetic field." This is another very confused passage. Of course global warming affects the climate: that's why we're so worried about it. His claim that the Sun's magnetosphere drives the climate is also not true. What really drives the climate are several factors, including the Sun's energy output (light and heat), how much the Earth absorbs that energy, the Earth's rotation, its atmosphere content, the shape of its orbit, and so on. Most of these factors are stable (or don't change much over time), which is why our Earth is so hospitable to life. However, some factors do change, which is worrisome. For example, dumping so much carbon dioxide into the air is changing how much heat we absorb from the Sun, and that's why so many scientists are concerned about global warming. But what about magnetism? Magnetic repulsion: The very basic premise of Aym's claims is that the changing magnetic field of the Earth and Sun (he's not clear which, actually, switching between the two in the article) can change our climate. There may be a modicum of truth in this idea; over long periods of time it's possible the magnetic field may have some effect on climate. The evidence isn't at all clear. However, Aym takes this and runs with it, linking short-term magnetic field changes with huge storms across the planet -- essentially hyperinflating a real idea into nonsense. Here's the deal: incoming galactic cosmic rays (GCRs; very zippy subatomic particles) hit the Earth's atmosphere. They can seed clouds, changing rainfall patterns. They also create an isotope of oxygen at the same time which washes down in that rain. As it happens, cave formations called speleothems are sensitive to rainfall amounts, so by measuring the amount of the isotope in these formations, you can see if there is any correlation between that isotope and the amount of rainfall. Since the magnetic field of the Sun (and Earth) protects us from GCRs, then maybe changes in that magnetic field can affect rainfall. Scientists have looked into this, and what did they find? "a relatively good correlation between the high-resolution speleothem δ18O records and the dipole moment, suggesting that Earth's magnetic field to some degree influenced low-latitude precipitation in the past." That's pretty interesting, I'll admit. The correlation isn't super-strong, but could be there. This idea of cosmic rays seeding clouds and affecting climate has been around for a while (I researched it pretty thoroughly for my book Death from the Skies!), and really, at best the data are interesting and suggest a possible correlation, but it's impossible to say if there is any definite connection. It's just too weak to be sure. All well and good, and an excellent starting point for more research to try to nail this down -- unless you read Aym's article. Instead of quoting the actual paper, as I just did, he instead quotes from a website called ViewZone which is chock full of antiscience nonsense about astral projection, alien abductions, and 2012 doomsday: "The earth's climate has been significantly affected by the planet's magnetic field, according to a Danish study published Monday that could challenge the notion that human emissions are responsible for global warming. 'Our results show a strong correlation between the strength of the earth's magnetic field and the amount of precipitation in the tropics,' one of the two Danish geophysicists behind the study, Mads Faurschou Knudsen of the geology department at Aarhus University in western Denmark, told the Videnskab journal." Perhaps the author of the article did in fact say that; if he did then that's not what the actual scientific study appears to say in the journal -- there's a big difference between a "relatively good" correlation and a "strong" one. But that statement is still confused and confusing. This has nothing to do with man-made global warming for one. It's certainly possible and even likely climate change has several sources, but we know human emissions are a huge cause of it (despite what denialists claim). Even if cosmic rays were affecting us -- and that's not at all clear -- then it would probably be minor compared to what we ourselves are doing. If it were that big a source of warming the data would be a lot more clear. And even if this claim is totally true, it has nothing to do with superstorms, just with low-latitude rainfall. But Aym doesn't stop there. In another quote he makes an unfounded claim piled on top of a distortion of a NASA finding: "Recently, as the [Earth's] magnetic field fluctuates, NASA has discovered 'cracks' in it. This is worrisome as it significantly affects the ionosphere, troposphere wind patterns, and atmospheric moisture. All three things have an effect on the weather." These "cracks" are real, as described by NASA. The thing is, though, they happen all the time and have been happening throughout history. They are not new, and are unrelated to magnetic fluctuations; the cause is actually described in the NASA article:
sometimes the solar wind has an opposite magnetic polarity as the Earth's magnetic field, allowing them to interact more strongly and form these cracks. So in that sentence Aym is misleading about the cracks and the Earth's magnetic field, and then asserts flatly they affect weather, when no such connection is apparent. Pole dancing: In the article, Aym claims the north magnetic pole of the Earth wanders, and this motion has sped up recently. He again ties this to the creation of superstorms. Yeah, well, not so much. The Earth's magnetic field is roughly like that of a bar magnet, with two opposite poles. The field is generated in the Earth's core as it spins, creating a dynamo, a self-sustaining reaction. The magnetic poles don't line up with the Earth's physical poles (its spin poles if you like), and they also wander. That's because the field is generated by the liquid interior of the Earth, and so the field is not static. It changes. The poles can wander, too. As it happens, the north magnetic pole does appear to be moving faster than it used to; it averaged about 9 km/year of motion before 1970 and is now moving at about 40 km/yr (note that in the article, Aym says it's 40 miles/year, but that may just be a typo on his part). Is that a concern? Not really. It moves a lot, and just because it's speeding up now doesn't mean doom and catastrophe (I imagine if it always moved quickly and suddenly slowed down, doomsayers would use that as evidence too). Note that the south magnetic pole hasn't sped up, indicating this acceleration of the north pole is probably a temporary anomaly. In any case, why would this motion be of concern? Beats me. Aym never really says why this is bad, or provides any evidence that it might be a problem. He just says it is. Doomsday promoters also talk a lot about the poles flipping, literally with the two magnetic poles flipping orientation. This is a real event, and happens on geologic timescales. To be honest we don't know what will happen when the poles reverse, but there is no evidence it will affect our weather. And anyway, it probably won't happen for millennia in any case. Weather or not: But what about all the severe weather we've had lately, including the ginormous storms that have swept across the United States, and the cyclone Yasi that just hit Australia? Well, those actually happen pretty often. It turns out cooler-than-average temperatures in the Pacific Ocean surface helped fuel and target the American snowstorms, while warmer waters near Australia fed the typhoon. That happens cyclically in the oceans -- it's the El Niño and La Niña cycle -- and aren't terribly surprising. The past couple of snowstorms have been pretty big, but there's no reason whatsoever to link them to any magnetic issues. After all, the pole has been wandering for decades, so why would this happen now? And the Sun's activity is still pretty low right now; so again there's no reason to connect weather with the Sun. It's hard enough to figure out how the Sun influences our climate over millennia, let alone season to season. Be very wary indeed of anyone claiming such an easy-to-spot connection (that's also been missed by hundreds of scientists who have devoted their lives to such things). Ad nauseam: The article goes on and on, piling up one distortion of science on top of another. And mind you, even if you slog through the entire article there's one thing that becomes clear: Aym never makes a good connection between the magnetic field and superstorms! He says there's one, but the best he can do is tenuously connect the magnetic field to climate in general and some weather like rainfall.
But superstorms? Nothing. Toward the end of the article he even links to a scientific article he claims makes a connection between the magnetic field and superstorms -- but the article is actually talking about magnetic storms, not weather storms! Yeah. Oops. Conclusion: So what's the takeaway from all this? Well, the big one is that breathless doomsday articles are generally hugely misleading, if not outright wrong. This one is certainly wrong. Big claims with shaky evidence, exaggerated conclusions, an apparent misunderstanding of basic science, and lots of supposition stated as fact -- all this points to the conclusion that this article distorts reality beyond recognition. Sadly, it's not the first, nor will it be the last. I already have at least two more such articles on my radar and I know there will never be an end to them. Until doomsday really does come, of course. But don't expect those guys to get it right if and when it does.

A2 Proton Disintegration Weapons
The weapons fail - the neutrino-positron pairing necessary for escalation is too rare
American Physical Society 99 (APS, The Disintegration of High Energy Protons, http://prola.aps.org/abstract/PR/v51/i12/p1037_1)
The coupling between light and heavy particles assumed in the Fermi theory of β-decay makes it possible for high energy protons in passing through matter to transfer a considerable fraction of their energy to electrons and neutrinos. If we suppose that this coupling is a maximum for relative energies of the light and heavy particles of the order ħc/R, with R the range of nuclear forces, and is small for much higher relative energies, the most important process which occurs, for sufficiently energetic protons, can be pictured as a sort of photodisintegration of the proton by the contracted Coulomb field of a passing nucleus, the proton changing into a neutron and emitting a positron and a neutrino. With a coupling of the type described, and of the magnitude required by the proton-neutron forces, processes involving more than one pair of light particles will be relatively rare. The cross section for the disintegration of a proton of energy E is found to be of the order g^2 (ħ/Mc) R Z^2 ln^2(E/Mc^2), with g the coupling constant, and is very small, even for heavy nuclei. The mean energy given to the positron per disintegration is of the order g^2 (ħc/R)(E/Mc^2)/ln(E/Mc^2). The positrons emitted in these disintegrations can account in order of magnitude for the incidence of showers observed under thick absorbers.

We can't disintegrate protons artificially - the lifetime is too long
Dursely 99 (Physics major, Towards Grand Unification, http://molaire1.club.fr/e_unification.html)
Because this theory groups together 5 particles (the electron, the neutrino and the d antiquarks of each colour) in a fundamental quintuplet. The other particles would be grouped in a decuplet. The symmetry of the GUT would permit the invariance of nature under the permutation of a lepton (electron, neutrino...) with a quark: to put it plainly, leptons and quarks of the quintuplet would be transformable one into another, and these transitions could be made possible by the intermediary of new bosons called leptoquarks. These leptoquarks would then be bosons carrying a colour charge and a fractional electric charge. This theory would permit explanation of the troubling fact that the value of the negative electric charge (Q = -1) of an electron corresponds to the same positive value (Q = +1) of the proton. This theory predicts an appalling event: the proton, symbol of the stability of matter, should have a limited lifetime! This lifetime would be 10^31 years; knowing that the Universe was born around 10^10 years ago, there is still some spare time, phew! Enormous swimming-pool proton disintegration detectors have therefore been constructed: a proton emits two photons (γ) and a positron (e+) when it disintegrates; the positron emits a blue luminous cone in water (the Cerenkov effect, for the connoisseurs) which photomultipliers can detect. Alas, for the moment, no positive results have been announced, which renders this GUT theory much less solid than the electroweak theory.
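To see why the detectors must be enormous, here is a rough decay-rate estimate; the 1,000-tonne tank size and the hydrogen-only proton count are illustrative assumptions, with only the 10^31-year lifetime taken from the card (Python):

AVOGADRO = 6.02e23
PROTON_LIFETIME_YEARS = 1e31   # from the card
TANK_KG = 1e6                  # assumed: a 1,000-tonne water tank

mol_h2o_per_kg = 1000.0 / 18.0                           # ~55.6 mol H2O per kg
free_protons = TANK_KG * mol_h2o_per_kg * 2 * AVOGADRO   # 2 hydrogen nuclei each
decays_per_year = free_protons / PROTON_LIFETIME_YEARS

print(f"{free_protons:.1e} protons -> ~{decays_per_year:.0f} candidate decays/year")

Even ~7x10^31 protons yield only a handful of candidate events per year, which is why null results accumulate so slowly.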

The weapon has already been used with no impact, and future development is too expensive Global News Wire 1 (Russian TV Examines Latest Cutting-Edge Weaponry, Lexis) It took a year for a joint U.S.-U.K. team of 50 to set up one shot. Creating the proton beam requires the electrical power needed to light a small town, if only for a few moments. LANL's cost alone was about $600,000 excluding the expense of the beam. At that rate, a full 100-minute feature would cost $36 trillion.

A2 Red Giant
1. We can prevent the sun from becoming a red giant - technology can manipulate chemical imbalances in the sun Beech 08 [Dr. Martin Beech is an associate professor of astronomy and head of the Astronomy Department at Campion College, Rejuvenating the Sun
and Avoiding Other Global Catastrophes http://www.springerlink.com.proxy.lib.umich.edu/content/n2gq04u036535533/fulltext.pdf, Chetan] In Chapter 3 the physical processes underlying the workings of a Sun-like star were described. In this chapter we will examine the ways in which the properties of a star might, at least in principle, be manipulated by our distant descendants. Specifically, our task is to see

how the Sun might be engineered or rejuvenated to enable the continued survival of life on the innermost planets, on
timescales greater than the canonical main-sequence lifetime [T > T_MS (canonical)]. In the case of Venus and Mars, of course, this clearly means future human life on terraformed worlds. As already stated, the task of the would-be asteroengineer is to find ways to stop the Sun

from becoming over-luminous, and from becoming a bloated red-giant - the dire consequences of these effects for the Solar System having been discussed in the last chapter. It turns out, fortuitously for humankind, that these goals are compatible; by stopping the red-giant Sun from coming about, the long-term temperature stability of the inner planets is also maintained.

Perhaps it should be reiterated at this stage that we are not describing in this book exactly how the mechanical part of star engineering can be done. We do not know, for example, what kinds of materials should be used or how to construct the various machines and devices that will be described in this chapter. What we will outline, however, is how the future properties of the Sun might be controlled in principle. The Engineering Options: As highlighted in Chapter 4, the most important problem that the future star engineer will need to address is that of the Sun's increasing luminosity. Its increase in radius is not so great an issue if we are only concerned with the survival of Planet Earth, but it seems an incredible waste of resources to simply let Mercury and Venus be consumed by an expanding Sun. Equation (4.1) contains the key terms of interest and, indeed, it indicates that for a fixed planetary distance d, the surface temperature of the planet increases as the Sun's luminosity to the one-quarter power - that is, as L^{1/4}. If all of the other terms on the right-hand side of Equation (4.1) remain the same, then the surface temperature of any given planet increases by about one degree for every 1 percent increase in the Sun's luminosity.
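The "one degree per 1 percent" rule follows directly from the quoted L^{1/4} scaling; a worked check, with Earth's ~288 K mean surface temperature as an added assumption (LaTeX):

% Differentiating T \propto L^{1/4} gives the fractional-change rule:
\[
\frac{\Delta T}{T} \approx \frac{1}{4}\,\frac{\Delta L}{L}
\;\Longrightarrow\;
\Delta T \approx \frac{288\ \mathrm{K}}{4} \times 0.01 \approx 0.7\ \mathrm{K},
\]
% i.e. roughly one degree per 1 percent rise in solar luminosity.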

So, to stop Earth from overheating, the star engineer must control the growth of the Sun's luminosity. Indeed, the aim will be to keep the Sun at or at least near its present energy output per unit time. In fact, a slightly less luminous Sun might be desirable. This latter dictate builds upon the suggestion by Professor James Lovelock that the recent glacial-interglacial cycling that has dominated the Pleistocene era is a Gaian response to the enhanced warming of Earth in recent times.1 The Sun's ideal luminosity was achieved, according to Lovelock, some 2 billion years ago, when it was 15 percent less luminous than now. How then might the star engineer proceed? The double goal of eliminating the red-giant phase and reducing the Sun's luminosity - the basic act of rejuvenation - can be achieved by manipulating both internal and external quantities. By external quantities we specifically mean the mass of the Sun, and by internal we mean the radial variation in its composition. No one process of manipulation is going to achieve both of the stated goals, so a combination of alteration mechanisms will be required. Mixing and Mass Loss: In this section we will build upon the results leading to Equation (3.12). Specifically, the mass-luminosity relationship indicates that if the Sun is to have the same luminosity at the beginning and the end of its main-sequence phase, then its mass at the end of the main sequence must be reduced to 0.3 M☉. In other words, the Sun must be slimmed down by some 0.7 M☉ worth of material.
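The 0.3 solar-mass figure can be motivated with a simple homology scaling; this sketch is the editor's, using the textbook relation L ∝ μ^4 M^3 and illustrative mean molecular weights rather than the book's Equation (3.12), so treat it only as an order-of-magnitude check (LaTeX):

% For homogeneous stars take L \propto \mu^4 M^3 (illustrative exponents).
% Holding L fixed while hydrogen (\mu_i ~ 0.6) burns to helium (\mu_f ~ 1.3):
\[
\mu_i^{4} M_i^{3} = \mu_f^{4} M_f^{3}
\;\Longrightarrow\;
M_f = \left(\frac{\mu_i}{\mu_f}\right)^{4/3} M_i
\approx \left(\frac{0.6}{1.3}\right)^{4/3} M_\odot \approx 0.35\,M_\odot,
\]
% close to the ~0.3 solar masses quoted in the card.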

The example considered above assumes that the Sun has a homogeneous composition. Detailed numerical models, however, have shown that even if a star has an inhomogeneous composition (where the envelope, for example, is more hydrogen-rich than the core), the evolution with mass-loss is always at a lower luminosity. Figure 5.1 illustrates, in a schematic way, the effects of mixing and mass-loss on the evolution of a star.3 It can be seen from the figure that the effect of inducing greater and greater amounts of additional mixing within the interior of a star results in the red-giant phase being killed off. Rather than evolving into a low-temperature, large red-giant at core hydrogen exhaustion, a fully mixed star evolves into a luminous, slightly larger, and higher temperature star. To the star engineer this result illustrates how the bloated red-giant stage of the Sun can be avoided and, accordingly, methods of mixing the Sun's interior will have to be developed. The evolution of a fully mixed star with mass-loss is again toward higher temperatures, but now the mass-loss results in lower luminosities being achieved - the greater the mass-loss, the lower the luminosity for any given composition. If the mass-loss is very high, the evolution can proceed to values lower than the initial main-sequence luminosity. A non-homogeneous star evolving with mass-loss is also, for a given composition, less luminous than the non-homogeneous zero mass-loss model. The evolution is still towards lower surface temperatures, however, and unless extreme amounts of mass are removed from the star, the red-giant phase will still occur. To sum up so far, for the Sun to avoid its bloated red-giant phase, and for it to evolve at near constant luminosity, both mass-loss and the (near) complete internal mixing of its chemical elements must be engineered. In this chapter we have attempted to outline a number of possible options for rejuvenating the Sun. Some of the scenarios are more fanciful than others, but the key point here is that there are conceivable options and possibilities. The Sun can be tamed, and it is not inevitable that it will become a red-giant. Clearly, we do not know what will come to pass in the future, and there may be many other possible means of rejuvenating the Sun; who knows what our descendants 1 million, 10 million, and 100 million years from now will be able to do. The future holds great promise for humanity - provided that humanity is prepared to realize it.
2. Red Giant won't consume the Earth
National Geographic 7 [Red Giant Sun May Not Destroy Earth, September 14th, 2007, http://news.nationalgeographic.com/news/2007/09/070914-red-giants.html, Chetan]

The first glimpse of a planet that survived its star's red giant phase is offering a glimmer of hope that Earth might make it past our sun's eventual expansion. The newfound planet, dubbed V391 Pegasi b, is much larger than Earth but likely orbited its star as closely as our planet orbits the sun. When the aging star mushroomed into a red giant about a hundred times its previous size, V391 Pegasi b was pushed out to an orbit nearly twice as far away. "After this finding, we now know that planets with an orbital distance similar to the Earth can survive the red giant expansion of their parent stars," said lead author Roberto Silvotti of the National Institute of Astrophysics in
Napoli, Italy.

3. Earth can survive - astronomical calculations prove it
NY Times 7 [Scientists' Good News: Earth May Survive Sun's Demise in 5 Billion Years, September 13th, 2007,
http://www.nytimes.com/2007/09/13/science/13planet.html, Chetan]

There is new hope that Earth, if not the life on it, might survive an apocalypse five billion years from now. That is when, scientists say, the Sun will run out of hydrogen fuel and swell temporarily more than 100 times in diameter into a so-called red giant, swallowing Mercury and Venus. Astronomers are announcing that they have discovered a planet that seems to have survived the puffing up of its home star, suggesting there is some hope that Earth could survive the aging and swelling of the Sun. The planet is a gas giant at least three times as massive as Jupiter. It orbits about 150 million miles from a faint star in Pegasus known as V 391 Pegasi. But before that star blew up as a red giant and lost half its mass, the planet must have been about as far from its star as Earth is from the Sun, about 90 million miles, according to calculations by an international team of astronomers led by Roberto Silvotti of the Observatorio Astronomico di Capodimonte in Naples, Italy. Dr. Silvotti said the results showed that a planet at Earth's distance can survive a red giant, and he said he hoped the discovery would prompt more searches.

A2 Robot Takeover
Robots would not be programmed to have survival instincts
Singer 09 (5/21/09, P.W. Singer, director of the 21st Century Defense Initiative at the Brookings Institution, Ph.D. in security studies from Harvard, helped write Obama's defense policy agenda, Slate, Gaming the Robot Revolution, http://www.slate.com/id/2218834/) HL

First, the machines would have to have some sort of survival instinct or will to power. In the Terminator movies, for instance, Skynet decides to launch a nuclear holocaust against humans in self-defense after the frightened generals attempt to take it offline. Yet most of the focus in military robotics today is to use technology as a substitute for human risk and loss. We use the Packbot in Iraq because, as one U.S. military officer tells, "When a robot dies, you don't have to write a letter to its mother." It would serve the very opposite goal to give our robots any survival instinct.

Robots would be programmed with positive human qualities
Singer 09 (5/21/09, P.W. Singer, director of the 21st Century Defense Initiative at the Brookings Institution, Ph.D. in security studies from Harvard, helped write Obama's defense policy agenda, Slate, Gaming the Robot Revolution, http://www.slate.com/id/2218834/) HL

Second, the machines would have to be more intelligent than humans but have no positive human qualities (such as empathy or ethics). This kind of intellectual advancement may be possible, eventually, given the multiplicative rate at which computer technology progresses. But an explosion of artificial intelligence that surpasses humanity (sometimes referred to as the Singularity) is by no means certain. My Roomba vacuum, for example, still can't reason its way out of being stuck under my sofa, let alone plot my demise. There's also an entire field, called "social robotics," devoted to giving thinking machines the sort of positive human qualities that would undermine an evil-robot scenario. Researchers at Hanson Robotics, for example, describe how their mission is to build robots that "will evolve into socially intelligent beings, capable of love and earning a place in the extended human family."

Robots would need humans to remain operational
Singer 09 (5/21/09, P.W. Singer, director of the 21st Century Defense Initiative at the Brookings Institution, Ph.D. in security studies from Harvard, helped write Obama's defense policy agenda, Slate, Gaming the Robot Revolution, http://www.slate.com/id/2218834/) HL

The third condition for a machine takeover would be the existence of independent robots that could fuel, repair, and reproduce themselves without human help. That's far beyond the scope of anything that now exists. While our real-world robots have become very capable, they all still need humans. For instance, the Global Hawk drone, the replacement for the manned U-2 spy plane, has the ability to take off on its own, fly to a destination 3,000 miles away, and stay in the air for 24 hours as it hunts for a terrorist on the ground. Then it can fly back to where it started and land on its own. But none of this would be possible if there weren't humans on the ground to fill it with gas, repair any broken parts, and update its mission protocols.

Robots could be overridden
Singer 09 (5/21/09, P.W. Singer, director of the 21st Century Defense Initiative at the Brookings Institution, Ph.D. in security studies from Harvard, helped write Obama's defense policy agenda, Slate, Gaming the Robot Revolution, http://www.slate.com/id/2218834/) HL

Finally, a robot invasion could only succeed if humans had no useful fail-safes or ways to control the machines' decision-making. We would have to have lost any ability to override, intervene, or even shape the actions of the robots. Yet one has to hope that a generation that grew up on a diet of Terminator movies would see the utility of fail-safe mechanisms. Plus, there's the possibility that shoddy programming by humans will become our best line of defense: As many roboticists joke, just when the robots are poised to take over, their Microsoft software programs will probably freeze up and crash.
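Singer's fail-safe point is at bottom an architecture claim: fielded autonomy is wrapped in human-override layers. A minimal illustrative sketch follows; every name and behavior in it is hypothetical (nothing here comes from Singer or any real system), and it simply shows kill-switch-and-veto gating around an autonomous decision loop:

# Hypothetical sketch: an autonomous control loop gated by a human kill
# switch and a per-action veto - the "override, intervene, or shape" layer
# the Singer evidence describes. All names and behaviors are invented.

class Operator:
    def __init__(self):
        self.kill = False      # hardware-style kill switch
        self.vetoed = set()    # actions the human has forbidden

    def review(self, action):
        """Return the action if approved, or None to veto it."""
        return None if action in self.vetoed else action

def autonomous_policy(step):
    # Stand-in for whatever decision logic the machine runs on its own.
    return "patrol" if step % 2 == 0 else "recharge"

def run(operator, steps=10):
    log = []
    for step in range(steps):
        if operator.kill:                      # human halt beats autonomy
            log.append("HALTED by operator")
            break
        action = operator.review(autonomous_policy(step))
        log.append(action if action is not None else "vetoed")
    return log

op = Operator()
op.vetoed.add("recharge")                      # operator forbids one action
print(run(op, steps=4))                        # ['patrol', 'vetoed', 'patrol', 'vetoed']

The point of the pattern is that the autonomous policy never executes directly; every proposed action passes through the human gate first.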

A2 Rogue Black Holes


They aren't a threat to us - Earth isn't within their danger radius
Discovery 08 [ROGUE BLACK HOLES ROAM MILKY WAY, January 9th, 2008, http://news.discovery.com/space/rogue-black-holesgalaxies.html, Chetan]

Hundreds of rogue black holes may be roaming around the Milky Way waiting to engulf stars and planets that cross their path, U.S. astronomers said Wednesday. The astronomers believe these "intermediate mass" black holes are invisible except in rare circumstances and have been spawned by mergers of black holes within globular clusters -- swarms of stars held together by their mutual gravity. These black holes are unlikely to pose a threat to Earth, but may engulf nebulae, stars and planets that stray into their paths, the researchers said. "These rogue black holes are extremely unlikely to do any damage to us in the lifetime of the universe," said Kelly Holley-Bockelmann, an assistant professor of physics and astronomy at Vanderbilt University in Nashville, Tenn. "Their danger zone, the Schwarzschild radius (or gravitational radius), is really tiny, only a few hundred miles. There are far more dangerous things in our neighborhood." The evidence for "intermediate mass" black holes, as opposed to supermassive or stellar-mass black holes, is still largely theoretical and therefore controversial. Only two tentative observations of objects of this sort have been made to date.
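The "few hundred miles" danger zone quoted by Holley-Bockelmann is just the Schwarzschild radius formula evaluated for an intermediate-mass black hole. Taking M = 100 M☉ as a representative value (our assumption; the intermediate-mass class is usually put at roughly 10² to 10⁵ solar masses):

\[ R_s = \frac{2GM}{c^2} \approx 2.95\ \text{km} \times \frac{M}{M_\odot}, \qquad R_s(100\,M_\odot) \approx 295\ \text{km} \approx 180\ \text{miles} \]

Since the radius grows only linearly with mass, even a 1,000 M☉ object has a capture radius of under 2,000 miles, utterly negligible on interstellar scales.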

The chance that a black hole will consume the Earth is almost zero
National Geographic 08 [Hundreds of "Rogue" Black Holes May Roam Milky Way, January 10th, 2008,
http://news.nationalgeographic.com/news/2008/01/080110-black-holes.html, Chetan]

"People ask me, 'Is this dangerous? Do I need to build a black hole shelter?'" Holley-Bockelmann said. "The answer is, 'No, you don't.'" She said the only possible danger is that a meandering black hole would plow through the Oort cloud, a cloud of comets thought to exist at the outer edge of our solar system, and kick the objects on a path toward Earth. "But that's a 1 in 10 quadrillion per year probability," she said. If the roughly 200 globular clusters in the Milky Way have indeed spawned intermediate-size black holes, this means that hundreds of such rogue objects are probably wandering invisibly around the Milky Way.
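Taking the quoted "1 in 10 quadrillion per year" Oort-cloud figure at face value (the comparison below is ours, not in the card), the implied mean waiting time dwarfs the age of the universe:

\[ T \sim \frac{1}{10^{-16}\ \text{yr}^{-1}} = 10^{16}\ \text{yr} \approx 7 \times 10^{5} \times t_{\text{universe}}, \qquad t_{\text{universe}} \approx 1.4 \times 10^{10}\ \text{yr} \]

That is, roughly 700,000 times the current age of the universe would be expected to pass before a single such event.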

A2 Strange Matter
Strange matter is pure speculation
Nasr 08 (12/30/08, Susan L. Nasr, HowStuffWorks, Should I be afraid of strange matter?, http://science.howstuffworks.com/science-vsmyth/everyday-myths/strange-matter.htm) HL

Did we mention that strange matter isn't known to exist anywhere in the universe? That's an important detail. Physicists came up with the idea of strange matter in the 1970s when they wondered what would happen if protons and neutrons were squished superhumanly hard [source: Freedman].

Strange matter doesn't exist and can't be made - multiple reasons
Nasr 08 (12/30/08, Susan L. Nasr, HowStuffWorks, Should I be afraid of strange matter?, http://science.howstuffworks.com/science-vsmyth/everyday-myths/strange-matter.htm) HL

Could strange matter be on Earth now? Physicists have considered it. They've sampled our water and other matter, finding nothing. They've considered the possibility of creating strange matter in particle accelerators like the Large Hadron Collider, since it could slam atomic nuclei together hard enough to knock the quarks out of the atoms and potentially convert some of them to strange quarks. But safety reviewers concluded that particle accelerators create so much heat that they would melt potential strangelets. The likelihood of creating strange matter in a particle accelerator would be as low as making "an ice cube in a furnace," the reviewers concluded [source: Ellis]. Physicists have also considered whether strange matter could exist in space. They've nixed the idea that it could've been made in the early universe and stayed around [source: Farhi]. They're skeptical of it being made by heavy atoms, which are hurled through space by violent astrophysical processes, hitting other heavy atoms in the process [source: Jaffe].

No risk of strangelets - earthly pressures, charge, and too many intermediate steps
Nasr 08 (12/30/08, Susan L. Nasr, HowStuffWorks, Should I be afraid of strange matter?, http://science.howstuffworks.com/science-vsmyth/everyday-myths/strange-matter.htm) HL

For such a disaster scenario to occur on Earth, strange matter would have to remain for more than a fraction of a second at earthly pressures, and we don't know if it can do that. It would also have to be negatively charged. In fact, potential strange matter would probably be positively charged, says Farhi. And since the matter on our planet (including us) has positively charged atomic nuclei, it would repel strange matter. "If you had a little lump on the table, it would just sit there," says Farhi. The scenario would change if strange matter were negatively charged, and a ball of it was madly rolling around on Earth. "You would probably know it because it would be growing and consuming everything at its border," says Farhi. Attracted to your atomic nuclei, the ball of strange matter would suck you in, and you'd be finished. Kind of like a modern-day incarnation of the Blob. Have you counted the "ifs" we've thrown at you so far? If strange matter existed in space, if it were hurled at Earth, if it were stable at the pressures in space and on Earth, if it were more stable than our matter and if it were negatively charged -- it could turn you into a lump of unruly quarks. So no, you probably shouldn't be afraid of strange matter, but it's fun to think about.

No risk - empirics prove energies from collisions are dwarfed by cosmic collisions, and models agree
Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL

The safety of collisions at the Large Hadron Collider (LHC) was studied in 2003 by the LHC Safety Study Group, who concluded that they presented no danger. Here we review their 2003 analysis in light of additional experimental results and theoretical understanding, which enable us to confirm, update and extend the conclusions of the LHC Safety Study Group. The LHC reproduces in the laboratory, under controlled conditions, collisions at centre-of-mass energies less than those reached in the atmosphere by some of the cosmic rays that have been bombarding the Earth for billions of years. We recall the rates for the collisions of cosmic rays with the Earth, Sun, neutron stars, white dwarfs and other astronomical bodies at energies higher than the LHC. The stability of astronomical bodies indicates that such collisions cannot be dangerous. Specifically, we study the possible production at the LHC of hypothetical objects such as vacuum bubbles, magnetic monopoles, microscopic black holes and strangelets, and find no associated risks. Any microscopic black holes produced at the LHC are expected to decay by Hawking radiation before they reach the detector walls. If some microscopic black holes were stable, those produced by cosmic rays would be stopped inside the Earth or other astronomical bodies. The stability of astronomical bodies constrains strongly the possible rate of accretion by any such microscopic black holes, so that they present no conceivable danger. In the case of strangelets, the good agreement of measurements of particle production at RHIC with simple thermodynamic models constrains severely the production of strangelets in heavy-ion collisions at the LHC, which also present no danger.

Empirically and theoretically disproven
CERN 08 (2008, European Organization for Nuclear Research, The safety of the LHC, http://public.web.cern.ch/Public/en/LHC/Safety-en.html) HL

Strangelet is the term given to a hypothetical microscopic lump of strange matter containing almost equal numbers of particles called up, down and strange quarks. According to most theoretical work, strangelets should change to ordinary matter within a thousand-millionth of a second. But could strangelets coalesce with ordinary matter and change it to strange matter? This question was first raised before the start-up of the Relativistic Heavy Ion Collider, RHIC, in 2000 in the United States. A study at the time showed that there was no cause for concern, and RHIC has now run for eight years, searching for strangelets without detecting any. At times, the LHC will run with beams of heavy nuclei, just as RHIC does. The LHC's beams will have more energy than RHIC, but this makes it even less likely that strangelets could form. It is difficult for strange matter to stick together in the high temperatures produced by such colliders, rather as ice does not form in hot water. In addition, quarks will be more dilute at the LHC than at RHIC, making it more difficult to assemble strange matter. Strangelet production at the LHC is therefore less likely than at RHIC, and experience there has already validated the arguments that strangelets cannot be produced.

Experiments at RHIC confirm strangelet production is impossible
Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL

No evidence has been found in the detailed study of heavy-ion collisions at RHIC for an anomalous coalescence mechanism. In particular, the production rate of light nuclei measured in central Au+Au collisions at RHIC [14] is consistent with the coalescence rates used in the 2003 Report of the LHC CERN Safety Study Group [1] to rule out strangelet production. There is also considerable experimental evidence against the distillation mechanism. For this mechanism to be operational, the produced matter should have a long lifetime and a large net nucleon density. However, experiments at RHIC confirm the general expectations that the net nucleon density is small and decreases at higher collision energies. Moreover, the plasma produced in the collision is very short-lived, expanding rapidly at about half the velocity of light, and falling apart within 10^-23 seconds [20]. Furthermore, no characteristic difference has been observed in the production of particles containing strange quarks and antiquarks. Hence, a distillation mechanism capable of giving rise to strangelet production is not operational in heavy-ion collisions at RHIC, and this suggestion for strange-particle production has been abandoned for the LHC. On the other hand, as reviewed below, RHIC data strongly support models that describe particle production as emission from a high-temperature heat bath [3].

Moon proves the impact is empirically denied
Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL

It has been shown that the continuing survival of the Moon under cosmic-ray bombardment ensures that heavy-ion collisions do not pose any conceivable threat via strangelet production [8]. This is because cosmic rays have a significant component of heavy ions, as does the surface of the Moon. Since the Moon, unlike planets such as the Earth, is not protected by an atmosphere, cosmic rays hitting the Moon have produced heavy-ion collisions over billions of years at energies that are comparable to or exceed those reached in man-made experiments.

No impact - 1) strangelets don't exist, 2) too small to be dangerous, and 3) would have harmless positive charge, repelling nuclei rather than attracting matter
Jaffe et al 2k (October 2000, Robert Jaffe, W. Busza, J. Sandweiss, and F. Wilczek, Review of Modern Physics, Vol. 72, No. 4, "Review of Speculative 'Disaster Scenarios' at RHIC," http://arxiv.org/PS_cache/hep-ph/pdf/9910/9910333v3.pdf) HL

For strange matter to pose a hazard at a heavy ion collider, four conditions would have to be met: Strange matter would have to be absolutely stable in bulk at zero external pressure. If strange matter is not stable, it will not form spontaneously. Strangelets would have to be at least metastable for very small atomic mass, for only very small strangelets can conceivably be created in heavy ion collisions. It must be possible to produce such a small, metastable strangelet in a heavy ion collision. The stable composition of a strangelet must be negatively charged. Positively charged strangelets pose no threat whatsoever. Each of these conditions is considered unlikely by experts in the field, for the following reasons: At present, despite vigorous searches, there is no evidence whatsoever for stable strange matter anywhere in the Universe. On rather general grounds, theory suggests that strange matter becomes unstable in small lumps due to surface effects. Strangelets small enough to be produced in heavy ion collisions are not expected to be stable enough to be dangerous. It is overwhelmingly likely that the most stable configuration of strange matter has positive electric charge. Theory suggests that heavy ion collisions (and hadron-hadron collisions in general) are a poor way to produce strangelets. Furthermore, it suggests that the production probability is lower at RHIC than at lower energy heavy ion facilities like the AGS and CERN. Models and data from lower energy heavy ion colliders indicate that the probability of producing a strangelet decreases very rapidly with the strangelet's atomic mass. A negatively charged strangelet with a given baryon number is much more difficult to produce than a positively charged strangelet with the same baryon number because it must contain proportionately more strange quarks.

A2 Super Volcanoes
Super volcanoes won't erupt anytime soon - doomsday-sayers exaggerate super volcano risks to make money
Patterson 11 (Christiaan Patterson, staff writer, March 20, Yellowstone volcano not likely to erupt soon, http://sundial.csun.edu/2011/03/yellowstone-volcano-not-likely-to-erupt-soon/)

Natural disasters such as the volcano erupting in Hawaii and the earthquake in Japan seem to be increasing in occurrence, causing some to believe catastrophe is just around the corner. One such disaster some people are concerned about is the eruption of a supervolcano called the Yellowstone Caldera located beneath Yellowstone National Park. This calamity is as probable as an asteroid over two miles wide crashing into us. Therefore, panic is not necessary. First of all, there is a difference between a volcano and a supervolcano. The main distinction is the amount of magma that would flow out of the earth. According to the United States Geological Survey, if a volcano erupts more than 240 cubic miles of lava, engulfing the surrounding area, it's a super volcano. Add to this a volcanic feature called a caldera, which then makes the volcano one of the most dangerous on earth. Calderas resemble an inverted volcano and have magma chambers under so much pressure from built-up gases that cracked rings form toward the exterior of the volcano. When an eruption occurs, large pyroclastic clouds relieve pressure beneath the magma chamber and once all initial pressure is exerted, the chamber collapses and the magma rises upward. Volcanic experts say the Yellowstone Caldera is one of the largest on the planet. Other large calderas also considered to be supervolcanoes exist in Long Valley, California, Taupo, New Zealand and Toba, Indonesia. Attention had not been fully given to this situation until the BBC and the Discovery Channel aired a docudrama in 2005 about what could happen if the Yellowstone Caldera blew. It explained how the last major eruption there happened about 630,000 years ago and emphasized that we are long overdue. The United States Geological Survey responded to the program saying the facts and future projections were accurate. However, an eruption of magnitude 8 or greater is highly improbable in our lifetime or even the next five to 10 generations. Many people feed off chaos and negative events and the makers of this docudrama tapped into this in an effort to increase ratings and make money. It definitely preys on this fear to get us to wake up and pay attention, even when immediate action is not necessary. Yellowstone National Park is surrounded by seismometers and observed from space by GPS satellite. Using these instruments, scientists are able to monitor earthquake activity and any other movement of the ground, enabling authorities to issue warnings when an eruption is imminent. A sudden explosion without any forewarning is almost impossible. When a volcano erupts, certain changes occur around it, whether it's gas emissions, frequent quakes or changes in mountain size.
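The 240-cubic-mile threshold in the Patterson evidence appears to be the imperial form of the commonly cited 1,000 km³ supervolcano criterion (VEI 8); the metric equivalence here is our conversion and assumption, not part of the card:

\[ 1000\ \text{km}^3 \times \left(\frac{1\ \text{mi}}{1.609\ \text{km}}\right)^{3} \approx 1000 \times 0.24 \approx 240\ \text{mi}^3 \]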
Even the most active super volcano shows no signs of an eruption soon - and even if it did erupt, it wouldn't necessarily be catastrophic
National Park Service 6/21 (2011, Volcano Questions & Answers, http://www.solcomhouse.com/yellowstone.htm)

QUESTION: Do scientists know if a catastrophic eruption is currently imminent at Yellowstone? ANSWER: There is no evidence that a catastrophic eruption at Yellowstone is imminent, and such events are unlikely to occur in the next few centuries. Scientists have also found no indication of an imminent smaller eruption of lava. QUESTION: How far in advance could scientists predict an eruption of the Yellowstone volcano? ANSWER: The science of forecasting a volcanic eruption has significantly advanced over the past 25 years. Most scientists think that the buildup preceding a catastrophic eruption would be detectable for weeks and perhaps months to years. Precursors to volcanic eruptions include strong earthquake swarms and rapid ground deformation and typically take place days to weeks before an actual eruption. Scientists at the Yellowstone Volcano Observatory (YVO) closely monitor the Yellowstone region for such precursors. They expect that the buildup to larger eruptions would include intense precursory activity (far exceeding background levels) at multiple spots within the Yellowstone volcano. As at many caldera systems around the world, small earthquakes, ground uplift and subsidence, and gas releases at Yellowstone are commonplace events and do not reflect impending eruptions. QUESTION: Can you release some of the pressure at Yellowstone by drilling into the volcano? ANSWER: No. Scientists agree that drilling into a volcano would be of questionable usefulness. Notwithstanding the enormous expense and technological difficulties in drilling through hot, mushy rock, drilling is unlikely to have much effect. At near magmatic temperatures and pressures, any hole would rapidly become sealed by minerals crystallizing from the natural fluids that are present at those depths. QUESTION: Could the Yellowstone volcano have an eruption that is not catastrophic? ANSWER: Yes. Over the past 640,000 years since the last giant eruption at Yellowstone, approximately 80 relatively nonexplosive eruptions have occurred and produced primarily lava flows. This would be the most likely kind of future eruption. If such an event were to occur today, there would be much disruption of activities in Yellowstone National Park, but in all likelihood few lives would be threatened. The most recent volcanic eruption at Yellowstone, a lava flow on the Pitchstone Plateau, occurred 70,000 years ago.

Super volcanoes aren't coming anywhere in the near future - but when they do, we'll be able to predict them
Bindeman 6 (May 24, Ilya Bindeman is an Associate Professor of Geological Sciences at the University of Oregon, The Secrets of Supervolcanoes, http://globalrumblings.blogspot.com/2006/05/secrets-of-super-volcanoes.html)

Lurking deep below the surface in California and Wyoming are two hibernating volcanoes of almost unimaginable fury. Were they to go critical, they would blanket the western U.S. with many centimeters of ash in a matter of hours. Between them, they have done so at least four times in the past two million years. Similar supervolcanoes smolder underneath Indonesia and New Zealand. A supervolcano eruption packs the devastating force of a small asteroid colliding with the earth and occurs 10 times more often -- making such an explosion one of the most dramatic natural catastrophes humanity should expect to undergo. Beyond causing immediate destruction from scalding ash flows, active supervolcanoes spew gases that severely disrupt global climate for years afterward. Needless to say, researchers are eager to understand what causes these giants to erupt, how to predict when they might wreak havoc again, and exactly what challenges their aftermath might entail. Recent analysis of the microscopic crystals in ash deposits from old eruptions has pointed to some answers. These insights, along with improved technologies for monitoring potential disaster sites, are making scientists more confident that it will be possible to spot warning signs well before the next big one blows. Ongoing work hints, however, that supervolcano emissions could trigger alarming chemical reactions in the atmosphere, making the months following such an event more hazardous than previously suspected. Almost all volcano experts agree that those of us living on the earth today are exceedingly unlikely to experience an active supervolcano. Catastrophic eruptions tend to occur only once every few hundred thousand years. Yet the sheer size and global effects of such episodes have commanded scientific attention since the 1950s.

New studies prove we'll be able to predict super volcanic eruptions
Alaska Science Forum 7 (May 30, Geophysical Institute, University of Alaska Fairbanks, Super eruptions are disasters like none other, http://www2.gi.alaska.edu/ScienceForum/ASF18/1857.html)

Super eruptions don't happen often, but they have been much larger than the Tambora eruption. An eruption in about AD 1452 was twice the size of Tambora, and both Long Valley Caldera (California) and Yellowstone (Wyoming and Idaho) are the earthly scars of super eruptions that affected large areas with their ashfall and lingering effects on the atmosphere. "Yellowstone covered a large portion of the Lower 48," Self said. "If another of these should occur, an entire country could be covered by ash." A super eruption could put the brakes on life as we know it, Self said. A big eruption would pump so much ash and particulates into the air that the whole planet would cool by as much as 10 degrees Celsius. Crops would fail and sulfur dioxide would eat holes in the ozone. Self said there is basically no way to prepare for a super eruption and it is perhaps foolish to worry about them, but how can a volcanologist not think of them? "Even though they are very, very rare, I think it's worthwhile to go through the mind exercise of imagining what they might be like," Self said. Steve McNutt of the Alaska Volcano Observatory in Fairbanks agreed with Self's assessment of a super eruption being bad for business when it comes to being human. "If there were an entire season of crop failures worldwide, it would not be pleasant," McNutt said. McNutt has studied the precursors of a super eruption. He's looked at earthquake swarms - numerous internal earthquakes that precede volcanic eruptions - to see if a longer period of swarms relates to a larger eruption. Included in his data set was information from the Katmai eruption of 1912, recorded by Japanese scientists who had deployed some of the world's first seismometers. Katmai, which created the Valley of Ten Thousand Smokes on the Alaska Peninsula, was the largest eruption of the 20th century, and is one of the few eruptions in modern times large enough for McNutt to consider a super eruption. Looking at the earthquake swarm data for Katmai, McNutt noticed the volcano had several large earthquakes, as high as magnitude 5.5, before it erupted. That fit with other data that showed that super eruptions will provide warnings of their imminent debut through increased earthquake swarm activity. So, while we may not be able to do much about a super eruption, scientists may be able to see one coming. "I think it's unlikely we'd be blindsided," McNutt said.

Experts agree - Yellowstone's volcanic activity is not a sign of impending super eruption
CBS 9 (January 2, Imminent Yellowstone 'Supervolcano' Now 'Unlikely', http://www.freerepublic.com/focus/f-news/2157670/posts)

NEW YORK (CBS) - Yellowstone remains very geologically active, and its famous geysers and hot springs are a reminder that a pool of magma still exists five to 10 miles underground. The recent "swarm" of small earthquake tremors happening in Yellowstone National Park are not likely to be a sign of a pending "supervolcano" eruption as some fear, according to a top scientist at the Yellowstone Volcano Observatory. Dr. Jacob Lowenstern of the U.S. Geological Survey said Monday that the earthquake activity in Yellowstone most likely will continue for weeks, "and then will end without any other related activity." Lowenstern's comments were reported by money and politics blogger James Pethokoukis of U.S. News & World Report, who chatted with the scientist via e-mail. Yellowstone National Park was jostled by a host of small earthquakes this week, and scientists watched closely to see whether the more than 250 tremors were a sign of something bigger to come.

Recent Yellowstone activity isn't a sign of a super volcanic eruption
CSM 11 (February 9, Mysteriously swelling Yellowstone supervolcano 'not a portent of doom,' finds mostly reassuring study, http://www.csmonitor.com/Science/2011/0209/Mysteriously-swelling-Yellowstone-supervolcano-not-a-portent-of-doom-finds-mostly-reassuring-study)

For reasons that are not clear, the huge volcano under Yellowstone National Park has been rising at an unprecedented rate during the past several years. But that doesn't necessarily mean that a massive, doomsday eruption is about to happen, finds a study that manages to put everyone at ease, more or less. The huge volcano under Yellowstone National Park has been rising at an unprecedented rate during the past several years, according to a new study. In the ancient past, the Yellowstone volcano produced some of the biggest-known continental eruptions, but the recent rising doesn't mean another doomsday eruption is looming, scientists say. The recent rising is unprecedented for Yellowstone's caldera - the cauldron-shaped part of the volcano - but it's not uncommon for other volcanoes around the world. The new study has simply revealed a more active caldera at Yellowstone than scientists realized. "It's pretty exciting when you see something that's five times larger than what you've seen in the past," said Charles Meertens, director of the nonprofit UNAVCO facility in Boulder, Colo., which aids geoscience research. Meertens is a former postdoctoral fellow under one of the study's authors, Robert Smith of the University of Utah in Salt Lake City. In 2004, the caldera was swelling at 2.8 inches (7 centimeters) per year in some parts, but the uplift has since slowed to a low of 0.2 inches (0.5 cm) per year, according to the study, which was published in the December edition of the journal Geophysical Research Letters. Calderas rise just like an inflating bubble. The inflating could either be caused by magma rising and pushing up on the caldera, or the magma could be heating gases and hydrothermal fluids (the same fluids that spew from Yellowstone's Old Faithful geyser) and pushing them against the caldera, Meertens told OurAmazingPlanet. Whatever the exact mechanism, a rising caldera is not enough to signal an eruption. "It's not a portent of doom," said Erik Klemetti, a volcanologist at Denison University in Granville, Ohio, who was not involved with the study. "It seems like these restless calderas are always sort of rising and falling, but that by itself doesn't mean it's about to erupt." Volcanologists look at several indicators when deciding whether an eruption is looming, Klemetti said. Warning signs typically include an increase in earthquakes under the volcano, changes in the gases being emitted, change in the volcano's shape, and steam and heat escaping from the top.

Low probability and humanity empirically survives supervolcanoes
Britt, 6 (Robert, Senior LiveScience Writer, Super Volcano Will Challenge Civilization, Geologists Warn, LiveScience, http://www.livescience.com/environment/050308_super_volcano.html)

The odds of a globally destructive volcano explosion in any given century are extremely low, and no scientist can say when the next one will occur. But the chances are five to 10 times greater than a globally destructive asteroid impact, according to the new British report. The next super eruption, whenever it occurs, might not be the first one humans have dealt with. About 74,000 years ago, in what is now Sumatra, a volcano called Toba blew with a force estimated at 10,000 times that of Mount St. Helens. Ash darkened the sky all around the planet. Temperatures plummeted by up to 21 degrees at higher latitudes, according to research by Michael Rampino, a biologist and geologist at New York University. Rampino has estimated three-quarters of the plant species in the Northern Hemisphere perished. Stanley Ambrose, an anthropologist at the University of Illinois, suggested in 1998 that Rampino's work might explain a curious bottleneck in human evolution: The blueprints of life for all humans -- DNA -- are remarkably similar given that our species branched off from the rest of the primate family tree a few million years ago. Ambrose has said early humans were perhaps pushed to the edge of extinction after the Toba eruption -- around the same time folks got serious about art and tool making. Perhaps only a few thousand survived. Humans today would all be descended from these few, and in terms of the genetic code, not a whole lot would change in 74,000 years.

Empirically denied - earthquakes relieve super volcano pressure and past eruptions haven't led to drastic climate change
Handwerk, 1-19-11, Brian, National Geographic, Yellowstone Has Bulged as Magma Pocket Swells,
http://news.nationalgeographic.com/news/2011/01/110119-yellowstone-park-supervolcano-eruption-magma-science/, KHaze

Yellowstone National Park's supervolcano just took a deep "breath," causing miles of ground to rise dramatically, scientists report. The simmering volcano has produced major eruptions - each a thousand times more powerful than Mount St. Helens's 1980 eruption - three times in the past 2.1 million years. Yellowstone's caldera, which covers a 25- by 37-mile (40- by 60-kilometer) swath of Wyoming, is an ancient crater formed after the last big blast, some 640,000 years ago. Since then, about 30 smaller eruptions - including one as recent as 70,000 years ago - have filled the caldera with lava and ash, producing the relatively flat landscape we see today. But beginning in 2004, scientists saw the ground above the caldera rise upward at rates as high as 2.8 inches (7 centimeters) a year. The rate slowed between 2007 and 2010 to a centimeter a year or less. Still, since the start of the swelling, ground levels over the volcano have been raised by as much as 10 inches (25 centimeters) in places. "It's an extraordinary uplift, because it covers such a large area and the rates are so high," said the University of Utah's Bob Smith, a longtime expert in Yellowstone's volcanism. Scientists think a swelling magma reservoir four to six miles (seven to ten kilometers) below the surface is driving the uplift. Fortunately, the surge doesn't seem to herald an imminent catastrophe, Smith said. "At the beginning we were concerned it could be leading up to an eruption," said Smith, who co-authored a paper on the surge published in the December 3, 2010, edition of Geophysical Research Letters. "But once we saw [the magma] was at a depth of ten kilometers, we weren't so concerned. If it had been at depths of two or three kilometers [one or two miles], we'd have been a lot more concerned." Studies of the surge, he added, may offer valuable clues about what's going on in the volcano's subterranean plumbing, which may eventually help scientists predict when Yellowstone's next volcanic "burp" will break out. Smith and colleagues at the U.S. Geological Survey (USGS) Yellowstone Volcano Observatory have been mapping the caldera's rise and fall using tools such as global positioning systems (GPS) and interferometric synthetic aperture radar (InSAR), which gives ground-deformation measurements. Ground deformation can suggest that magma is moving toward the surface before an eruption: The flanks of Mount St. Helens, for example, swelled dramatically in the months before its 1980 explosion. But there are also many examples, including the Yellowstone supervolcano, where it appears the ground has risen and fallen for thousands of years without an eruption. According to current theory, Yellowstone's magma reservoir is fed by a plume of hot rock surging upward from Earth's mantle. When the amount of magma flowing into the chamber increases, the reservoir swells like a lung and the surface above expands upward. Models suggest that during the recent uplift, the reservoir was filling with 0.02 cubic miles (0.1 cubic kilometer) of magma a year. When the rate of increase slows, the theory goes, the magma likely moves off horizontally to solidify and cool, allowing the surface to settle back down. Based on geologic evidence, Yellowstone has probably seen a continuous cycle of inflation and deflation over the past 15,000 years, and the cycle will likely continue, Smith said. Surveys show, for example, that the caldera rose some 7 inches (18 centimeters) between 1976 and 1984 before dropping back about 5.5 inches (14 centimeters) over the next decade. "These calderas tend to go up and down, up and down," he said. "But every once in a while they burp, creating hydrothermal explosions, earthquakes, or - ultimately - they can produce volcanic eruptions." Predicting when an eruption might occur is extremely difficult, in part because the fine details of what's going on under Yellowstone are still undetermined. What's more, continuous records of Yellowstone's activity have been made only since the 1970s - a tiny slice of geologic time - making it hard to draw conclusions. "Clearly some deep source of magma feeds Yellowstone, and since Yellowstone has erupted in the recent geological past, we know that there is magma at shallower depths too," said Dan Dzurisin, a Yellowstone expert with the USGS Cascades Volcano Observatory in Washington State. "There has to be magma in the crust, or we wouldn't have all the hydrothermal activity that we have," Dzurisin added. "There is so much heat coming out of Yellowstone right now that if it wasn't being reheated by magma, the whole system would have gone stone cold since the time of the last eruption 70,000 years ago." The large hydrothermal system just below Yellowstone's surface, which produces many of the park's top tourist attractions, may also play a role in ground swelling, Dzurisin said, though no one is sure to what extent. "Could it be that some uplift is caused not by new magma coming in but by the hydrothermal system sealing itself up and pressurizing?" he asked. "And then it subsides when it springs a leak and depressurizes? These details are difficult." And it's not a matter of simply watching the ground rise and fall. Different areas may move in different directions and be interconnected in unknown ways, reflecting the as yet unmapped network of volcanic and hydrothermal plumbing. The roughly 3,000 earthquakes in Yellowstone each year may offer even more clues about the relationship between ground uplift and the magma chamber. For example, between December 26, 2008, and January 8, 2009, some 900 earthquakes occurred in the area around Yellowstone Lake. This earthquake "swarm" may have helped to release pressure on the magma reservoir by allowing fluids to escape, and this may have slowed the rate of uplift, the University of Utah's Smith said. "Big quakes [can have] a relationship to uplift and deformations caused by the intrusion of magma," he said. "How those intrusions stress the adjacent faults, or how the faults might transmit stress to the magma system, is a really important new area of study." Overall, USGS's Dzurisin added, "the story of Yellowstone deformation has gotten more complex as we've had better and better technologies to study it."

No timeframe or impact - super volcanoes rarely erupt and the next one will be small
BBC, 2006, Supervolcano, The world's biggest bang, http://www.bbc.co.uk/sn/tvradio/programmes/supervolcano/article.shtml, KHaze

Considering their destructive potential, it's a good thing super-eruptions are so rare - the last one happened in Toba, Indonesia, about 74,000 years ago. Geologists think these eruptions take place about every 50,000 years, which suggests one is overdue. About 40 supervolcanoes are dotted across the globe. There are two in Britain: one in Glencoe, Scotland, the other in Scafell in the Lake District. However, most supervolcanoes, including those in Britain, burned out long ago. Yellowstone, located in the western state, Wyoming, is a dormant supervolcano, which means a major eruption could happen in the future. But before you get worried, it's important to remember that most volcano experts say a Yellowstone super-eruption is probably a long way off, or it may never happen at all. "It's far more likely, if there is an eruption, it'll be on a small scale, perhaps comparable to Mt St Helens," says volcano expert Prof. Steve Sparks of the University of Bristol.
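Note that the "overdue" framing in the BBC evidence quietly assumes eruptions are periodic. If super-eruptions are instead modeled as an approximately memoryless (Poisson) process - an illustrative assumption on our part, not a claim in the card - the 74,000 quiet years carry no predictive weight at all:

\[ P(\text{eruption within } t \mid \text{quiet for } s) = 1 - e^{-t/\tau}, \qquad \tau \approx 50{,}000\ \text{yr} \]

The probability is independent of the elapsed time s, and for any given century (t = 100 yr) it works out to about 1 - e^{-100/50000} ≈ 0.2 percent.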

A2 Supernovas
1. Claims of supernova extinction are premised on flawed science
Discover 10 [Phil Plait is the creator of Bad Astronomy; he is an astronomer, lecturer, and author. He worked on the Hubble Space Telescope for ten years and spent six more working on astronomy education. No, a nearby supernova won't wipe us out, January 7th, 2010, http://blogs.discovermagazine.com/badastronomy/2010/01/07/no-a-nearby-supernova-wont-wipe-us-out/, Chetan]

T Pyxidis is a fairly regular nova, blowing its lid every 20 years or so. It's currently overdue, since the last event was in 1967. Using ultraviolet observations and new models of the system, astronomer Edward Sion and his team concluded it may actually explode soon as a supernova, an event far more energetic than a mere nova. Worse, their models indicate the system is "much closer" than previously thought: about 3300 light years away. In the last paragraph of their press release, it says: An interesting, if a bit scary, speculative sidelight is that if a Type Ia supernova explosion occurs within [that distance] of Earth, then the gamma radiation emitted by the supernova would fry the Earth, dumping as much gamma radiation (~100,000 erg/square centimeter) into our planet [sic], which is equivalent to the gamma ray input of 1000 solar flares simultaneously. AIIIIEEEEE!!! We're all gonna die! Ahem. Except, really, no. I rolled my eyes when I read that bit. A Type Ia does put out more high-energy radiation than a Type II supernova, which is caused when a massive star's core collapses and the outer layers are ejected. That's what most people think of when they hear about a supernova. Those have to be really close to hurt us, certainly closer than 25 light years. But even with their added power, a Type Ia just doesn't have the oomph needed to destroy our ozone layer (as the press release indicates) from 3300 light years away. It would have to be far closer than that. I missed that press conference, but oh, how I wish I had been there! My friend Ian O'Neill was able to track down some details, and found out that astronomers (including another friend, Alex Filippenko, who is an expert's expert on supernovae) at the meeting took Sion to task for this claim. It looks like Sion used the wrong numbers for the gamma ray emission for a Type Ia event, instead using the emission from a gamma-ray burst - a far, far, far more energetic event, and dangerous from several thousand light years away. I don't generally have too big an issue with a scientist getting a number wrong, but it depends on the circumstance. Issuing a press release saying, essentially, we're all gonna die means they should do some due diligence. And in this specific case - they used the phrase "fry the Earth," for Pete's sake! - means I am less willing to cut them slack. People get scared from stuff like this, and it's simply wrong to feed that fire without making really sure you have your numbers straight first. I'll note that scientists tend not to write press releases, and it can be hard to rein in the PR author if they are not that familiar with the science (which I've seen many times). But even if the numbers in the PR were correct, the phrasing of that last paragraph is unacceptable. Whoever wrote the release should have known the media would zero in on that phrase. Ian O'Neill, in his post at Discovery News, points out The Daily Telegraph did just that, printing an article with the headline "Earth to be wiped out by supernova explosion". The UK paper The Sun - which is so awful fish complain when you wrap them in it - had a similar article with the tagline, "A star primed to explode in a blast that could wipe out the Earth was revealed by astronomers yesterday." Sheesh. It's too bad. There was no need to disaster-porn this release up the way it was done. Recurrent novae and Type Ia supernovae are fascinating, well worth our attention for any number of reasons - including of course their potential danger. But it's a not-too-fine line between piquing interest and tarting up the science.

2. The probability of a life-ending supernova is minuscule - no collapsing star is close enough
Cain 3 [Fraser Cain is the editor of Universe Today and host of the Astronomy Cast podcast with Dr. Pamela L. Gay. He studied engineering at the University of British Columbia, Supernova Won't Destroy the World, January 15th, 2003, http://www.universetoday.com/8505/supernova-wont-destroythe-world/, Chetan]

We have one less thing to worry about. While the cosmic debris from a nearby massive star explosion, called a supernova, could destroy the Earth's protective ozone layer and cause mass extinction, such an explosion would have to be much closer than previously thought, new calculations show. Scientists at NASA and Kansas University have determined that the supernova would need to be within 26 light years from Earth to significantly damage the ozone layer and allow cancer-causing ultraviolet radiation to saturate the Earth's surface. An encounter with a supernova that close only happens at a rate of about once in 670 million years, according to Dr. Neil Gehrels of NASA's Goddard Space Flight Center in Greenbelt, Md., who presents these findings today at the American Astronomical Society meeting in Seattle. "Perhaps a nearby supernova has bombarded Earth once during the history of multicellular life with its punishing gamma rays and cosmic rays," said Gehrels. "The possibility for mass extinction is indeed real, yet the risk seems much lower than we have thought." The new calculations are based largely on advances in atmospheric modeling, analysis of gamma rays produced by a supernova in 1987 called SN1987a, and a better understanding of galactic supernova locations and rates.
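The distance threshold in the Gehrels result matters because radiation fluence falls with the square of distance. Comparing the 26-light-year danger radius with the roughly 3,300-light-year distance quoted for T Pyxidis in the Plait evidence above (the comparison is ours, not in either card):

\[ \frac{F(26\ \text{ly})}{F(3300\ \text{ly})} = \left(\frac{3300}{26}\right)^{2} \approx 1.6 \times 10^{4} \]

In other words, the same explosion delivers roughly 16,000 times less energy per unit area at T Pyxidis's distance than it would at the edge of the 26-light-year danger zone.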

A2 Supernovas/Supernovas Good
Supernovas will give us access to precious metals
Straits Times 11 ['Second sun in the sky on way': University lecturer, January 21st, 2011,
http://www.straitstimes.com/BreakingNews/TechandScience/Story/STIStory_626703.html, Chetan]

Some have claimed the impending supernova proves the Mayan calendar's prediction of the apocalypse in 2012. However, according to Dr Carter, the supernova explosion will provide Earth with elements helpful for human survival. 'When a star goes bang, the first we will observe of it is a rain of tiny particles called neutrinos. It literally makes things like gold, silver, all the heavy elements - even things like uranium.'

A2 Time Travel

This makes no sense - if time travel is theoretically possible but destroys the universe, someone would have destroyed us already; the fact that the universe exists in its present state proves no one has traveled in time, now or in the future, which means your impact won't happen.

Time travel won't destroy space-time - if it's possible, it would preclude causality violations
New Scientist in 05 (Mark Buchanan, No paradox for time travelers, June 18, http://www.newscientist.com/article.ns?id=dn7535&feedId=online-news_rss20)

THE laws of physics seem to permit time travel, and with it, paradoxical situations such as the possibility that people could go back in time to prevent their own birth. But it turns out that such paradoxes may be ruled out by the weirdness inherent in laws of quantum physics. Some solutions to the equations of Einstein's general theory of relativity lead to situations in which space-time curves back on
itself, theoretically allowing travellers to loop back in time and meet younger versions of themselves. Because such time travel sets up paradoxes, many researchers suspect that some physical constraints must make time travel impossible. Now, physicists Daniel Greenberger of the City University of New York and Karl Svozil of the Vienna University of Technology in Austria have shown that the most basic features of quantum theory may ensure that time travellers could never alter the past, even if they are able to go back in time.

Time travel is impossible
The Scotsman in 98 (Jim Gilchrist, IT'S OFFICIAL. Scientists have concluded that time can move in only one direction - forward, December 3, L/N)

So that's it then. Say goodbye to that glittering crystal machine assembled in the Wellsian basement of your imagination, or, for that matter, to time-travelling DeLorean specials or calendar-defying police boxes. Resign yourself to the future: living in the past's a non-starter, deja vu a temporal red
herring. This news came as a bit of a blow. Personally, I haven't had as yet any experience of time actually moving backwards - although on reflection there have been moments when I would have liked it to do so - but one keeps an open mind. And to think that, just two or three years ago, none other than the Amazing Stephen Hawking admitted publicly, while brandishing a demand for more government spending on research into "closed time-like curves", that contrary to his long-held views, he thought time-travel might be possible after all. That from the author of A Brief History of Time, who had once pointed out that the ridiculousness of the whole concept was highlighted by a singular lack of invading hordes of tourists from the future. But there it was, last week, in hard print (all right, the Independent, if you must know): 100 scientists from numerous countries had published the results of a three-year project to demonstrate that, in our universe at least, time moves in only one direction, and that direction is forward. World-weary cynic I may be, but I felt a little twinge at that, thought I heard somewhere the dismal, muffled clunk of yet another little door being shut on the realms of the possible: on Bradburyesque dinosaur safaris into the Cretaceous; on Connecticut Yankees in King Arthur's court, on sallying forth to prevent Gavrilo Princip from firing the shot that killed Archduke Franz Ferdinand and ten million others, or to reverse the result at Flodden; on throwing a spanner into the evolutionary works by treading on a Jurassic butterfly ... The article made for heavy going. I read it once over a pint of 80 shilling, then again over a large black coffee, neither of which particularly lubricated its ingress into the little grey cells, as they strove to come to grips with the vagaries of charge parity time symmetry and why matter and antimatter didn't eliminate each other back at the Big Bang. I'll spare you the gory details about kaons and antikaons, pions and neutrinos: suffice to say that, amid the temporal stramash of whizzing particles and antiparticles, I gathered that the scientists had been violating charge particle symmetry (unspeakable deed!), using high-energy accelerators. Their conclusion was that, so far as time was concerned, antimatter was more likely to turn into matter - evidence of the irrevocable flow of time. And the reason, apparently, why our universe is a matter-dominated sort of place, though we'd all be in a right mess if it wasn't. And that was why time travel was impossible. "You might be able to play tricks with time at the single-atom level, but not in the larger world," pronounced one of the violators.

Time travel ain't happening
The Independent (London) in 98 (Charles Arthur, Science: No more back to the future; To the dismay of science fiction fans, physicists have proved time only moves forwards, November 27, L/N)

But earlier this month 100 scientists from nine countries published the results of a three-year collaborative project. It demonstrated, for the first time, that in our universe at least, time moves in only one direction. The experiment, called CP-LEAR (Charge Parity experiment in the Low Energy Antiproton Ring), was carried out to study the differences between matter and antimatter, the "converse" of matter. Antimatter particles have the same mass but opposite charge (and
other characteristics) to their matter counterparts; in theory, every matter particle has an antiparticle. The electron's counterpart is the positively charged positron, for example. When a particle and its antiparticle meet, the two annihilate each other in a burst of light energy. What physicists therefore find strange about antimatter is its general absence in the universe. Theory
suggests that the Big Bang should have created equal amounts of matter and antimatter. Why didn't they eliminate each other at the universe's birth? "That is the big mystery," says Professor Frank Close, from the Rutherford Appleton Laboratory in Didcot. He is presently on secondment to Cern, the European Laboratory for Particle Physics in Geneva, Switzerland, which led the CP-LEAR work. Antimatter has not been found "free" in the wider universe, despite careful searches. One suggestion is that time affects particles and antiparticles differently. Early quantum physics assumed that, like other laws of physics, subatomic reactions would be the same no matter which way time flowed. If you started with a group of particles and antiparticles with known charges and "parities" (measurable quantities such as "spin" and "flavour"), then banged them together and measured the charge and parity of the resulting particles, the totals would be the same before and after. Physicists called this "CPT symmetry" - for charge parity time symmetry. However, physicists always want to check such assumptions with the real world. They could not run time backwards, but they could experiment with antiparticles by pretending that antiparticles were just particles moving back in time. Testing this idea experimentally meant evaluating the charge and parity of every particle produced in thousands of high-speed particle collisions in high-energy accelerators. In 1964 a Japanese team discovered that, in some reactions, the totals differed. This effect, known as "charge parity violation", or CP violation, centres on an electrically neutral particle called the K meson, or kaon. In most reactions, it simply broke down into three pi mesons (pions). But in a fraction of cases, it decayed into only two pions - violating CP symmetry. The experiment put a bomb underneath the idea that time could run in either direction. For 30 years CP violation bothered physicists; they needed more powerful particle accelerators to confirm what was happening. Finally, in 1995, a set of new experiments set out to test this, using kaons and their antiparticles, antikaons. These are short-lived particles produced by the collision of antiprotons with hydrogen atoms. (Hence the use of the Low Energy Antiproton Ring for the work.) Kaons can turn into antikaons - and antikaons can turn into kaons - until they finally decay into an electron, a pion and a neutrino. By measuring the electron's exact charge, observers can determine whether the parent was a kaon or antikaon. In a paper published last month in the journal Physics Letters, the international team working on the CP-LEAR experiment found that antikaons turned into kaons more often than kaons turned into antikaons. In other words, with time, antimatter is more likely to turn into matter - evidence of a clock running under the fabric of the universe. Very possibly, this difference was one of the reasons
our nascent universe turned into a matter-dominated place, instead of being snuffed out in a blast of gamma rays. Of the CP-LEAR results, Professor Close says: "This is confirmation that everything we believe about the universe holds together ." So does that mean that

time travel is impossible? Yes, according to Professor Close. "The way I describe it is that while you may not be able to tell which way a film is running when you see two billiard balls colliding, you'll certainly be able to tell if you see a white ball shooting towards a scattered group of balls on a table, after which they group together into a pyramid. You'd know it's crazy. You might be able to play tricks with time at the single-atom level, but not in the larger world."
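A quick illustration of the measurement logic in the card above (tag each decay by its electron's charge, then compare the two oscillation rates): the toy Monte Carlo below is an editor's sketch in Python, not CP-LEAR's actual analysis, and the input asymmetry of about 6.6 × 10^-3 is the published CP-LEAR figure as best recalled, so treat the exact value as an assumption.

```python
import random

# Toy Monte Carlo sketch (editor's illustration, NOT CP-LEAR's analysis code).
# Each event is a neutral kaon or antikaon; the decay electron's charge tags
# what the parent had turned into by decay time.
TRUE_ASYMMETRY = 6.6e-3  # assumed antikaon->kaon vs kaon->antikaon excess
N_EVENTS = 2_000_000

random.seed(42)
antikaon_to_kaon = 0
kaon_to_antikaon = 0
for _ in range(N_EVENTS):
    starts_as_antikaon = random.random() < 0.5  # equal initial populations
    # Oscillating before decay is slightly more likely for antikaons.
    p_flip = 0.5 * ((1 + TRUE_ASYMMETRY) if starts_as_antikaon else (1 - TRUE_ASYMMETRY))
    if random.random() < p_flip:
        if starts_as_antikaon:
            antikaon_to_kaon += 1
        else:
            kaon_to_antikaon += 1

measured = (antikaon_to_kaon - kaon_to_antikaon) / (antikaon_to_kaon + kaon_to_antikaon)
print(f"input asymmetry:    {TRUE_ASYMMETRY}")
print(f"measured asymmetry: {measured:.4f}")  # close to the input, within ~1e-3 statistical error
```

With a couple of million simulated decays the recovered asymmetry resolves an effect of a few parts per thousand, which is roughly the precision regime the real experiment had to work in.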

A2 Tsunamis - Exaggerated

Mega tsunamis are exaggerated - there's no impact and zero probability
ScienceDaily, 2004, Southampton Oceanography Centre is a joint venture between the University of Southampton and the Natural Environment Research Council, Canary Islands Landslides And Mega-Tsunamis: Should We Really Be Frightened?, http://www.sciencedaily.com/releases/2004/08/040815234801.htm, KHaze

What is the reality behind stories of mega-tsunamis wiping out the American east coast and southern England? Very little, according to Dr Russell Wynn and Dr Doug Masson from Southampton Oceanography Centre, who have been studying Canary Islands landslides for many years. Their research has shown that stories of a devastating 'mega-tsunami' some 300 feet high and travelling at 500 mph are greatly exaggerated, and that reports suggesting tens of millions of people could be killed have little basis in reality. Dr Russell Wynn said, "The Canary Islands are volcanic islands that collapse at regular intervals in geological time. However, it is important to remember that in the last 200,000 years there have only been two major landslides on the flanks of the Canary Islands. At SOC we have studied previous Canary Islands landslides to understand how they move, and have found good evidence to show that the landslides actually break up and fall into the sea in several stages." "By analogy, if you drop a brick into a bath you get a big splash, but if you break that brick up into several pieces and drop them in one by one, you get several small splashes. Therefore a multi-stage failure would certainly not generate tsunamis capable of damaging the coastlines of southern England or the American east coast, although they may have an impact on nearby Canary Islands." Dr Wynn added, "The mega-tsunami scenario currently being aired in the media is a hypothetical 'worst case', and is largely based upon speculative computer models of landslide motion and tsunami generation. In contrast, our work involves study of actual landslide deposits."

The best scientific models prove there's no Mega Tsunami risk - don't trust their authors; they're fabricating reports for funding and using inaccurate models
TU Delft Report, no date, cites multiple leading organizations on Tidal Waves and leading scientific experts, La Palma Tsunami - The megahyped tidal wave story, http://www.lapalma-tsunami.com/, KHaze

Three scientists say that half of La Palma will fall into the sea and cause a tsunami that will wipe out much of the population of the eastern seaboard of the USA. They are wrong. La Palma will not slide into the sea. Even if it did, it wouldn't cause a tsunami that would reach the USA. Why are they saying it will? Almost certainly to obtain funding for their own research projects. The world's scientific experts have shown the "research" by Ward/Day/McGuire to be incorrect, unproven and wildly exaggerated, both in the Horizon program and subsequent interviews. It is not based on scientific facts. New York has nothing to fear from La Palma. The island is stable. Only a substantial increase in height could cause it to become unstable, and at the current rate of growth that would take at least 10,000 years. This news spoils the fun for the media, disaster-mongers and hazard industry, but fortunately for the supposed potential victims and the people of the Canary Islands there is nothing to worry about on La Palma. The content of the BBC Horizon television program has been disproved by fellow scientists, and even the BBC itself has published a partial retraction under the title "Tidal wave threat 'over-hyped'". Read the facts about the situation here:

FACT: The Tsunami Society rightly accuses Ward/Day/McGuire of scaremongering. The Tsunami Society has issued a press statement to counteract the effect of the scaremongering reports. Their aim is to correct misleading or invalid information released to the public about this hazard. The Tsunami Society states about the Discovery Channel program: "We would like to halt the scaremongering from these unfounded reports." Source: The Tsunami Society

THEY SAY: "The block dropped 4 meters in 1949!" The suggestion that 'the block' of rock (25 km long, 2-3 km deep and 15-20 km wide) suddenly sank by 4 metres in 1949 is an absurd lie. Just one look at the coastal villages of Puerto Naos, Tazacorte, El Remo, Bombilla and Playa Nueva is enough to disprove this ridiculous lie. These villages are all situated just above sea level and would have disappeared under the sea. The block didn't sink, the villages didn't sink. SOME OF THE AREA'S 30,000+ INHABITANTS WOULD HAVE NOTICED. Ward/Day/McGuire suggest that the entire block moved 4 metres vertically in relation to the rest; if that were true it would leave a very clearly visible vertical displacement fault. This evidence just does not exist. The line they suggest for the edge of the block is 40 km long. Along 37.5 km of this line there is NO EVIDENCE of any movement at all. They are lying about 37.5 km of the 40 km line. Or let's measure it by surface area. The surface area of the supposed block (above the sea) is about 135 km², and there is evidence of movement in an area of roughly 0.25 km². What the real scientists reported was a surface fissure 2.5 km long. A fissure is NOT a vertical displacement. The real scientists stated that there was no evidence to indicate that it was anything more than a localised surface phenomenon created by the lava flows nearby. If 'the block' dropped by 4 metres then there must be a surface crack along the 2 sides. These cracks DO NOT EXIST. There is no fault line. The suggestion that the block fell is a deliberate falsehood. They found 'a crack in the paint' but Ward/Day/McGuire claim that 'the whole wall is about to fall down'!!!

THEY SAY: Mountain flank collapses have caused long-distance mega-tsunamis. The three known instances of similar events, Krakatoa, Santorin and Lituya Bay, Alaska, created local damage but a tsunami did not reach any distant shores. Claims that the El Hierro collapse caused problems in the Bahamas are denied by scientists in the Bahamas. Ward/Day/McGuire ignored proven scientific facts and start the Horizon program by stating that the Lituya Bay, Alaska incident caused a mega tsunami. It didn't, and they knew it. Lander and P. Lockridge stated clearly that the wave was confined to a small bay and dissipated quickly in the open waters of the Gulf of Alaska.

DAMNING EVIDENCE: Using scientific computer modelling, the researchers from the Technical University of Delft tried to simulate the collapse of La Palma and an ensuing tsunami. Even using extreme, unrealistic data and ignoring many dampening effects they could not create a significant tsunami. The Technical University of Delft in the Netherlands is a highly respected University and Technical Research Institute. Their evidence leads one to compare the BBC Horizon program and the so-called research by Ward/Day/McGuire with a Monty Python sketch. The only way La Palma is going to fall into the ocean is if the gigantic Monty Python cartoon foot kicks it there!

LIE: The Horizon report suggests that evidence was gathered from the water galleries (tunnels) on La Palma. ALL the La Palma galleries are in the Caldera de Taburiente in the North of the island. The Caldera has a completely different structure and cannot provide valid supporting evidence.

MISREPRESENTATION: The Horizon report shows 4 huge vertical walls of rock within the Cumbre Vieja. One of the graphics clearly shows 4 vertical columns of rock which, it is claimed, would trap water and create the explosion needed to trigger a landslide. The walls of rock appear to be about half a kilometre wide, at least 4 km deep and, according to their own theory, would need to be solid, continuous, 25 km long and run parallel for all of that 25 km. Curiously, none of these huge walls of rock were encountered by the construction workers who built the 2 road tunnels which have been drilled through the Cumbre Vieja. Curiously, none of the 4 walls of rock actually come to the surface ANYWHERE, not even where the island plunges down into the sea. Curiously, the authors' own scientific report on the west side of the island categorically states that nothing is known about the structure under the Cumbre Vieja and suggests that someone investigates it. Yet they expect us to believe their speculation that these huge columns exist. Conclusion? The 4 vertical walls of rock are a fabrication and only exist in the Horizon program.

THEY SAY: La Palma will drop as one massive block into the ocean. All the measurements used to define the size of the possible block that might fall into the sea have been grossly exaggerated. The length, width, depth and speed are all fictional. The physical evidence for the length of the block is 4 km, yet in the model they used 15 to 25 km. The depth is suggested as 2 to 3 km below the surface. The report itself states that there is no evidence for any form of deep fracture. The figure used is fictitious and was obviously chosen because without a large figure the whole La Palma Tsunami theory is exposed as a fake. Width is given as 15 to 20 km. Again there is no hard evidence to support this fiction. The speed of the collapse used in the model is not possible under normal circumstances. An unusual form of natural lubrication would be needed to achieve the speeds used in the model. This natural lubrication is NOT present under La Palma. An immense force would be needed to trigger the movement of the so-called block, and these explosive pressures could not be produced on La Palma.

THEY CALCULATED: Their computer model says there would be a 'disaster-movie-size wave'. The method of calculation used in the mathematical model is the wrong one. It is only valid for long undersea earthquakes. It is incorrect and misleading to claim that it can be applied to landslides. Using the correct calculations there wouldn't have been a scare story.

THEY SAY: La Palma has undergone a previous single total flank collapse. There are piles of debris off the coast of La Palma but there is no evidence that this was caused by a single event. There IS evidence that the debris is the result of a series of small landslides. Ward/Day/McGuire present this lie as a fact but produce NO evidence to support their falsification, which is simply because there is no evidence.

FACT: The 'Natural Hazard' industry is a multimillion pound (euro/dollar) business. Educational and Research institutes, Insurance companies and Hazard warning equipment manufacturers have a large commercial interest in promoting a high level of interest (and investment) in hazard monitoring programs. Promoting scare stories is one method of obtaining funding.

FACT: The research and TV program were funded by Insurance companies. The 'research organisation' which provided the information for the BBC Horizon program is largely funded by their parent organisation. Their parent organisation is an Insurance Company. Insurance sells better when their potential clients get scared. Previous versions of the research organisation's website showed logos and references to the insurance companies who finance the hazard research. These references are no longer shown on the website.
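One way to see how loosely the numbers in this dispute are handled: the block dimensions quoted above imply a mass far larger than the figure a later card uses for the same slide. A quick sketch (editor's arithmetic; the basalt density is an assumed typical value, not a figure from either source):

```python
# Editor's back-of-envelope check on the block dimensions quoted above
# (25 km long, 15-20 km wide, 2-3 km deep).
M3_PER_KM3 = 1e9
DENSITY_KG_M3 = 2900  # assumed typical basalt density

for length_km, width_km, depth_km in [(25, 15, 2), (25, 20, 3)]:
    volume_m3 = length_km * width_km * depth_km * M3_PER_KM3
    mass_tonnes = volume_m3 * DENSITY_KG_M3 / 1000
    print(f"{length_km} x {width_km} x {depth_km} km -> {mass_tonnes:.1e} tonnes")

# Prints roughly 2.2e12 and 4.4e12 tonnes: several times the "500 thousand
# million tons" (5e11) that a later card attributes to the same collapse,
# underlining how inconsistent the quoted inputs are.
```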

Mega-tsunami predictions are empirically denied
Velocity Reviews, 2006, NASA has recently answered to the prediction of a mega tsunami, http://www.velocityreviews.com/forums/t425185-nasa-has-recently-answered-to-the-prediction-of-a-mega-tsunami.html, KHaze

NASA has recently answered with a world press release to the prediction of a mega tsunami, created by a possible impact of a fragment of the comet SW-3 on MAY 25, 2006 in the Atlantic Ocean. This prediction, based on a clear and precise psychic communication, was confirmed by many sources. This information was itself spread out by an international press release, and taken into account by American media, even in Europe and Africa. NASA headquarters therefore spoke directly to the author of the prediction, which is quite strange. In order to spread this vital warning (waves of up to 200 meters), Eric Julien has decided to resign from his position of Director of the Exopolitics Institute to create a website dedicated to saving the lives of the Atlantic coast people, in the East of South America and North America, and the West of Africa and Europe:

A2 Tsunamis - No Impact

No Mega Tsunamis - multiple barriers
Pararas, 2002, George, Science of Tsunami Hazards, Vol 20, No. 5, EVALUATION OF THE THREAT OF MEGA TSUNAMI GENERATION FROM POSTULATED MASSIVE SLOPE FAILURES OF ISLAND VOLCANOES ON LA PALMA, CANARY ISLANDS, AND ON THE ISLAND OF HAWAII, http://www.drgeorgepc.com/TsunamiMegaEvaluation.html, KHaze

Massive flank failures of island stratovolcanoes are extremely rare phenomena and none have occurred within recorded history. Recent numerical modeling studies, forecasting mega tsunami generation from postulated, massive slope failures of Cumbre Vieja in La Palma, Canary Islands, and Kilauea, in Hawaii, have been based on incorrect assumptions of volcanic island slope instability, source dimensions, speed of failure and tsunami coupling mechanisms. Incorrect input parameters and treatment of wave energy propagation and dispersion have led to overestimates of tsunami far field effects. Inappropriate media attention and publicity to such probabilistic results have created unnecessary anxiety that mega tsunamis may be imminent and may devastate densely populated coastlines at locations distant from the source - in both the Atlantic and Pacific Oceans. The present study examines the assumptions and input parameters used by probabilistic numerical models and evaluates the threat of mega tsunami generation from flank failures of island stratovolcanoes. Based on geologic evidence and historic events, it concludes that massive flank collapses of Cumbre Vieja or Kilauea volcanoes are extremely unlikely to occur in the near geologic future. The flanks of these island stratovolcanoes will continue to slip aseismically, as in the past. Sudden slope failures can be expected to occur along faults paralleling rift zones, but these will occur in phases, over a period of time, and not necessarily as single, sudden, large-scale, massive collapses. Most of the failures will occur in the upper flanks of the volcanoes, above and below sea level, rather than at the basal decollement region on the ocean floor. The sudden flank failures of the volcanoes of Mauna Loa and Kilauea in 1868 and 1975 and the resulting earthquakes generated only destructive local tsunamis with insignificant far field effects. Caldera collapses and large slope failures associated with volcanic explosions of Krakatau in 1883 and of Santorin in 1490 B.C. generated catastrophic local tsunamis, but no waves of significance at distant locations. Mega tsunami generation, even from the larger slope failures of island stratovolcanoes, is extremely unlikely to occur. Greater source dimensions and longer wave periods are required to generate tsunamis that can have significant, far field effects. The threat of mega tsunami generation from massive flank failures of island stratovolcanoes has been overstated.

No impact - it will stay confined to the East Coast, and past Mega-Tsunamis prove no extinction
Rokser, 2003, Dan, University of Wisconsin-Madison, Department of Civil and Environmental Engineering, Tsunami Protection Methods, http://homepages.cae.wisc.edu/~chinwu/CEE514_Coastal_Engineering/2003_Students_Web/Dan/Tsunami.html, KHaze

Scattered across the world's oceans are a handful of rare geological time-bombs. Once

unleashed they create an extraordinary phenomenon, a gigantic tidal wave, far bigger than any normal tsunami, able to cross oceans and ravage countries on the other side of the world. Only recently have scientists realized the next episode is likely to begin at the Canary Islands, off North Africa, where a wall of water will one day be created which will race across the entire Atlantic ocean at the speed of a jet airliner to devastate the east coast of the United States. America will have been struck by a mega-tsunami. These mega-tsunamis are caused by large amounts of earth falling into the ocean. Scientists now realize that the greatest danger comes from large volcanic islands, which are particularly prone to these massive landslides. Geologists began to look for evidence of past landslides on the sea bed, and what they saw astonished them. The sea floor around Hawaii, for instance, was covered with the remains of millions of years' worth of ancient landslides, colossal in size. But huge landslides and the mega-tsunamis that they cause are extremely rare - the last one happened 4,000 years ago on the island of Réunion. The growing concern is that the ideal conditions for just such a landslide - and consequent mega-tsunami - now exist on the island of La Palma in the Canaries. In 1949 the southern volcano on the island erupted. During the eruption an enormous crack appeared across one side of the volcano, as the western half slipped a few meters towards the Atlantic before stopping in its tracks. Although the volcano presents no danger while it is quiescent, scientists believe the western flank will give way completely during some future eruption on the summit of the volcano. In other words, any time in the next few thousand years a huge section of southern La Palma, weighing 500 thousand million tons, will fall into the Atlantic ocean. What will happen when the volcano on La Palma collapses? Scientists predict that it will generate a wave that will be almost inconceivably destructive, far bigger than anything ever witnessed in modern times. It will surge across the entire Atlantic in a matter of hours, engulfing the whole US east coast, sweeping away everything in its path up to 20 km inland. Boston would be hit first, followed by New York, then all the way down the coast to Miami and the Caribbean.
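For what it is worth, the "speed of a jet airliner" and "matter of hours" figures above are at least internally plausible: in deep water a tsunami's speed follows the shallow-water relation sqrt(g × depth). A quick sketch (editor's arithmetic; the depth and crossing distance are assumed round numbers, not figures from the card):

```python
import math

# Editor's sanity check on the quoted travel speed and crossing time.
g = 9.81            # m/s^2
depth_m = 4000      # assumed typical Atlantic depth
distance_m = 6.0e6  # assumed ~6,000 km, La Palma to the US east coast

speed_ms = math.sqrt(g * depth_m)  # shallow-water wave speed, ~198 m/s
print(f"speed: ~{speed_ms * 3.6:.0f} km/h (~{speed_ms * 2.237:.0f} mph)")
print(f"crossing time: ~{distance_m / speed_ms / 3600:.1f} hours")
# ~713 km/h (~443 mph) and ~8.4 hours: the travel-time rhetoric is
# dimensionally plausible, whatever one makes of the wave-height claims.
```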

No impact - we can just evacuate areas
Velocity Reviews, 2006, NASA has recently answered to the prediction of a mega tsunami, http://www.velocityreviews.com/forums/t425185-nasa-has-recently-answered-to-the-prediction-of-a-mega-tsunami.html, KHaze

Eric Julien invites everyone to warn family and friends who do not live in a safety area to go to proper places on MAY 25, 2006: inland and/or the heights. This date is Ascension Day, i.e. the day for the people to go up. This date, once converted into the traditional Julian calendar (16 centuries long), is JUNE 6, 2006, i.e. 6/6/6. Julien claims that we still have time to organise the protection of citizens without panic. We must stay calm and face this collective responsibility with dignity. This former military air traffic controller and civilian jet pilot has had the time to develop his sense of responsibility. He argues that the heavy NASA administration, having lost two space shuttles (50% of the fleet) due to a lack of vigilance, does not know all the parameters to claim that there is no danger for Earth, especially when data on cometary trajectories were "curiously changed", as he proved. As a former instructor at French astronaut Patrick Baudry's Space Camp, Julien has recently studied the fragmentation of SW-3 and claims that a big Earth change will happen at the end of MAY.
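Even the 6/6/6 numerology above can be checked directly: between 1900 and 2099 the Julian calendar runs exactly 13 days behind the Gregorian. A quick sketch (editor's arithmetic, not from the card):

```python
from datetime import date, timedelta

# Editor's check on the claimed May 25 -> June 6 calendar conversion.
GREGORIAN_MINUS_JULIAN = timedelta(days=13)  # fixed offset, 1900-2099

print("Julian May 25, 2006 = Gregorian", date(2006, 5, 25) + GREGORIAN_MINUS_JULIAN)
print("Gregorian May 25, 2006 = Julian", date(2006, 5, 25) - GREGORIAN_MINUS_JULIAN)
# Prints June 7 and May 12 respectively; hitting Gregorian June 6 (6/6/6)
# would require Julian May 24, so the date-conversion claim doesn't add up.
```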

Past Mega-tsunamis have caused minimal damage - there's no impact
ETIC, 2011, Earthquake and Tsunami Information Center, Possible Tsunami Dangers in The Atlantic - The Largest Tsunamis at Lisbon and at Puerto Rico, http://earthquaketsunamionline.info/2011/05/possible-tsunami-dangers-in-the-atlantic-the-largest-tsunamis-at-lisbon-and-at-puerto-rico/, KHaze

Islands of volcanic origin, such as the Canaries, have an especially large potential for triggering a tsunami. That the Canaries constitute a danger was shown 300,000 years ago when a part of the island El Hierro slid into the sea, triggering a mega-tsunami which carried rocks as high as a house for many hundreds of metres into the interior of the east coast of what is today the USA. The danger of a similar island collapse is seen by scientists particularly at the island of La Palma in the Canaries. Here, following a volcanic eruption in 1949, almost half of the mountain range of 20 km moved westwards towards the sea, leaving a large tear in the volcanic basalt. In the event of a fresh eruption, a huge part of the volcano could loosen itself due to differences in the types of rock and diverse water deposits within the now active volcano. As a result, the densely populated east coast of America would be massively threatened. According to a computer simulation by Stephen N. Ward and Simon Day, a tsunami (purple-red on the graphics) would rush across the Atlantic if the slopes of the Cumbre Vieja volcano were to collapse into the sea.

Greatest known Tsunamis of the Atlantic region: 11 October 1918, Puerto Rico. On 11 October 1918 an earthquake of 7.5 on the Richter scale occurred in the Mona Passage west of Puerto Rico. The earthquake created a tsunami that grew to 6 metres and caused huge destruction along the western and northern coast of Puerto Rico. The tsunami caused the loss of 116 lives and physical damage amounting to 29 million dollars.
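To put the 1918 event on a scale, the standard Gutenberg-Richter relation log10(E) = 1.5M + 4.8 (E in joules) converts magnitude to radiated energy. A quick sketch (editor's arithmetic; the M 9.1 figure for the 2004 Indian Ocean quake is an assumed round comparison value, not from the card):

```python
# Editor's arithmetic using the standard Gutenberg-Richter energy relation.
def quake_energy_joules(magnitude: float) -> float:
    return 10 ** (1.5 * magnitude + 4.8)

print(f"M 7.5 (Puerto Rico, 1918):       ~{quake_energy_joules(7.5):.1e} J")
print(f"M 9.1 (2004 Indian Ocean, assumed): ~{quake_energy_joules(9.1):.1e} J")
# ~1.1e16 J vs ~2.8e18 J: the deadliest Atlantic tsunami on record came from
# an event releasing roughly 1/250th the energy of the 2004 megathrust quake.
```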

A2 Type II Civilization

Type II civilizations are immortal - they are key to solving any natural disaster, even novas
Kaku 99 (Michio, Professor of Physics at City University of New York, Visions: How Science Will Revolutionize the 21st Century, p. 326-327)

By the time a civilization has reached Type II status, however, it will become immortal, enduring throughout the life of the universe. Nothing known in nature can physically destroy a Type II civilization. A Type II civilization has the ability to fend off scores of astronomical or ecological disasters by means of the power of its technology. Potentially disastrous meteor or comet impacts can be prevented by deflecting away any cosmic debris in space which threatens to hit its planet. On a scale of millennia, ice ages can be averted by modifying the weather - e.g., by controlling the jet stream near its polar caps or perhaps making micro-adjustments to the planet's spin. Because the planet's engines produce large amounts of heat, it requires a highly sophisticated waste management and recycling system. However, with centuries of experience in managing and recycling its wastes, it will not face catastrophes caused by the collapse of its environment. Perhaps the greatest danger faced by a Type II civilization is posed by an eruption of a nearby supernova, whose sudden burst of deadly X-rays could fry nearby planets. But by monitoring its nearby stars, a Type II civilization will have centuries in which to build space arks capable of carrying its peoples to colonies on nearby solar systems if they detect that one of its nearby stars is dying.

Type II civilization is key to solve the big chill, which causes extinction
Kaku 4 (Michio, Professor of Physics at City University of New York, Astrobiology Magazine, How Advanced Could They Be?, April 26,
http://www.astrobio.net/news/modules.php?op=modload&name=News&file=article&sid=939&mode=thread&order=0&thold=0)

There is also the possibility that a Type II or Type III civilization might be able to reach the fabled Planck energy with their machines (10^19 billion electron volts). This energy is a quadrillion times larger than our most powerful atom smasher. This energy, as fantastic as it may seem, is (by definition) within the range of a Type II or III civilization. The Planck energy only occurs at the center of black holes and the instant of the Big Bang. But with recent advances in quantum gravity and superstring theory, there is renewed interest among physicists about energies so vast that quantum effects rip apart the fabric of space and time. Although it is by no means certain that quantum physics allows for stable wormholes, this raises the remote possibility that a sufficiently advanced civilization may be able to move via holes in space, like Alice's Looking Glass. And if these civilizations can successfully navigate through stable wormholes, then attaining a specific impulse of a million seconds is no longer a problem. They merely take a short-cut through the galaxy. This would greatly cut down the transition between a Type II and Type III civilization. Second, the ability to tear holes in space and time may come in handy one day. Astronomers, analyzing light from distant supernovas, have concluded recently that the universe may be accelerating, rather than slowing down. If this is true, there may be an anti-gravity force (perhaps Einstein's cosmological constant) which is counteracting the gravitational attraction of distant galaxies. But this also means that the universe might expand forever in a Big Chill, until temperatures approach near-absolute zero. Several papers have recently laid out what such a dismal universe may look like. It will be a pitiful sight: any civilization which survives will be desperately huddled next to the dying embers of fading neutron stars and black holes. All intelligent life must die when the universe dies.

Timeframe - the transition to a type II civilization would take thousands of years; we aren't even Type I yet
Kaku 99 (Michio, Henry Semat Professorship in Theoretical Physics at CUNY, Ph.D. from the University of California at Berkeley Radiation Laboratory, Visions: How Science will Revolutionize the 21st Century, http://books.google.com/books?
vid=ISBN0192880187&id=VQcCV1VuT_cC&pg=PA326&lpg=PA326&dq=%22type+II+civilization%22&sig=y_8X2c0RBRLiQKZua_Ge610hxXQ&hl=en)

The transition to a type II civilization, which can utilize and manipulate the power of the sun, may take several thousand years, based on the geometric growth of technology. A type II civilization could colonize the solar system and perhaps a few neighboring ones, mine the asteroid belts, and begin to build gigantic machines that can manipulate the greatest energy source in the solar system: the sun. (The energy needs of a type II civilization would be so large that people would have to mine the sun.) The transition to a type III civilization, which can harness the resources of a galaxy, stretches our imagination to the limit. A type III civilization could master forms of technology that can only be dreamed of now, such as interstellar travel. Perhaps the most revealing glimpse at what a type III civilization might be like can be found in Isaac Asimov's Foundation series, which used the entire galaxy as a stage. Given this perspective, which spans hundreds of thousands of years of technological development, we have made rapid progress in grasping the fundamental laws of nature within just three hundred years of Newton's original theory of gravity. It is difficult to conceive how our civilization, with its limited resources, eventually will make the transition to a Type I civilization and then exploit the full potential of the unified field theory. But Newton and Maxwell, in their lifetimes, probably also never realized that civilization would one day have the resources to send spaceships to the moon or to electrify cities with gigantic electrical plants.
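The Type I/II/III language in these cards comes from the Kardashev scale, and "we aren't even Type I yet" is straightforward to quantify with Sagan's interpolation K = (log10(P) - 6) / 10, P in watts. A sketch (editor's arithmetic; the power levels are the conventional order-of-magnitude figures, and present world consumption is an assumed round value, none of it from the card):

```python
import math

# Editor's illustration of the Kardashev scale behind these cards.
TYPE_I_W = 1.7e17    # sunlight intercepted by Earth (conventional figure)
TYPE_II_W = 3.8e26   # total solar luminosity
TYPE_III_W = 1e37    # rough output of a galaxy
HUMANITY_W = 2e13    # approximate present world power consumption (assumed)

# Sagan's interpolation formula: K = (log10(P) - 6) / 10, P in watts.
k_now = (math.log10(HUMANITY_W) - 6) / 10
print(f"current civilization: ~Type {k_now:.2f}")             # ~0.73
print(f"shortfall to Type I:  x{TYPE_I_W / HUMANITY_W:.0e}")  # ~1e4
print(f"shortfall to Type II: x{TYPE_II_W / HUMANITY_W:.0e}") # ~2e13
```

On these conventional figures humanity sits near Type 0.7, four orders of magnitude in power below Type I, which is why Kaku frames the Type II transition in thousands of years.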

A2 Vacuum Bubbles
No risk of vacuum bubbles - would've occurred elsewhere in the universe due to cosmic-ray collisions already
Ellis et al 08 (12/11/08, John Ellis, Gian Giudice, Michelangelo Mangano, Igor Tkachev and Urs Wiedemann, LHC Safety Assessment Group, Theory Division, Physics Department, CERN, Review of the Safety of LHC Collisions, http://lsag.web.cern.ch/lsag/LSAG-Report.pdf) HL

These large rates for the collisions of cosmic rays at energies higher than the LHC imply directly that there can be no danger to the Earth from the production of bubbles of new vacuum or magnetic monopoles at the LHC [1]. It has often been suggested that the Universe might not be absolutely stable, because the state that we call the vacuum might not be the lowest-energy state. In this case, our vacuum would eventually decay into such a lower-energy state. Since this has not happened, the lifetime before any such decay must be longer than the age of the Universe. The possible concern about high-energy particle collisions is that they might stimulate the production of small bubbles of such a lower-energy state, which would then expand and destroy not just the Earth, but potentially the entire Universe. However, if LHC collisions could produce vacuum bubbles, so also could cosmic-ray collisions. This possibility was first studied in [7], and the conclusions drawn there were reiterated in [8]. These bubbles of new vacuum would have expanded to consume large parts of the visible Universe several billion years ago already. The continued existence of the Universe means that such vacuum bubbles are not produced in cosmic-ray collisions, and hence the LHC will also not produce any vacuum bubbles.

Cosmic ray collisions disprove
CERN 08 (2008, European Organization for Nuclear Research, The safety of the LHC, http://public.web.cern.ch/Public/en/LHC/Safety-en.html) HL

There have been speculations that the Universe is not in its most stable configuration, and that perturbations caused by the LHC could tip it into a more stable state, called a vacuum bubble, in which we could not exist. If the LHC could do this, then so could cosmic-ray collisions.

Since such vacuum bubbles have not been produced anywhere in the visible Universe, they will not be made by the LHC.

No impact - energy densities are too low and history disproves
Jaffe et al 2k (October 2000, Robert Jaffe, W. Busza, J. Sandweiss, and F. Wilczek, Reviews of Modern Physics, Vol. 72, No. 4, "Review of Speculative 'Disaster Scenarios' at RHIC," http://arxiv.org/PS_cache/hep-ph/pdf/9910/9910333v3.pdf) HL

We know that our world is already in the correct (stable) vacuum for QCD. Our knowledge of fundamental interactions at higher energies, and in particular of the interactions responsible for electroweak symmetry breaking, is much less complete. While theory strongly suggests that any possibility for triggering vacuum instability requires substantially larger energy densities than RHIC will provide, it is difficult to give a compelling, unequivocal bound based on theoretical considerations alone. Fortunately in this case we do not have to rely solely on theory; there is ample empirical evidence based on cosmic ray data. Cosmic rays have been colliding throughout the history of the universe, and if such a transition were possible it would have been triggered long ago. Motivated by the RHIC proposal, in 1983 Hut and Rees [4] calculated the total number of collisions of various types that have occurred in our past light-cone whose effects we would have experienced. Even though cosmic ray collisions of heavy ions at RHIC energies are relatively rare, Hut and Rees found approximately 10^47 comparable collisions have occurred in our past light cone. Experimenters expect about 2×10^11 heavy ion collisions in the lifetime of RHIC. Thus on empirical grounds alone, the probability of a vacuum transition at RHIC is bounded by 2×10^-36. We can rest assured that RHIC will not drive a transition from our vacuum to another. We review and update the arguments of Hut and Rees in Section IV after introducing the necessary cosmic ray data in Section II.
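The bound in this card is a one-line trial-counting argument: if roughly 10^47 comparable natural collisions produced no vacuum decay, RHIC's 2 × 10^11 collisions carry a total risk of at most the ratio of the two. Restated as a sketch (editor's arithmetic, using only the card's own numbers):

```python
# Editor's restatement of the card's trial-counting bound.
n_cosmic_trials = 1e47  # comparable collisions in our past light cone (Hut & Rees)
n_rhic_trials = 2e11    # heavy-ion collisions expected over RHIC's lifetime

# No vacuum decay in 1e47 natural trials caps the per-collision probability
# at roughly 1/1e47, so RHIC's cumulative risk is bounded by the trial ratio.
p_bound = n_rhic_trials / n_cosmic_trials
print(f"P(vacuum transition at RHIC) <= {p_bound:.0e}")  # 2e-36
```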

***Misc***

Red Giant Yes Extinction


1. Death by a Red Giant is a mathematical certainty - their evidence is hopelessly optimistic
Cain 08 [Fraser Cain is the editor of Universe Today and host of the Astronomy Cast podcast with Dr. Pamela L. Gay. He studied engineering at the University of British Columbia, Will Earth Survive When the Sun Becomes a Red Giant?, January 31st, 2008, http://www.universetoday.com/12648/will-earth-survive-when-the-sun-becomes-a-red-giant/, Chetan]

Billions of years in the future, when

our Sun bloats up into a red giant, it will expand to consume the Earth's orbit. But wait, you say, the Earth travels the Earth's orbit... what's going to happen to our beloved planet? Will it be gobbled up like poor Mercury and Venus? Astronomers have been puzzling this question for decades. When the sun becomes a red giant, the simple calculation would put its equator out past Mars. All of the inner planets would be consumed. However, as the Sun reaches this late stage in its stellar evolution, it loses a tremendous amount of mass through powerful stellar winds. As it grows, it loses mass, causing the planets to spiral outwards. So the question is, will the expanding Sun overtake the planets spiraling outwards, or will Earth (and maybe even Venus) escape its grasp? K.-P. Schröder and Robert Cannon Smith are two researchers trying to get to the bottom of this question. They've run the calculations with the most current models of stellar evolution, and published a research paper entitled "Distant Future of the Sun and Earth Revisited". It has been accepted for publication in the Monthly Notices of the Royal Astronomical Society. According to Schröder and Smith, when the Sun becomes a red giant star in 7.59 billion years, it will start to lose mass quickly. By the time it reaches its largest radius, 256 times its current size, it will be down to only 67% of its current mass. When the Sun does begin to bloat up, it will go quickly, sweeping through the inner Solar System in just 5 million years. It will then enter its relatively brief (130 million year) helium-burning phase. It will expand past the orbit of Mercury, and then Venus. By the time it approaches the Earth, it will be losing 4.9 × 10^20 tonnes of mass every year (8% the mass of the Earth). But the habitable zone will be gone much sooner. Astronomers estimate that it will expand past the Earth's orbit in just a billion years. The heating Sun will evaporate the Earth's oceans away, and then solar radiation will blast away the hydrogen from the water. The Earth will never have oceans again. It will eventually become molten again. One interesting side benefit for the Solar System: even though the Earth, at a mere 1.5 astronomical units, will no longer be within the Sun's habitable zone, much of the Solar System will be. The new habitable zone will stretch from 49.4 AU to 71.4 AU, well into the Kuiper Belt. The formerly icy worlds will melt, and liquid water will be present beyond the orbit of Pluto. Perhaps Eris will be the new homeworld. Back to the question: will the Earth survive? According to Schröder and

Smith, the answer is no. Even though the Earth could expand to an orbit 50% larger than today's orbit, it won't get the chance. The expanding Sun will engulf the Earth just before it reaches the tip of the red giant phase. And the Sun would still have another 0.25 AU and 500,000 years to grow. Once inside the Sun's atmosphere, the Earth will collide with particles of gas. Its orbit will decay, and it will spiral inward.

2. Several studies conclusively determine that the Earth will be vaporized
Appell 8 [David, The Sun Will Eventually Engulf Earth - Maybe, September 8th, 2008, http://www.scientificamerican.com/article.cfm?id=the-sunwill-eventually-engulf-earth-maybe, Chetan]

Earlier this year two

teams reported different kinds of calculations indicating that Earth will be swallowed up by the sun. In a calculation that would thrill any college junior studying classical mechanics, Lorenzo Iorio of Italy's National Institute of Nuclear Physics used perturbation theory. It simplifies analyses by dropping relatively small factors, thereby making complex equations of motion that describe the interactions between the sun and Earth mathematically manageable. Assuming that the sun's yearly mass loss (currently about one part in 100 trillion) remains small for the duration of its evolution to the red giant phase, Iorio calculates that Earth will move outward at about three millimeters a year, or only 0.0002 AU by the sun's red giant phase. But at that point the sun will balloon up, in only a million years, to 1.2 AU in radius, thus vaporizing Earth. Iorio's paper, submitted to Astrophysics and Space Science, has not yet been peer-reviewed. Several scientists question whether quantities that Iorio assumes are small will indeed remain small throughout the sun's evolution. Even if Iorio got his number crunching wrong, he may have the right answer. In an analysis published in the May Monthly Notices of the Royal Astronomical Society, Klaus-Peter Schröder of the University of Guanajuato in Mexico and Robert Smith of the University of Sussex in England also conclude that Earth is doomed, by using more exact solar models and by considering tidal interactions. As the sun loses mass and expands, its rotation rate must also slow down; physics students learn this relation as the conservation of angular momentum. The slowed rotation causes a tidal bulge on the sun's surface. The gravity exerted by this bulge pulls Earth inward. With such a consideration, the researchers find that any planet with a present-day orbital radius of less than 1.15 AU will ultimately perish.
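Two of the numbers in the cards above can be sanity-checked with first-year mechanics. For slow ("adiabatic") solar mass loss, conserving the planet's orbital angular momentum gives a ∝ 1/M, which reproduces the "orbit 50% larger" figure from the 67% mass figure; and Iorio's 3 mm/yr drift integrates to his 0.0002 AU. A sketch (editor's arithmetic; the tidal drag that Schröder and Smith show dooms the Earth anyway is deliberately left out):

```python
AU_M = 1.496e11  # metres per astronomical unit

# Check 1: adiabatic mass loss. Angular momentum conservation gives orbital
# radius a ~ 1/M_sun, so a Sun at 67% of its mass pushes Earth out to:
print(f"orbit ignoring tides: ~{1.0 / 0.67:.2f} AU")
# ~1.49 AU, the "50% larger" orbit -- still within reach of a Sun
# swelling past 1.2 AU.

# Check 2: Iorio's present drift of ~3 mm/yr, naively integrated over the
# 7.59 billion years to the red-giant tip:
drift_au = 3e-3 * 7.59e9 / AU_M
print(f"cumulative drift: ~{drift_au:.4f} AU")  # ~0.0002 AU, matching the card
```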

Space --> Alien Plague

Space travel increases risk of outer-space disease - staying on Earth provides protection and immune system efficiency
Than, 2009, Ker, National Geographic, Mutant Diseases May Cripple Missions to Mars, Beyond, http://news.nationalgeographic.com/news/2009/11/091104-space-diseases-mutants-mars.html, KHaze

Mutant hitchhikers may become a major hurdle in the quest to send humans deeper into the galaxy, scientists say. That's because no matter how fit astronauts feel at liftoff, they're likely to be carrying disease-causing microbes such as toxic E. coli and Staphylococcus strains. Charged particles zipping through space, known as cosmic rays, can mutate the otherwise manageable microbes, spurring the bugs to reproduce quicker and become more virulent, recent studies show. At the same time, exposure to cosmic rays and the stresses of long-term weightlessness can dampen the human immune system, encouraging diseases to take hold. Aboard spaceships without advanced medical care, illness could cripple human missions to Mars and beyond, according to a new report published this month in the Journal of Leukocyte Biology. "What is the interest of having people on Mars if they cannot efficiently perform the analyses and studies scheduled during their mission?" said study co-author Jean-Pol Frippiat, an immunologist at Nancy University in France.

Cells Change in Zero G

For the new report, Frippiat and colleagues analyzed more than 150 studies of the effects of space flight on humans, animals, and pathogens. On Earth humans are protected from the effects of cosmic rays, because most of the particles are deflected by the planet's magnetic field. Out in space, however, such protections vanish, and cosmic radiation can cause mutations when it strikes the DNA inside cells. The absence of gravity can also be detrimental to human health, because weightlessness allows structures to shift around within cells. The immune system is particularly vulnerable, since it relies on cell-to-cell interactions for ridding the body of harmful pathogens. One study, for instance, found that astronauts who had recently returned from space had white blood cells that were less effective at seeking out and destroying E. coli bacteria. Left untreated, E. coli can cause severe cramps, vomiting, and diarrhea as well as kidney and blood-cell damage that can lead to fatal complications.

Zero-gravity space travel makes common diseases impossible to stop
DiGregorio, 2008, Barry E., Deadly Microbes From Outer Space, http://discovermagazine.com/2008/feb/deadly-microbes-from-outer-space,
KHaze

For astronauts toiling in the close quarters of the International Space Station or on a shuttle to Mars, an ordinary germ would be risky enough. But a recent experiment published in the Proceedings of the National Academy of Sciences has shown that a microbe can turn even more dangerous in space than on Earth. In that study, a bacterium particularly nasty for humans, salmonella, was shown to become more virulent after just 83 hours of growing in space. The experiment on the space
shuttle Atlantis was designed to explore how a lack of gravity affects disease-causing microbes in space. Astronauts aboard the space shuttle grew the salmonella, and back on Earth researchers used it to infect a group of mice. For comparison, bacteria grown in a laboratory on Earth in normal gravity infected another group of mice. The mice infected with the space-grown germs had a mortality rate almost three times higher than that of mice given germs grown in normal gravity. Researchers noticed that while on board the space shuttle, the salmonella encased themselves in a biofilm, a protective coating that is notoriously resistant to antibiotics. Several follow-up experiments on space shuttle flights over the next few years will look to see whether other bacteria undergo similar changes in virulence in microgravity.

Space travel and exploration lead to decreased immunity through the stresses of spaceflight
NASA 2004 (Dolores Beasley and William Jeffs, Release 04-320, Study Suggests Spaceflight May Decrease Human Immunity, September 29, http://www.nasa.gov/home/hqnews/2004/sep/HQ_04320_immunity.html)

A NASA-funded study has found the human body's ability to fight off disease may be decreased by spaceflight. The effect may even linger after an astronaut's return to Earth following long flights. In addition to the conditions experienced by astronauts in flight, the stresses experienced before launch and after landing also may contribute to a decrease in immunity. Results of the study were recently published in "Brain, Behavior, and Immunity." The results may help researchers better understand the effects of spaceflight on the human immune response. They may also provide new insights to ensure the health, safety and performance of International Space Station crewmembers and future spacefarers on extended missions. "Astronauts live and work in a relatively crowded and stressful environment," said Duane Pierson, the study's principal investigator and NASA Senior Microbiologist at Johnson Space Center, Houston. "Stresses integral to spaceflight can adversely affect astronaut health by impairing the human immune response. Our study suggests these effects may increase as mission duration and mission activity demands increase," he added. The white blood cell count provides a clue to the presence of illness. The five main types of white cells work together to protect the body by fighting infection and attacking foreign material. The most prevalent white blood cells are called neutrophils. From 1999 to 2002, scientists from NASA, Enterprise Advisory Services, Inc., of Houston, and the Boston University School of Medicine compared neutrophil functions in 25 astronauts. They made comparisons after five-day Space Shuttle missions and after nine- to 11-day missions. Researchers found the number of neutrophils increased by 85 percent at landing compared to preflight levels. Healthy ground control subjects, who did not fly, exhibited no more than a two percent increase. Researchers also discovered functions performed by these cells, specifically ingestion and destruction of microorganisms, are affected by factors associated with spaceflight. The effect becomes more pronounced during longer missions. The increase in astronaut neutrophil numbers resulted in a corresponding increase (more than 50 percent) in total white blood cell counts at landing. The increase is a consistent consequence of stress. Pierson emphasized that "no astronauts in the study became ill; however, longer exploration missions may result in clinical manifestations of decreased immune response." Researchers concluded that the general effect of spaceflight and pre- and post-flight-related stress decreases the ability of crewmembers' neutrophils to destroy microbial invaders. This finding suggests crewmembers returning from longer missions may be briefly more susceptible to infections than before launch, because these cells are not as efficient in ingesting and destroying infectious agents.

Space travel increases risk for death from diseases
Sastry, assistant professor of experimental veterinary pathology, 2001 (Dr. Jagannadha K., Texas Medical Center News (Ronda Wendler), "Studies
on Cell-Mediated Immunity Against Immune Disorders, http://www.tmc.edu/tmcnews/10_15_01/page_02.html)

Space travel can cause reduced immunity, which leads to increased risk for infections. Immunodeficiency is also the basis for several cancers and AIDS. This project applied the ground-based microgravity technology developed by NASA to help understand immune disorders such as cancer and AIDS. This line of study may eventually help in the design of treatments and vaccines for these conditions.
