How to prevent
existential risks
From full list to complete prevention plan
(Draft)
Contents
Preface
Universal catastrophes
Geological catastrophes
Eruptions of supervolcanoes
Asteroid impacts
Asteroid threats in the context of technological development
Damage zone as a function of explosion force
Solar flares and luminosity increase
Gamma-ray bursts
Supernovae
Super-tsunami
Super-earthquake
Polarity reversal of the Earth's magnetic field
Emergence of new diseases in nature
Debunked and false risks from media, science fiction, fringe science and old theories
Block 2 Anthropogenic risks
Chapter 6. Global warming
Chapter 7. Anthropogenic risks not connected with new technologies
Chapter 8. Artificial Triggering of Natural Catastrophes
Chapter 9. Nuclear Weapons
Latest developments:
Block 4 Super technologies. Nanotech and Biotech.
Design features
Chapter 14: Nanotechnology and Robotics
Cryptowar
Chapter 22. Factors influencing the speed of progress
The problem
The context
In fact, we don't have a good plan
Overview of the map
The procedure for implementing the plans
The probability of success of the plans
Steps
Plan A. Prevent the catastrophe
Plan A1. Super UN or international control system
A1.1 Step 1: Research
B1. Preparation
B2. Buildings
Natural refuges
B3. Readiness
B4. Miniaturization for survival and invincibility
Literature:
Active shields
Existing and future shields
Conscious stopping of technological progress
Means of preventive strike
Removal of sources of risk to a considerable distance from the Earth
Creation of independent settlements in remote corners of the Earth
Creation of a dossier on global risks and growth of public understanding of the related problems
Refuges and bunkers
Quick spreading in space
"Everything will somehow sort itself out"
Degradation of civilization to the level of a steady condition
Prevention of one catastrophe by means of another
Advance evolution of man
Possible role of international organizations in prevention of global catastrophe
Infinity of the Universe and the question of irreversibility of human extinction
Assumptions that we live in the "Matrix"
Global catastrophes and society organization
Global catastrophes and the current situation in the world
The world after global catastrophe
The world without global catastrophe: the best realistic variant of prevention of global catastrophes
Chapter. Meta-biases
Chapter 4. Universal logical errors that can appear in reasoning about global risks
Preface
Existential risk – One where an adverse outcome would either
annihilate Earth-originating intelligent life or permanently and
drastically curtail its potential.
N. Bostrom. Existential Risks: Analyzing Human
Extinction Scenarios and Related Hazards
unduly radical. However, when discussing them we were guided by the precautionary principle, which directs us to consider the worst realistic scenarios as a matter of caution. The point is not to fantasize about doom for its own sake, but to give the worst scenarios the attention they deserve from a straightforward utilitarian moral perspective.
In this volume you will find the monograph Risks of Human Extinction. The monograph consists of two parts: research on concrete threats, and the methodology of the analysis. The analysis of concrete threats in the first part consists of a detailed list with references to sources and critical analysis. Then the systemic effects of the interaction of different risks are investigated, and probability estimates are assessed. Finally, we suggest a roadmap for the prevention of existential risks.
The methodology offered in the second part consists of a critical analysis of the ability of humans to intelligently estimate the probability of global risks. It may be used, with little change, in other systematic attempts to assess the probability of uncertain future events.
Though only a few books with general reviews of the problem of global risks have been published in the world, a certain tradition has already formed. It consists of discussion of methodology, classification of possible risks, estimates of their probability, ways of ameliorating those risks, and then a review of further philosophical issues related to global risks, such as the Doomsday argument, which will be introduced shortly. The current main books on global catastrophic risks are: J. Leslie's The End of the World: The Science and Ethics of Human Extinction, 1996; Sir Martin Rees' Our Final Hour, 2003; Richard Posner's Catastrophe: Risk and Response, 2004; and the volume edited by Nick Bostrom, Global Catastrophic Risks, 2008.
This book differs considerably from previous books on the topic of global risks. First of all, we review a broader set of risks than prior works. For example, an article by Eliezer Yudkowsky lists ten cognitive biases affecting our judgment of global risks; in our book, we address more than 100 such biases. In the section devoted to the classification of risks, we mention some risks which are entirely missing from previous works. I have aspired to create a systematic view of the problem of global risks, which allows us not only to list various risks and understand them, but also to see how different risks, influencing one another, form an interlocked structure.
I will use the terms global catastrophe, x-risk, existential risk and human extinction as synonyms designating the total and irreversible die-off of Homo sapiens.
The main conclusion of the book is that the chance of human extinction in the XXI century is around 50 per cent, but it could be lowered by an order of magnitude if all the needed actions are taken.
All information used in the analysis is taken from the open sources listed in the bibliography.
Robin Hanson has written a lot about two modes of thinking that people usually display: near mode and far mode. People have very different attitudes toward things that are happening now and things that may happen in the distant future. For example, if there is a fire in the house, everyone will try to escape; but if the question under discussion is whether humanity should live forever, many nice people will say that they are OK with human extinction.
And even inside the discussion of x-risks we can easily distinguish two approaches. There are two main opinions about timing: either the catastrophe will happen in the next 15-30 years, or in the next couple of centuries.
If we take into account the many predictions of continuing exponential or even hyperbolic development of new technologies, we should conclude that superhuman AI and the ability to create super-deadly biological viruses will arrive between 2030 (Vinge) and 2045 (Kurzweil). We write this text in 2014, so that is just 15-30 years from now. Likewise, predictions about runaway global warming, limits of growth, peak oil and some versions of the Doomsday argument all center around the years 2030-2050.
If it is 2030, or even earlier, not much can be done to prepare. Later dates leave more room for possible action. But the main problem is not that the risks are so large; it is that society, governments and research communities are completely unprepared and unwilling to deal with them.
But if we take a one-hundred-year risk timeframe, we (as authors) gain some advantages. We signal that we are more respectable and conservative. It will almost never be proved during our lifetime that we are wrong. We have more chances to be right simply because we have a larger timeframe. We have plenty of time to implement some defense measures, or in fact to assume that such measures will be implemented in the remote future (they will not). We may also think that we are correcting an overoptimistic bias: it is well known that predictions about AI used to be overoptimistic.
The difference between the two timeframe predictions is like the difference between two predictions about the future of a man: one claims that he will die within the next 50 years, the other that he will have cancer within the next 5 years. The first brings almost no new information; the second is an urgent message, which could be false and carry high costs. But urgent messages also tend to attract more attention, which could bias a sensationalist author. More scientific authors tend to be more careful and try to distinguish themselves from sensationalists, and so give predictions over longer time periods.
A good example here is E. Yudkowsky, who in 2001 claimed that super-exponential growth with ever-shorter doubling periods was possible, with super AI by 2005. After this prediction failed, he and his community LessWrong became biased toward an estimate of around 100 years until super AI.
So the question of whether technologies will continue their exponential growth is equivalent to the question of the time scale of global catastrophe. If they do continue to grow exponentially, then the global catastrophe will either happen in the nearest decades or be permanently prevented.
Let us take a closer look at both scenarios. Arguments for the decades scenario:
1. Because of NBI convergence, advanced nanotech, biotech and AI will appear almost simultaneously.
2. New technologies will grow exponentially, with a doubling time of approximately 2 years, and their risks will grow with similar or even greater speed.
3. They will interact with each other, as any smaller catastrophe could lead to a bigger one. For example, global warming will lead to a fight for resources and a nuclear war, and this nuclear war will result in the release of dangerous biological weapons. This may be called oscillations near the Singularity.
4. Several possible triggers of x-risks could happen in the near future: world war and especially a new arms race, peak oil, and runaway global warming.
There are several compelling arguments for the centuries scenario:
1. Most predictions about AI were premature. The majority of Doomsday prophecies have also proven to be false.
2. Exponential growth will level off. Moore's law may come to an end in the near future.
3. The most likely x-risks could be caused by an independent accidental event of unknown origin, and not by a complex interaction of known things.
are a new problem, and no such adaptation has happened. More about cognition of global risks is said in the second part of the book.
The difference between a global catastrophe and existential risk
Any global catastrophe will affect the entire surface of the Earth and all of its inhabitants, though not all of them will perish. From the viewpoint of the personal history of any individual there is no big difference between a global catastrophe and total extinction: in both cases, he will most likely die. But from the viewpoint of human civilization the difference is enormous: it will either end forever or simply transform and take a new path.
Bostrom suggested expanding the term existential risk to include events that threaten human civilization with irreversible damage, such as a half-hostile artificial intelligence that evolves in a direction completely opposite to the current values of humanity, or a worldwide totalitarian government that forever stops progress.
But perpetual worldwide totalitarianism is impossible: it will either lead to the extinction of civilization,
maybe in several million years, or smoothly evolve into a new form.
However, it is possible to include many other things under the category of existential risks: we have indeed lost many things over the course of history, such as dead languages, extinct artistic styles, and ancient philosophy.
The real dichotomy lies between complete extinction and a mere global catastrophe. Extinction means the complete destruction of mankind and the cessation of history. A global catastrophe could destroy 90% of the population, yet only slow down the course of history by 100 years.
The difference, rather, lies in the value of the continuation of human history and the value of future generations, which for most people is extremely speculative. This is probably one of the reasons for ignoring the risks of complete extinction.
either the direct cause of human extinction, or a factor which opens a window of vulnerability for subsequent catastrophes. For example, after a pandemic, civilization would be more vulnerable to the next pandemic or famine, or unable to prevent a collision with an asteroid.
In 2007 Robin Hanson published the article Catastrophe, Social Collapse, and Human Extinction, in which he used a simple mathematical model to estimate how variance in resilience would change the probability of extinction in the aftermath of a non-total catastrophe. He wrote: "For many types of disasters, severity seems to follow a power law distribution. For some types, such as wars and earthquakes, most of the expected harm is predicted to occur in extreme events, which kill most people on Earth. So if we are willing to worry about any war or earthquake, we should worry especially about extreme versions. If individuals varied little in their resistance to such disruptions, events a little stronger than extreme ones would eliminate humanity, and our only hope would be to prevent such events. If individuals vary a lot in their resistance, however, then it may pay to increase the variance in such resistance, such as by creating special sanctuaries from which the few remaining humans could rebuild society."
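The logic of this passage can be made concrete with a minimal Monte Carlo sketch. All numbers here are illustrative assumptions, not taken from Hanson's paper: disaster severity is drawn from a Pareto (power-law) distribution, and individual resistance is spread log-normally.

```python
import random

def survival_chance(n_people=1000, sigma=0.5, alpha=1.0, trials=2000):
    """Chance that at least one person outlives a power-law disaster.

    sigma: spread of individual resistance (log-normal around 1).
    alpha: power-law exponent of disaster severity.
    All parameters are illustrative assumptions.
    """
    survived = 0
    for _ in range(trials):
        severity = random.paretovariate(alpha)  # power-law disaster size, >= 1
        # the toughest individual in the population
        toughest = max(random.lognormvariate(0.0, sigma) for _ in range(n_people))
        if toughest >= severity:
            survived += 1
    return survived / trials

# Higher variance in resistance -> better odds that someone survives.
for sigma in (0.1, 0.5, 1.0, 2.0):
    print(f"resistance spread {sigma}: survival chance {survival_chance(sigma=sigma):.2f}")
```

As the spread grows, the toughest individuals land ever further into the tail of the resistance distribution, which is exactly why Hanson argues that sanctuaries (deliberately added variance) can pay off.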
Depending on the size of the catastrophe there can be various degrees of damage, which can be characterized by different probabilities of subsequent extinction and of further recovery. It is possible to imagine several semi-stable levels of degradation:
1. New Middle Ages: Destruction of the social system, similar to the downfall of the Roman Empire but on a global scale. It means the termination of the development of technologies, a reduction of connectivity, and a population fall of several per cent; however, some essential technologies continue to develop successfully, just as some kinds of agriculture continued to develop in the early Middle Ages. Technological development will continue, and the manufacture and use of dangerous weaponry will also continue, which could lead to total extinction or to degradation to an even lower level during the next war. The degradation of Easter Island during internal war is a clear example. But a return to the previous level is rather probable.
2. Postapocalyptic world: Considerable degradation of the economy, loss of statehood and the disintegration of society into small fighting kingdoms. To some extent, present-day Somalia could serve as an example. The basic form of activity is robbery or digging in ruins. This is probably what a society after a large-scale nuclear war would look like. The population is reduced many times over, but, nevertheless, millions of people survive. Reproduction of technologies stops, but separate carriers of knowledge and libraries remain. Such a world could be united in the hands of one governor, and a revival of the state would begin. Further degradation could occur as a result of epidemics, pollution of the environment, excess weaponry stored from the previous epoch, etc.
3. Survivors: A catastrophe in which only separate small groups of people survive, not connected to each other: polar explorers, crews of ships at sea, inhabitants of bunkers. On the one hand, small groups appear to be in a more favorable position than in the previous case, as there is no struggle between groups of people. On the other hand, the forces which led to a catastrophe of such scale are very great, most likely continue to operate, and limit the freedom of movement of people from the surviving groups. These groups will be compelled to struggle for their lives. The restoration period under the most favorable circumstances will take hundreds of years and will involve a change of generations, with loss of knowledge and skills. The ability to continue sexual reproduction will be the basis of survival of such groups. Hanson estimated that a group of around 100 healthy individuals may survive; this does not account for the injured, the elderly, and parentless young children who could survive the initial catastrophe but would not contribute to the future survival of the group.
4. Last man on Earth: Only a few people survive on the Earth, but they are capable neither of keeping knowledge nor of giving rise to a new mankind. In this case people are, most likely, doomed.
5. Bunker people: It is also possible to designate a "bunker" level, that is, a level at which only those people survive who are outside the usual environment. Groups of people would survive in certain closed spaces, whether accidentally or by design. A conscious transition to the bunker level is possible even without loss of quality, if mankind keeps the ability to further develop technologies.
Intermediate scenarios of the post-apocalyptic world are also possible, but I believe that the listed variants are the most typical. Each step down means bigger chances to fall even lower and smaller chances to rise. On the other hand, a long-term island of stability is possible at the level of separate tribal communities, once dangerous technologies have already collapsed, the dangerous consequences of their application have disappeared, and new technologies are not yet created and cannot be created, much as different species of Homo lived in tribes for millions of years before the Neolithic. But that state rested on a lower level of intelligence as a stability factor, and on weaker Darwinian pressure to increase intelligence again.
It is thus incorrect to think that degradation is simply a rewind of historical time by a century or a millennium into the past, for example to the level of a society of the XIX or XV century. Degradation of technologies will not be linear and simultaneous. For example, it will be difficult to forget such a thing as the Kalashnikov rifle; in Afghanistan, locals have learned to make rough copies of Kalashnikovs. But in a society where there are automatic weapons, knightly tournaments and horse armies are impossible. What was a stable equilibrium on the way from the past to the future may not be an equilibrium condition on the path of degradation. In other words, if technologies of destruction degrade more slowly than technologies of creation, society is doomed to a continuous slide downwards.
However, we can classify the degree of degradation not by the number of victims, but by the degree of loss of knowledge and technologies. In this sense it is possible to use historical analogies, understanding, however, that the loss of technologies will not be linear. Maintenance of social stability at a lower level of evolution demands a smaller number of people, and such a level is steadier against both progress and decline. Such communities can arise only after a long period of stabilization following the catastrophe.
As to "chronology", following base variants of regress in the past (partly similar to the
previous classification) are possible:
1. Industrial production level: railways, coal, a firearms, etc. Level of self-maintenance
demands, possibly, tens millions humans. In this case it is possible to expect preservation of all basic
knowledge and skills of an industrial society, at least by means of books.
2. Level, sufficient for agriculture maintenance. It demands, probably, from thousand to
millions of people.
3. Level of small group like a hunter-gathers society.
We can also measure smaller catastrophes by the time by which they delay technical progress, and by the probability that humanity will recover at all.
Technological Level of Catastrophe and "Periodic System" of possible disasters
Possible disasters can be classified in terms of the contribution of humans, and then of technology, to their causes. These disasters can also be referred to the period of history in which they are most typical. One can also give a total estimate of the probability of each type of disaster for the 21st century.
It turns out that the more technological a disaster is, the higher its probability. In addition, disasters can be classified according to their possible causes, in the sense of how they cause the death of people (explosion, replication, poisoning or infectious disease). On this basis I made a sort of "periodic system" outlining the possible global risks: a large map, which is on the site immortality-roadmap.com, along with a map of the ways to prevent global risks and a map of how to achieve immortality.
1) Natural. These are the disasters which have nothing to do with human activity and can occur on their own. They include falling asteroids, supernova explosions, and so on. Their likelihood is relatively small, on the scale of tens of millions of years, but perhaps we seriously underestimate them because of the effects of observational selection.
2) Anthropogenic. These are natural processes triggered by human activities. The first examples are global warming and resource depletion. There are also more exotic options, such as the artificial awakening of natural processes using atomic bombs. The main feature is that human action awakens a natural course of events.
3) Processes at the level of existing technologies. These are primarily concerned with atomic weapons, as well as conventional biological weapons made from already existing agents: influenza, smallpox, anthrax.
4) Expected breakthrough technologies. These are, first of all, nanotech and biotech, i.e. the creation of microscopic robots, and synthetic biology with the creation of entirely new biological organisms through genetic engineering. They can be called super-technologies, because in the end they will give power over dead and living matter.
5) Artificial intelligence of superhuman level, as the ultimate technology. Although it may be created in about the same period as nanotech, due to its potential ability to self-upgrade it will be able to transcend or create any other technology.
6) Space and hypothetical risks that we will face in the distant future with the development of the universe.
Various possible disasters differ in their destructive power. Some can be withstood relatively easily; resisting others is practically impossible. It is more difficult to resist disasters that have greater speed, more power, more penetrating force and, most importantly, an agent with greater intelligence behind them. Disasters that are more likely and more sudden are also harder to predict, which makes it difficult to prepare for them.
Therefore, man-made disasters are more dangerous than natural ones, and the more technological they are, the more dangerous they become. At the top of the pyramid of disasters are the super-technology disasters, and their king is hostile artificial intelligence. The catastrophe at the top of the pyramid of disasters is both the most likely and the most destructive.
Collapse
In the paper Double Catastrophe (http://sethbaum.com/ac/2013_DoubleCatastrophe.pdf) Seth Baum studied a hypothetical situation in which an anti-global-warming geoengineering program is interrupted by social collapse, which leads to a rapid rise of global temperatures.
There are many possible pairs of such double risks, and from such pairs chains could be built. For example: nuclear war followed by an accidental release of bioweapons.
The paleontological method consists of the analysis of previous mass extinctions in Earth's history, such as the Permian-Triassic extinction, which wiped out up to 96% of all marine species. Could it happen again and with stronger force?
Finally, the devil's advocate method consists in the deliberate design of extinction scenarios, as though our purpose were to destroy the Earth.
Each extinction risk can be described by the following criteria:
Anthropogenic/natural,
Total probability,
Natural risks can happen to any species of living beings. Technological risks are not quite identical to anthropogenic risks, since overpopulation and the exhaustion of resources are anthropogenic but not technological. The basic sign of technological risks is their uniqueness to a technological civilization.
There is also a division into proved and unproved existential risks:
Events which we have considered as possible x-risks and decided that they are not.
Events about which we cannot now say definitely whether they are risky or not.
Events about which we have a good scientific basis to say that they are risky.
But the biggest group here consists of events which may be risky based on some consideration that seems to be true but cannot be proved as a matter of fact, and which are therefore questioned by a lot of skeptics.
It is possible to distinguish several categories of technological risks:
Risks for which the technology is completely developed, or which demand only slightly improved technology. This includes nuclear warfare.
Risks, the technologies for which are under construction and for which there are no visible theoretical obstacles to development in the foreseeable future (e.g. biotechnology).
Risks, the technologies for which are in accordance with known laws of physics, but for which large practical and theoretical obstacles need to be overcome: that is, nanotechnology and AI.
Risks which demand new physical discoveries for their appearance. A large part of the global risks of the XX century arose from discoveries that were essentially new and unexpected at the time; I mean nuclear weapons.
The list of global risks in the following chapters is sorted by the degree of readiness of the corresponding technologies.
In our reasoning we will use our version of the precautionary principle: something is an existential risk until the opposite is proved. The "something" here is a hypothesis or a crazy idea. And the burden of proof is not on the one who suggests the crazy idea, but on the specialist in existential risks. For example, if someone asks whether the Earth's core could explode, we should work through all possible scenarios in which it might happen and estimate their probabilities.
The main thing that makes x-risks unique is the value of future generations. Nevertheless, most people assign very small value to future generations,
Global catastrophic risks (GCR) have been defined as risks of a catastrophe in which 90 per cent of humanity would die. That would most likely include me and the reader. From a personal point of view there is not much difference between human extinction and a global catastrophe; the main difference is the future of humanity.
The main question is this: with what probability will a GCR result in human extinction? 700 million people is still a lot, but a scavenging economy, remaining nukes and other weapons, worldwide guerrilla war, epidemics, AIDS, global warming, and the depletion of easily accessible resources could result in constant decline and even in human extinction. I will address the question further, but I think it is safe to estimate that a GCR carries a 1 per cent chance of human extinction. This will help us to unite two fields of research, one of which is much more established and the other more important.
Phil Torres rightfully criticized Bostrom's term existential risk. The term is not self-apparent, and it combines human extinction with merely failing to realize the whole potential of humanity. Humanity could live billions of years and colonize the Galaxy and still not reach its whole potential, maybe because another civilization colonizes a neighboring galaxy. Most human beings live full lives, but how could we say that a random person has reached his full potential? Maybe he was born to be Napoleon? Even Napoleon didn't get what he wanted, after all.
Problems with Defining an Existential Risk: http://ieet.org/index.php/IEET/more/torres20150121
But the term existential risk has won out, and is often shortened to x-risk. Bostrom's classification of four types of x-risks has not become popular.
He suggests the following classification:
1. Bangs: abrupt catastrophes which result in human extinction.
2. Crunches: a slow arrest of development, resource depletion, totalitarian government. This is not extinction, and may not even be very bad for the people living then, but it is an unstable configuration which would result either in extinction or in a supercivilization. The lower the level of equilibrium, the longer civilization could exist at it, that is, the more stable it is. Paleolithic people could live for millions of years.
3. Shrieks: this is not extinction but the replacement of humanity by some other, more powerful agent which is non-human, either an AI or some posthumans. Most
4. Whimpers: "A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved." These are mostly catastrophes which could happen to an advanced supercivilization. Bostrom suggests two examples: war with aliens and the erosion of our core values.
We could also add context risks: different situations in the world imply different chances of a global catastrophe. The Cold War resulted in an arms race and risks of a hot war. Apocalyptic ideologies raise the probability of existential terrorism.
We could add risks that change the speed of technological progress.
Maybe we should also add a category of risks which change our ability to prevent x-risks. I am not in a position to make recommendations that would actually be implemented, but maybe it would be safe to create a couple more centers of x-risk research. Or not, if it results in rivalry and incompatible safety solutions.
A human extinction catastrophe is a one-time event which will have no observers (as long as observers are alive, it is not such a catastrophe).
For this reason, the traditional use of the term probability in relation to the likelihood of global risks is rather pointless, no matter whether we understand probability in statistical terms, as a proportion, or in Bayesian terms, as a measure of the uncertainty of our knowledge.
If the catastrophe starts to happen, we cannot distinguish whether it is a very rare type of catastrophe or an inevitable one.
The concept of probability has undergone a long evolution, and it has two directions: objectivist, where the probability is considered as the fraction of events of a certain set, and subjectivist, where the probability is considered as a measure of our ignorance. Both approaches are applicable to the determination of what constitutes a probability of global catastrophe, but with certain amendments.
The question regularly arises: "What is the probability of a global catastrophe?" But the answer depends on what kind of probability is meant. I propose a list of different definitions of probability, and of notions of the probability of x-risk, under which the answer makes sense.
1. The Fermi probability (so named by me in honor of the Fermi paradox). I suggest this term for the probability that a certain percentage of technological civilizations in the Universe die for one specific reason; the probability is defined as their share of the total number of technological civilizations. This quantity is unknown and is unlikely to be objectively known until we survey the entire galaxy, so it can only be the object of subjective assumptions. Obviously, some civilizations will make very big efforts in risk prevention, and some smaller efforts, but the Fermi probability also reflects the total effectiveness of prevention, by which I mean the chances that preventive measures will be applied and will be successful. I called it the Fermi probability because knowing this probability distribution could help answer the Fermi paradox.
2. Objective probability: if we lived in a multiverse, it would be the share of all versions of the future Earth where a global catastrophe of a certain type happens. In principle, it should be close to the Fermi probability, but it may differ from it on account of some special features of the Earth, if any exist or if we create some.
3. The conditional probability of a certain type of catastrophe is the probability of that event provided that no other global catastrophe happens first (e.g. the chance of an asteroid impact within the next million years). It is the opposite of the probability that this specific type of accident will be the one to happen, out of all possible catastrophes.
4. The minimum probability is the probability that a disaster would happen anyway, even if we undertake all possible efforts to prevent it. And the maximum probability of an x-risk is its probability if nothing is done to prevent it. I think that these probabilities differ on average by 10 times for global risks, but a better assessment is needed, perhaps based on some analogies.
5. The total probability of global catastrophe vs. the probability of extinction from one particular cause. Many scenarios of global catastrophe include several causes: for example, one where most of the population dies as a result of a nuclear war, the rest of the population is severely affected by a multipandemic, and the last survivors on a remote island die of hunger. What is the main cause in this case: hunger, biotech or nuclear war, or a dangerous combination of the three, which are not so deadly independently?
6. Assigned probability: the probability which we must ascribe to a particular risk in order to protect ourselves from it in the best way, while not overspending on it resources that could be directed at other risks. This is like a stake in a game, with the only difference being that the game is played once. Here we are talking about order-of-magnitude estimates, which are needed to properly plan our actions. It is also a replacement for the Bayesian probability of existential risk, which cannot be calculated without some subjectivism. The Torino scale of asteroid risk is a good example here.
7. Expected survival time of the civilization. Although we cannot measure the probability itself of a global catastrophe of some type, we can transform it into an expected lifetime. The expected lifetime incorporates our knowledge of the future change of the probability of a disaster: whether it will grow exponentially or decrease smoothly as our knowledge of how to prevent it increases.
8. Yearly probability density. For example, if the probability of a certain event in one year is 1 per cent, over 100 years it would be about 63 per cent, and over a 1000-year period it would be 99.99 per cent. A constant yearly probability density implies that the total probability of the event approaches 1 exponentially.
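As a check of the arithmetic behind these figures (standard survival algebra, not specific to this book): with a constant yearly probability $p$, the total probability over $n$ years is $P(n) = 1 - (1-p)^{n}$; for $p = 0.01$ this gives $1 - 0.99^{100} \approx 0.63$ over a century and $1 - 0.99^{1000} \approx 0.99996$ over a millennium.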
9. An exponentially growing probability density of total x-risk can be associated with the exponential growth of new technologies under Moore's law, and it gives the total probability of catastrophe as an exponent of an exponent, which grows very quickly. It could go from near 0 to almost 1 in just 2-3 doublings of technology under Moore's law, or in 4-6 years at the current tempo of technological development. This means that the period of high catastrophic risk would last around 6 years, and probably some smaller catastrophes would also happen during it.
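A minimal sketch of this "exponent of an exponent" effect, with an assumed starting hazard and doubling time (both values are illustrative, not estimates from the book):

```python
# Cumulative catastrophe probability when the yearly hazard doubles every
# 2 years, Moore's-law style. The 0.1% starting hazard is an assumption.
hazard = 0.001
survival = 1.0
for year in range(1, 21):
    if year % 2 == 0:
        hazard = min(1.0, hazard * 2)  # hazard doubles every 2 years, capped at 1
    survival *= 1.0 - hazard
    print(f"year {year:2d}: cumulative risk {1.0 - survival:.3f}")
```

Most of the cumulative risk accrues in the last few doublings, which is the sense in which the dangerous period is compressed into a handful of years.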
10. A posteriori probability is the probability of a global catastrophe which we estimate after it has failed to happen, for example an all-out nuclear war in the XX century (if we assume that it was an existential risk). Such an assessment of probability is greatly distorted by observational selection, toward understatement.
We will return to the question of the probability of x-risks in the end of the book.
So there are serious biases to consider, which mean we should greatly expand our default confidence intervals to come up with more realistic estimates. A confidence interval is a range of probabilities for some risk; for example, nuclear war may have a probability interval of 0.5-2% per year. How much should we expand our confidence intervals?
For decision-making we need to know the order of magnitude of the risk, rather than its exact value. Let's assume that the probability of global catastrophes can be estimated, at best, to within an order of magnitude (and that the accuracy of such an estimate will be plus or minus an order of magnitude), and that such an estimate is enough to determine the necessity of further attentive research and problem monitoring. Similar examples of risk scales are the Torino and Palermo scales of asteroid risk.
The eleven-point (from 0 to 10) Torino scale of asteroid danger characterizes the degree of potential danger of an Earth-threatening asteroid or comet. A point on the Torino scale is assigned to a small Solar system body at the moment of its discovery, depending on the mass of the body, its speed, and the probability of its collision with the Earth. As the orbit of the object is studied further, its point on the Torino scale can be updated. Zero means an absence of threat; ten indicates a probability of more than 99% of collision with a body more than 1 km in diameter. The Palermo scale differs from the Torino scale in that it also considers the time remaining before the fall of the asteroid: less time means a higher score on the scale.
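For reference, the commonly cited form of the Palermo scale (Chesley et al. 2002) is $PS = \log_{10}\left(p_i / (f_B\,T)\right)$, where $p_i$ is the impact probability, $T$ is the number of years until the possible impact, and $f_B \approx 0.03\,E^{-4/5}$ per year is the background impact frequency for energy $E$ in megatons. A higher score thus means a larger probability relative to the background, concentrated in less time.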
J. Leslie, 1996, The End of the World: 30% in the next 500 years taking the Doomsday Argument into account, and 5% without it.
Sir Martin Rees, 2003, Our Final Hour: 50% in the XXI century.
In 2009 Willard Wells published the book Apocalypse When? [Wells 2009], devoted to the mathematical analysis of a possible global catastrophe, based mostly on the Doomsday argument in Gott's version. Its conclusion is that the probability is about 3% per decade, which roughly corresponds to 27% per century.
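The decade-to-century conversion is the same survival algebra as above: $1 - (1 - 0.03)^{10} \approx 0.26$, i.e. roughly 27% per century.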
On the other hand, in the days of the Cold War some estimates of the probability of extinction were even higher.
The researcher of the problem of extraterrestrial civilizations von Hoerner attributed to the hypothesis of the self-liquidation of intelligent life (the "psychozoic era") a chance of 65%.
Von Neumann considered that nuclear war was inevitable and that everyone would die in it.
During the 2008 conference Global Catastrophic Risks a survey of experts was conducted, which estimated the total risk of human extinction at 19 per cent before the year 2100 (http://www.fhi.ox.ac.uk/gcrreport.pdf). The results of the survey were presented in a table in that report.
The precautionary principle recommends that we do the same: as we cannot know for sure which model of the future is correct, and different models imply different global risks, we should use the several most plausible models. (We cannot use all models, as we would get too much noise in the result.)
In our case the hyperbolic, exponential, wave and black swan models are the most plausible.
To assess the models we can use several characteristics:
1) Refutability: some models make early predictions, and we can check these predictions and see whether the model works. This is like Popper's falsifiability criterion. Other models cannot be falsified, and this makes them weaker.
2) Completeness: the model should take into account all known strong historic trends.
3) Evidence base: the model should "predict the past", that is, be built on a large empirical base.
4) Support: the model should have support from different futurists, and it is better if they came to it independently.
5)
6) Complexity: if the model is too complex or its predictions are too sharp, it is probably false, as we live in a very uncertain world.
7) Too strong predictions for the near future contradict Copernican mediocrity, because if we are randomly chosen from the time during which the model works, we should be somewhere in the middle of it. For example, if I predict that a nuclear war will happen tomorrow, I go strongly against the fact that it has not happened for 70 years, and its a priori probability of happening tomorrow is very small.
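This Copernican reasoning is essentially Gott's delta-t argument (introduced in the history section below): observed at a random moment of its total lifetime, a phenomenon that has already lasted $T_{past}$ satisfies, with 95% confidence, $T_{past}/39 < T_{future} < 39\,T_{past}$, and the chance that it ends within a short interval $t$ is about $t/(T_{past}+t)$. For 70 years without nuclear war, the chance of war specifically tomorrow comes out on the order of 1/25,000.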
[Graphic of changes]
Model of the future determines global risks
The map
The map lists all main future models.
Structure: from fast-growth to slow-growth models.
Combining models
2. United models
Example: spiral-hyperbolic model.
Some models may be paired with one another.
Antique and medieval ideas about doomsday held that it would happen by the will of God or as a result of a war between demons (Armageddon, Ragnarok).
The first scientific ideas about the end of the world and the human race appeared in the XIX century. At the beginning of the XIX century Malthus created the idea of overpopulation; though it was not directly related to complete human extinction, it became the base of future limits-of-growth ideas. Lord Kelvin in the 1850s suggested the possibility of the thermal death of the Universe, that is, thermal equilibrium and the stopping of all processes. The most popular idea was that life on Earth would vanish after the dimming of the Sun. Now we know that the Sun becomes brighter as it grows older; also, the 19th-century timescale for these events was quite different, on the order of several million years, as the source of the energy of stars was not yet known. As the idea of space travel had not yet appeared, the death of life on Earth was equal to the death of humanity. All these scenarios had in common that the natural end of the world would be a slow and remote process, something like freezing, which humans could neither stop nor cause.
In the first half of the XX century we find descriptions of grandiose natural disasters in science fiction, for example in the works of H.G. Wells (The War of the Worlds) and Sir Arthur Conan Doyle (The Poison Belt): collisions with giant meteors, poisoning by comet gases, genetic degradation. During the great influenza pandemic of 1918, one physician said that if things continued this way, humanity would be finished in several weeks.
The history of the modern scientific study of global risks dates back to 1945. Before the first atomic bomb test in the United States, there were worries that it might lead to a chain reaction of fusion of the nitrogen in the Earth's atmosphere. In order to assess this risk, a commission was established, headed by physicist Arthur Compton. It resulted in the report LA-602 Ignition of the Atmosphere with Nuclear Bombs [LA-602 1945], which was recently declassified and is now available to everyone on the Internet in the form of poorly scanned typewritten pages. The report shows that due to the scattering of photons by electrons, the latter will be cooled (because their energy is greater than that of the photons), and the radiation will not heat but cool the reaction region. Thus, as the area of the reaction increases, the process is not able to become self-sustaining. This ensured the impossibility of a chain reaction in atmospheric nitrogen. However, at the end of the text it is mentioned that not all factors were taken into account, for instance the effect of water vapor contained in the atmosphere.
Because it was a secret report, it was not intended to convince the public, which distinguishes it favorably from recent reports on the safety of the collider; its target audience was the decision makers. Compton told them that the chance that a chain reaction would start in the atmosphere was 3 in a million. In the 1970s a journalistic investigation found that Compton had taken this figure "out of his head" because he found it compelling enough for the president; there are no probability estimates in the report itself [Kent 2004]. Compton believed that a realistic assessment of the probability of disaster was not important, because if the Americans repudiated the bomb tests, the Germans or other hostile countries would conduct them.
In 1979 an article by Weaver and Wood [Weaver, Wood 1979] on thermonuclear detonation in the atmosphere and oceans was published, which shows that conditions suitable for a thermonuclear self-sustaining reaction do not exist on our planet (though they are possible on other planets, if there is a high enough concentration of deuterium to meet the minimum requirements).
The next important step in the history of global risk research was the realization not just of the possibility of accidental global catastrophe, but also of the technical possibility of the deliberate destruction of humanity. It became clear after the proposal of the cobalt bomb by Leo Szilard [Smith 2007]. During a radio debate with Hans Bethe in 1950 about the possible threat to life on Earth from nuclear weapons, he proposed a new type of bomb: a hydrogen bomb (which did not yet physically exist at that time, but the possibility of creating which was being widely discussed), wrapped in a shell of cobalt-59, which would be converted by the explosion into cobalt-60. This highly radioactive isotope, with a half-life of about 5 years, could make an entire continent or the whole Earth uninhabitable if the bomb were large enough. After this declaration, the Atomic Energy Commission decided to conduct an investigation in order to prove that such a bomb was impossible. However, the scientist it hired showed that if the mass of the bomb were 200,000 tonnes (i.e. something like 20 modern nuclear reactors, and so theoretically achievable), it would be enough to destroy all highly organized life on Earth. Such a device would inevitably be stationary, so it could be used only as a doomsday weapon: after all, no one would dare attack a country that had created such a device. In the 60s the idea of the theoretical possibility of the destruction of the world using a cobalt bomb was very popular and widely discussed in the press, in scientific literature and in art, but then was quite forgotten. Examples are Kahn's book On Thermonuclear War [Kahn 1960], N. Shute's novel On the Beach [Shute 1957], and Kubrick's movie Dr. Strangelove.
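The 5-year half-life is what makes the scenario slow but thorough. By the standard decay law the remaining fraction after time $t$ is $N(t)/N_0 = 2^{-t/T_{1/2}}$; with $T_{1/2} \approx 5.27$ years for cobalt-60, about 27% of the activity remains after 10 years and roughly 0.2% after 47 years, so contaminated territory stays lethal for decades rather than centuries.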
In 1950 Fermi posed his famous paradox, "Where is everybody?": if alien civilizations exist, why don't we see them? One obvious explanation at that time was that civilizations tend to destroy themselves in nuclear war, and the silence of the Universe is explained by this self-destruction. And since we are a typical civilization, we will probably also destroy ourselves.
Furthermore, in the 60s many ideas appeared about potential disasters or dangerous technologies that might be developed in the future. The English mathematician I. J. Good wrote the essay Speculations Concerning the First Ultraintelligent Machine [Good 1965], where he showed that as soon as such a machine is created, it will be able to improve itself and leave humans behind forever. Later these ideas formed the basis of V. Vinge's ideas about the technological singularity [Vinge 1993], the essence of which is that, based on current trends, by 2030 an artificial intelligence superior to human beings will be created, and after that history will become fundamentally unpredictable.
Astrophysicist F. Hoyle [Hoyle 1962] co-wrote the novel A for Andromeda, which described an attack on Earth by a hostile alien artificial intelligence, downloaded via radio telescope from space. He gave the most plausible description of the scenario of such an attack, in several steps.
Physicist Richard Feynman gave the famous talk There's Plenty of Room at the Bottom [Feynman 1959], where he was the first to suggest the possibility of molecular manufacturing, i.e. nanorobots.
Science fiction, which had its golden age in the sixties, played an important role in the realization of global risks.
Heinz von Foerster published in 1960 an article entitled Doomsday: Friday, 13 November, A.D. 2026, the date at which the human population would approach infinity if it grew as it has grown in the last two millennia [Foerster, 1960]. It is likely that he chose this title not to make a prediction but to draw attention to his explanation of past growth. So he made a false projection of infinite growth of the human population, but this prediction interestingly points to the same date as several other predictions made by other people using different methods, such as Vinge's prediction of AI by 2030. It also paved the way for Limits to Growth theories. Foerster's idea was that the human population grows according to a hyperbolic law, and any hyperbolic law reaches infinity in finite time. Of course, the population cannot grow that much for biological reasons, but if we add the population of computers, we may find that his prediction was still working around 2010.
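The mathematical core of Foerster's observation is that hyperbolic growth, unlike exponential growth, diverges in finite time: if $\dot{N} = aN^2$, then $N(t) = 1/(a(t_0 - t))$, which goes to infinity as $t$ approaches the finite date $t_0$; fitting this law to past population data is what produced his 2026 date.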
In 1972 the Meadows report The Limits to Growth was published. It did not directly predict human extinction, only a decline of the human population at the end of the XXI century due to a complex crisis caused by overpopulation, limited resources and pollution.
In general, we have two lines of thought regarding a future global catastrophe. One is based on Malthusian theories, the other on predictions of technological development. The idea of total human extinction belongs to the second line, because limits-of-growth theories tend to underestimate the role of technologies in all aspects of human life, including the role of technologies in building weapons. The Meadows theory does not take into account the possibility of nuclear war (or of even more destructive wars and catastrophes based on XXI-century technologies), which could logically be predicted as the result of a war for resources.
In the 1970s the danger associated with biotechnology became clear. In 1971 the American biologist Robert Pollack learned [Teals 1989] that in a neighboring laboratory experiments were planned to embed the genome of the oncogenic virus SV40 into the bacterium Escherichia coli. He immediately realized that if such E. coli spread throughout the world, it could cause a worldwide epidemic of cancer. He appealed to this laboratory to suspend the experiments before they started. The result was the ensuing discussion at the Asilomar Conference in 1975, which adopted recommendations for safe genetic engineering (http://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA).
In 1981 Asimov published the book A Choice of Catastrophes [Asimov 2002]. Although it was one of the first attempts to systematize various global risks, its focus was on distant events, such as the expansion of the Sun, and the main message of the book was that people would be able to overcome the global risks.
In 1983 B. Carter suggested the now famous anthropic principle. Carter's reasoning had a second part, which he decided not to publish but only to report at a meeting of the Royal Society, because he knew it would cause an even bigger protest. Later it was popularized by J. Leslie [Leslie 1996]. This second half of the argument became known as the Doomsday argument (DA). Briefly, its essence is that on the basis of humanity's past lifetime, and the assumption that we are roughly in the middle of its existence, we can estimate the future lifetime of mankind. Carter used a more complex form of DA with a conditional probability of a future catastrophe, which should change depending on whether we find ourselves before or after the catastrophe. In 1993 Richard Gott suggested a simpler version, which works directly with future lifetime.
In the early 80s a new theory of human extinction as a result of the use of nuclear weapons appeared: the theory of "nuclear winter". Computer simulations of the behavior of the atmosphere after a nuclear war showed that the shading caused by the emission of soot particles into the troposphere would be long and significant, resulting in prolonged freezing. The questions of how realistic such a blackout is, and what temperature drop mankind can survive, remain open. This theory was part of the political fight against nuclear war that was ongoing in the 80s. Nuclear war was portrayed in the mass consciousness as inevitably leading to human extinction. While that was not true in most realistic cases, it was helpful in promoting the idea of nuclear disarmament, and it resulted in a drastic reduction of nuclear arsenals after the Cold War. So a successful public fight against existential risks is possible.
In the 80s the first publications appeared about the risks of particle accelerator experiments.
In 1985 E. Drexler's book Engines of Creation [Drexler 1985] was published, devoted to radical nanotechnology, that is, the creation of self-replicating nanorobots. Drexler showed that such an event would have revolutionary consequences for the economy and military affairs. He examines various scenarios of global catastrophe associated with nanorobots. The first is "gray goo", i.e. the unrestricted breeding of nanorobots in the environment, over which control has been lost. In just a few days they could fully absorb the Earth's biosphere. The second risk is an "unstable arms race". Nanotechnology will allow the fast and extremely cheap creation of weapons of unprecedented destructive power, first of all microscopic robots capable of engaging the manpower and equipment of the enemy. The instability of this arms race means that "the one who starts first takes all", and a balance between two opposing forces, as existed during the Cold War, is impossible.
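The "few days" figure follows from doubling arithmetic; here is a back-of-the-envelope sketch in which the biosphere mass, nanobot mass and doubling time are all assumed, illustrative values:

```python
import math

BIOSPHERE_KG = 1e15    # assumed order of magnitude of Earth's biomass
NANOBOT_KG = 1e-15     # assumed mass of a single replicator
DOUBLING_HOURS = 1.0   # assumed replication (doubling) time

# Number of doublings for one replicator to match the biosphere's mass.
doublings = math.log2(BIOSPHERE_KG / NANOBOT_KG)  # ~100 doublings
print(f"{doublings:.0f} doublings = {doublings * DOUBLING_HOURS / 24:.1f} days")
```

Even generous changes to the assumptions shift the answer only by days, because the time scales with the logarithm of the mass ratio.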
Public perception of existential risks was mostly formed by art, which was later criticized for unrealistic descriptions of risk scenarios. In 1984 the first movie of the Terminator trilogy appeared, in which a military AI named Skynet tried to eliminate humanity for self-defense reasons; it later became a metaphor for dangerous AI. In fact, the risks of military AI are still underestimated by the Friendly AI community, partly because of the rejection they feel toward the Terminator movie.
In 1993 Vernor Vinge coined the idea of the technological Singularity: the moment when the first superhuman AI will be created, one clear option after which is that it will destroy all humanity. He predicted that he would be surprised if it happened before 2005 or after 2030. All predictions about AI are known to be premature.
In 1996 the book by Canadian philosopher John Leslie, The End of the World: The Science and Ethics of Human Extinction [Leslie 1996], was published, which differed radically from Asimov's book primarily in its pessimistic tone and focus on the near future. It examines all the newly discovered hypothetical disasters, including nanorobots and the DA, and concludes that the chances of human extinction are 30 percent in the next 200 years.
John Leslie was probably the first to summarize all possible risks as well as the DA argument, and he started the modern tradition of their discussion, but it was Bill Joy who brought these ideas to the public.
In 2000 Wired magazine came out with a sensational article by Bill Joy, one of the founders of Sun Microsystems, Why the Future Doesn't Need Us [Joy 2000]. In it, he paints a very pessimistic picture of the future of civilization, in which people will be replaced by robots; humans will be like pets for AI at best. Advances in technology will create "knowledge of mass destruction" that can be distributed over the Internet, for example the genetic codes of dangerous viruses. In 2005 Joy took part in the campaign to remove the recently published Spanish flu virus genome from the Internet. In 2003 Joy said that he had written two manuscript books which he decided not to publish. In the first he wanted to warn people of impending danger, but his published article fulfilled this task. In the second he wanted to offer possible solutions, but the solutions did not yet satisfy him, and "knowledge is not an area where you have the right to a second shot."
Since the end of the last century, J. Lovelock [Lovelock 2006] has developed the theory of possible runaway global warming. Its gist is that if the usual warming associated with the accumulation of carbon dioxide in the atmosphere exceeds a certain rather small threshold (1-2 degrees C), the vast reserves of methane hydrates on the seabed and in the tundra, accumulated there during the recent ice ages, begin to be released. Methane is tens of times stronger as a greenhouse gas than carbon dioxide, so this may lead to a further increase in the temperature of the Earth, which would launch other positive-feedback chains. For example, vegetation on land could start burning, emitting more CO2 into the atmosphere; the oceans would warm up, the solubility of CO2 would fall, and it would again be emitted into the atmosphere; anoxic areas would form in the ocean and emit methane. In September 2008 bubbles of methane were discovered escaping in columns from the bottom of the Arctic Ocean. Finally, water vapor is also a greenhouse gas, and as its concentration rises, temperatures will rise further. As a result the temperature could rise by tens of degrees, a greenhouse catastrophe would happen, and all living things would die. Although this is not inevitable, such a development is the worst possible outcome, with the maximum expected damage. From 2012 and as of 2014 a group of scientists has united in the Arctic Methane Emergency Group with a collective blog (http://arctic-news.blogspot.ru/). They predict a total ice melt in the Arctic as early as 2015, which would accelerate the release of methane and, in their opinion, could lead to a temperature rise of 10-20 degrees in the XXI century and total human extinction.
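The runaway logic described here can be made concrete with a toy feedback model. The sketch below is purely illustrative and is not a climate model: the initial warming pulse and the gain values are arbitrary assumptions, chosen only to show the threshold between a self-limiting and a runaway feedback.

```python
# Toy sketch of a warming feedback loop (illustrative assumptions only).
# 'gain' is the extra warming triggered per degree of previous warming.

def total_warming(initial_pulse_c, gain, steps=200):
    total, pulse = 0.0, initial_pulse_c
    for _ in range(steps):
        total += pulse
        pulse *= gain  # each round of warming triggers the next one
    return total

print(total_warming(1.5, 0.5))  # gain < 1: converges to ~3 C, self-limiting
print(total_warming(1.5, 1.1))  # gain > 1: grows without bound, i.e. runaway
```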
In 2008 several events increased interest in the risks of global catastrophe: the planned (but not yet fully realized) start of the hadron collider, the vertical jump in oil prices, the release of methane in the Arctic, the war with Georgia and the global financial crisis.
At the beginning of the XXI century a methodology of global risk analysis is taking shape: a transition from simply listing risks to a meta-analysis of the human ability to detect and correctly assess global risks.
In 2008 a conference was held in Oxford, "Global Catastrophic Risks", and its proceedings were published under the same title, edited by N. Bostrom and M. Ćirković [Bostrom, Cirkovic 2008]. It includes more than 20 articles by various authors.
There is an article by M. Ćirković about the role of observational selection in the evaluation of the frequency of future disasters, which claims that it is impossible to draw any conclusions about the frequency of future disasters based on previous frequency.
Arnon Dar examines the risks of supernovae and gamma-ray bursts, and shows that a specific threat to the Earth comes from the cosmic rays produced by galactic gamma-ray bursts.
William Napier, in an article about the threat of comets and asteroids, shows that we are perhaps living in a period of intense cometary bombardment, when the frequency of impacts is 100 times higher than average.
Michael Rampino gives an overview of the catastrophic risks associated with supervolcanoes.
At the beginning of the XXI century several organizations appeared that promote protection against global risks, for example the Lifeboat Foundation, CRN (Centre for Responsible Nanotechnology), MIRI, the Future of Humanity Institute in Oxford and GCRI founded by Seth Baum. Most of them are very small and don't have much impact. The most interesting work is done by MIRI and the LessWrong community forum associated with it.
In 2012 the Centre for the Study of Existential Risk was created in Cambridge, with several prominent figures on its board: Huw Price, Martin Rees and Jaan Tallinn (http://cser.org/). It got a lot of public attention, which is good, but not much actual work has been done.
Except for MIRI, which created a working community, all other institutions are in the shadow of the work of their leaders, or are not doing any work at all. For example, the Lifeboat Foundation has very large boards with thousands of people on them but rarely consults them. It does, however, have a good mailing list about x-risks.
The blog Overcoming Bias and the articles of Robin Hanson were an important contribution to the study of x-risks. Katja Grace from Australia made an important contribution to the theory of the Doomsday Argument by mathematically connecting it with the Fermi Paradox (http://www.academia.edu/475444/Anthropic_Reasoning_in_the_Great_Filter).
The study of global risks has followed this path: awareness of the possibility of human extinction, and of the possibility of extinction in the near future; then realization of several distinct risks; then an attempt to create an exhaustive list of global risks; and then a system for describing them which takes any global risk into account and determines the risk posed by any new technology or discovery. A systematic description has greater predictive value than a mere list, as it allows us to find new points of vulnerability, just as the periodic table allows us to find new elements. And then comes the study of the limits of human thinking about global risks, as a first step in creating a methodology capable of effectively finding and evaluating them.
From 2000 to 2008 was the golden age of x-risk research. Many seminal books and articles were published, from Bill Joy to Bostrom and Yudkowsky, and many new ideas appeared. But after that the stream of new ideas almost stopped. This might be good, because every new idea increased the total risk, and perhaps all the important ideas on the topic had already been discussed; but unfortunately nothing was done to prevent x-risks, and the dangerous tendencies continued.
Risks of nuclear war are growing. No FAI theory exists. Biotech is developing very quickly and genetically modified viruses are getting cheaper and cheaper. The time until the catastrophe is running out. The next obvious step is to create a new stream of ideas: ideas on how to prevent x-risks and when to implement them. But before doing this, we need a consensus between researchers about the structure of the incoming risks. This can come about via dialogue, especially informal dialogues at scientific conferences.
The lack of new ideas was somewhat compensated by the appearance of many think tanks, as well as the publication of many popular articles on the problem. FHI, GCRI, LessWrong and the Arctic Methane Emergency Group are among the new players in the field. But communication between them has not been good, especially where there is an ideological barrier over which risk is the most serious: AI or climate, war or viruses.
Also, x-risk researchers seem to be less cooperative than, for example, anti-aging researchers, maybe because each x-risk researcher aspires to save the world and has his own understanding of how to do it. And these theories don't add up to one another. This is my personal impression.
The first version of this book was written in Russian in 2007 (under the name "The Structure of the Global Catastrophe"), and since that time not much has changed for the better. The predicted risky trends have continued, and now the risks of a world war are very high. A world war is a corridor for creating x-risk weapons and situations, as well as for disseminating values which promote x-risks. These are the values of nationalism, religious sectarianism, mysticism, fatalism, short-term income, risky behaviour and winning in general. The values of human life, safety, rationality and the unity of humankind are not growing as quickly as they should.
In fact there are two groups of human values. One of them is about fighting other groups of people and is based on false and irrational beliefs drawn from nationalism and religion, while the other is about the value of human life. The first promotes global catastrophe, while the second is good. Of course this is an oversimplification, but this simple map of values is very useful. Values can't save us from catastrophe, because different people have different values, but they can change its probability.
1. Most important: a global catastrophe has not happened. Colossal terrorist attacks, wars and natural disasters also haven't happened.
2. Key technology trends like exponential growth in the spirit of Moore's Law have not changed. This is especially true of biology and genetics.
3. The economic crisis of 2008 began. I think its aftermath has not ended yet, because Quantitative Easing has created a lot of money, which could spiral into inflation, and large defaults are still possible.
4. Several new potentially pandemic viruses have appeared, such as Swine flu and MERS.
5. New artificial viruses were created to test how a mutated bird flu could wipe out humanity, and the protocols of the experiments were published.
6. Arctic ice is collapsing and methane readings in the Arctic are high.
7. The Fukushima nuclear catastrophe showed again that unimaginable catastrophes can happen. Incidentally, Martin Rees predicted it in his book about existential risks.
8. The orbital infrared telescope WISE was launched; it will be able to clarify the question of the existence of dark comets and directly answer the question of the risk associated with them.
9. Many natural language processing AI projects have started, and maximum computer power has risen around 1,000 times since the first edition of the book.
10. The 2012 end-of-the-world craze greatly spoiled the efforts to promote a rational approach to global risks.
11. The start of the hadron collider helped to raise questions about the risks of scientific experiments, but the truth was lost in quarrels between opinions for and against it. I mean the works of Adrian Kent and Anders Sandberg about small risks with large consequences.
12. In 2014 the situation in Ukraine came close to a war between Russia and the West. The peculiarity of this situation is that it could deteriorate in small steps, and there is no natural barrier like the one that the non-use of nuclear weapons provided for nuclear war. It has already resulted in a new cold war, an arms race and a nuclear race.
13. Peak oil has not happened, mostly because of shale oil and shale gas. Again intellect proved to be more powerful than the limits of resources.
I like to create full exhaustive lists, and I could not stop myself from creating a list of human extinction risks. Soon I reached around 100 items, although not all of them are really dangerous. I decided to convert them into something like a periodic table, i.e. to sort them by several parameters, in order to help predict new risks.
For this map I chose two main variables: the basic mechanism of the risk and the historical epoch during which it could happen. Also, any such map should be based on some model of the future, and I chose Kurzweil's model of exponential technological growth, which leads to the creation of super-technologies in the middle of the 21st century. Risks are also graded according to their probability: main, possible and hypothetical. I plan to attach to each risk a wiki page with its explanation; a minimal sketch of the scheme follows below.
I would like to know which risks are missing from this map. If your ideas are too dangerous to publish openly, PM me. If you think that any mention of your idea will raise the chances of human extinction, just mention its existence without the details.
I think that a map of x-risks is necessary for their prevention. I offered prizes for improving the previous map, which illustrates possible methods of preventing x-risks, and it really helped me improve it. But I do not offer prizes for improving this map, as that may encourage people to be too creative in thinking up new risks.
http://immortality-roadmap.com/x-risks%20map15.pdf
lesswrong discussion: http://lesswrong.com/lw/mdw/a_map_typology_of_human_extinction_risks/
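To make the scheme concrete, here is a minimal sketch of how such a two-axis classification with probability grades could be represented in code. The entries are examples drawn from this book; the field names and grades shown are my own illustrative choices, not the published map format.

```python
# Minimal sketch of the x-risks map's classification scheme described above.
# Each risk is indexed by its basic mechanism and historical epoch, and is
# graded "main", "possible" or "hypothetical". Entries are examples only.

risks = {
    ("geological", "any epoch"): ("supervolcano eruption", "possible"),
    ("impact", "any epoch"): ("large asteroid or comet impact", "possible"),
    ("physics", "universal"): ("false vacuum decay", "hypothetical"),
    ("biotech", "21st century"): ("engineered pandemic", "main"),
}

for (mechanism, epoch), (name, grade) in sorted(risks.items()):
    print(f"{name}: mechanism={mechanism}, epoch={epoch}, grade={grade}")
```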
In the following chapters I will go into detail about all the mentioned risks.
Universal catastrophes
Catastrophes which would change the whole Universe as such, on a scale equal to the Big Bang, are theoretically possible. From statistical reasoning their probability is less than 1% in the nearest billion years, as Bostrom and Tegmark have shown. However, the validity of Bostrom and Tegmark's reasoning depends on the validity of their premise, namely that intelligent life in our Universe could have arisen not only now but also several billion years ago. This suggestion is based on the fact that the heavy elements necessary for the existence of life arose within several billion years after the Universe appeared, long before the formation of the Earth. Obviously, however, the degree of reliability which we can attribute to this premise is far from certainty, as we do not have direct proof of it, namely traces of early civilisations. Moreover, the apparent absence of earlier civilisations (Fermi's paradox) lends a certain credibility to the opposite idea, namely that mankind has arisen improbably early. Possibly, the existence of heavy elements is not the only necessary condition for the emergence of intelligent life, and there are other conditions, for example that the frequency of flashes of close quasars and hypernovae has considerably decreased (and the density of these objects really does decrease as the Universe expands and the hydrogen clouds are exhausted). Bostrom and Tegmark write: "One might
think that since life here on Earth has survived for nearly 4 Gyr (Gigayears), such
catastrophic events must be extremely rare. Unfortunately, such an argument is flawed,
giving us a false sense of security. It fails to take into account the observation selection
effect that precludes any observer from observing anything other than that their own
species has survived up to the point where they make the observation. Even if the
frequency of cosmic catastrophes were very high, we should still expect to find ourselves
on a planet that had not yet been destroyed. The fact that we are still alive does not even
seem to rule out the hypothesis that the average cosmic neighborhood is typically sterilized
by vacuum decay, say, every 10000 years, and that our own planet has just been
extremely lucky up until now. If this hypothesis were true, future prospects would be
bleak."
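The force of this observation-selection argument can be illustrated with a one-line calculation. The sketch below uses an arbitrary assumed sterilization rate; it shows only that a long survival record is compatible with a high catastrophe rate once the selection effect is taken into account.

```python
# Toy illustration of the observation selection effect quoted above.
# Assume (arbitrarily) a sterilization chance p per 10,000-year epoch.

p = 1e-4            # assumed per-epoch sterilization probability
epochs = 400_000    # ~4 Gyr of life on Earth, in 10,000-year epochs

survival_prob = (1 - p) ** epochs
print(f"chance a given planet survives 4 Gyr: {survival_prob:.2e}")  # ~4e-18

# Yet every observer, by construction, looks back on an unbroken survival
# record, so observing our own survival cannot bound p from above.
```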
And though Bostrom and Tegmark go on to reject the assumption of a high frequency of "sterilising catastrophes", relying on the late time of the Earth's existence, we cannot accept their conclusion, because, as discussed above, the premise on which it rests is unreliable. This does not mean, however, that near-term extinction as a result of universal catastrophe is inevitable. The only source of our knowledge about possible universal catastrophes is theoretical physics, since, by definition, such a catastrophe has never happened during the life of the Universe (except the Big Bang). Theoretical physics generates a large quantity of untested hypotheses, and in the case of universal catastrophes they may be essentially untestable. We note also that, proceeding from today's understanding, we can neither prevent a universal catastrophe nor protect ourselves from it (though we might provoke one; see the section about dangerous physical experiments). Let us now list the universal catastrophes that are possible from the point of view of some theorists:
1. Disintegration of the false vacuum. We have already discussed the problems of false vacuum in connection with physical experiments.
2. Collision with an object in multidimensional space: a brane. There are hypotheses that our Universe is only one object in a multidimensional space, called a brane (from the word "membrane"), and that the Big Bang was the result of a collision of our brane with another brane. If one more collision occurs, it will destroy our whole world at once.
3. The Big Rip. The recently discovered dark energy leads, as is currently thought, to an ever-accelerating expansion of the Universe. If the speed of expansion keeps growing, at some moment it will tear apart the Solar system. But according to current theories this would happen tens of billions of years from now. (Phantom Energy and Cosmic Doomsday. Robert R. Caldwell, Marc Kamionkowski, Nevin N. Weinberg. http://xxx.itep.ru/abs/astro-ph/0302506)
4. Transition of residual dark energy into matter. Recently the hypothesis has been advanced that dark energy could suddenly turn into ordinary matter, as already happened at the time of the Big Bang.
5. Other classic scenarios of the death of the universe are heat death (the rise of entropy and the equalization of temperature in the universe) and the compression of the Universe by gravitational forces. But these again are tens of billions of years away from us.
6. One can assume the existence of some physical process that makes the Universe unfit for habitation after a certain time (as it was unfit for habitation because of the intense radiation of galactic nuclei, quasars, during the early billions of years of its existence). For example, such a process could be the evaporation of primordial black holes through Hawking radiation. If so, we exist in a narrow interval of time when the universe is habitable, just as the Earth is located in the narrow habitable zone around the Sun, and the Sun in a narrow region of the galaxy where its rotation frequency is synchronized with the rotation of the galaxy's arms, so that it does not fall into those arms and is not exposed to supernovae.
7. If our world has to some extent arisen from nothing in some way absolutely unknown to us, what prevents it from suddenly disappearing in the same way?
Geological catastrophes
Geological catastrophes kill millions of times more people than falling asteroids; however, proceeding from modern views, they are limited in scale. Nevertheless, the global risks connected with processes inside the Earth surpass space risks. Possibly, there are mechanisms for the release of energy and poisonous gases from the bowels of the Earth which we simply have not yet encountered, owing to the effect of observation selection.
Eruptions of supervolcanoes
The probability of the eruption of a supervolcano of comparable intensity is much greater than the probability of an asteroid impact. However, modern science can neither prevent nor even predict this event. (In the future it may become possible to gradually bleed off pressure from magma chambers, but this is itself dangerous, as it would require drilling into their roofs.) The basic destructive force of a supereruption is volcanic winter. It is shorter than a nuclear winter, because particles of volcanic ash are heavier, but there can be much more of them. In this case the volcanic winter can lead to a new steady state: a new ice age.
A large eruption is accompanied by the emission of poisonous gases, including sulphur. In a very bad scenario this could considerably poison the atmosphere. The poisoning would not only make the air of little use for breathing, but would also result in universal acid rains, which would burn vegetation and destroy crops. Big emissions of carbon dioxide and hydrogen are also possible.
Finally, volcanic dust is dangerous to breathe, as it clogs the lungs. People can easily provide themselves with gas masks and gauze bandages, but these may not suffice for cattle and pets. Besides, volcanic dust simply covers huge surfaces with a thick layer, and pyroclastic flows can extend over considerable distances. Finally, explosions of supervolcanoes generate tsunamis.
All this means that people will most likely survive a supervolcano eruption, but it will, with considerable probability, send mankind to one of the postapocalyptic stages. Mankind once found itself on the verge of extinction because of the volcanic winter caused by the eruption of the volcano Toba 74,000 years ago. However, modern technologies of food storage and bunker building would allow a considerable group of people to survive a volcanic winter of such scale.
In antiquity there were enormous effusive volcanic eruptions which flooded millions of square kilometres with molten lava: in India on the Deccan plateau in the days of the extinction of the dinosaurs (probably provoked by the fall of an asteroid on the opposite side of the Earth, in Mexico), and also on the East Siberian platform. There is a doubtful
Falling of asteroids
The fall of asteroids and comets is often considered one of the possible causes of the extinction of mankind. And though such collisions are quite possible, the chances of total extinction as a result of them are often exaggerated. Experts think an asteroid would need to be about 37 miles (60 km) in diameter to wipe out all complex life on Earth. However, asteroids of such size hit the Earth extremely rarely, approximately once every billion years. In comparison, the asteroid that wiped out the dinosaurs was about 6 mi (10 km) in diameter, which is a volume about 200 times less than that of a potential life-killer.
The asteroid Apophis has approximately a 3 in a million chance of impacting the Earth in 2068, but being only about 1,066 ft (325 m) across, it is not a threat to the future of life. In a worst-case scenario it could impact in the Pacific Ocean and produce a tsunami which kills several hundred thousand people.
2.2 million years ago a comet 0.5-2 km in diameter fell between South America and Antarctica (the Eltanin catastrophe, http://de.wikipedia.org/wiki/Eltanin_(Asteroid)). The wave, 1 km in height, threw whales onto the Andes. In the vicinity of the Earth there are no asteroids of a size that could destroy all people and the whole biosphere. However, comets of such size can come from the Oort cloud. In the article by Napier et al., "Comets with low reflecting ability and the risk of space collisions", it is shown that the number of dangerous comets may be essentially underestimated, as the observed quantity of comets is 1000 times less than expected; this is connected with the fact that comets, after several passes around the Sun, become covered by a dark crust, cease to reflect light and become imperceptible. Such dark comets are invisible by modern means. Besides, the ejection of comets from the Oort cloud depends on the tidal forces exerted by the Galaxy on the Solar system. These tidal forces increase when the Sun passes through the denser areas of the Galaxy, namely through the spiral arms and the galactic plane. And just now we are passing through the galactic plane, which means that during the present epoch comet bombardment is 10 times stronger than the average over the history of the Earth. Napier connects the previous epochs of intensive comet bombardment with the mass extinctions 65 and 251 million years ago.
The basic destructive factor of an asteroid impact would be not only the tsunami wave, but also the asteroid winter connected with the ejection of dust particles into the atmosphere. The fall of a large asteroid can cause deformations in the Earth's crust which will lead to volcanic eruptions. Besides, a large asteroid will cause a worldwide earthquake, dangerous first of all for technogenic civilisation.
The scenario of an intensive bombardment of the Earth by a set of fragments is more dangerous. In that case the strikes are distributed more evenly and require a smaller quantity of material. These fragments could result from the disintegration of some space body (see further about the threat of an explosion of Callisto), from the splitting of a comet into a stream of fragments (the Tunguska meteorite was probably a fragment of comet Encke), from an asteroid hitting the Moon, or as a secondary destructive factor of the collision of the Earth with a large space body. Many comets already consist of groups of fragments, and they can also break up in the atmosphere into thousands of pieces. This can also happen as a result of an unsuccessful attempt to deflect an asteroid by means of nuclear weapons.
The fall of an asteroid can provoke the eruption of supervolcanoes if the asteroid hits a thin part of the Earth's crust or the lid of the magma chamber of a volcano, or if the shock from the strike disturbs remote volcanoes. The molten iron formed by the fall of an iron asteroid could play the role of a "Stevenson probe" (if such a probe is possible at all), that is, melt through the Earth's crust and mantle, forming a channel into the Earth's bowels, which is fraught with enormous volcanic activity. Though this usually did not happen when asteroids fell to the Earth, the lunar "seas" could have arisen this way. Besides, outpourings of magmatic rocks could have hidden the craters of such asteroids. Such outpourings are the Siberian trap basalts and the Deccan plateau in India. The latter is simultaneous with two large impacts (Chicxulub and the Shiva crater). It is possible to assume that shock waves from these impacts, or a third space body whose crater has not survived, provoked this eruption. It is not surprising that several large impacts occur simultaneously, as separate fragments: for example, comet Shoemaker-Levy, running into Jupiter in 1994, left a dotted trace on it, because by the moment of collision it had already broken up into fragments. Besides, there can be periods of intensive formation of comets, when the solar system passes near another star, or as a result of collisions of asteroids in the asteroid belt.
Much more dangerous are air explosions of meteorites some tens of metres in diameter, which can trigger false alarms in early warning systems for nuclear attack, or hits of such meteorites in areas where missiles are based.
Pustynsky in his article comes to the following conclusions: "According to the estimates made in the present article, the prediction of a collision with an asteroid is still not guaranteed and is a matter of chance. It cannot be excluded that a collision will occur completely unexpectedly. At the same time, collision prevention requires a lead time of the order of 10 years. Detection of an asteroid some months prior to collision would allow the evacuation of the population and of nuclear-dangerous plants in the impact zone. Collision with asteroids of small size (up to 1 km in diameter) will not lead to planet-wide consequences (excluding, of course, the practically improbable direct hit on an area of concentration of nuclear materials). Collision with larger asteroids (approximately from 1 to 10 km in diameter, depending on the speed of collision) is accompanied by a most powerful explosion, the full destruction of the fallen body and the ejection into the atmosphere of up to several thousand cubic km of rock. By its consequences this phenomenon is comparable with the largest catastrophes of terrestrial origin, such as explosive volcanic eruptions. Destruction in the impact zone will be total, and the planet's climate will change sharply, settling back into shape only after some years (not decades or centuries!). That the threat of global catastrophe is exaggerated is confirmed by the fact that during its history the Earth has survived a set of collisions with similar asteroids, and it is not proved that they left an appreciable trace in its biosphere (in any case, far from always). Only collision with larger space bodies (diameter more than ~15-20 km) could make a more appreciable impact on the biosphere of the planet. Such collisions occur less often than once in 100 million years, and we do not yet have techniques allowing even an approximate calculation of their consequences."
So, the probability of the destruction of mankind as a result of an asteroid impact in the XXI century is very small.
meters in size) in the air. The first option is more likely to trigger the missile-attack early warning systems of superpowers (which have some unsecured areas in their missile defence systems, as in the Russian Federation, resulting in an inability to track the full missile trajectory), while the second is more likely for regional nuclear powers (like India and Pakistan, or North Korea) which are not able to track missiles but are able to react to a single explosion.
C) Technology to move asteroids will in the future create the hypothetical possibility of directing asteroids not only away from the Earth, but also towards it. And even if an asteroid impact is accidental, there will be rumours that it was sent on purpose. Yet hardly anyone would direct asteroids at the Earth, as such an action is easy to spot, hit accuracy is low, and it would have to be done decades before the impact.
D) Safe deflection of asteroids requires the creation of space weapons, which may be nuclear, laser or kinetic. Such weapons could be used against the Earth or against the satellites of a potential enemy. Although the risk of their use against the Earth is small, they still create a greater potential for damage than falling asteroids do.
E) Destruction of an asteroid by a nuclear explosion would increase its lethal force at the expense of its fragments, that is, an increased number of explosions over a larger area, as well as radioactive contamination of the debris.
Modern technical means are able to deflect only relatively small asteroids, which pose no global threat. The real danger is dark cometary bodies several kilometres across, moving along elongated elliptical orbits at great speed.
However, in the future (maybe as soon as 2030-2050) space can be quickly and cheaply scanned (and transformed) via self-replicating robots based on nanotech. They would create huge telescopes in space able to detect every dangerous body in the solar system. It would be enough to land a microrobot on an asteroid: it would multiply there, take the asteroid apart into pieces and build an engine which would change its orbit. Nanotech would also help to create self-sustaining human settlements on the Moon and other celestial bodies. This suggests that the problem of the asteroid hazard will become irrelevant within a few decades.
Thus, the problem of preventing a collision of the Earth with asteroids in the coming decades can only be a diversion of resources from global risks.
Firstly, because we still cannot deflect the objects that could really lead to the complete extinction of mankind.
Secondly, because by the time a system of nuclear annihilation of asteroids is created (or shortly thereafter), it will become obsolete, as nanotech could be used for quick and cheap exploration of the solar system by the middle of the 21st century, and perhaps earlier.
Third, because while the Earth is divided into warring states, an asteroid deflection system would be a weapon in the event of war.
Fourth, because the probability of human extinction from an asteroid impact in the narrow period of time when the asteroid deflection system is already deployed but powerful nanotechnology has not yet been established is extremely small. This time interval may be about 20 years, say from 2030 to 2050, and the chance of a 10-kilometre body falling during this time, even if we assume that we live in a period of cometary bombardment with an intensity 100 times higher than average, is about 1 in 15,000 (based on an average incidence of such bodies of once in 30 million years; see the sketch after this list). Moreover, if we consider the dynamics, we will be able to deflect the really dangerous objects only by the end of this period, and perhaps even later, since the larger the asteroid, the more large-scale and long-term the project for its deflection must be. Although 1 in 15,000 is still an unacceptably high risk, it is commensurate with the risk of the use of space-based weapons against the Earth.
Fifth, anti-asteroid protection diverts attention from other global problems, due to the limited human attention span (even the mass media attention span) and limited financial resources.
This is because the asteroid danger is very easy to understand: it is easy to imagine the impact, easy to calculate its probability, and it is understandable to the general public. There is no doubt about its reality, and it is clear how to protect against it. (For example, the probability of a volcanic catastrophe comparable to an asteroid impact is, by various estimates, 5 to 20 times greater for the same level of energy, but we have no idea how it could be prevented.) This differs from other risks that are difficult to imagine and cannot be quantified, but which may carry a probability of extinction of tens of percent: the risks of AI, biotech, nanotech and nuclear weapons.
Sixth, if we talk about relatively small bodies like Apophis, it may be cheaper to evacuate the area of the future impact than to deflect the asteroid. And the impact area will most likely be in the ocean, so anti-tsunami measures would be required.
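As promised in the fourth point, here is a back-of-envelope check of the 1-in-15,000 figure. It is a minimal sketch under the assumptions stated there (a 20-year window, a baseline rate of one 10-km impactor per 30 million years, and a 100-fold bombardment enhancement); the inputs are the book's, not independent estimates.

```python
# Rough check of the "1 in 15,000" estimate from the fourth point above.
window_years = 20            # 2030-2050: deflection deployed, no nanotech yet
base_rate = 1 / 30_000_000   # 10-km impactors per year, long-term average
bombardment_factor = 100     # assumed cometary-bombardment enhancement

p_impact = window_years * base_rate * bombardment_factor
print(p_impact)              # ~6.7e-5, i.e. roughly 1 in 15,000
```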
Still, I do not call for abandoning anti-asteroid protection, because our first need is to find out whether we are living in a period of cometary bombardment. In that case the probability of a 1 km body impact within the next 100 years is around 6 percent. (This is based on data about hypothetical impacts in the last 10,000 years, like the so-called Clovis comet, http://en.wikipedia.org/wiki/Younger_Dryas_impact_event, whose traces may be the roughly 500,000 crater-like formations called Carolina bays, http://en.wikipedia.org/wiki/Carolina_bays, and the large crater created near New Zealand in 1443.)
diameter of one kilometer or more, but is unlikely to meet the 2020 deadline for cataloguing midsize NEOs.
But in 2016 some questions arose about the validity of these data: http://www.scientificamerican.com/article/for-asteroid-hunting-astronomersnathan-myhrvold-says-the-sky-is-falling1/
If the infrared observations are correct, they greatly reduce the chances of a large family of dark comets (though 99 per cent of such comets spend most of the time beyond Mars, which makes them difficult to find).
In a 2015 article Napier discusses the dangers of centaur comets, which sometimes come from the outer Solar System, disintegrate, and result in periods of bombardment: https://www.ras.org.uk/images/stories/press/Centaurs/Napier.Centaurs.revSB.pdf
Comet Encke could be a fragment of a larger disintegrated comet. "Analysis of the ages of the lunar microcraters (zap pits) on rocks returned in the Apollo programme indicate that the near-Earth interplanetary dust (IPD) flux has been enhanced by a factor of about ten over the past ~10 kyr compared to the long-term average. The effects of running through the debris trail of a large comet are liable to be complex, and to involve both the deposition of fine dust into the mesosphere and, potentially, the arrival of hundreds or thousands of megaton-level bolides over the space of a few hours. Incoming meteoroids and bolides may be converted to micron-sized smoke particles (Klekociuk et al. 2005), which have high scattering efficiencies and so the potential to yield a large optical depth from a small mass. Modelling of the climatic effects of dust and smoke loading of the atmosphere has focused on the injection of such particulates in a nuclear war. Such work has implications for atmospheric dusting events of cosmic origin, although there are significant differences, of course. Hoyle & Wickramasinghe (1978) considered that the acquisition of ~10^14 g of comet dust in the upper atmosphere would have a substantial effect on the Earth's climate. Such an encounter is a reasonably probable event during the active lifetime of a large, disintegrating comet in an Encke-like orbit (Napier 2010). Apart from their effects on atmospheric opacity, a swarm of Tunguska-level fireballs could yield wildfires over an area of order 1% of the Earth's surface."
secondly, the ability of matter to elastically transfer a shock wave is limited by a certain upper bound, and all energy above it is not transferred but turns into heat around the epicentre. For example, in the ocean there cannot be a wave higher than the ocean's depth, and since the explosion epicentre is a point (unlike the epicentre of a usual tsunami, which is a fault line), the wave height will decrease linearly with distance. The surplus heat formed by the explosion is either radiated into space or remains as a lake of molten rock at the epicentre. The Sun delivers to the Earth in a day light energy of the order of 1000 gigatons (10^22 joules), so the thermal contribution of a superexplosion to the overall temperature of the Earth is insignificant. (On the other hand, the mechanism of distribution of heat from the explosion will be not streams of heated air, but the cubic kilometres of fragments thrown out by the explosion, with a total mass comparable to the mass of the asteroid but smaller energy; many of them will have a speed close to orbital velocity and will therefore fly on ballistic trajectories, as intercontinental missiles do. In an hour they will reach all corners of the Earth, and though, acting as kinetic weapons, they will not strike every point on the surface, on entering the atmosphere they will release huge quantities of energy, that is, they will warm the atmosphere over the whole surface of the Earth, possibly up to the ignition temperature of wood, which will aggravate the situation further.)
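The solar-input figure used in this comparison is easy to verify. A minimal sketch, assuming the standard solar constant of about 1361 W/m^2 and the Earth's mean radius; the result comes out at the order of 10^22 joules per day, matching the round figure in the text.

```python
# Rough check of the daily solar energy input quoted above.
import math

solar_constant = 1361.0   # W/m^2 at 1 AU (approximate)
earth_radius = 6.371e6    # m
seconds_per_day = 86400

intercepted_power = solar_constant * math.pi * earth_radius ** 2  # watts
energy_per_day = intercepted_power * seconds_per_day
print(f"{energy_per_day:.2e} J/day")  # ~1.5e22 J

# 1 gigaton of TNT ~ 4.2e18 J, so this is a few thousand gigatons per day,
# consistent with the "order of 1000 gigatons" figure in the text.
```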
We can roughly consider that the destruction zone grows proportionally to the fourth root of the force of the explosion (exact values are established by the military empirically as a result of tests, and the exponent lies between 0.33 and 0.25, depending on the force of the explosion and other factors). Each ton of meteorite mass gives approximately 100 tons of TNT equivalent of energy, depending on the speed of collision, which is usually some tens of kilometres per second. (By this estimate, a stone asteroid of 1 cubic km will give the energy of 300 gigatons. The density of comets is much less, but they can scatter in the air, strengthening the strike, and, besides, they move on orbits perpendicular to the Earth's with much greater speeds.) Accepting that the radius of complete destruction from a 1-megaton hydrogen bomb is 10 km, we can derive the radii of destruction for asteroids of different sizes, considering that the destruction radius grows proportionally to the fourth root of the force of the explosion. For example, for an asteroid of 1 cubic km it will be a radius of 230 km; for an asteroid 10 km in diameter, a radius of 1300 km; for a 100 km asteroid, a destruction radius of the order of 7000 km. For this radius of guaranteed destruction to become more than half the circumference of the Earth (20,000 km), that is, to cover the whole Earth with certainty, the asteroid should have a size of the order of 400 km. (If we consider that the destruction radius grows as the cube root, the diameter of an asteroid destroying everything would be about 30 km. The real value lies between these two figures (30-400 km); Pustynsky's estimate gives an independent value of 60 km.)
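The scaling argument above is easy to reproduce numerically. The sketch below uses the text's own assumptions (100 tons of TNT per ton of impactor mass, a stone density of about 2.7 t/m^3, a 10 km complete-destruction radius at 1 Mt, and fourth-root scaling); it yields the same orders of magnitude as the round figures quoted, with small differences because it models the asteroid as a sphere rather than a cube.

```python
# Reproduces the rough destruction-radius scaling described above.
import math

def destruction_radius_km(diameter_km, density_t_per_m3=2.7, exponent=0.25):
    radius_m = diameter_km * 500.0                      # half diameter, in m
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    mass_t = density_t_per_m3 * volume_m3
    yield_mt = mass_t * 100.0 / 1e6     # 100 t TNT per ton of mass, in Mt
    return 10.0 * yield_mt ** exponent  # 10 km at 1 Mt, scaled by E^0.25

for d in (1, 10, 100, 400):
    print(f"{d} km asteroid -> ~{destruction_radius_km(d):.0f} km radius")
# ~190, ~1100, ~6100 and ~17,000 km: the same orders as the 230, 1300,
# 7000 and ~20,000 km quoted in the text.
```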
Though these calculations are extremely approximate, they show that even the asteroid associated with the extinction of the dinosaurs did not strike the whole territory of the Earth, nor even the whole continent where it fell. And the extinction, if it was connected with the asteroid (it is now considered that the causes had a complex structure), was caused not by the strike itself but by its subsequent effect: the asteroid winter connected with dust carried through the atmosphere. Also, collision with an asteroid can cause an electromagnetic impulse, as in a nuclear bomb, on account of the fast movement of plasma. Besides, it is interesting to ask whether thermonuclear reactions could occur at collision with a comet whose speed is close to the maximum possible of about 100 km/s (a comet on a head-on course, the worst case), as at the point of impact there can be temperatures of millions of degrees and huge pressures, as in the implosion of a nuclear bomb. And even if the contribution of these reactions to the energy of the explosion is small, they could produce radioactive pollution.
A strong explosion would create strong chemical pollution of the whole atmosphere, at least by nitrogen oxides, which would form rains of nitric acid. And a strong explosion would litter the atmosphere with dust, creating the conditions for a nuclear winter.
From this it follows that a nuclear superbomb would be terrible not for the force of its explosion, but for the quantity of radioactive fallout it would produce. It is also apparent that the terrestrial atmosphere is the most powerful factor in the distribution of such effects.
next 1 billion years (that is, much earlier than the Sun becomes a red giant, and all the more a white dwarf). However, in comparison with the 100-year interval investigated by us, this process is insignificant (unless it develops together with other processes leading to irreversible global warming; see further).
There are hypotheses that as hydrogen burns out in the central part of the Sun, which is already occurring, not only the luminosity of the Sun will grow (luminosity grows on account of the growth of its size, not of its surface temperature), but also the instability of its burning. Possibly the recent ice ages are connected with this reduction in the stability of burning. It can be understood through the following metaphor: when there is a lot of firewood in a fire, it burns brightly and steadily, but when the greater part of the firewood has burnt through, it starts to die down a little and then flares up brightly again when it finds an unburnt branch.
A reduction of the hydrogen concentration in the centre of the Sun could provoke a process such as convection, which usually does not occur in the Sun's core, so that fresh hydrogen would arrive in the core. Whether such a process is possible, whether it would be smooth or catastrophic, and whether it would take years or millions of years, is difficult to say. Shklovsky assumed that as a result of such convection the Sun's temperature falls every 200 million years for a 10-million-year period, and that we live in the middle of such a period. What is dangerous is the end of this process, when fresh fuel at last arrives in the core and the luminosity of the Sun increases. (However, this is a marginal theory, as one of the basic problems which generated it, the problem of solar neutrinos, has now been resolved.)
It is important to underline, however, that proceeding from our physical understanding, the Sun cannot flare up as a supernova or a nova.
At the same time, to interrupt intelligent life on the Earth it is enough for the Sun to warm up by 10 percent over 100 years (that would raise the temperature on the Earth by 10-20 degrees without the greenhouse effect, but with the greenhouse effect taken into account it would most likely be above the critical threshold of irreversible warming). Such slow and rare changes of temperature in stars of the solar type would be difficult to notice by astronomical methods when observing sun-like stars, as the necessary accuracy of equipment has only recently been reached. (Besides, a logical paradox of the following kind is possible: sun-like stars are stable stars of spectral class G7 by definition. It is not surprising that as a result of observing them we find that these stars are stable.)
So, one variant of global catastrophe is that, as a result of certain internal processes, the luminosity of the Sun steadily increases by a dangerous amount (and we know that sooner or later this will occur). At the moment the Sun is on an ascending century-long trend of activity, but no special anomalies in its behaviour have been noticed. The probability of this happening in the XXI century is vanishingly small.
The second variant of global catastrophe connected with the Sun requires the coincidence of two improbable events: a very large flare occurring on the Sun, and its emission being directed at the Earth. Concerning the probability distribution of such an event, it is possible to assume that the same empirical law operates here as for earthquakes and volcanoes: a 20-fold growth in the energy of an event leads to a 10-fold decrease in its probability (the Gutenberg-Richter repeatability law; a minimal sketch follows below). In the XIX century a flare was observed that was, by modern estimations, 5 times stronger than the strongest flare of the XX century. Possibly, once in tens or hundreds of thousands of years, flares occur on the Sun that are similar in rarity and scale to terrestrial supervolcano eruptions. Nevertheless these are extremely rare events. Large solar flares, even if they are not directed at the Earth, can slightly increase the solar luminosity and lead to additional heating of the Earth. (Usual flares contribute no more than 0.1 percent.)
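Here is the promised minimal sketch of the repeatability law as stated above (20 times the energy, one tenth the probability). It corresponds to frequency falling off as energy to the power of -log(10)/log(20), about -0.77; the numbers only restate the quoted rule and are not an independent fit to solar data.

```python
# Sketch of the quoted repeatability law: 20x energy -> 10x rarer.
import math

b = math.log(10) / math.log(20)   # ~0.77

def relative_frequency(energy_ratio):
    """How much rarer an event is when it is energy_ratio times stronger."""
    return energy_ratio ** (-b)

print(relative_frequency(20))   # 0.1 by construction
print(relative_frequency(5))    # ~0.29: a 5x stronger flare is ~3-4x rarer
```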
At the moment mankind is incapable of affecting processes on the Sun, and this looks much more difficult than influencing volcanoes. Ideas of dumping hydrogen bombs on the Sun to initiate a thermonuclear reaction look unpersuasive (however, such ideas have been expressed, which speaks to the tireless search of the human mind for a Doomsday weapon).
There is a rather precisely calculated scenario of the influence on the Earth of the magnetic component of a solar flare. In the worst scenario (which depends on the force of the magnetic impulse and its orientation; it should be opposite to the terrestrial magnetic field), the flare will create strong currents in long-distance electric power transmission lines, which will burn out the transformers at substations. Under normal conditions the replacement of transformers takes 20-30 years, and if all of them burn down there will be nothing to replace them with, as manufacturing a similar quantity of transformers would take many years, and this would be difficult to organize without electricity. Such a situation would hardly result in human extinction, but it is fraught with a global economic crisis and wars, which could start a chain of further deterioration. The probability of such a scenario is difficult to estimate, as we have possessed electric networks for only about a hundred years.
energy more concentrated than in usual explosions of stars. Possibly, strong gamma-ray bursts from close sources caused several mass extinctions tens and hundreds of millions of years ago. It is supposed that gamma-ray bursts occur in collisions of black holes and neutron stars or in collapses of massive stars. Close gamma-ray bursts could destroy the ozone layer and even ionize the atmosphere. However, in the nearest environment of the Earth there are no visible suitable candidates either for sources of gamma-ray bursts or for supernovae (the nearest candidate for a gamma-ray burst source, the star Eta Carinae, is far enough away, of the order of 7,000 light years, and it is unlikely that the axis of its inevitable future explosion will be directed at the Earth, as gamma-ray bursts propagate as narrow beamed jets; however, for the potential hypernova star WR 104, which is at almost the same distance, the axis is directed almost towards the Earth. This star will explode within the nearest several hundred thousand years, which means the chance of a catastrophe from it in the XXI century is less than 0.1%, and taking into account the uncertainty of its rotation parameters and of our knowledge about gamma-ray bursts, even less). Therefore, even taking into account the effect of observation selection, which in some cases increases the expected frequency of catastrophes in the future up to 10 times in comparison with the past (see my article "Anthropic principle and Natural catastrophes"), the probability of a dangerous gamma-ray burst in the XXI century does not exceed thousandths of a percent. Mankind could survive even a serious gamma-ray burst in various bunkers.
Estimating the risk of gamma-ray bursts, Boris Stern writes: "Let us take a moderate case of an energy release of 10^52 erg and a distance to the burst of 3 parsecs, 10 light years, or 10^19 cm; within such limits from us there are tens of stars. At such a distance, within a few seconds, 10^13 erg will be deposited on each square centimetre of a planet in the path of the gamma rays. This is equivalent to the explosion of a nuclear bomb on each hectare of the sky! The atmosphere does not help: though the energy will be dissipated in its top layers, a considerable part will instantly reach the surface in the form of light. Clearly, all life on half of the planet will be instantly exterminated, and on the second half a little later at the expense of secondary effects. Even if we take a 100 times greater distance (that is the thickness of the galactic disk and hundreds of thousands of stars), the effect (a nuclear bomb on a square with a side of 10 km) will be a hard strike, and here it is already necessary to estimate seriously what will survive, and whether anything will survive at all." Stern believes that a gamma-ray burst in our galaxy happens on average once in a million years. A gamma-ray burst from a star like WR 104 could cause intensive destruction of the ozone layer on half of the planet. Possibly, a gamma-ray burst was the cause of the Ordovician mass extinction 443 million years ago, when 60% of species of living beings were lost (a considerably bigger share by number of individuals, since for the survival of a species the preservation of only several individuals is enough). According to John Scalo and Craig Wheeler, gamma-ray bursts make an essential impact on the biosphere of our planet approximately every five million years.
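Stern's figures can be checked with a one-line calculation: the fluence from an isotropic release of energy E at distance d is E / (4 * pi * d^2). The sketch below uses his stated inputs; the hectare comparison at the end is my own unit conversion, not part of his quote.

```python
# Back-of-envelope check of Stern's gamma-ray-burst figures quoted above.
import math

E_erg = 1e52      # assumed isotropic energy release
d_cm = 1e19       # ~10 light years

fluence = E_erg / (4.0 * math.pi * d_cm ** 2)
print(f"{fluence:.1e} erg/cm^2")   # ~8e12, i.e. the ~10^13 erg/cm^2 quoted

# One hectare = 1e8 cm^2, so this fluence is ~8e20 erg per hectare.
# With 1 kiloton of TNT ~ 4.2e19 erg, that is ~20 kt per hectare:
# indeed "a nuclear bomb on each hectare".
```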
Even a distant gamma-ray burst or other high-energy space event can be dangerous through radiation damage to the Earth, and not only through direct radiation, which the atmosphere appreciably blocks (though avalanches of high-energy particles from cosmic rays do reach the terrestrial surface), but also through the formation of radioactive atoms in the atmosphere, which would result in a scenario similar to the one described in connection with the cobalt bomb. Besides, gamma radiation causes oxidation of atmospheric nitrogen, creating the opaque poisonous gas nitrogen dioxide, which forms in the upper atmosphere and can block sunlight and cause a new Ice Age. There is a hypothesis that the neutrino radiation arising from supernova explosions can in some cases lead to mass extinction, as neutrinos scatter elastically on heavy atoms with higher probability, and the energy of this scattering is sufficient to break chemical bonds; therefore neutrinos would cause DNA damage more often than other kinds of radiation carrying much greater energy. (J.I. Collar. Biological Effects of Stellar Collapse Neutrinos. Phys. Rev. Lett. 76 (1996) 999-1002, http://arxiv.org/abs/astro-ph/9505028)
The danger of a gamma-ray burst is its suddenness: it begins without warning from invisible sources and propagates at the speed of light. In any case, a gamma-ray burst can strike only one hemisphere of the Earth, as it lasts only a few seconds or minutes.
Activation of the core of the galaxy (where there is a huge black hole) is also a very improbable event. In distant young galaxies such cores actively absorb matter, which twists as it falls into an accretion disk and radiates intensively. This radiation is very powerful and can also interfere with the emergence of life on planets. However, the core of our galaxy is very large and can therefore absorb stars almost at once, without tearing them apart, and hence with smaller radiation. Besides, it is quite observable in infrared rays (the source Sagittarius A*), though it is hidden by a thick dust layer in the optical range; and near the black hole there is no considerable quantity of matter ready for absorption, only one star in an orbit with a period of 5 years, and even it can continue to fly for a very long time. And the main thing is that it is very far from the Solar system.
Apart from distant gamma-ray bursts, there are soft gamma-ray bursts connected with catastrophic processes on special neutron stars: magnetars. On August 27, 1998, a flare on a magnetar led to an instant decrease in the height of the Earth's ionosphere by 30 km; however, that magnetar was at a distance of 20,000 light years. No magnetars are known in the vicinity of the Earth, but finding them may not be simple.
Supernova stars
A real danger to the Earth would be a close supernova explosion at a distance of up to 25 light years or even less. But in the vicinity of the Sun there are no stars which could become dangerous supernovae. (The nearest candidates, Mira and Betelgeuse, are at distances of hundreds of light years.) Besides, the radiation of a supernova is a rather slow process (lasting months), and people could have time to hide in bunkers. Finally, only if a dangerous supernova were strictly in the equatorial plane of the Earth (which is improbable) could it irradiate the whole terrestrial surface; otherwise one of the poles would escape. See Michael Richmond's review, "Will a Nearby Supernova Endanger Life on Earth?"
be sources of cosmic rays which would lead to a sharp increase in cloud cover on the Earth, connected with an increase in the number of centres of condensation of water. This could lead to a sharp cooling of the climate for a long period. (Nearby Supernova May Have Caused Mini-Extinction, Scientists Say, http://www.sciencedaily.com/releases/1999/08/990803073658.htm)
Super-tsunami
Ancient human memory keeps enormous floods as the most terrible catastrophe. However, on the Earth there is no such quantity of water that the ocean level could rise above the mountains. (Reports of the recent discovery of underground oceans are somewhat exaggerated: actually they concern only rocks with raised water content, at the level of 1 percent.) The average depth of the world ocean is about 4 km, and the limiting maximum height of a wave is of the same order, if we discuss the possibility of the wave itself rather than whether causes that could create a wave of such height are possible. That is less than the height of the high-mountain plateaus in the Himalayas, where people also live. Variants in which such a wave is possible are a huge tidal wave arising if a very massive body flew near the Earth, or if the axis of rotation of the Earth were displaced or its speed of rotation changed. All these variants, though they occur in different "horror stories" about a doomsday, look impossible or improbable.
So, it is very improbable that a huge tsunami would destroy all people, as submarines, many ships and planes would escape. However, a huge tsunami could destroy a considerable part of the population of the Earth, moving mankind to a postapocalyptic stage, for several reasons:
Super-Earthquake
We could call a super-earthquake a hypothetical large-scale quake leading to the full destruction of human-built structures over the whole surface of the Earth. No such quakes have happened in human history, and the only scientifically solid scenario for them seems to be a large asteroid impact.
Such an event could not result in human extinction by itself, as there would be ships, planes, and people in the countryside. But it would unequivocally destroy all technological civilization. To do so it would need an intensity of around 10 on the Mercalli scale (http://earthquake.usgs.gov/learn/topics/mercalli.php).
6) The Earth cracks in the area of oceanic rifts. I have read suggestions that oceanic rifts expand not gradually but in large jumps. These mid-oceanic rifts create new oceanic floor (https://en.wikipedia.org/wiki/Mid-Atlantic_Ridge). The evidence for this is the large steps in the ocean floor in the zones of oceanic rifts. The boiling of water trapped in a rift and in contact with magma may also contribute to an explosive, zip-style rupture of the rifts. But this idea may come from fringe-science catastrophism, so it should be taken with caution.
12. The waves (from a surface-level event) would focus on the opposite side of the Earth, as may have happened after the Chicxulub asteroid impact, which coincided with the Deccan traps on the opposite side of the Earth and resulted in comparable destruction there.
13. A large displacement of mass may result in a small change in the speed of rotation of the Earth, which would contribute to tsunamis.
14. Secondary quakes would follow, as energy is released from tectonic tensions and mountain collapses.
Large, non-global earthquakes could also become precursors of global catastrophes in several ways. The following podcast by Seth Baum is devoted to this possibility: http://futureoflife.org/2016/07/25/earthquake-existentialrisk/#comment-4147
1) Destruction of biological facilities like the CDC, which holds smallpox samples and other viruses.
2) Nuclear meltdowns.
3) Economic crisis or a slowing of technological progress in the case of a large earthquake in San Francisco or another important area.
4) The start of a nuclear war.
5) X-risk prevention groups are disproportionately concentrated in San Francisco and around London. They are more concentrated than the possible sources of risks, so in the event of a devastating earthquake in SF our ability to prevent x-risks may be greatly reduced.
of the Earth. In itself an inversion of the magnetic field would not result in the extinction of people, as polarity reversals have repeatedly occurred in the past without appreciable harm. In the process of a polarity reversal the magnetic field could fall to zero or become orientated toward the Sun (with a pole on the equator), which would lead to intense suction of charged particles into the atmosphere. The simultaneous combination of three factors (the falling of the Earth's magnetic field to zero, the exhaustion of the ozone layer and a strong solar flare) could result in the death of all life on Earth, or at least in the crash of all electric systems, which is fraught with the fall of technological civilisation. And this crash itself is not terrible; what is terrible is what would happen in the process with nuclear weapons and all other technologies.
Nevertheless the magnetic field decreases slowly enough (though the speed of the process is increasing), so it will hardly be nullified in the nearest decades. Another catastrophic scenario: the change of the magnetic field is connected with changes of magma streams in the core, which somehow can influence global volcanic activity (there are data on correlation between the periods of volcanic activity and the periods of pole change). A third risk is a possibly wrong understanding of the reasons for the existence of the Earth's magnetic field.
There is a hypothesis that the growth of the solid nucleus of the Earth has made the Earth's magnetic field less stable, exposing it to more frequent polarity reversals, which is consistent with the hypothesis of the weakening of the protection that we receive via the anthropic principle.
population of one species, after which new dangerous illnesses will arise every day. Among real-life illnesses, two are worth noting:
Bird flu. As has been repeatedly said, it is not bird flu itself that is dangerous, but a possible mutation of the H5N1 strain that would make it transmissible from human to human. For this, in particular, the attachment fibers on the surface of the virus would have to change so that the virus attaches not deep in the lungs, but higher up, where it has more chances to get out as cough droplets. This may be a rather simple mutation. Opinions differ on whether H5N1 is capable of mutating this way, but history already holds precedents of deadly flu epidemics. The worst estimate of the number of possible victims of a mutated bird flu was 400 million people. Though that would not mean the full extinction of mankind, it would almost certainly send the world into a certain post-apocalyptic stage.
AIDS. This illness in its modern form cannot lead to the full extinction of mankind, though it has already sent a number of African countries into a post-apocalyptic stage. There are interesting arguments by Supotnitsky about the nature of AIDS and about how epidemics of retroviruses have repeatedly cut down the population of hominids. He also assumes that HIV has a natural carrier, probably a microorganism. If AIDS began to spread like the common cold, mankind's fate would be sad. As it is, AIDS is deadly in almost 100% of cases, and develops slowly enough to have time to spread.
We should also note new strains of microorganisms that are resistant to antibiotics, for example, hospital-acquired Staphylococcus aureus infections and drug-resistant tuberculosis. The process by which various microorganisms acquire resistance to antibiotics continues, and such organisms spread more and more, which at some point could produce a cumulative wave of many resistant illnesses (against a background of weakened human immunity). One may certainly count on biological supertechnologies defeating them, but if there is a delay in the appearance of such technologies, mankind's fate is grim. Revival of smallpox, plague and other past illnesses is possible, but separately each of them could not destroy all people. By one hypothesis, Neanderthals died out because of a version of mad cow disease, that is, an illness caused by a prion (a self-propagating misfolded form of a protein) and spread by means of cannibalism, so we cannot exclude the risk of human extinction from a natural illness either.
Finally, the story of the "Spanish flu" virus, which was recovered from burial sites, its genome read and then published on the Internet, looks absolutely irresponsible. Under public pressure the genome was later removed from open access. And there was also a case when this virus was dispatched by mistake to thousands of laboratories around the world for equipment testing.
Hypercanes
Kerry Emanuel from MIT put forward the hypothesis that in the past the Earth's atmosphere was much less stable, resulting in mass extinctions. If the temperature of the ocean surface were increased by 15-20 degrees, which is possible as a result of sharp global warming, an asteroid impact or an underwater eruption, it would create the so-called hypercane: a huge storm with wind speeds of approximately 200-300 meters per second, the size of a continent, a long lifetime and a central pressure of about 0.3 atmospheres. Moving away from its place of origin, such a hypercane would destroy all life on land, while in its place, over the warm patch of ocean, a new hypercane would form. (This idea is used in John Barnes's novel Mother of Storms.)
Emanuel showed that when an asteroid more than 10 km in diameter falls into a shallow sea (as happened 65 million years ago with the impact near Mexico, which is associated with the extinction of the dinosaurs), it may create a patch of heated water about 50 km across, which would be enough to form a hypercane. A hypercane would also eject huge amounts of water and dust into the upper atmosphere, which could lead to dramatic global cooling or warming.
http://en.wikipedia.org/wiki/Great_Hurricane_of_1780
http://en.wikipedia.org/wiki/Hypercane
Emanuel, Kerry (1996-09-16). Limits on Hurricane Intensity. Center for Meteorology and Physical Oceanography, MIT. http://wind.mit.edu/~emanuel/holem/node2.html#SECTION00020000000000000000
Did storms land the dinosaurs in hot water? New Scientist. http://www.newscientist.com/article/mg14519632.600-did-storms-land-the-dinosaurs-in-hot-water.html
millions of years before the next act of trap volcanism, if it happens at all. The basic danger here is that people, by deep penetration into the Earth, could push these processes forward if they have already ripened to a critical level.
In the liquid core of the Earth, the most dangerous things are the gases dissolved in it. They are capable of bursting to the surface if they are given a channel. As heavy iron settles downward, it is chemically reduced (releasing heat), and more and more gas is liberated, driving the process of degassing of the Earth. There are suggestions that the powerful atmosphere of Venus arose rather recently as a result of intensive degassing of its interior. A certain danger lies in the temptation to extract free energy from the Earth's interior by pumping out heated magma (though if this is done in places not connected with plumes, it should be safe enough).
There is a conjecture that the spreading of the ocean floor from mid-ocean rift zones occurs not smoothly, but in jerks, which on the one hand are much rarer than earthquakes in subduction zones (which is why we have not observed them), but are much more powerful. A metaphor is pertinent here: the rupture of a balloon is a much more powerful process than its crumpling. The melting of glaciers leads to the unloading of lithospheric plates and to the strengthening of volcanic activity (in Iceland, for example, by a factor of 100). Therefore the future melting of the Greenland ice sheet is dangerous.
Finally, there are bold conjectures that in the center of the Earth (and also of other planets and even stars) there are microscopic (on astronomical scales) relic black holes which arose at the time of the Big Bang. See A. G. Parhomov's article About the possible effects connected with small black holes. By Hawking's theory, relic holes should evaporate slowly, but with increasing speed toward the end of their existence, so in its last seconds such a hole produces a flash with an energy equivalent to about 1000 tons of mass (about 228 tons in the final second alone), which is approximately 20,000 gigatons of TNT equivalent, roughly the energy of a collision of the Earth with an asteroid 10 km in diameter. Such an explosion would not destroy the planet, but would cause an earthquake of huge force across the whole surface, probably sufficient to destroy all structures and throw civilization back to a deeply post-apocalyptic level. People would survive, however, at least those who were in planes and helicopters at that moment. A microscopic black hole in the center of the Earth would undergo two processes simultaneously, accretion of matter and energy loss by Hawking radiation, which could be in balance. However, a shift of the balance in either direction would be fraught with catastrophe: either explosion of the hole, or absorption of the Earth, or its destruction through stronger energy release during accretion. I remind the reader that there are no facts confirming the existence of relic black holes; this is only an improbable assumption which we consider proceeding from the precautionary principle.
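These figures are easy to sanity-check. Below is a minimal Python sketch (the asteroid's density and impact speed are my own assumed values, not from the text) showing that 1000 tons of mass-energy is indeed on the order of 20,000 gigatons of TNT, and that this is the same order of magnitude as a 10-km asteroid impact:

import math

C = 3.0e8          # speed of light, m/s
GT_TNT = 4.184e18  # joules per gigaton of TNT

m_flash = 1.0e6              # "1000 tons" of mass-energy, in kg
e_flash = m_flash * C**2     # ~9e22 J
print(e_flash / GT_TNT)      # ~21,500 Gt: the text's ~20,000 Gt figure

# Kinetic energy of a 10-km stony asteroid at an assumed 20 km/s
r = 5.0e3                                     # radius, m
m_ast = 2500 * (4.0 / 3.0) * math.pi * r**3   # ~1.3e15 kg at 2500 kg/m^3
e_ast = 0.5 * m_ast * (2.0e4)**2              # ~2.6e23 J
print(e_ast / GT_TNT)        # ~63,000 Gt: the same order of magnitude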
Bottke, W. F., Vokrouhlický, D., Nesvorný, D. An asteroid breakup 160 Myr ago as the probable source of the K/T impactor. Nature 449 (2007). http://www.nature.com/nature/journal/v449/n7158/abs/nature06070.html
times. (No more than that, because then restrictions come into play similar to those described in the article by Bostrom and Tegmark, which considers this problem in relation to cosmic catastrophes. However, the real force of these restrictions for geological catastrophes requires more exact research.) For example, if the absence of super-huge volcanic eruptions flooding the entire surface of the Earth is a lucky coincidence, and in the norm they should occur once in 500 million years, then the chance for the Earth to be in its unique position would be 1 in 256, and the expected remaining time of existence of life would be 500 million years.
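The arithmetic behind "1 in 256" is worth making explicit. The sketch below reproduces the figure under the text's implicit model, in which each 500-million-year window is survived with probability 1/2; a Poisson model (my addition) would make the coincidence look even stronger:

import math

T = 4.0e9      # observed quiet history, years (assumed round figure)
tau = 5.0e8    # mean interval between catastrophes, years
n = T / tau    # 8 expected events

# Coin-flip model: each 500-Myr window survived with chance 1/2
print(0.5 ** n)        # 1/256 ~ 0.0039, the text's figure

# Poisson model: probability of zero events over the whole history
print(math.exp(-n))    # e^-8 ~ 0.00034, an even smaller chance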
We will return to the discussion of this effect in the chapter on indirect estimates of the probability of global catastrophe at the end of the book. The important methodological consequence is that concerning global catastrophes we cannot use any reasoning in the spirit of: it will not happen in the future because it did not happen in the past. On the other hand, a 10-fold deterioration of the chances of natural catastrophes reduces the expected time of existence of conditions for life on Earth from a billion to a hundred million years, which gives only a very small contribution to the probability of extinction in the XXI century.
A frightening confirmation of the hypothesis that we most likely live at the end of a period of stability of natural processes is the article by R. Rohde and R. Muller in Nature about the cycle of extinctions of living beings with a period of 62 (+/- 3) million years, since the last extinction was just 65 million years ago. That is, the time of the next cyclic extinction event came long ago. We also note that if the offered hypothesis about the role of observation selection in underestimating the frequency of global catastrophes is true, it means that intelligent life on Earth is an extremely unusual event in the Universe, and with high probability we are alone in the observable Universe. In this case we need not fear extraterrestrial invasion, and also we cannot draw any conclusions about the frequency of self-destruction of advanced civilizations in connection with the Fermi paradox (the silence of space). As a result, the net contribution of the stated hypothesis to our estimate of the probability of human survival can be positive.
Debunked and false risks from media, science fiction and fringe science or old
theories
Nemesis
Gases from comets
become less stable and more inclined to fluctuations (as is well known concerning the Sun, which, as it exhausts its hydrogen, will burn ever more brightly and non-uniformly), and secondly, which seems more important, they become more sensitive to possible small human influences. It is one thing to tug at a slack elastic band, and quite another to tug at one stretched to the limit of rupture.
For example, if a certain supervolcano eruption has ripened, many thousands of years may pass before it occurs, but a borehole just a few kilometers deep could be enough to break the stability of the cover of the magma chamber. As the scale of human activity grows in all directions, the chances of stumbling upon such an instability increase. It could be an instability of the vacuum, of the terrestrial lithosphere, or of something else we do not think about at all.
The map also shows how prevention plans depend on the current level of technologies. In short, the map has three variables: level of technology, level of urgency of the prevention of global warming, and scale of the warming. The following post consists of a wall of text and the map, which are complementary: the text provides in-depth details about some of the ideas, and the map gives a general overview of the prevention plans.
The map: http://immortality-roadmap.com/warming3.pdf
Uncertainty
The main feature of climate theory is its intrinsic uncertainty. This uncertainty is not about climate change denial; we are almost sure that anthropogenic climate change is real. The uncertainty is about its exact scale and timing, and especially about low-probability tails with high consequences. In risk analysis we can't ignore these tails, as they bear the most risk. So I will focus mainly on the tails, but this in turn requires a focus on more marginal, contested or unproved theories.
These uncertainties are especially large if we make projections for 50-100 years from now; they are connected with the complexity of the climate, the unpredictability of future emissions and the chaotic nature of the climate system.
Clathrate methane gun
An unconventional but possible global catastrophe accepted by several researchers is a greenhouse catastrophe known as the runaway greenhouse effect. The idea is well covered on Wikipedia:
https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis
Currently large amounts of methane clathrate are present in the Arctic, and since this area is warming more quickly than other regions, the gases could be released into the atmosphere.
https://en.wikipedia.org/wiki/Arctic_methane_emissions
Predictions of the speed and consequences of this process differ. Mainstream science sees the methane cycle as a dangerous but slow process which could eventually result in a 6 C rise in global temperature; this seems bad, but it is survivable, and it would also take thousands of years.
Something similar happened once before, in the late Paleocene, at the event known as the Paleocene-Eocene Thermal Maximum (PETM), https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum , when the temperature jumped by about 6 C, probably because of methane. Methane-driven global warming is just one of ten hypotheses explaining the PETM. But during the PETM global methane clathrate deposits were around 10 times smaller than they are at present, because the ocean was warmer. This means that if the clathrate gun fires again, it could have much more severe consequences.
Some scientists think that the release may happen quickly and with stronger effects, resulting in runaway global warming because of several positive feedback loops. See, for example, the blog http://arctic-news.blogspot.ru/
There are several possible positive feedback loops which could make methane-driven warming stronger:
1) The Sun is now brighter than before because of stellar evolution. The increase in the Sun's luminosity will eventually result in runaway global warming in a period 100 million to 1 billion years from now. The Sun will become thousands of times more luminous when it becomes a red giant. See more here: https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans
2) After a long period of a cold climate (ice ages), a large amount of methane clathrate has accumulated in the Arctic.
3) Methane is a short-lived atmospheric gas (about seven years). So the same amount of methane produces much more intense warming if it is released quickly, compared with a scenario in which the release is scattered over centuries (see the sketch after this list). The speed of methane release depends on the speed of global warming. Anthropogenic CO2 is rising very quickly and could be followed by a quick release of the methane.
4) Water vapor is the strongest greenhouse gas, and more warming results in more water vapor in the atmosphere.
5) Coal burning resulted in large global dimming, https://en.wikipedia.org/wiki/Global_dimming and the current switch to cleaner technologies could stop the masking of global warming.
6) The ocean's ability to dissolve CO2 falls with a rise in temperature.
7) The Arctic has the biggest temperature increase due to global warming, with a projected growth of 5-10 C, and as a result it will lose its ice shield, reducing the Earth's albedo, which would result in higher temperatures. The same is true for permafrost and snow cover.
8) Warmer Siberian rivers bring their water into the Arctic ocean.
9) The Gulf Stream will bring warmer water from the Gulf of Mexico to the Arctic ocean.
10) The current period of a calm, spotless Sun will end, resulting in further warming.
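Point 3 above is, at bottom, a simple decay computation. A rough sketch (illustrative units only, not a climate model) of how the peak airborne methane burden depends on release speed:

import math

TAU = 7.0          # atmospheric lifetime of methane, years (from the text)
TOTAL = 1000.0     # total release, arbitrary units

def peak_burden(release_years):
    """Peak airborne burden if TOTAL is emitted evenly over release_years,
    against exponential decay with lifetime TAU (peak occurs at the end
    of the emission period)."""
    rate = TOTAL / release_years
    return rate * TAU * (1 - math.exp(-release_years / TAU))

print(peak_burden(1))     # fast release: ~930 units aloft at once
print(peak_burden(200))   # spread over two centuries: ~35 units

The same total release thus yields roughly a 27-fold difference in peak burden, which is why the speed of the clathrate response matters so much.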
Anthropic bias
One unconventional reason why global warming may be more dangerous than we used to think is anthropic bias.
1. We tend to think that we are safe because no runaway global warming event has ever happened in the past. But we could only observe a planet where this never happened. Milan Ćirković and Bostrom wrote about this, so the real rate of runaway warming could be much higher. See here: http://www.nickbostrom.com/papers/anthropicshadow.pdf
2. Also, we humans tend to find ourselves in a period when climate changes are very strong because of climate instability. This is because human intelligence, as a universal adaptation mechanism, was more effective in a period of instability. So climate instability helps to breed intelligent beings. (This is my idea and it may need additional proof.)
3. But if runaway global warming is long overdue, this would mean that our environment is more sensitive even to smaller human actions (compare it with an over-pressured balloon and a small needle). In this case the amount of CO2 we currently release could be such an action. So we could underestimate the fragility of our environment because of anthropic bias. (This is my idea and I wrote about it here: http://www.slideshare.net/avturchin/why-anthropic-principle-stopped-to-defendus-observation-selection-and-fragility-of-our-environment)
The timeline of possible runaway global warming
We could call runaway global warming a Venusian scenario: thanks to the greenhouse effect on the surface of Venus, its temperature is over 400 C, despite the fact that, owing to its high albedo (0.75, caused by white clouds), it receives less solar energy than the Earth (albedo 0.3).
A greenhouse catastrophe could consist of three stages:
1. Warming of 1-2 degrees due to anthropogenic CO2 in the atmosphere, and passage of a trigger point. We don't know where the tipping point is; we may have passed it already, or conversely we may be underestimating natural self-regulating mechanisms.
2. Warming of 10-20 degrees because of methane from gas hydrates and the Siberian bogs, as well as the release of CO2 currently dissolved in the oceans. The speed of this self-amplifying process is limited by the thermal inertia of the ocean, so it would probably take about 10-100 years. This process can be arrested only by sharp hi-tech interventions, like an artificial nuclear winter and/or eruptions of multiple volcanoes. But the more warming occurs, the lower the ability of civilization to stop it becomes, as its technologies will be damaged. On the other hand, the later global warming happens, the higher the technological level that can be used to stop it.
3. Moist greenhouse. Steam is a major contributor to the greenhouse effect, which results in an even stronger and quicker positive feedback loop. A moist greenhouse will start if the average temperature of the Earth reaches 47 C (currently 15 C), and it will result in runaway evaporation of the oceans and 900 C surface temperatures (https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans). All the water on the planet will boil, resulting in a dense water vapor atmosphere. See also: https://en.wikipedia.org/wiki/Runaway_greenhouse_effect
Prevention
If we survive until a positive Singularity, global warming will not be an issue. But if strong AI and other super technologies don't arrive before the end of the 21st century, we will need to invest a lot in prevention, as civilization could collapse before the creation of strong AI, which would mean we never get to use its benefits.
I have a map which summarizes the known ideas for global warming prevention and adds some new ones for urgent risk management: http://immortality-roadmap.com/warming2.pdf
The map has two main variables: our level of technological progress and the size of the warming which we want to prevent. But its main variable is the ability of humanity to unite and act proactively. In short, the plans are:
No plan: do nothing, and just adapt to the warming.
Plan A: cutting emissions and removing greenhouse gases from the atmosphere. Requires a lot of investment and cooperation; long-term action and remote results.
Plan B: geo-engineering aimed at blocking sunlight. Little investment needed, and unilateral action is possible. Quicker action and quicker results, but involves risks if it is switched off.
Plan C: emergency actions for dimming the Sun, like an artificial volcanic winter.
Plan D: moving to other planets.
All plans could be executed at the current level of technology, and also at a high-tech level through the use of nanotech and so on.
I think that climate change demands that we go directly to Plan B. Plan A, cutting emissions, is not working, because it is very expensive and requires cooperation from all sides. Even then it will not achieve immediate results, and the temperature will still continue to rise for many other reasons.
Plan B is changing the opacity of the Earth's atmosphere. It could be a surprisingly low-cost exercise and could be operated locally. There are suggestions to release something as simple as sulfuric acid into the upper atmosphere to raise its reflectivity.
"According to Keiths calculations, if operations were begun in 2020, it would take 25,000 metric
tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of
sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric
tons of it each year, at an annual cost of $700 million, would be required to compensate for the
83
increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program
would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft."
https://www.technologyreview.com/s/511016/a-cheap-and-easy-plan-to-stop-global-warming/
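The quoted figures imply some useful unit costs. A small back-of-the-envelope sketch (the ~40 Gt/yr figure for global CO2 emissions is my added context, for scale):

tons_per_year = 250_000      # sulfuric acid delivered in the 2040 scenario
annual_cost = 700e6          # USD per year
jets = 11

print(annual_cost / tons_per_year)   # ~$2,800 per delivered ton
print(tons_per_year / jets / 365)    # ~62 tons per jet per day

# For scale: roughly 40 Gt of CO2 are emitted globally each year,
# so the mass moved here is ~160,000 times smaller.
print(40e9 / tons_per_year)

This asymmetry, tiny delivered mass against the huge mass of emissions being offset, is the core of the "cheap and easy" argument for Plan B.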
There are also ideas to recapture CO2 using genetically modified organisms, iron seeding in the oceans, and dispersal of the carbon-capturing mineral olivine.
The problem with this approach is that it can't be stopped. As Seth Baum wrote, a smaller catastrophe could disrupt such engineering, with the consequent immediate return of global warming with a vengeance. http://sethbaum.com/ac/2013_DoubleCatastrophe.html
There are other ways of preventing global warming. Plan C is creating an artificial nuclear winter through a volcanic explosion or by starting large-scale forest fires with nukes. This idea is even more controversial and untested than geo-engineering.
A regional nuclear war is capable of putting 5 million tons of black carbon into the upper atmosphere; average global temperatures would drop by 2.25 degrees F (1.25 degrees C) for two to three years afterward, the models suggest.
http://news.nationalgeographic.com/news/2011/02/110223-nuclear-war-winter-global-warmingenvironment-science-climate-change/ Nuclear explosions in deep forests may have the same effect as attacks on cities in terms of soot production.
Fighting between Plan A and Plan B
So we are not even close to being doomed by global warming, but we may have to change the way we react to it.
While cutting emissions is important, it will probably not work within a 10-20 year period, so quicker-acting measures should be devised.
The main risk is abrupt runaway global warming. It is a low-probability event with the highest consequences. To fight it we should prepare rapid response measures.
Such preparation should be done in advance, which requires expensive scientific experiments. The main problems here are (as always) funding and regulators' approval. The impact of sulfur aerosols should be tested. Complicated mathematical models should be evaluated.
Counter-arguments are the following: Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough of such measures to say that they are safe? Of course, if we believe that history will end anyway within decades or centuries because of the singularity, the long-term effects of such measures may not matter so much. Another big issue with changing insolation is that it doesn't solve ocean acidification. No state actor should be allowed to start geo-engineering until they at least take simple measures to reduce their emissions. (Comments from a LessWrong discussion about global warming.)
Currently it all looks like a political fight between Plan A (cutting emissions) and Plan B (geo-engineering), in which Plan A is winning approval. It has been suggested not to implement Plan B, as an increase in the warming would demonstrate a real need to implement Plan A (cutting emissions).
Regulators did not approve even the smallest experiments with sulfur shielding in Britain. Iron ocean seeding also has regulatory problems.
But the same logic works in the opposite direction: China and the coal companies may decline to cut emissions because they want to press policymakers to implement Plan B. It looks like a prisoner's dilemma between the two plans.
The difference between the two plans is that Plan A would return everything to its natural state, while Plan B is aimed at creating instruments to regulate the planet's climate and weather.
In the current global political situation, cutting emissions is difficult to implement, because it requires collaboration between many rival companies and countries. If several of them defect (most likely China, Russia and India, which rely heavily on coal and other fossil fuels), it will not work, even if all of Europe were solar powered.
A transition to a zero-emission economy could happen naturally in 20 years, after electric transportation becomes widespread along with solar energy.
Plan C should be implemented if the situation suddenly changes for the worse, with the temperature jumping 3-5 C in one year. In this case the only option we have is to bomb the Pinatubo volcano to make it erupt again, or probably even several volcanoes. A volcanic winter would give us time to adopt other geo-engineering measures.
I would also advocate a mixture of both plans, because they work on different timescales. Cutting emissions and removing CO2 at the current level of technology would take decades to have an impact on the climate. But geo-engineering has a reaction time of around one year, so we could use it to cover the bumps in the road.
Especially important is the fact that if we completely stop emissions, we will also stop the global dimming from coal burning, which could result in a 3 C global temperature jump. So stopping emissions may itself cause a temperature jump, and we need a protection system for this case.
In all cases we need to survive until stronger technologies develop. Using nanotech or genetic engineering, we could solve the warming problem with less effort. But we have to survive until then.
It seems to me that the idea of cutting emissions is overhyped and solar radiation management is underhyped in terms of public opinion and funding. By changing that imbalance we could achieve more common good.
An unpredictable climate needs a quicker regulation system
The management of climate risks depends on their predictability, and it seems that this predictability is not very high. The climate is a very complex and chaotic system.
It may react unexpectedly in response to our own actions. This means that long-term actions are less favorable: the situation could change many times during their implementation.
Quick actions like solar shielding are better for managing poorly predictable processes, as we can see the results of our actions and quickly cancel them, or make them stronger, if we don't like the results.
Multiple people predict extinction due to global warming, but they are mostly labeled as alarmists and ignored. Some notable predictions:
1. David Auerbach predicts that by 2100 warming will be 5 C, and combined with resource depletion and overcrowding it will result in a global catastrophe. http://www.dailymail.co.uk/sciencetech/article-3131160/Will-child-witness-end-humanity-Mankindextinct-100-years-climate-change-warns-expert.html
2. Sam Carana predicts that warming will be 10 C in the 10 years following 2016, and extinction will happen in 2030. http://arctic-news.blogspot.ru/2016/03/ten-degrees-warmer-in-adecade.html
3. Conventional predictions of the IPCC give a maximum warming of 6.4 C by 2100 in the worst-case emission scenario with the worst climate sensitivity: https://en.wikipedia.org/wiki/Effects_of_global_warming#SRES_emissions_scenarios
4. The consensus of scientists is that the climate tipping point will come around 2200: http://www.independent.co.uk/news/science/scientists-expect-climate-tipping-point-by-22002012967.html
5. If humanity continues to burn all known carbon sources, the result will be a 10 C warming by 2300. https://www.newscientist.com/article/mg21228392-300-hyperwarming-climate-could-turnearths-poles-green/ The only scenario in which we are still burning fossil fuels by 2300 (but are not extinct and are not a solar-powered supercivilization running nanotech and AI) is a series of nuclear wars or other smaller catastrophes which permit the existence of regional powers that repeatedly smash each other into ruins and then rebuild using coal energy. Something like a global nuclear Somalia world.
6. Kopparapu says that if current IPCC temperature projections of a 4 degree K (or Celsius) increase by the end of this century are correct, our descendants could start seeing the signatures of a moist greenhouse by 2100; the Earth would be on the edge of runaway warming. http://arctic-news.blogspot.ru/2013/04/earth-is-on-the-edge-of-runaway-warming.html
We should give more weight to less mainstream predictions, because they describe the heavy tails of possible outcomes. I think it is reasonable to estimate the risk of extinction-level runaway global warming in the next 100-300 years at 1 per cent, and to act as if it were the main risk from global warming.
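To make the suggested 1 per cent figure concrete, a small sketch converting it into an annual hazard rate under a constant-rate assumption (my simplification, not a claim from the predictions above):

import math

def annual_rate(p_total, years):
    """Constant annual hazard rate implied by a cumulative probability."""
    return -math.log(1.0 - p_total) / years

for years in (100, 300):
    print(years, annual_rate(0.01, years))
    # ~1.0e-4 per year over 100 years, ~3.3e-5 per year over 300 years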
The map's contents in outline:
Plan A (greenhouse gas limitation): cutting CO2 emissions; reduction of emissions of other greenhouse gases; local adaptation to the changing climate (air conditioning, relocation of cities, new crops, irrigation); using nanotech and biotech to create new modes of transport and energy sources; using AI to create a comprehensive climate model and calculate impacts; capturing CO2 from the atmosphere (reforestation; stimulation of plankton by seeding the ocean with iron; spraying the common mineral olivine, which reacts chemically with CO2 and absorbs it).
Plan B (use of geo-engineering for reflection of sunlight; risks: turning it off could result in an immediate rebound of global warming, and it weakens incentives for cutting emissions): reflection of sunlight in the stratosphere; increase of the albedo of the Earth's surface; increase of cloud albedo over the sea by injecting sea-water-based condensation nuclei ( https://en.wikipedia.org/wiki/Cloud_reflectivity_modification ; about 1500 ships could do it, with the spray rising to the upper atmosphere; risks: the water turns to steam, which is itself a greenhouse gas, and clouds at an incorrect altitude can lead to heating; see also https://en.wikipedia.org/wiki/Solar_radiation_management#Weaponization ); space solutions (mirrors in space; spraying moon dust: explosions on the Moon could create a cloud of dust; construction of factories on the Moon producing umbrella satellites; lenses or smart dust at the L1 point; deflection of an asteroid into the Moon so that a sunlight-screening cloud of dust forms in the Moon's orbit).
Plan C (urgent measures to stop global warming): could be realized in half a year with the help of already existing nuclear weapons.
Plan D (escape): escape into high mountains, like the Himalayas, or to Antarctica; high-tech air-conditioned refuges; space stations; colonization of other planets; uploading into AI.
Consequences and probability: no warming (a useless waste of money, time and attention on fighting global warming); catastrophic warming of 10-20 C; a new equilibrium of the climate at 55 C (40 C of warming); mild Venusian runaway warming, with the average temperature reaching more than 100 C.
Chapter 7. The anthropogenic risks which are not connected with new technologies
Exhaustion of resources
The problem of the exhaustion of resources, population growth and pollution of the environment is a systemic problem, and we will consider it in that capacity later. Here we consider only whether each of these factors separately can lead to the extinction of mankind.
A widespread opinion is that technogenic civilization is doomed because of the exhaustion of readily available hydrocarbons. In any case, this in itself would not result in the extinction of all mankind, since people lived without oil before. However, there will be serious problems if oil ends sooner than society has time to adapt to its absence, that is, if it ends quickly. Yet coal stocks are considerable, and the know-how for making liquid fuel from coal was actively applied in Hitler's Germany. Huge stocks of methane hydrate lie on the sea floor, and effective robots could extract them. And wind energy, conversion of solar energy and the like are, on the whole, enough with existing technologies to keep civilization developing, though a certain decrease in the standard of living is possible, and in the worst case a considerable decrease in population, but not full extinction.
In other words, the Sun and the wind contain energy which exceeds the requirements of mankind a thousandfold, and on the whole we understand how to extract it. The question is not whether we will have enough energy, but whether we will have time to put the necessary capacity into operation before a shortage of energy undermines the technological possibilities of civilization under an adverse scenario.
It may seem to the reader that I underestimate the problem of the exhaustion of resources, to which a multitude of books (Meadows, Parkhomenko), studies and Internet sites (in the spirit of www.theoildrum.com) are devoted. Actually, I do not agree with many of these authors, as they start from the premise that technical progress will stop. Let us look at recent research in the field of the provision of energy resources. In 2007, industrial production of solar cells at a cost below 1 dollar per watt began in the USA, which is half the cost of energy from a coal power station, not counting fuel. The quantity of wind energy which can be extracted from ocean shoals in the USA is 900 gigawatts, which covers all the electricity requirements of the USA. Such a system would give a uniform stream of energy thanks to its large size. The problem of storing surplus electric power is solved by pumped-storage hydroelectric stations, by the development of powerful accumulators, and by distribution, for example, into electric cars. Large amounts of energy can also be extracted from sea currents, especially the Gulf Stream, and from underwater deposits of methane hydrates.
Besides, the end of the exhaustion of resources lies beyond the forecast horizon which is established by the rate of scientific and technical progress. (But the moment of the change of the trend, Peak Oil, lies within this horizon.)
One more variant of global catastrophe is poisoning by the products of our own life. For example, yeast in a bottle of wine grows exponentially and is then poisoned by the products of its own decay (alcohol) and dies out completely. This process happens to mankind as well, but it is not known whether we can pollute and exhaust our habitat so thoroughly that this alone would lead to our complete extinction. Besides energy, people need the following resources:
Materials for manufacturing: metals, rare-earth elements and so on. Many important ores may be exhausted by 2050. However, materials, unlike energy, do not disappear, and with the development of nanotechnology full recycling of waste becomes possible, as does extraction of needed materials from sea water, where large quantities of, for example, uranium are dissolved, and even transportation of the necessary substances from space.
Food. According to some reports, the peak of food production has already passed: soils are disappearing, urbanization seizes the fertile fields, the population grows, fish stocks are coming to an end, the environment is polluted by waste and poisons, there is not enough water, pests are spreading. On the other hand, a transition is possible to an essentially new industrial type of food production based on hydroponics, that is, cultivation of plants in water, without soil, in closed greenhouses, which is protected from pollution and parasites and is completely automated (see Dmitry Verkhoturov and Kirillovsky's article Agrotechnologies of the future: from the plowed field to the factory). Finally, margarine and, possibly, many other necessary components of foodstuffs can be produced from oil at chemical plants.
Water. It is possible to provide potable water by desalinating sea water; today this costs about a dollar per ton, but the great bulk of water goes to growing crops, up to a thousand tons of water per ton of wheat, which makes desalination unprofitable for agriculture. With a transition to hydroponics, water losses to evaporation would sharply decrease, and desalination could become profitable.
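The arithmetic is straightforward; the hydroponic reduction factor below is an assumed illustrative value, not a measured one:

water_cost = 1.0         # USD per ton of desalinated water (from the text)
water_per_wheat = 1000   # tons of water per ton of field-grown wheat

# Water alone would cost ~$1000 per ton of wheat, several times the
# typical market price of grain, so desalination cannot pay for itself.
print(water_cost * water_per_wheat)

hydroponic_factor = 0.1  # assumed 10x cut in water use from closed-cycle hydroponics
print(water_cost * water_per_wheat * hydroponic_factor)   # ~$100 per ton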
Place for life. Despite the fast growth of the population of the Earth, it is still far from the theoretical limit.
Pure air. Even now there are air conditioners that clear the air of dust and raise its oxygen content.
- Exceeding the global hypsithermal limit.
Artificial Wombs
that this would certainly not occur, with confidence, is not possible now because the case
has not been investigated thoroughly enough. Unfortunately, the other risks which we
discuss in this chapter have been analyzed even less. It is important to highlight them,
however, so that future research can be prompted.
Yellowstone Supervolcano Eruption
A less controversial claim than the status of the Cumbre Vieja volcano is that there is a huge, pressurized magma chamber beneath Yellowstone National Park in Wyoming. Others have gone on to suggest, off the record, that it would blow if its cap were destroyed by a nuclear weapon. No geologists have publicly addressed the possibility, but it is entirely consistent with their statements on the pressure of the magma chamber and its depth [4]. The magma chamber is 80 km (50 mi) long and 40 km (25 mi) wide, and has 4,000 km³ (960 cu mi) of underground volume, of which 10-30% is filled with molten rock. The top of the chamber is 8 km (5 mi) below the surface, the bottom about 16 km (10 mi) below. That means that anything which could weaken or annihilate the 8 km (5 mi) cap could release the hyper-pressurized gases and molten rock and trigger a supervolcanic eruption, causing the loss of millions of lives.
Before we review the effects of a supervolcano eruption, it is worth considering the depth penetration of nuclear explosions. During Operation Plowshare, an experiment in the peaceful use of nuclear weapons, craters 100 m (328 ft) deep were created. Simplistically speaking, this means that 80 similar weapons would be needed to burrow all the way down to the Yellowstone Caldera magma chamber. Realistically speaking, fewer would be needed, since deep nuclear explosions cause collapses which reduce overhead pressure. Our estimate is that just 10 ten-megaton nuclear explosions or fewer would be sufficient to connect the magma chamber to the surface. If there are solid boundaries between the explosion cavities, they could be bridged by a drilling machine. That would release the pressure and allow the magma to explode to the surface. You might wonder what sort of people would have the motivation to do such a thing. America's enemies, for one, but there are other possibilities. We explore the general case in a later chapter.
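The burrowing arithmetic in this paragraph can be laid out explicitly; the "effective depth per shot" used for the second estimate is an assumed value chosen to reproduce the ten-device figure:

cap_depth_m = 8000      # depth to the top of the magma chamber
crater_depth_m = 100    # Plowshare-scale cratering depth per explosion

# Naive stacking: one crater depth at a time
print(cap_depth_m / crater_depth_m)     # 80 explosions

# If cavity collapse lets each shot clear ~800 m effectively (assumption)
effective_depth_m = 800
print(cap_depth_m / effective_depth_m)  # 10 explosions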
For now, let's review the effects of a supervolcano eruption. Like many of the risks discussed in this book, it would not be likely to kill all of humanity, but only a few hundreds of millions of people. It is in combination with other risks that the supervolcano scenario becomes most dangerous.
Still, little of this directly matters in the context of this book, since such an event, while severe, does not threaten humanity as a whole. Although worldwide temperatures would drop by a few degrees, and the continental US would be devastated, the world population as a whole would survive and live on, guaranteeing humanity's future. The scenario is still worth noting because 1) there may be multiple supervolcanoes worldwide which could be triggered simultaneously, and either individually or concurrently with nuclear weapons they could cause a volcanic/nuclear winter so severe that no one survives it, and 2) it is an exacerbating factor which could add to the tension of a World War or similar scenario. A strike on a supervolcano would be deadlier than a strike on a major city, and could correspondingly raise the stakes of any international conflict. Threatening to attack a supervolcano with a series of ICBMs could be a potential blackmailing strategy in the darkest hours of a World War.
Asteroid Bombardment
A risk which has more potential to be life-ending than a supervolcano eruption is intentionally-directed asteroid bombardment, which could be quite dangerous to the planet indeed. Directing an asteroid of sufficient size towards the Earth would require a tremendous amount of energy, orders of magnitude greater than mankind's current total annual energy consumption, but it could eventually be done, perhaps with the tools described in the nanotechnology chapter.
The Earth's orbit already places it in some danger of being hit by a deadly asteroid. 65 million years ago, an asteroid between 5 km (3 mi) and 15 km (9 mi) across impacted the Earth, causing the extinction of the dinosaurs. There is an asteroid, 1950 DA, 1 km (0.6 mi) in diameter, which scientists say has a 0.3 percent chance of impacting Earth in 2880 [6]. The dinosaur-killer impact was so severe that its blast wave ignited most of the forests in North America, destroying them and making fungus the dominant species for several years after the impact [7]. Despite this, some human-sized species survived, including various turtles and alligators. On the other hand, many human-sized and larger species were wiped out, including all non-avian dinosaurs. More research is needed to determine whether a dinosaur-killer-class asteroid would be likely to wipe out humanity in its entirety, taking into account our ability to take refuge in bunkers with food and water for decades or possibly even centuries at a time. Detailed studies have not been done, and we were only able to locate one paper on the topic [8].
In the geological record, there are asteroid impacts up to half the size of the Chicxulub impactor which wiped out the dinosaurs, and which are known not to have caused mass extinctions. The Chicxulub impactor, on the other hand, caused a mass extinction that destroyed about three-quarters of all living plant and animal species on Earth. It seems fair to assume that an asteroid needs to be at least as large as the Chicxulub impactor to have a chance of wiping out humanity, and probably significantly larger.
Sometimes, asteroids with a diameter of greater than 10 km (6 mi) are called life-killer asteroids, though this is an exaggeration. At least one asteroid of this size has impacted the Earth during the last 600 million years without wiping out multicellular life, though it did burn down the entire biosphere. The two largest impact craters known, with diameters of 300 and 250 km (186 and 155 mi) respectively, correspond to impacts which occurred before the evolution of multicellular life. The Chicxulub crater, with a diameter of 180 km (112 mi), is the third-largest impact crater on Earth which is definitively known, and the only known major asteroid to hit the planet after the rise of complex, multicellular life. Craters of similar size from the last 600 million years cannot be definitively identified, but that does not mean that such an impact has not occurred. The crater could very well have been on a part of the Earth's surface that has since been subsumed beneath a continental plate, or could be camouflaged by surface features. There is one possible crater of even larger size, the Shiva crater, which, if real (it is highly contested), is 600 km (370 mi) long by 400 km (250 mi) wide, and may correspond to the impact of a 40 km (25 mi) sized object. If this were confirmed, it would substantially increase the necessary size for a human-killer asteroid, but since it is highly contested, it does not count, and we ought to assume that an asteroid just modestly larger than the Chicxulub impactor, say near the top of its probable size range, 15 km (9 mi), could potentially wipe out the human species, just as it did the dinosaurs. Comets have somewhat greater speed than asteroids, due to their origin farther out in the solar system, and can be correspondingly smaller than asteroids but still do equivalent damage. It is unknown whether the Chicxulub impactor was an asteroid or a comet. To put it in perspective, an asteroid 10 km (6 mi) across is similar in size to Mt. Everest, but with greater mass (due to its spherical, rather than conical, shape).
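The sphere-versus-cone point can be quantified; Everest's idealized cone dimensions and the common rock density below are my assumptions:

import math

d = 10e3                                  # asteroid diameter, m
v_sphere = (4.0 / 3.0) * math.pi * (d / 2)**3   # ~5.2e11 m^3

h = 8.8e3                                 # Everest height above sea level, m
r_base = 5e3                              # assumed base radius of the cone, m
v_cone = (1.0 / 3.0) * math.pi * r_base**2 * h  # ~2.3e11 m^3

rho = 2500                                # rock density, kg/m^3, for both
print(v_sphere * rho)   # ~1.3e15 kg
print(v_cone * rho)     # ~5.8e14 kg: roughly half the asteroid's mass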
plants underground and extend their stay for the life of the reactor, which could be 100
years or longer. A thick ice sheet forming on top of the bunker could kill all life inside, by
denial of oxygen, but such ice sheets are unlikely to form in the tropics, where billions of
people live today. Perhaps a greater risk would be a series of artificially engineered
asteroid impacts, 50-100 years apart, designed to last for thousands of years. It seems
more likely that an unnatural scenario such as that could actually wipe out all human life,
but also seems correspondingly more difficult to engineer. It could become possible during
the late 21st century and beyond, however.
Manually redirecting an asteroid with an orbit relatively far from the Earth, say 0.3 astronomical units (AU) away in the case of 1036 Ganymed, would require a tremendous amount of energy, even by the standards of high-energy-density MNT-built machinery. Redirecting an object of that size by a substantial amount would require many tens of thousands of years and a corresponding number of orbits. If someone had an objective to destroy a target on Earth, it would be much simpler to blow it up with a nuclear weapon or direct sunlight from a space mirror to vaporize it rather than drop an asteroid on it. It would be far easier even to pick up a mountain, launch it off the surface of the Earth, orbit it around the Earth until it picked up sufficient energy, and then drop it back on the Earth, rather than redirect a distant object. For this reason, a man-engineered asteroid impact seems like an unlikely risk to humanity for the foreseeable future (tens of thousands of years), and an utterly negligible one for the time frame under consideration, the 21st century. Of course, the probability of a natural impact of this size in the next century is minute.
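A rough sense of the energy scale involved (the asteroid's size, density and the required delta-v are all my assumed values):

import math

d = 35e3                                  # Ganymed diameter, ~35 km (assumed)
rho = 2700                                # stony density, kg/m^3 (assumed)
m = rho * (4.0 / 3.0) * math.pi * (d / 2)**3   # ~6e16 kg

dv = 3000.0                               # assume ~3 km/s orbit change
e_needed = 0.5 * m * dv**2                # ~2.7e23 J

world_energy_per_year = 6e20              # ~global primary energy use, J/yr
print(e_needed / world_energy_per_year)   # ~450 years' worth of world energy

Even under these generous simplifications the requirement is two to three orders of magnitude above annual world energy use, consistent with the paragraph above.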
Runaway Autocatalytic Reactions
Very few people know that if the concentration of deuterium (a heavy isotope of hydrogen; water enriched in it is known as heavy water) in the world's oceans were just 22 times higher than it is, it would be possible to ignite a self-sustaining nuclear reaction that would vaporize the seas and the Earth's crust with them. The oceans have 1 atom of deuterium per 6,247 of hydrogen, and the critical threshold for a self-sustaining nuclear chain reaction is one atom per 300 [12]. It may be that there are deposits of ice or heavy water with the required concentration of deuterium. Heavy water has a slightly higher melting point than normal water and thus concentrates during natural processes. Even a cube of heavy water ice just 100 m (328 ft) on a side, small in terms of geologic deposits, could release energy equivalent to many gigatons of TNT if ignited, greater than the largest nuclear bombs ever detonated. The reason we do not observe these events in other star systems may be that artificial nuclear explosions are needed to trigger them, or that the required concentrations of deuterium do not exist naturally. More research on the topic is needed. There is even a theory that the Moon formed during a natural nuclear explosion from concentrated uranium at the core-mantle boundary [13]. It may be that we have not observed such an event in other star systems yet because it is sufficiently uncommon.
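Two quick checks on the numbers in this section (the burn fraction is an assumed value; complete fusion would give thousands of gigatons):

# 1) Concentration gap: ocean D/H ~ 1/6247 vs. threshold 1/300
print(6247 / 300)                # ~21, matching the "22 times" figure

# 2) Energy in a 100-m cube of heavy-water ice
side = 100.0                     # m
mass_ice = 1000 * side**3        # ~1e9 kg, taking density ~1000 kg/m^3
mass_D = mass_ice * 4 / 20       # D2O is 4/20 deuterium by mass
E_PER_KG_D = 8.7e13              # J per kg of deuterium, D+D reactions only
GT_TNT = 4.184e18                # J per gigaton of TNT

burn_fraction = 0.01             # assume only 1% of the deuterium fuses
print(mass_D * E_PER_KG_D * burn_fraction / GT_TNT)   # ~40 Gt of TNT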
It should be possible to go looking for higher-than-normal concentrations of deuterium in Arctic ice deposits. These studies should be pursued out of an abundance of caution. If such deposits are found, this would provide more evidence for the usefulness of space stations distant from the Earth as an insurance policy against geogenic human extinction triggered by a nuclear chain reaction in geologic deposits. There may be other possible autocatalytic runaway reactions, threatening structures such as the ozone layer. These possibilities should be studied in greater detail by geochemists. Dangerous thresholds of deuterium may exist in the icy bodies of the outer solar system, on Mars, or among the asteroids. The use of nuclear weapons in space should be restricted until these objects are thoroughly investigated for deuterium levels.
Gas Giant Ignition
The potential ignition of gas giants is of sufficient importance that it merits its own section. The first reaction to such an idea is that it sounds utterly crazy. There is a reflexive search for a rationalization, a quick rebuttal, to put such an implausible idea out of our heads. Problematically, however, many of the throw-away rebuttals have been dismissed on rational grounds, and if we are going to rule out this possibility decisively, more work needs to be done [14]. Specifically, we are talking about a nuclear chain reaction being triggered by a nuclear bomb detonated in a deuterium-rich cloud layer of a planet like Jupiter, Saturn, Uranus, or Neptune. Even a pocket only several kilometers in diameter would be enough to sterilize the solar system if ignited.
For ignition to be a danger, there needs to be a pocket in a gas giant where the level of deuterium per normal hydrogen atom is at least 1:300. That is all it takes. A nuclear explosion could then theoretically start a self-reinforcing nuclear chain reaction. If all the deuterium in the depths of Jupiter were somehow ignited, it would release energy equivalent to 3000 years of the luminosity of the Sun within a few tens of seconds, enough to melt the first few kilometers of the Earth's crust and penetrate much deeper with x-rays. Surviving this would be extremely difficult, though possibly machine-based life forms buried deeply underground could do it. If it turns out to be possible, the threat of blowing up a gas giant could be used as a blackmailing device by someone trying to get all of humanity to do what they want.
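An order-of-magnitude check on the "3000 years of solar luminosity" claim, assuming the primordial deuterium ratio quoted in the next paragraph and complete burn-up of the deuterium (all inputs are standard round figures or assumptions, not measurements from this book):

M_JUP = 1.9e27            # Jupiter mass, kg
H_FRACTION = 0.74         # hydrogen mass fraction (assumed)
D_PER_H = 26e-6           # deuterium per hydrogen atom (from the text)
E_PER_KG_D = 3.5e14       # J/kg for complete D burn-up to helium-4
L_SUN = 3.8e26            # solar luminosity, W

mass_D = M_JUP * H_FRACTION * D_PER_H * 2   # a D atom weighs ~2x an H atom
energy = mass_D * E_PER_KG_D                # ~2.6e37 J
print(energy / (L_SUN * 3.15e7))            # ~2100 years of solar output

The result lands within a factor of two of the text's 3000-year figure, which is as close as such round numbers allow.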
The average deuterium concentration of Jupiter's atmosphere is low, about 26 per 1 million hydrogen atoms, similar to what is thought to be the primordial ratio of deuterium created shortly after the Big Bang. Although this is a small amount, there may be chemical and physical processes which concentrate deuterium in parts of the atmosphere of Jupiter. Natural solar heating of ice in comets leads to isotope separation and greater local concentrations, for instance. The scientific details of how isotope separation can occur and what isotope concentrations are reached in the interiors of gas giants are poorly understood.
If the required deuterium concentrations exist, there are additional reasons to be concerned that a runaway reaction could be initiated in a gas giant. One is the immense pressure and opacity of the gas giants beneath the cloud layer. 10,000 km beneath Jupiter's cloud tops, the pressure is 1 million bar. For comparison, the pressure at the core of the Earth is 3 million bar. Deep in the bowels of gas giants, hydrogen changes to a different phase called metallic hydrogen, in which it is so compressed that it behaves as an electrical conductor, and it would be considerably more opaque than normal hydrogen. The pressure and opacity would help contain any nuclear reaction and ensure it gets going before fizzling out. Another factor to consider is that a gas giant has plenty of fusion fuel which has never been involved in nuclear reactions, elements like helium-3 and lithium. This makes it potentially more ignitable than a star.
There are runaway fusion reactions which occur in astronomical bodies in nature. The most obvious is a supernova, where a massive star collapses and fuses a large amount of fuel very quickly. Another example is a helium flash, where a degenerate star crosses a critical threshold of helium pressure and 60-80 percent of the helium in its core fuses in a matter of seconds. This causes the luminosity of the star to increase to about 10^11 solar luminosities, similar to the luminosity of an entire galaxy. Jupiter, which is about a thousand times less massive than the Sun, would still sterilize the solar system if any substantial portion of its helium fused. Helium makes up about 10 percent of the mass of Jupiter.
The arguments for and against the likelihood of planetary ignition are complicated, and require some knowledge of nuclear physics. For this reason, we will conclude discussion of the topic here and point to some key references. Besides the risk of planetary ignition, there has also been some discussion of the possibility of solar ignition, but it has been more limited. As with planetary ignition, it is tempting to dismiss the possibility without considering the evidence, which is a danger.
Deep Drilling and Geogenic Magma Risk
Returning to the realm of the Earth, it may be possible to dig very deeply, creating an artificial supervolcano even more energetic than Yellowstone. We should always remember that 1,792 mi (2,885 km) beneath us is a molten, pressurized, liquid core interspersed with gas. If even a tiny fraction of that energy could be released to the surface, it could wipe out all life. This is especially concerning in light of recent proposals to send a probe to the mantle-core boundary. Although the liquid iron in the core would be too heavy to rise to the surface on its own, the gases in the core could eject it through a channel, like the cork popping out of a champagne bottle. If even a small channel were flooded with pressurized magma, it could make the fissure larger by melting its walls all the way to the top, until it became large enough to eject a substantial amount of liquid iron.
A scientist at Caltech has devised a proposal for sending a grapefruit-sized communications probe to the Earth's core by creating a crack and pouring in a huge amount of molten iron. This would slowly melt its way downwards, taking about a week to travel the 1,792 mi (2,885 km) to the core.