
New version of Structure of the Global Catastrophe,

2016 revision, draft.

Most ideas are the same, but some maps have been added.

How to prevent
existential risks
From a full list to a complete prevention plan

(Draft)

Copyright 2016 Alexey Turchin.

Contents

Preface
About this book
What are human extinction risks?

Part 1. General considerations.

Chapter 1. Types and nature of global catastrophic risks
Main open question about x-risks: before or after 2050?
The difference between a global catastrophe and existential risk
Smaller catastrophes as steps to possible human extinction
One-factorial scenarios of global catastrophe
Principles of classification of global risks
Precautionary principle for existential risks
X-risks and other human values
Global catastrophic risks, human extinction risks and existential risks

Chapter 2. Problems of probability calculation
Problems of calculation of probabilities of various scenarios
Quantitative estimates of the probability of the global catastrophe, given by various authors
The model of the future

Chapter 3. History of the research on global risk
Current situation with global risks

Part 2. Typology of x-risks

Chapter 4. The map of all known global risks

Block 1. Natural risks

Chapter 5. The risks connected with natural catastrophes
Universal catastrophes
Geological catastrophes
Eruptions of supervolcanoes
Falling of asteroids
Asteroid threats in the context of technological development
Zone of destruction depending on the force of explosion
Solar flares and luminosity increase
Gamma-ray bursts
Supernova stars
Super-tsunami
Super-earthquake
Polarity reversal of the magnetic field of the Earth
Emergence of new illnesses in nature
Debunked and false risks from media, science fiction, fringe science and old theories

Block 2. Anthropogenic risks

Chapter 6. Global warming
Chapter 7. The anthropogenic risks which are not connected with new technologies
Chapter 8. Artificial triggering of natural catastrophes
Chapter 9. Nuclear weapons
The evolution of scientific opinion on nuclear winter
Nuclear war and human extinction
More exotic nuclear scenarios
Nuclear space weapons
Nuclear attack on nuclear power stations
Cheap nukes and new ways of enrichment
Estimating the probability of nuclear war causing human extinction
Near misses
The map of x-risks connected with nuclear weapons
Chapter 10. Global chemical contamination
Chapter 11. Space weapons
Latest developments

Block 4. Supertechnologies: nanotech and biotech

Chapter 12. Biological weapons
The map of biorisks
Chapter 13. Superdrug
Design features
Chapter 14. Nanotechnology and robotics
The map of nanorisks
Chapter 15. Space weapons
Chapter 16. Artificial intelligence
Current state of AI risks, 2016
Distributed agent net as a basis for the hyperbolic law of acceleration and its implications for AI timing and x-risks
The map of AI failure modes and levels
AI safety in the age of neural networks and Stanislaw Lem's 1959 prediction
The map of x-risks prevention
Estimation of the timing of AI risk
The Doomsday argument in estimating AI arrival timing
Several interesting ideas of AI control
Chapter 17. The risks of SETI
Latest developments in 2016
Chapter 18. The risks connected with blurring the borders of humans and transhumans
The risks connected with the problem of the "philosophical zombie"
Chapter 19. The causes of catastrophes unknown to us now
Phil Torres' article about unknown unknowns (UU)
List of possible types of unknown unknowns
Unexplained aerial phenomena as global risks

Part 3. Different factors influencing the global risk landscape

Chapter 20. Ways of detecting one-factorial scenarios of global catastrophe
The general signs of any dangerous agent
Chapter 19. Multifactorial scenarios
Integration of the various technologies, creating situations of risk
Double scenarios
Studying global catastrophes by means of models and analogies
Inevitability of achievement of a steady condition
Recurrent risks
Global risks and the problem of the rate of their increase
Comparative force of different dangerous technologies
Sequence of appearance of various technologies in time
Comparison of various technological risks
The price of the question of x-risks prevention
The universal cause of the extinction of civilizations
Does the collapse of technological civilization mean human extinction?
Chapter. Agents which could start x-risks
The social groups willing to risk the destiny of the planet
Humans as a main factor in global risks, as a coefficient in risk assessment
Decision-making about nuclear attack
Chapter 21. The events changing the probability of global catastrophe
Definition and the general reasons
Events which can open a vulnerability window
System crises
Crisis of crises
Technological Singularity
Overshooting leads to simultaneous exhaustion of all resources
System crisis and technological risks
Chapter 21. Cryptowars, arms race and other scenario factors raising the probability of global catastrophe
Cryptowar
Chapter 22. The factors influencing the speed of progress
Global risks of the third sort
Moore's law
Chapter 23. X-risks prevention
The general notion of preventable global risks
Introduction
The problem
The context
In fact, we don't have a good plan
Overview of the map
The procedure for implementing the plans
The probability of success of the plans
Steps
Plan A. Prevent the catastrophe
Plan A1. Super UN or international control system
A1.1 Step 1: Research
A1.1 Step 2: Social support
Reactive and proactive approaches
A1.1 Step 3: International cooperation
Practical steps to confront certain risks
1.1 Risk control
Elimination of certain risks
A1.1 Step 4: Second level of defense on high-tech level: worldwide risk prevention authority
Planetary unification war
Active shields
Step 5: Reaching indestructibility of civilization with negligible annual probability of global catastrophe: Singleton
Plan A1.2. Decentralized risk monitoring
A1.2 1. Values transformation
Ideological payload of new technologies
A1.2 2. Improving human intelligence and morality
Intelligence
A1.2 3. Cold War, local nuclear wars and WW3 prevention
A1.2 4. Decentralized risk monitoring
Plan A2. Creating Friendly AI
A2.1 Study and promotion
A2.2 Solid Friendly AI theory
A2.3 AI practical studies
Seed AI
Superintelligent AI
Unfriendly AI
Plan A3. Improving resilience
A3.1 Improving sustainability of civilization
A3.2 Useful ideas to limit the scale of catastrophe
A3.3 High-speed tech development needed to quickly pass the risk window
A3.4 Timely achievement of immortality on the highest possible level
AI based on uploading of its creator
Plan A4. Space colonization
A4.1 Temporary asylums in space
A4.2 Space colonies near the Earth
Colonization of the Solar System
A4.3 Interstellar travel
Interstellar distributed humanity
Plan B. Survive the catastrophe
B1. Preparation
B2. Buildings
Natural refuges
B3. Readiness
B4. Miniaturization for survival and invincibility
B5. Rebuilding civilization after catastrophe
Reboot of civilization
Plan C. Leave backups
C1. Time capsules with information
C2. Messages to ET civilizations
C3. Preservation of earthly life
C4. Robot-replicators in space
Resurrection by another civilization
Plan D. Improbable ideas
D1. Saved by non-human intelligence
D2. Strange strategy to escape the Fermi paradox
D4. Technological precognition
D5. Manipulation of the extinction probability using the Doomsday argument
D6. Control of the simulation (if we are in it)
Bad plans
Prevent x-risk research because it only increases risk
Controlled regression
Depopulation
Computerized totalitarian control
Choosing the way of extinction: UFAI
Attracting a good outcome by positive thinking
Conclusion
Literature
Active shields
Existing and future shields
Conscious stop of technological progress
Means of preventive strike
Removal of sources of risk to a considerable distance from the Earth
Creation of independent settlements in the remote corners of the Earth
Creation of the file on global risks and growth of public understanding of the problematics connected with them
Refuges and bunkers
Quick spreading in space
All somehow will manage itself
Degradation of the civilization to the level of a steady condition
Prevention of one catastrophe by means of another
Advance evolution of man
Possible role of the international organizations in prevention of global catastrophe
Infinity of the Universe and the question of the irreversibility of human extinction
Assumptions that we live in the "Matrix"
Global catastrophes and society organization
Global catastrophes and the current situation in the world
The world after global catastrophe
The world without global catastrophe: the best realistic variant of prevention of global catastrophes
Maximizing pleasure if catastrophe is inevitable

Chapter 24. Indirect ways of estimating the probability of global catastrophe
Chapter 25. The most probable scenario of global catastrophe

Part 5. Cognitive biases affecting judgments of global risks

Chapter 1. General remarks: cognitive biases and global catastrophic risks
Chapter. Meta-biases
Chapter 2. Cognitive biases concerning global catastrophic risks
Chapter 3. How cognitive biases in general influence estimates of global risks
Chapter 4. The universal logical errors able to appear in reasoning on global risks
Chapter 5. Specific errors arising in discussions about the danger of uncontrollable development of Artificial Intelligence
Chapter 7. Conclusions from the analysis of cognitive biases in the estimate of global risks and possible rules for a rather effective estimate of global risks
The conclusion. Prospects of prevention of global catastrophes
What is AI? MA part
Artificial Intelligence Today
Projects in Artificial General Intelligence
Whole Brain Emulation
Ensuring the Continuation of Moore's Law
The Software of Artificial General Intelligence
Features of Superhuman Intelligence
Seed Artificial Intelligence
From Virtuality to Physicality
The Yudkowsky-Omohundro Thesis of AI Risk
Friendly AI
Stages of AI Risk
AI Forecasting
Broader Risks
Preventing AI Risk and AI Risk Sources
AI Self-Improvement and Diminishing Returns Discussion
Philosophical Failures in Advanced AI, Failures of Friendliness
Impact and Conclusions
Frequently Asked Questions on AI Risk
Chapter. Collective biases and errors

Preface
Existential risk: one where an adverse outcome would either
annihilate Earth-originating intelligent life or permanently and
drastically curtail its potential.
N. Bostrom, "Existential Risks: Analyzing Human
Extinction Scenarios and Related Hazards"

About this book


This book has developed from an encyclopedia of global catastrophic risks into a roadmap of
risk prevention.

What are human extinction risks?


In the 20th century the possibility of the extinction of humankind was connected almost entirely with the threat of nuclear war. Now, at the beginning of the 21st century, we can easily name more than ten different sources of possible irreversible global catastrophe, mostly deriving from novel technologies, and the number of sources of risk constantly grows. Research on global risk is widely neglected. Problems such as the potential exhaustion of oil, the future of the Chinese economy, or outer space exploration get more attention than irreversible global catastrophes. It seems senseless to discuss the future of human civilization before we have an intelligent estimate of its chances of survival. Even if as a result of such research we learn that the risk is negligibly small, it is important to study the question. Preliminary studies by experts do not give encouraging results. Sir Martin Rees, formerly British Astronomer Royal and author of a book on global risks, estimates mankind's chances of survival to 2100 at only 50%.

This book is devoted to a consistent review of the "threats to existence", that is, the risks of irreversible destruction of all human civilization and extinction of mankind. The purpose of this book is to give a wide review of this theme. However, many of the claims therein have a debatable character. Our goal is not to give definitive answers, but to encourage measured thought in the reader and to prepare the ground for further discussion. Many of the hypotheses stated here might seem unduly radical. However, when discussing them, we were guided by a precautionary principle which directs us to consider worst realistic scenarios as a matter of caution. The point is not to fantasize about doom for its own sake, but to give the worst scenarios the attention they deserve from a straightforward utilitarian moral perspective.
In this volume you will find the monograph Risks of Human Extinction. The monograph consists of two parts: research on concrete threats and the methodology of the analysis. The analysis of concrete threats in the first part consists of a detailed list with references to sources and critical analysis. Then systemic effects of the interaction of different risks are investigated, and probability estimates are assessed. Finally, we suggest a roadmap of existential risk prevention.

The methodology offered in the second part consists of a critical analysis of the ability of humans to intelligently estimate the probability of global risks. It may be used, with little change, in other systematic attempts to assess the probability of uncertain future events.
Though only a few books with general reviews of the problem of global risks have been published in the world, a certain tradition has already formed. It consists of discussion of methodology, classification of possible risks, estimates of their probability, ways of ameliorating those risks, and then a review of further philosophical issues related to global risks, such as the Doomsday argument, which will be introduced shortly. The current main books on global catastrophic risks are: J. Leslie's The End of the World: The Science and Ethics of Human Extinction (1996), Sir Martin Rees' Our Final Hour (2003), Richard Posner's Catastrophe: Risk and Response (2004), and the volume Global Catastrophic Risks (2008) edited by Nick Bostrom and Milan Ćirković.
This book differs considerably from previous books on the topic of global risks. First of all, we review a broader set of risks than prior works. For example, an article by Eliezer Yudkowsky lists ten cognitive biases affecting our judgment of global risks; in our book, we address more than a hundred such biases. In the section devoted to the classification of risks, we mention some risks which are entirely missing from previous works. I have aspired to create a systematic view of the problem of global risks, which allows us not only to list various risks and understand them, but also to see how different risks, influencing one another, form an interlocked structure.
I will use the terms global catastrophe, x-risk, existential risk and human extinction as synonyms designating the total and irreversible die-off of Homo sapiens.

The main conclusion of the book is that the chance of human extinction in the 21st century is around 50 per cent, but it could be lowered by an order of magnitude if all the needed actions are taken.
All information used in the analysis is taken from the open sources listed in the bibliography.


Part 1. General considerations.

Chapter 1. Types and nature of global catastrophic risks

Main open question about x-risks: before or after 2050?

Robin Hanson has written a lot about the two modes of thinking that people usually display: near mode and far mode. People have a very different attitude to things that are happening now than to things that may happen in the distant future. For example, if there is a fire in the house, everyone will try to escape; but if the question under discussion is whether humanity should live forever, many otherwise nice people will say that they are fine with human extinction.
Even within the discussion of x-risks we can easily distinguish two approaches. Two main opinions about timing exist: decades or centuries. Either the catastrophe will happen in the next 15-30 years, or in the next couple of centuries.
If we take into account the many predictions of continuing exponential or even hyperbolic development of new technologies, then we should conclude that superhuman AI and the ability to create super-deadly biological viruses should arrive between 2030 (Vinge) and 2045 (Kurzweil). We write this text in 2014, so that is just 15-30 years from now. Predictions about runaway global warming, limits to growth, peak oil and some versions of the Doomsday argument also center around the years 2030-2050.

If it is 2030, or even earlier, not much can be done to prepare. Later dates leave more room for possible action. But the main problem is not that the risks are so large, but that society, governments and research communities are completely unprepared and unwilling to deal with them.
But if we take a one-hundred-year risk timeframe, we (as authors) gain some advantages. We signal that we are more respectable and conservative. It will almost never be proved during our lifetime that we are wrong. We have a better chance of being right simply because we take a larger timeframe. We have plenty of time to implement some defense measures, or in fact to assume that such measures will be implemented in the remote future (they will not). We may also think that we are correcting an overoptimistic bias: it is well known that predictions about AI used to be overoptimistic.
The difference between the two timeframe predictions is like the difference between two predictions about the future of a man: one claims that he will die within the next 50 years, the other that he will develop cancer within the next 5 years. The first brings almost no new information; the second is an urgent message which could be false and carries high costs. But urgent messages also tend to attract more attention, which could bias some sensationalist authors. More scientific authors tend to be more careful, try to distinguish themselves from the sensationalists, and so give predictions over longer time periods.
A good example here is E. Yudkowsky, who in 2001 claimed that super-exponential growth with ever-shorter doubling periods was possible, with superintelligent AI by 2005. After this prediction failed, he and his LessWrong community became biased toward an estimate of around 100 years until superintelligent AI.

So the question of whether technologies will continue their exponential growth is equivalent to the question of the time scale of global catastrophe. If they do continue to grow exponentially, then the global catastrophe will either happen in the nearest decades or be permanently prevented.
Let us take a closer look at both scenarios. Arguments for the decades scenario:
1. Because of NBIC convergence, advanced nanotech, biotech and AI will appear almost simultaneously.
2. New technologies will grow exponentially with a doubling time of approximately 2 years, and their risks will grow at a similar or even greater speed.
3. They will interact with each other, as any smaller catastrophe could lead to a bigger one. For example, global warming could lead to a fight for resources and nuclear war, and this nuclear war could result in the release of dangerous biological weapons. This may be called oscillations near the Singularity.
4. Several possible triggers of x-risks could occur in the near future: world war and especially a new arms race, peak oil, and runaway global warming.
There are several compelling arguments for the centuries scenario:
1. Most predictions about AI were premature. The majority of Doomsday prophecies have also been proven false.
2. Exponential growth will level off. Moore's law may come to an end in the near future.
3. The most likely x-risks could be caused by an independent accidental event of unknown origin, and not by the complex interaction of known things.
4. There will be no cascading chain reaction of destructive events.
5. Long-term predictions look more scientific in the public view, thus improving the reputation of the field of x-risk research and finally helping to prevent x-risks.
The decades scenario is worse because it is sooner: we have less time to prepare, in fact no time, knowing how little has been done so far; because the catastrophic scenarios involved are more complex; and because it implies a shorter expected personal and civilizational lifetime.
One of the main factors in the timeframe of x-risks is our assessment of when full AI capable of self-improvement will be created. Several articles have tried to address this question from different points of view, using modeling, extrapolation and expert surveys; see, for example: http://sethbaum.com/ac/2011_AI-Experts.html. We will address this question again in the AI chapter, but the fact is that nobody knows for sure, and we should use a very vague prior to accommodate all the different estimates. Bostrom claims that AI will either be created before the beginning of the 22nd century or it will be proved that some hard obstacles exist, and this vaguest of estimates seems to be true.
We need to search for effective modes of action to prevent x-risks. We need to create social demand for preventing existential risks, as, for example, the fight against nuclear war did in the 1980s. We need political parties which consider the prevention of existential risks, as well as life extension, to be the main goals of society.
I would like to draw attention to the investigation of non-scientific, social and media factors in discussing global catastrophe. Much attention is concentrated on scientific research, with popular reception and reaction seen as a secondary factor. However, the value of this secondary factor should not be underestimated, as it can have a huge effect on the way in which global catastrophe might be met and, eventually, averted. One example is the nuclear disarmament of the 1980s which followed global antinuclear protests.
The main difference between the two scenarios is that the first will happen during our expected lifetime, while the second most likely will not touch us personally. So the second is hypothetical and not urgent. The border between the two scenarios is around 2050.

Claims that global catastrophe may happen only after 2050 make it seem an insignificant problem and preclude any real efforts to prevent it.
The human mind and society are built in such a way that few questions about the remote future interest us, with several important exceptions. One is the safety of buildings, and another is our interest in the wellbeing and longevity of our children and grandchildren. But these questions have existed for centuries, and our culture has adapted to build strong buildings and invest in children. Global risks are a new problem, and no such adaptation has happened. More about cognition of global risks is said in the second part of the book.
The difference between a global catastrophe and existential risk
Any global catastrophe will affect the entire surface of the Earth and all of its inhabitants, though not all of them will perish. From the viewpoint of the personal history of any individual, there is no big difference between a global catastrophe and total extinction: in both cases, he will most likely die. But from the viewpoint of human civilization, the difference is enormous: it will either end forever or simply transform and go on a new path.

Bostrom suggested expanding the term existential risks to include events that threaten human civilization with irreversible damage, like, for example, a half-hostile artificial intelligence that evolves in a direction completely opposite to the current values of humanity, or a worldwide totalitarian government that forever stops progress.

But perpetual worldwide totalitarianism is impossible: it will either lead to the extinction of civilization, maybe in several million years, or smoothly evolve into a new form.
However, it is possible to include many other things under the category of existential risks: we have indeed lost many things over the course of history, such as dead languages, extinct styles of art, ancient philosophy and so on.

The real dichotomy lies between complete extinction and a mere global catastrophe. Extinction means the complete destruction of mankind and the cessation of history. A global catastrophe could destroy 90% of the population, but only slow down the course of history for 100 years.

The difference here, rather, is the value of the continuation of human history and the value of future generations, which for most people is extremely speculative. This is probably one of the reasons for ignoring the risks of complete extinction.

Smaller catastrophes as steps to possible human extinction


Though in this book we investigate global catastrophes which can lead to human extinction, it is easy to notice that the same catastrophes on a smaller scale may not destroy mankind, but damage us greatly.

For example, a global virus pandemic could either completely destroy humanity or kill only a part of it. In the second case the level of civilization would decline to some extent, and afterwards either further extinction or restoration of civilization is possible. Therefore the same class of catastrophes can be either the direct cause of human extinction or a factor which opens a window of vulnerability for subsequent catastrophes. For example, after a pandemic, civilization would be more vulnerable to the next pandemic or famine, or unable to prevent a collision with an asteroid.
In 2007 Robin Hanson published the article "Catastrophe, Social Collapse, and Human Extinction", in which he used a simple mathematical model to estimate how variance in resilience would change the probability of extinction in the aftermath of a non-total catastrophe. He wrote: "For many types of disasters, severity seems to follow a power law distribution. For some of types, such as wars and earthquakes, most of the expected harm is predicted to occur in extreme events, which kill most people on Earth. So if we are willing to worry about any war or earthquake, we should worry especially about extreme versions. If individuals varied little in their resistance to such disruptions, events a little stronger than extreme ones would eliminate humanity, and our only hope would be to prevent such events. If individuals vary a lot in their resistance, however, then it may pay to increase the variance in such resistance, such as by creating special sanctuaries from which the few remaining humans could rebuild society."
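To make the intuition concrete, here is a toy numerical illustration (not Hanson's own model; the population size, power-law exponent and resistance distribution below are arbitrary assumptions chosen only for illustration) of how greater variance in individual resistance lowers the share of extreme events that kill everyone:

    import numpy as np

    rng = np.random.default_rng(0)
    N_PEOPLE = 100_000      # simulated population size (arbitrary)
    N_EVENTS = 10_000       # number of simulated extreme disasters
    ALPHA = 1.5             # assumed power-law exponent for disaster severity

    def extinction_share(resistance_spread):
        # Each person gets a log-normal "resistance" threshold; larger spread = more variance
        resistance = rng.lognormal(mean=0.0, sigma=resistance_spread, size=N_PEOPLE)
        # Disaster severities drawn from a heavy-tailed (Pareto) distribution
        severity = rng.pareto(ALPHA, size=N_EVENTS) + 1.0
        # An event causes extinction if its severity exceeds even the most resistant person
        return (severity > resistance.max()).mean()

    for spread in (0.1, 0.5, 1.0):
        print(f"resistance spread {spread}: extinction in {extinction_share(spread):.2%} of extreme events")

With these placeholder numbers, higher variance in resistance sharply reduces the share of extreme events that wipe out the whole simulated population, which is the qualitative point Hanson makes.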
Depending on the size of the catastrophe there can be various degrees of damage, characterized by different probabilities of subsequent extinction and of further recovery. It is possible to imagine several semi-stable levels of degradation:
1. New Middle Ages: destruction of the social system, similar to the downfall of the Roman Empire but on a global scale. It means the termination of the development of technologies, reduced connectivity and a population fall of several percent, although some essential technologies continue to develop successfully; for example, some kinds of agriculture continued to develop in the early Middle Ages. Technological development will continue, and the manufacture and use of dangerous weaponry will also continue, which could lead to total extinction or to degradation to an even lower level during the next war. The degradation of Easter Island during its internal war is a clear example. But a return to the previous level is rather probable.
2. Postapocalyptic world: considerable degradation of the economy, loss of statehood and the disintegration of society into small fighting kingdoms. An example of it, to some extent, could be present-day Somalia. The basic form of activity is robbery or digging in ruins. This is probably the society after a large-scale nuclear war. The population is reduced many times over, but, nevertheless, millions of people survive. Reproduction of technologies stops, but separate carriers of knowledge and libraries remain. Such a world could be united in the hands of one ruler, and the revival of the state would begin. Further degradation could occur as a result of epidemics, pollution of the environment, excess weaponry stored from the previous epoch, etc.

3. Survivors: a catastrophe in which only separate small groups of people survive, not connected to each other: polar explorers, crews of ships at sea, inhabitants of bunkers. On one hand, small groups appear to be in a more favorable position than in the previous case, as there is no struggle between groups of people. On the other hand, the forces which led to a catastrophe of such scale are very great and, most likely, continue to operate and to limit the freedom of movement of people from the surviving groups. These groups will be compelled to struggle for their lives. The restoration period under the most favorable circumstances will take hundreds of years and will involve a change of generations, with loss of knowledge and skills. The ability to continue sexual reproduction will be the basis of survival of such groups. Hanson estimated that a group of around 100 healthy individuals may survive. This does not account for the injured, the elderly and parentless young children, who could survive the initial catastrophe but will not contribute to the future survival of the group.
4. Last man on Earth: only a few people survive on the Earth, but they are capable neither of keeping knowledge nor of giving rise to a new mankind. In this case people are most likely doomed.
5. Bunker people: it is also possible to designate a "bunker" level, that is, a level at which only those people survive who are outside the usual environment. Groups of people would survive in certain enclosed spaces, whether by accident or by design. Conscious transition to the bunker level is possible even without loss of quality if mankind keeps the ability to further develop technologies.
Intermediate scenarios of the post-apocalyptic world are also possible, but I believe that the listed variants are the most typical. Each step down means a bigger chance of falling even lower and a smaller chance of rising. On the other hand, an island of long-term stability is possible at the level of separate tribal communities, once dangerous technologies have already collapsed, the dangerous consequences of their application have disappeared, and new technologies are not yet created and cannot be created, just as different species of Homo lived in tribes for millions of years before the Neolithic. But that stability relied on a lower level of intelligence as a stabilizing factor, and on weaker Darwinian pressure to increase intelligence again.
It is thus incorrect to think that degradation is simply a rewind of historical time by a century or a millennium, for example to the level of society of the 19th or 15th centuries. Degradation of technologies will not be linear and simultaneous. For example, it will be difficult to forget such a thing as the Kalashnikov rifle; in Afghanistan, locals have learned to make rough copies of Kalashnikovs. But in a society where there are automatic weapons, knightly tournaments and cavalry armies are impossible. What was a stable equilibrium on the way from the past to the future cannot be an equilibrium condition on the path of degradation. In other words, if technologies of destruction degrade more slowly than technologies of creation, the society is doomed to continuous sliding downwards.
However, we can classify the degree of degradation not by the number of victims, but by the degree of loss of knowledge and technologies. In this sense it is possible to use historical analogies, understanding, however, that the loss of technologies will not be linear. Maintenance of social stability at a lower level of evolution demands a smaller number of people, and this level is more stable against both progress and decline. Such communities can arise only after a long period of stabilization following the catastrophe.
As to the "chronology", the following base variants of regress into the past are possible (partly similar to the previous classification):
1. Industrial production level: railways, coal, firearms, etc. Self-maintenance probably demands tens of millions of humans. In this case it is possible to expect the preservation of all basic knowledge and skills of an industrial society, at least by means of books.
2. A level sufficient for the maintenance of agriculture. It demands, probably, from thousands to millions of people.
3. The level of a small group, like a hunter-gatherer society.

We could also measure smaller catastrophes by the time by which they delay technical progress, and by the probability that humanity will recover at all.
progress and by probability that the humanity will recover at all.
Technological level of catastrophe and a "periodic system" of possible disasters
Possible disasters can be classified in terms of the contribution of humans, and then of technology, to their causes. These disasters can also be referred to the period of history in which they are most typical. One can also give a total estimate of the probability of each type of disaster for the 21st century.
It turns out that the more technological a disaster is, the higher its probability. In addition, disasters can be classified according to their possible mechanisms, in the sense of how they cause the death of people (explosion, replication, poisoning or infectious disease). On that basis I made a sort of "periodic system" outlining the possible global risks: a large map which is on the site immortality-roadmap.com, along with a map of the ways to prevent global risks and a map of how to achieve immortality.
1) Natural: disasters which have nothing to do with human activity and can occur on their own. These include falling asteroids, supernova explosions, and so on. Their likelihood is relatively small, on the order of once in tens of millions of years, but perhaps we seriously underestimate them because of the effects of observational selection.
2) Anthropogenic: natural processes triggered by human activities. The first among these are global warming and resource depletion. There are also more exotic options, such as artificially awakening natural processes using atomic bombs. The main point is that a human sets the natural course of events in motion.
3) Processes at the level of existing technologies. These are primarily concerned with atomic weapons, as well as conventional biological weapons made from already existing agents: influenza, smallpox, anthrax.
4) Expected breakthrough technologies. These are, first of all, nanotech and biotech, i.e. the creation of microscopic robots, and synthetic biology with the creation of entirely new biological organisms through genetic engineering. They can be called super-technologies, because in the end they will give power over dead and living matter.
5) Artificial intelligence of superhuman level, as the ultimate technology. Although it may be created in about the same period as nanotech, due to its potential ability to self-improve it is able to transcend or create any other technology.
6) Space-related and hypothetical risks that we will face in the distant future with the development of the universe.
Various possible disasters differ in their destructive power. Some could be withstood relatively easily, others practically not at all. It is more difficult to resist disasters that have greater speed, more power, more penetrating force and, most importantly, an agent behind them with greater intelligence; and also those which are more probable and more sudden, which makes them difficult to predict and therefore difficult to prepare for.

Therefore, man-made disasters are more dangerous than natural ones, and the more technological they are, the more dangerous. At the top of the pyramid of disasters are super-technology disasters, and their king is hostile artificial intelligence. The catastrophe at the top of this pyramid is both the most likely and the most destructive.

One-factorial scenarios of global catastrophe


In the several following chapters we will consider the classic view of global catastrophes, which consists of individual disasters, each of which might lead to the destruction of all mankind. Clearly this description is incomplete, because it does not consider multifactorial and long-decline scenarios of global catastrophe. A classic example of the consideration of one-factorial scenarios is Nick Bostrom's article "Existential Risks".

Here we will also consider some sources of global risk which, from the point of view of the author, are not real global risks, but whose danger is exaggerated in the public view, and so we will examine them. In other words, we will consider all events which are usually considered global risks, even if we later reject them.
The next step is studying double-factor scenarios of existential risk, which Seth Baum recently did in his article "Double Catastrophe: Intermittent Stratospheric Geoengineering Induced by Societal Collapse" (http://sethbaum.com/ac/2013_DoubleCatastrophe.pdf). In it he studied a hypothetical situation in which an anti-global-warming geoengineering program is interrupted by social collapse, which leads to a rapid rise in global temperatures.

There are many possible pairs of such double risks, and chains could be built from such pairs. For example: nuclear war followed by the accidental release of bioweapons.

Principles of classification of global risks


The method of classification of global risks is extremely important because it allows us, as with the periodic table of elements, to find empty spots and to predict the existence of new threats. It also opens the possibility of examining our own methodology and offering principles by which new risks may be assessed.

The most obvious approach to establishing possible sources of global risk is the historiographical method. It consists in the analysis of all accessible scientific literature on the theme, first of all the survey works on global risks that have already been done. One could also scan existing hard science fiction for interesting ideas about human extinction.
The method of extrapolation of small catastrophes consists in finding small types of catastrophes and analyzing whether a similar event could occur on a much larger scale. For example, seeing small meteors fall on Earth, we could ask: is it possible that a large enough asteroid would wipe out all life on Earth?
The method of upgrading global events is based on the idea that any global catastrophe must have global reach. So we should consider any events that could influence the whole surface of the Earth and see if they could become deadly. For example, is it possible that a sufficiently large tidal wave in the oceans would destroy all life on the continents if a heavy celestial body flew nearby?
The method of upgrading causes of death consists of considering one particular way in which humans die and asking whether this cause of death could become global. For example, some people die of allergic shock. Is it possible that some bioengineered pollen could cause a globally deadly allergy? Probably not.
The method of extrapolating causes of extinction from other species: 99% of the species that have ever existed have vanished, and many continue to do so. They go extinct for different reasons, and by analyzing these reasons we could hypothesize that our species could go extinct the same way. For example, in the 20th century the main cultivated variety of banana was devastated by a fungal disease; we now eat a different variety, and fungi are also known to have completely killed off other species. So is it possible to genetically engineer a human-killing fungus? I do not know the answer to this question.

The paleontological method consists of the analysis of previous mass extinctions in Earth's history, such as the Permian-Triassic extinction, which wiped out most species then living on Earth. Could it happen again, and with greater force?
Finally, the devil's advocate method consists in the deliberate design of scenarios of extinction, as though our purpose were to destroy the Earth.
Each extinction risk can be described by the following criteria (a small illustrative sketch of such a risk profile follows below):
- anthropogenic or natural;
- total probability;
- whether the needed technologies already exist;
- how far it is from us in time;
- how we could defend ourselves against it.
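Purely as an illustration (the field names and example values below are my own assumptions, not data from this book), such a risk profile could be recorded as a simple structure:

    from dataclasses import dataclass

    @dataclass
    class ExtinctionRisk:
        name: str
        anthropogenic: bool           # anthropogenic (True) or natural (False)
        total_probability: float      # rough subjective estimate, 0..1
        technology_exists: bool       # do the needed technologies already exist?
        years_until_possible: int     # how far it is from us in time
        defenses: list                # how we could defend ourselves against it

    # Hypothetical example entry with placeholder numbers
    asteroid = ExtinctionRisk(
        name="Large asteroid impact",
        anthropogenic=False,
        total_probability=1e-6,
        technology_exists=True,
        years_until_possible=0,
        defenses=["detection surveys", "deflection missions"],
    )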

Natural risks could happen to any species of living beings. Technological risks are not quite identical to anthropogenic risks, since overpopulation and the exhaustion of resources are anthropogenic but not strictly technological. The basic mark of technological risks is that they are unique to a technological civilization.
There is also a division of existential risks by how well they are established:
- events which we considered as possible x-risks and decided that they are not;
- events about which we cannot now say definitely whether they are risky or not;
- events about which we have a good scientific basis to say that they are risky.
But the biggest group here consists of events which may be risky based on some consideration that seems to be true but cannot be proved as a matter of fact, and which are therefore questioned by many skeptics.
It is possible to distinguish four categories of technological risks:
- risks for which the technology is completely developed or demands only slight improvement; this includes nuclear warfare;
- risks whose technologies are under construction and for which there are no visible theoretical obstacles to development in the foreseeable future (e.g. biotechnology);
- risks whose technologies are in accordance with the known laws of physics, but for which large practical and theoretical obstacles need to be overcome, that is, nanotechnology and AI;
- risks which demand new physical discoveries for their appearance. A large part of the global risks of the 20th century came from essentially new and unexpected discoveries of the time; I mean nuclear weapons.

The list of global risks in the following chapters is sorted by the degree of readiness of the technologies necessary for them.

Precautionary principle for existential risks


The classical interpretation of the precautionary principle is that it is about who must prove safety: "The precautionary principle or precautionary approach states that if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking the action" (Wikipedia).

In our reasoning we will use our own version of the precautionary principle: something is an existential risk until the contrary is proved. "Something" here is a hypothesis or a crazy idea. And the burden of proof is not on the one who suggests the crazy idea, but on the specialist in existential risks. For example, if someone asks whether the Earth's core could explode, we should work through all possible scenarios in which it might happen and estimate their probabilities.

X-risks and other human values


To prevent x-risks we need to value x-risk prevention. This value probably consists of the value of the lives of all people living now, plus the value of future generations, plus the value of the whole of human history. If we value only the people who live now, we should concentrate on preventing their death and aging.

The main thing which makes x-risks unique is the value of future generations. However, most people assign very small value to future generations, especially as a practical value expressed in real actions rather than a claimed value, and this translates into marginal interest in remote x-risks. Maybe we should build a stronger connection between existing values and x-risks if we want real action against them.

People were afraid of nuclear catastrophe because it was able to affect them at that moment, so they thought about it in "near mode" and were ready to protest against it. People also give a lot of value to their children and grandchildren, but almost zero value to the sixth generation after them. One article was titled "Let's prevent AI from killing our children", and this seems to be a good way of connecting x-risks with existing values.
Global catastrophic risks, human extinction risks and existential risks

Global catastrophic risks (GCR) have been defined as risks of a catastrophe in which 90 per cent of humanity would die. Such a catastrophe would most likely include me and the reader. From a personal point of view there is not much difference between human extinction and a global catastrophe; the main difference is the future of humanity.

The main question is this: with what probability will a GCR result in human extinction? 700 million people is still a lot, but a scavenging economy, remaining nukes and other weapons, worldwide guerrilla war, epidemics, AIDS, global warming and the depletion of easy resources could result in constant decline and even in human extinction. I will address the question further, but I think it is safe to estimate that a GCR carries roughly a 1 per cent chance of resulting in human extinction. This will help us to unite two fields of research, one of which is much more established while the other is more important.
Phil Torres has given a rightful criticism of Bostrom's term "existential risks" (Problems with Defining an Existential Risk, http://ieet.org/index.php/IEET/more/torres20150121). The term is not self-apparent, and it lumps together human extinction and the mere failure to realize the whole potential of humanity. Humanity could live billions of years and colonize the Galaxy and still not reach its whole potential, perhaps because another civilization colonizes some other galaxy. Most human beings live full lives, but how could we say that a random person has reached his full potential? Maybe he was born to be Napoleon; even Napoleon did not get what he wanted in the end.

But the term "existential risks" has won out and is often shortened to "x-risks". Bostrom's classification of four types of x-risks has not become popular.
He suggests the following classification:
1. Bangs: abrupt catastrophes which result in human extinction.
2. Crunches: a slow arrest of development, resource depletion, totalitarian government. This is not extinction, and may not even be very bad for the people living then, but it is an unstable configuration which would result either in extinction or in a supercivilization. The lower the level of the equilibrium, the longer a civilization could exist at it, that is, the more stable it is; Paleolithic people could have lived for millions of years.
3. Shrieks: this is not extinction but the replacement of humanity by some other, more powerful agent which is non-human, either an AI or some posthumans.
4. Whimpers: "A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved." These are mostly catastrophes which could happen to an advanced supercivilization. Bostrom suggests two examples: war with aliens and the erosion of our core values.
We could also add context risks: different situations in the world imply different chances of a global catastrophe. A cold war results in an arms race and risks of a hot war. Apocalyptic ideologies raise the probability of existential terrorism.

We could add risks that change the speed of technological progress.

Maybe we should also add a category of risks which change our ability to prevent x-risks. I am not in a position to make recommendations which may be implemented, but it would probably be safe to create a couple more centers of x-risk research. Or perhaps not, if it results in rivalry and incompatible safety solutions.


Chapter 2. Problems of probability calculation

A human extinction catastrophe is a one-time event which will have no observers (as long as observers are alive, it is not such a catastrophe). For this reason, the traditional use of the term "probability" in relation to the likelihood of global risks is rather pointless, whether we understand probability in statistical terms, as a proportion, or in Bayesian terms, as a measure of the uncertainty of our knowledge. If the catastrophe starts to happen, we cannot distinguish whether it is a very rare type of catastrophe or an inevitable one.

The concept of probability has undergone a long evolution, and it has two directions: objectivist, where the probability is considered as the fraction of events of a certain set, and subjectivist, where the probability is considered as a measure of our ignorance. Both approaches are applicable to determining what constitutes the probability of a global catastrophe, but with certain amendments.

The question "What is the probability of global catastrophe?" is raised regularly, but the answer depends on what kind of probability is meant. I propose a list of different definitions of probability and notions of the probability of x-risk under which the answer makes sense.
1. The Fermi probability (so named by me in honor of the Fermi paradox). I suggest this term for the probability that a certain percentage of technological civilizations in the Universe die from one specific cause, where the probability is defined as their share of the total number of technological civilizations. This quantity is unknown and unlikely to be objectively known until we survey the entire Galaxy, so it can only be the object of subjective assumptions. Obviously, some civilizations will make very big efforts at risk prevention and some smaller efforts, but the Fermi probability also reflects the overall effectiveness of prevention, I mean the chances that preventive measures will be applied and will be successful. I called it the Fermi probability because knowing this probability distribution could help answer the Fermi paradox.

2. Objective probability: if we were in a multiverse, it would be the share of all versions of the future Earth in which a global catastrophe of a certain type occurs. In principle it should be close to the Fermi probability, but it would differ from it because of any special features of the Earth, if there are such, or if we create some.

3. The conditional probability of a certain type of catastrophe is the probability of that event provided that no other global catastrophe happens first (e.g. the chance of an asteroid impact within the next million years). It is the opposite of the probability that this specific type of event will be the one that happens, among all possible catastrophes.

4. The minimum probability is the probability that a disaster will happen anyway, even if we undertake all possible efforts to prevent it; and the maximum probability of an x-risk is its probability if nothing is done to prevent it. I think that these probabilities differ on average by a factor of 10 for global risks, but a better assessment is needed, perhaps based on some analogies.
5. The total probability of global catastrophe vs. the probability of extinction from one particular cause. Many scenarios of global catastrophe include several causes: for example, one where most of the population dies as a result of a nuclear war, the rest of the population is severely affected by a multi-pandemic, and the last survivors on a remote island die of hunger. What is the main cause in this case: hunger, biotech or nuclear war, or the dangerous combination of the three, none of which is so deadly on its own?
6. The assigned probability: the probability which we must ascribe to a particular risk in order to protect ourselves from it in the best way, without overspending on it resources that could be directed to other risks. This is like a stake in a game, with the only difference being that the game is played once. Here we are talking about estimates of the order of magnitude needed to properly plan our actions. It is also a replacement for the Bayesian probability of existential risk, which cannot be calculated without some subjectivism. The Torino scale of asteroid risk is a good example here.
7. The expected survival time of the civilization. Although we cannot measure the probability of a global catastrophe of some type directly, we can transform it into an expected lifetime. The expected lifetime incorporates our knowledge of the future change in the probability of a disaster: whether it will grow exponentially or decrease smoothly as our knowledge of how to prevent it increases.
8. The yearly probability density. For example, if the probability of a certain event is 1 per cent per year, over 100 years the cumulative probability is about 63 per cent, and over a 1000-year period it is more than 99.9 per cent. A constant yearly probability density means that the survival probability declines exponentially, so the total probability of the event approaches 1.
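As a simple check, with a constant annual probability p the cumulative probability over N years is P(N) = 1 - (1 - p)^N. For p = 0.01 this gives 1 - 0.99^100 ≈ 0.63 for a century and 1 - 0.99^1000 ≈ 0.99996 for a millennium, matching the figures above.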
9. An exponentially growing probability density of total x-risk can be associated with the exponential growth of new technologies under Moore's law; it makes the total probability of catastrophe an exponent of an exponent, that is, something that grows very quickly. It could go from near 0 to almost 1 in just 2-3 doublings of technology under Moore's law, or in 4-6 years at the current tempo of technological development. This means that the period of high catastrophic risk would be around 6 years, and some smaller catastrophes would probably also happen during it.
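A minimal numerical sketch of this "exponent of an exponent" behaviour, with an assumed doubling period and a purely illustrative initial hazard rate (both are my own placeholder values, not estimates from this book), could look like this:

    import math

    T_DOUBLE = 2.0   # assumed technology-doubling period, years
    H0 = 0.001       # assumed initial annual hazard rate (illustrative placeholder)

    def cumulative_risk(years):
        # Hazard doubles with technology: h(t) = H0 * 2**(t / T_DOUBLE).
        # Survival probability = exp(-integral of h(t) dt from 0 to 'years').
        integral = H0 * T_DOUBLE / math.log(2) * (2 ** (years / T_DOUBLE) - 1)
        return 1 - math.exp(-integral)

    for y in range(0, 22, 2):
        print(f"year {y:2d}: cumulative risk {cumulative_risk(y):.3f}")

With these placeholder numbers the cumulative risk stays negligible for years and then climbs from a few per cent to above 90 per cent within a handful of doublings, which is the qualitative behaviour described above.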

10. The a posteriori probability is the probability of a global catastrophe which we estimate after it did not happen, for example an all-out nuclear war in the 20th century (if we assume that it was an existential risk). Such an assessment of probability is greatly distorted toward understatement by observational selection.

We will return to the question of the probability of x-risks at the end of the book.

Problems of calculation of probabilities of various scenarios


The picture of global risks and their interaction with each other inspires a natural desire to calculate the exact probabilities of different scenarios. However, it is obvious that giving exact answers is impossible; the probabilities merely represent our guesses and may be updated with further information. Even though our probabilities are not perfect, refusing to make any estimate is not helpful. It is important to interpret and apply the probabilities we derive in a reasonable way. For example, say we determine, based on some model, that the probability of the appearance of dangerous unfriendly AI is 14% over the next 30 years. How can we use this information? Would our actions differ if the estimate were 15%? Exact probabilities matter only if they could change our actions.
Further, such calculations should consider the time sequence of different risks. For example, if risk A has a probability of 50% in the first half of the 21st century, and risk B has a probability of 50% in the second half, our real chance of dying from risk B is only 25%, because in half of the cases we will not survive until then.

Finally, for different risks we want to obtain an annual probability density. I will remind the reader that the formula of continuously compounding percentages should be applied here, as in the case of radioactive decay. It means that any risk defined on some time interval can be normalized to a "half-life period", that is, the time over which it would imply a 50% probability of the extinction of civilization.
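To make this concrete: the sequential example above is P(die from B) = P(survive the first half) × P(B) = 0.5 × 0.5 = 0.25. And for the half-life normalization, a constant annual probability p gives a half-life T satisfying (1 - p)^T = 0.5, i.e. T = ln 2 / (-ln(1 - p)), which is approximately 0.69/p for small p; for example, p = 1% per year corresponds to a half-life of about 69 years.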
In our methodology part we will discuss approximately 150 different cognitive biases which can perturb the rational evaluation of risks. Even if the contribution of each bias is no more than one percent, it adds up. When people undertake a project for the first time, they usually underestimate its riskiness by a factor of as much as 40-100. This was apparent in the examples of Chernobyl and the Challenger. (Namely, the Space Shuttle had been rated at one failure per 1000 flights, but exploded on the 25th flight. In his paper on cognitive biases with respect to global risks, Yudkowsky notes that a safety estimate of 1 in 25 would have been more correct, 40 times higher than the initial estimate; the Chernobyl reactors were rated at one failure per million years, but the first large-scale failure occurred after less than 10,000 station-years of operation, so a safety estimate 100 times lower would have been more precise.)

So, there are serious biases to consider, which means we should greatly expand our default
confidence intervals to come up with more realistic estimates. A confidence interval is a range of
probabilities for some risk; for example, nuclear war may have a probability interval of 0.5-2% per
year. How much should we expand our confidence intervals?
For decision-making we need to know the order of magnitude of the risk rather than its exact value.
Let's assume that the probability of global catastrophes can be estimated, at best, to within
an order of magnitude (and that the accuracy of such an estimate is plus or minus an order of
magnitude), and that such an estimate is enough to determine the necessity of further attentive
research and problem monitoring. Similar examples of risk scales are the Torino and Palermo scales
of asteroid risk.
The eleven-point (from 0 to 10) Torino scale of asteroid danger characterizes the degree of
potential danger of an Earth-threatening asteroid or comet. A value on the Torino scale
is assigned to a small Solar system body at the moment of its discovery, depending on the
mass of the body, its speed, and the probability of its collision with the Earth. As the
orbit of the object is studied further, its value on the Torino scale can be updated. Zero means an absence
of threat; ten indicates a probability of more than 99% of collision with a body with a diameter of more
than 1 km. The Palermo scale differs from the Torino scale in that it considers the time remaining before the fall of
the asteroid: less time means a higher score on the scale.

Quantitative estimates of the probability of global catastrophe, given by various authors
The estimates of the probability of extinction given by leading experts are not far from one another. Of course,
there could be some bias in the selection of experts, but these authors form a group that deliberately
studies the risks of human extinction from all possible causes:

J. Leslie, 1996, "The End of the World": 30% in the next 500 years taking the
Doomsday Argument into account, 5% without it.

N. Bostrom, 2002, in "Existential Risks: Analyzing Human Extinction Scenarios
and Related Hazards" wrote: "my subjective opinion is that setting this probability
lower than 25% would be misguided, and the best estimate may be considerably
higher" (over the next two centuries).

Sir Martin Rees, 2003, Our final hour: 50% in the XXI century.


In 2009 Willard Wells published the book "Apocalypse When?" [Wells 2009],
devoted to the mathematical analysis of a possible global catastrophe, mostly based on
the Doomsday argument in Gott's version. Its conclusion is that the probability is about 3%
per decade, which roughly corresponds to 27% per century.
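The per-decade to per-century conversion assumes the decades are independent; a quick check of the arithmetic:

    p_decade = 0.03
    p_century = 1 - (1 - p_decade) ** 10
    print(round(p_century, 3))  # about 0.26, close to the roughly 27% per century quoted above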

On the other hand, in the days of the Cold War some estimates of the probability of extinction were even
higher.
Horner, a researcher of the problem of extraterrestrial civilizations, gave the hypothesis of self-liquidation of the "psychozoic" (intelligent life) a probability of 65%.
Von Neumann considered that nuclear war was inevitable and that everyone would die in it.
During the 2008 conference on global catastrophic risks a survey of experts was conducted; they
estimated the total risk of human extinction before 2100 at 19 percent (http://www.fhi.ox.ac.uk/gcrreport.pdf). The full results of the survey are presented in a table in that report.



The model of the future


The model of global risks depends on the underlying model of the future. A model of the future should
answer what the main driving force of the historical process is, and how this historical process will
develop in the near-term, medium-term and long-term perspective.
This book is based on the idea that the main driving force is the self-sustained evolution of
technologies, and (as I will show later) that the speed of this evolution is hyperbolic.
Different theories of the future predict different global risks. For example, the Club of Rome model
states that the driving force of evolution is the interaction of five factors, of which the most important is the
exhaustion of natural resources. This predicts a sinusoidal graph of future evolution, where
overpopulation and ecological crisis are the main expected catastrophes.
If we take the religious model of the world, it is driven by God's will and its end is the Apocalypse.
Taleb's "black swan" world model emphasizes unknown risks as the most important ones.
There are also only two possible final outcomes of our civilization's evolution: extinction, or
becoming a supercivilization, controlled by AI and practicing interstellar travel.
There are many more world models, and they are presented in the following table. Models are
not reality, and they may or may not be useful.
Many of these models are based on empirical data, so how should we choose the right model?
Main principle: the strongest process wins. A large wave will obliterate smaller waves.
In this case the hyperbolic model should win. But the main counterargument against it is its too extreme
predictions, which will begin to diverge from other models very soon. It predicts a singularity in 2030, so
extreme acceleration of events should be starting already. (We could use the ad hoc idea that the singularity
will be postponed by 20 years or so because of the growing resistance of the "material". It is like a
collapsing star, which stops its collapse several times because the compressed and heated matter produces
enough pressure to temporarily postpone contraction, until this matter cools somehow and the pressure
drops.)
Meta-model: as we are uncertain about which of the models is correct, we could try to use a
Bayesian approach. We choose several of the most promising models and update our estimates of them
based on new facts.
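A minimal sketch of such Bayesian bookkeeping over future models (the priors and likelihoods below are invented placeholders, not the estimates used in this book):

    # Prior weights over candidate future models (placeholder values).
    priors = {"hyperbolic": 0.3, "exponential": 0.3, "wave": 0.2, "black_swan": 0.2}

    # How likely each model considers some newly observed fact,
    # e.g. "no visible extreme acceleration this year" (invented numbers).
    likelihoods = {"hyperbolic": 0.4, "exponential": 0.7, "wave": 0.8, "black_swan": 0.5}

    def update(priors, likelihoods):
        # Standard Bayes rule: posterior is proportional to prior * likelihood.
        unnormalized = {m: priors[m] * likelihoods[m] for m in priors}
        total = sum(unnormalized.values())
        return {m: p / total for m, p in unnormalized.items()}

    posterior = update(priors, likelihoods)
    print(posterior)  # weight shifts away from the hyperbolic model after this observation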
United models: for example, a spiral-hyperbolic model. Some models may be paired with one
another.


The precautionary principle recommends that we do the same: as we can't know for sure which
future model is correct, and different models imply different global risks, we should use several
of the most plausible models. (We can't use all models, as we would get too much noise in the result.)
In our case the hyperbolic, exponential, wave and black swan models are the most plausible.
To assess the models we could use several characteristics:
1) Refutability: some models make early predictions, and we can check whether these predictions come true and so see whether the model works. This is like Popper's falsifiability criterion. Other models cannot be falsified, which makes them weaker.

2) Completeness: the model should take into account all known strong historical trends.

3) Evidence base: the model should "predict the past", that is, be built on a large empirical base.

4) Support: the model should have support from different futurists, and it is better if they came to it independently.

5) We should distinguish predictive models from planning (normative) models. Planning models tend to turn into wishful thinking; planning must itself be based on some model of the future.

6) Complexity: if the model is too complex, or its predictions are too sharp, it is probably false, as we live in a very uncertain world.

7) Too strong predictions for the near future contradict Copernican mediocrity: if we are randomly chosen from the time during which the model works, we should find ourselves somewhere in the middle of that period. For example, if I predict that nuclear war will happen tomorrow, I am betting against the fact that it has not happened for 70 years, so its a priori probability of happening tomorrow is very small (a small numerical sketch of this follows below).
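One common way to turn "it has not happened for 70 years" into a rough prior is Laplace's rule of succession; this is my own illustration, not a formula used elsewhere in the book:

    def laplace_rule(successes, trials):
        # Laplace's rule of succession: after observing `successes` events
        # in `trials` independent trials, estimate the chance of the event
        # on the next trial as (successes + 1) / (trials + 2).
        return (successes + 1) / (trials + 2)

    # 70 years without a nuclear war.
    p_next_year = laplace_rule(0, 70)        # about 1.4% per year
    p_next_day = laplace_rule(0, 70 * 365)   # about 0.004% per day
    print(round(p_next_year, 4), round(p_next_day, 6))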

---- The same text in thesis mode ----

TL;DR: Many models of the future exist. Several are relevant.


Hyperbolic model is strongest, but too strange.
Wall of text: off
Our need: correct model of the future
Different people: different models = no communication.
Assumptions:
Model of the future = main driving force of historical process + graph of changes
Model of the future determines global risks

The map
The map: lists all main future models.
Structure: from fast growth to slow growth models.

How to estimate the validity of future models?
Refutability: check early predictions. Popper's criterion. Falsification.
Completeness: include all known strong historic trends.
Evidence base: predict the past + built on a large empirical base.
Support. Who: thinkers and futurists. How: independently.
Planning (normative) models: no wishful thinking. Planning
must be based on some model of the future itself.
Complexity: too complex = false; too exact = false.
Too strong predictions for the near future contradict Copernican
mediocrity: if we are randomly chosen from the time
when the model works, we should be somewhere in the middle
of it. For example, if I predict that nuclear war will happen
tomorrow, I bet against the fact that it didn't happen for 70
years, so its a priori probability of happening tomorrow is very
small.
Main principle: strongest process wins.
Metaphor: Large wave will obliterate smaller waves.
Conclusion:
Hyperbolic model should win. It is the strongest process.
But: too extreme predictions.
Like: Singularity in 2030.
Ad hoc repair: growing inertia of the process results in a postponed
Singularity (by 20 years).
Analogy: stellar collapse into a black hole. Overheated matter delays
the collapse by pressure. Not for long.


Combining models

1. Meta-model: Bayesian approach, many models.
How: update estimates of models based on evidence.

2. United models.
Example: spiral-hyperbolic model.
Some models may be paired with one another.

3. The precautionary principle: use several models in risk assessment.
Criteria: only plausible models.
In the case of global risks: hyperbolic, exponential, wave and black swan models.

Other methods of prediction:


Extrapolation
Analogy
Statistical
Induction
Trends
Poll of experts, foresight


Prediction markets
Debiasing

Chapter 3. History of the research on global risk

Ancient and medieval ideas about doomsday held that it could happen by the will of
God or as a result of a war between supernatural forces (Armageddon, Ragnarök).
The first scientific ideas about the end of the world and of the human race appeared in the XIX century. At the
beginning of the XIX century Malthus formulated the idea of overpopulation; though it was not directly
related to complete human extinction, it became the basis of later "limits of growth" ideas. Lord
Kelvin in the 1850s suggested the possibility of the thermal death of the Universe, that is, thermal
equilibrium and the stopping of all processes. The most popular idea was that life on Earth would vanish after
the dimming of the Sun. Now we know that the Sun is in fact becoming brighter as it grows older; also, the
XIX century timescale for these events was quite different, on the order of several million years, as the
source of the energy of stars was not yet known. As the idea of space travel had not yet appeared, the
death of life on Earth was equated with the death of humanity. All these scenarios had in common that the
natural end of the world would be a slow and remote process, something like freezing, which humans
could neither stop nor cause.
In the first half of the XX century we find descriptions of grandiose natural disasters in science
fiction, for example in the works of H.G. Wells ("The War of the Worlds") and Sir Arthur Conan Doyle ("The
Poison Belt"): collisions with giant meteors, poisoning by cometary gases, genetic degeneration.
During the great influenza pandemic of 1918 one physician said that if things went on this
way, humanity would be finished in several weeks.
The history of the modern scientific study of global risks dates back to 1945. Before the first
atomic bomb test in the United States there were worries that it might trigger a chain reaction of fusion
of the nitrogen in the Earth's atmosphere.
In order to assess this risk a commission was established, headed by the physicist Arthur
Compton. It prepared report LA-602, "Ignition of the Atmosphere with Nuclear Bombs" [LA-602
1945], which was recently declassified and is now available to everyone on the Internet in the form
of poorly scanned typewritten pages. The report shows that, due to the scattering of photons by
electrons, the latter will be cooled (because their energy is greater than that of the photons), and the
radiation will not heat but cool the reaction region. Thus, as the area of the reaction increases, the
process is not able to become self-sustaining. This ensured the impossibility of a chain reaction in
atmospheric nitrogen. However, at the end of the text it is mentioned that not all factors were
taken into account, for instance the effect of water vapor contained in the atmosphere.
Because it was a secret report, it was not intended to convince the public, which distinguishes it
favorably from recent reports on the safety of the collider. Its target audience was
the decision makers. Compton told them that the chance that a chain reaction would start in
the atmosphere was 3 per million. In the 1970s a journalistic investigation found that
Compton had taken this figure "out of his head" because he found it compelling enough for the
president; the report itself contains no probability estimates [Kent 2004]. Compton believed that a
realistic assessment of the probability of disaster was not important, because if the Americans
repudiated the bomb tests, the Germans or other hostile countries would conduct them anyway.
In 1979 an article by Wood and Weaver [Weaver, Wood 1979] was published on
thermonuclear detonation in the atmosphere and oceans, which shows that conditions suitable for a
self-sustaining thermonuclear reaction do not exist on our planet (but they are possible on other
planets, if there is a high enough concentration of deuterium and the other minimum requirements are
met).
The next important step in the history of global risk research was the realization of not just the
possibility of an accidental global catastrophe, but also the technical possibility of the deliberate
destruction of humanity. It became clear after the proposal of the cobalt bomb by Leo Szilard [Smith
2007]. During a radio debate with Hans Bethe in 1950 about a possible threat to life
on Earth from nuclear weapons, he proposed a new type of bomb: a hydrogen bomb (which did not yet
physically exist at that time, though the possibility of creating one was being widely discussed), wrapped in a
shell of cobalt-59, which would be converted during the explosion into cobalt-60. This highly
radioactive isotope, with a half-life of about 5 years, could make an entire continent or the whole
Earth uninhabitable if the bomb were large enough. After this declaration the Department of Energy
decided to conduct an investigation in order to prove that such a bomb is impossible. However,
the scientist it hired showed that a bomb with a mass of 200,000 tonnes (i.e. something like 20
modern nuclear reactors, and so theoretically achievable) would be enough to destroy all highly organized
life on Earth. Such a device would inevitably be stationary, so it could be used only as a doomsday
weapon: after all, no one would dare attack a country that has created such a device. In the 60s the
idea of the theoretical possibility of the destruction of the world by a cobalt bomb was very
popular and widely discussed in the press, in scientific literature and in art, but was then largely
forgotten. It appears, for example, in Kahn's book "On Thermonuclear War" [Khan 1960], N. Shute's
novel "On the Beach" [Shute 1957], and Kubrick's movie "Dr. Strangelove".
In 1950 Fermi posed his famous paradox, "Where are they?": if alien civilizations
exist, why don't we see them? One obvious explanation at that time was that civilizations tend to
destroy themselves in nuclear war, and the silence of the Universe is explained by this self-destruction.
And as we are a typical civilization, we will probably also destroy ourselves.
Furthermore, in the 60s many ideas appeared about potential disasters or dangerous
technologies that could be developed in the future. The English mathematician I.J. Good wrote the essay
"Speculations Concerning the First Ultraintelligent Machine" [Good 1965], where he showed that as soon as such a
machine is created, it will be able to improve itself and leave humans behind forever; later
these ideas formed the basis of the idea of the technological singularity of V. Vinge [Vinge 1993], the
essence of which is that, based on current trends, by 2030 an artificial intelligence
superior to human beings will be created, and then history will become fundamentally unpredictable.
The astrophysicist F. Hoyle [Hoyle 1962] wrote the novel "A for Andromeda", which described an
attack on Earth by a hostile alien artificial intelligence downloaded via radio telescope from space.
He gave the most plausible description of the scenario of such an attack, with several steps.
The physicist Richard Feynman gave the talk "There's Plenty of Room at the Bottom" [Feynman
1959], where he was the first to suggest the possibility of molecular manufacturing, i.e. nanorobots.
An important role in the realization of global risks was played by science fiction, which had its golden
age in the sixties.
Heinz von Foerster published in 1960 an article entitled "Doomsday: Friday, 13 November, A.D. 2026",
arguing that at this date the human population would approach infinity if it kept growing as it had grown over the last two
millennia [Foerster 1960]. It is likely that he chose this title not to make a prediction but to
draw attention to his explanation of past growth. So his projection of infinite growth of the
human population was false, but this prediction interestingly points to roughly the same date as several other predictions
made by different methods, such as Vinge's prediction about AI by 2030. It also paved the way for
"limits of growth" theories. Foerster's idea was that the human population grows according to a hyperbolic
law, and any hyperbolic law reaches infinity in finite time. Of course the population cannot grow that
much for biological reasons, but if we add the "population" of computers, we could find that his
prediction was still holding around 2010.
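A small illustration of why a hyperbolic law reaches infinity in finite time; the constant below is fitted loosely to world population only to show the shape of the curve, and the law visibly overshoots reality as the "doomsday" date approaches:

    def hyperbolic_population(year, c=190e9, t0=2026):
        # Foerster-style hyperbolic law: N(t) = C / (t0 - t),
        # which diverges to infinity as t approaches t0.
        return c / (t0 - year)

    for y in (1900, 1960, 2000, 2020, 2025):
        print(y, round(hyperbolic_population(y) / 1e9, 2), "billion")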

In 1972 the Meadows report "The Limits to Growth" was published. It did not directly predict human
extinction, but only a decline of the human population at the end of the XXI century due to a complex
crisis caused by overpopulation, limited resources and pollution.
In general we have two lines of thought regarding a future global catastrophe. One is based on
Malthusian theories, and the other is based on predictions of technological development. The idea of total
human extinction belongs to the second line, because "limits of growth" theories tend to
underestimate the role of technologies in all aspects of human life, one of which is the role of
technologies in building weapons. Meadows' theory does not take into account the possibility of nuclear
war (or of even more destructive wars and catastrophes based on XXI century technologies), which
could be logically predicted as a result of a war for resources.
In the 1970s the danger associated with biotechnology became clear. In 1971, the American
biologist Robert Pollack learned [Teals 1989] that experiments were planned in a neighboring laboratory to
insert the genome of the oncogenic virus SV40 into the bacterium Escherichia coli. He immediately
realized that if such E. coli spread throughout the world, it could cause a worldwide epidemic of
cancer. He appealed to this laboratory to suspend the experiments before they were started.
The result was the ensuing discussion at the Asilomar Conference in 1975, which adopted
recommendations for safe genetic engineering.
http://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA
In 1981 Asimov published the book "A Choice of Catastrophes" [Asimov 2002]. Although it was
one of the first attempts to systematize various global risks, its focus was on distant events, such as
the expansion of the Sun, and the main message of the book was that people would be able to
overcome the global risks.
B. Carter proposed the now famous anthropic principle in 1973. In 1983 his reasoning acquired a
second part, which he decided not to publish, but only to report at a meeting of the Royal Society,
because he knew that it would cause an even bigger protest. Later it was popularized by J. Leslie
[Leslie 1996]. This second half of the argument became known as the Doomsday argument (DA).
Briefly, its essence is that on the basis of humanity's past lifetime, and the assumption that we are
roughly in the middle of its existence, we can estimate the future duration of the existence of mankind. Carter used
a more complex form of DA with a conditional probability of a future catastrophe, which should change
depending on whether we find ourselves before or after the catastrophe. In 1993 Richard Gott
suggested a simpler version, which works directly with the future lifetime.
In the early '80s a new theory of human extinction as a result of the use of nuclear
weapons appeared: the theory of "nuclear winter". Computer simulations of the behavior of the
atmosphere after a nuclear war showed that the shading from the emission of soot particles into the
troposphere would be long and significant, resulting in prolonged freezing. The question of
how realistic such a blackout is, and what temperature drop mankind can survive, remains open. This
theory was part of the political fight against nuclear war going on in the 80s. Nuclear war was portrayed in
mass consciousness as inevitably leading to human extinction. While this was not true in most realistic
cases, it was helpful in promoting the idea of nuclear disarmament, and it resulted in a drastic
reduction of nuclear arsenals after the Cold War. So successful public campaigning against existential risks is
possible.
In the 80s the first publications about the risks of particle accelerator experiments appeared.
In 1985 E. Drexler's book "Engines of Creation" [Drexler 1985] was published, devoted to
radical nanotechnology, that is, the creation of self-replicating nanorobots. Drexler showed that such
an event would have revolutionary consequences for the economy and military affairs. He examines
various scenarios of global catastrophe associated with nanorobots. The first is "gray goo", i.e.
unrestricted breeding of nanorobots, over which control has been lost, in the environment. In just a few days
they could fully absorb the Earth's biosphere. The second risk is an "unstable arms race".
Nanotechnology will allow the fast and extremely cheap creation of weapons of unprecedented
destructive power, first of all microscopic robots capable of engaging the
manpower and equipment of the enemy. Instability of the arms race means that "the one who strikes
first takes all", and a balance between two opposing forces, as existed during the Cold War, is
impossible.
Public perception of existential risks was mostly formed by art, which was later criticized for
unrealistic descriptions of risk scenarios. In 1984 the first movie of the Terminator series appeared, in which a
military AI named Skynet tried to eliminate humanity for self-defense reasons; it later became a
metaphor for dangerous AI. In fact the risks of military AI are still underestimated by the Friendly AI
community, partly because of the rejection they feel toward the Terminator movie.
In 1993 Vernor Vinge put forward the idea of the technological Singularity: the moment when
the first superhuman AI will be created, and one clear option after that is that it will destroy all of
humanity. He wrote that he would be surprised if it happened before 2005 or after 2030. All
predictions about AI are known to be premature.
In 1996 the book by the Canadian philosopher John Leslie "The End of the World:
The Science and Ethics of Human Extinction" [Leslie 1996] was published, which differed radically from
Asimov's book primarily by its pessimistic tone and its focus on the near future. It examines all the
newly discovered hypothetical disasters, including nanorobots and the DA, and concludes that the
chances of human extinction are 30 percent in the next 500 years.
John Leslie was probably the first to summarize all possible risks as well as the DA
argument, and he started the modern tradition of their discussion, but it was Bill Joy who brought these ideas
to the public.
In 2000 Wired magazine came out with a sensational article by Bill Joy, one of the founders of
Sun Microsystems, "Why the Future Doesn't Need Us" [Joy 2000]. In it, he paints a very pessimistic
picture of the future of civilization, in which people will be replaced by robots; humans will be like
pets for AI, at best. Advances in technology will create "knowledge of mass destruction" that can
be distributed over the Internet, for example the genetic codes of dangerous viruses. In 2005 Joy
was among those campaigning to remove the recently published Spanish flu virus genome from the Internet. In
2003 Joy said that he had written two manuscript books which he decided not to publish. In the first he wanted
to warn people of the impending danger, but his published article had fulfilled this task. In the second he
wanted to offer possible solutions, but the solutions did not yet satisfy him, and "knowledge
is not an area where you have a right to a second shot."
Since the end of the last century, J. Lovelock [Lovelock 2006] has developed the theory of the
possibility of runaway global warming. The gist of it is that if the usual warming associated with the
accumulation of carbon dioxide in the atmosphere exceeds a certain, rather small threshold
(1-2 degrees C), then the vast reserves of methane hydrates on the seabed and in the tundra,
accumulated there during the recent ice ages, begin to be released. Methane is a tens of times stronger
greenhouse gas than carbon dioxide, and this may lead to a further increase of the temperature of the
Earth and launch other positive feedback loops. For example, vegetation on land could start burning
and more CO2 would be emitted into the atmosphere; the oceans would also warm up,
the solubility of CO2 in them would fall, and again it would be emitted into the atmosphere; anoxic
areas would form in the ocean and emit methane. In September 2008 columns of methane
bubbles were discovered escaping from the bottom of the Arctic Ocean. Finally, water vapor
is also a greenhouse gas, and with its concentration rising temperatures will also rise. As a result, the
temperature could rise by tens of degrees, a greenhouse catastrophe would happen, and all living things would die.
Although it is not inevitable, the risk of such a development is the worst possible outcome with the
maximum expected damage. From 2012, and as of 2014, a group of scientists has united in the Arctic Methane
Emergency Group with a collective blog (http://arctic-news.blogspot.ru/). They predicted total ice
melt in the Arctic as early as 2015, which would promote further release of methane and, in their opinion, could lead to a rise
of temperature of 10-20 degrees in the XXI century and total human extinction.
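A standard way to see why such feedbacks can "run away" is the simple feedback-gain formula; the numbers here are illustrative and are not taken from Lovelock:

    def warming_with_feedback(direct_warming, feedback_gain):
        # Each degree of warming triggers feedback_gain extra degrees,
        # which trigger further warming, and so on (a geometric series).
        # The series converges to direct / (1 - gain) only while gain < 1;
        # at gain >= 1 the warming formally diverges, i.e. runs away.
        if feedback_gain >= 1:
            return float("inf")
        return direct_warming / (1 - feedback_gain)

    for gain in (0.3, 0.6, 0.9, 1.0):
        print(gain, warming_with_feedback(2.0, gain))  # 2 C of direct CO2-driven warming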

At the end of the XX and the beginning of the XXI century several articles appeared describing
fundamentally new risks, the realization of which was made possible by creative analysis of the
capabilities of new technologies. These are the works by R. Freitas "The Gray Goo Problem" [Freitas 2000], R.
Carrigan "Do Potential SETI Signals Need to Be Decontaminated?" [Carrigan 2006], the book
"Doomsday Men" by P.D. Smith [Smith 2007] and "Accidental Nuclear War" by Bruce Blair [Blair
1993]. Another risk is the artificial triggering of a supervolcano using deep drilling. There are projects of
autonomous probes which could penetrate into the depths of the Earth down to 1,000 km by melting
through mantle material [Stivenson 2003], [Circovic 2004].
Since 2001 E. Yudkowsky has explored the problem of so-called Friendly AI (that is, safe self-improving AI) in California. He created the Singularity Institute (now MIRI; do not confuse it with the
startup incubator Singularity University). He wrote several papers on AI safety issues, the first of
them being "Creating Friendly AI". In 2006 he wrote two articles to which we will often refer in this
book: "Cognitive Biases Potentially Affecting Judgment of Global Risks" [Yudkowsky 2008b] and
"Artificial Intelligence as a Positive and Negative Factor in Global Risk" [Yudkowsky 2008a].
In 2003 the English Astronomer Royal Sir Martin Rees published the book "Our Final Hour"
[Rees 2003]. It is much smaller in volume than Leslie's book and does not contain
fundamentally new information; however, it was addressed to a wide audience and sold in large
quantities.
In 2002 Nick Bostrom wrote his seminal article "Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards", and several other articles about the DA and the so-called Simulation argument. He coined the term "existential
risk", which includes not only extinction but also everything that could harm human potential
forever. He also showed that the main risks are "unknown unknowns", that is, new unpredictable risks.
In 2004 Richard Posner published "Catastrophe: Risk and Response" [Posner 2004], which
repeats much of the previous books but provides an economic analysis of the risks of global catastrophe
and of the cost of efforts to prevent it (for example, efforts to deflect asteroids, and the value of
experiments on accelerators).
In the XXI century the main task of researchers became not only the listing of the various possible
global risks, but the analysis of the general mechanisms of their origin and prevention. It turned out that
most of the possible risks are associated with wrong knowledge and wrong decisions of people. At the
beginning of the XXI century the formation of a global risk analysis methodology began: the
transition from the description of risks to the meta-analysis of the human ability to detect and correctly
assess global risks. This should mostly be attributed to Bostrom and Yudkowsky.


In 2008 several events increased interest in the risks of global catastrophe: the planned
(but not yet fully realized) start of the Large Hadron Collider, a vertical jump in oil prices, releases of methane in
the Arctic, the war in Georgia and the global financial crisis.
In 2008 a conference "Global Catastrophic Risks" was held in Oxford, and its
proceedings were published under the same title, edited by N. Bostrom and M. Ćirković [Bostrom,
Circovic 2008]. It includes more than 20 articles by various authors.
There was an article by M. Ćirković about the role of observational selection in the evaluation
of the frequency of future disasters, which claims that it is impossible to draw any conclusions about the
frequency of future disasters based on their previous frequency.
Arnon Dar examines the risks of supernovae and gamma-ray bursts, and also shows that a
specific threat to the Earth comes from cosmic rays produced by galactic gamma-ray bursts.
William Napier, in his article about the threat of comets and asteroids, shows that perhaps we
are living in a period of intense cometary bombardment, when the frequency of impacts is 100 times
higher than average.
Michael Rampino gave an overview of the catastrophe risks associated with supervolcanoes.
At the beginning of the XXI century several organizations appeared that promote
protection against global risks, for example the Lifeboat Foundation, CRN (the Centre for Responsible
Nanotechnology), MIRI, the Future of Humanity Institute in Oxford, and GCRI founded by Seth Baum. Most of
them are very small and do not have much impact. The most interesting work is done by MIRI and its
offshoot, the LessWrong community forum.
In Cambridge in 2012 the Centre for the Study of Existential Risk was created, with several prominent
figures on its board: Huw Price, Martin Rees and Jaan Tallinn (http://cser.org/). It got a lot of public
attention, which is good, but not much actual work has been done.
Except for MIRI, which created a working community, all the other institutions are in the shadow of the
work of their leaders, or are not doing any work at all. For example, the Lifeboat Foundation has very large
boards with thousands of people on them, but rarely consults them. But it has a good mailing list about
x-risks.
The blog Overcoming Bias and the articles of Robin Hanson were an important contribution to the
study of x-risks. Katja Grace from Australia made an important contribution to the theory of the Doomsday
argument by mathematically connecting it with the Fermi Paradox
(http://www.academia.edu/475444/Anthropic_Reasoning_in_the_Great_Filter).
The study of global risks has followed this path: awareness of the possibility of human
extinction, and of the possibility of extinction in the near future; then the realization of several different
risks; then attempts to create an exhaustive list of global risks; and then the creation of a system for their
description which takes into account all global risks and can determine the risk of any new
technology or discovery. A systematic description has greater predictive value than a simple list, as it
allows finding new points of vulnerability, just as the periodic table allows us to find new elements.
And then, the study of the limits of human thinking about global risks, as a first step in creating a
methodology capable of effectively finding and evaluating global risks.
The period from 2000 to 2008 was the golden age of x-risk research. Many seminal books and articles
were published, from Bill Joy to Bostrom and Yudkowsky, and many new ideas appeared. But
after that, the stream of new ideas almost stopped. This might be good, because every new idea
increases the total risk, and perhaps all important ideas about the topic had already been discussed; but
unfortunately nothing was done to prevent x-risks, and dangerous tendencies continued.
Risks of nuclear war are growing. No FAI theory exists. Biotech is developing very quickly,
and genetically modified viruses are becoming cheaper and cheaper. The time until the catastrophe is running
out. The next obvious step is to create a new stream of ideas: ideas on how to prevent x-risks and
when to implement them. But before doing this, we need a consensus between researchers
about the structure of the incoming risks. This can be reached via dialog, especially informal dialogs during
scientific conferences.
The lack of new ideas was somewhat compensated by the appearance of many think tanks, as well as
the publication of many popular articles about the problem. FHI, GCRI, LessWrong and the Arctic methane
group are among the new players in the field. But communication between them was not good, especially
where there was an ideological barrier, mostly about which risk is the most serious: AI or climate, war
or viruses.
Also, x-risk researchers seem to be less cooperative than, for example, anti-aging researchers,
maybe because each x-risk researcher intends to save the world and has his own understanding of
how to do it, and these theories do not add up with one another. This is my personal impression.

Current situation with global risks


The first version of this book was written in Russian in 2007 (under the name "The Structure
of the Global Catastrophe"), and since that time not much has changed for the better. The predicted risky
trends have continued, and now the risks of world war are very high. A world war is a corridor for
creating x-risk weapons and situations, as well as for disseminating values which promote x-risks. These are the values of nationalism, religious sectarianism, mysticism, fatalism, short-term
gain, risky behaviour and winning in general. The values of human life, safety, rationality and the
unity of humankind are not growing as quickly as they should.
In fact there are two groups of human values. One of them is about fighting with other
groups of people and is based on false and irrational beliefs drawn from nationalism and religion, while the
other is about the value of human life. The first of these promotes global catastrophe, while the second
counteracts it. Of course this is an oversimplification, but this simple map of values is very useful. Values
alone can't save us from catastrophe, because different people have different values, but they can change the
probability.
1. Most important: a global catastrophe has not happened. Colossal terrorist attacks, wars and
natural disasters also have not happened.
2. Key technology trends, like exponential growth in the spirit of Moore's law, have not changed. This is
especially true for biology and genetics.
3. The economic crisis of 2008 began, and I think its aftermath has not ended yet, because
quantitative easing creates a lot of money, which could spiral into inflation, and large defaults are still
possible.
4. Several new potentially pandemic viruses appeared, like swine flu and MERS.
5. New artificial viruses were created to test how mutated bird flu could wipe out humanity, and the
protocols of the experiments were published.
6. Arctic ice is collapsing and methane readings in the Arctic are high.
7. The Fukushima nuclear catastrophe showed again that unimaginable catastrophes can happen. By the
way, Martin Rees predicted it in his book about existential risks.
8. The orbital infrared telescope WISE was launched; it will be able to clarify the question of the
existence of dark comets and directly answer the question of the risk associated with them.
9. Many natural language processing AI projects have started, and maximum computer power has risen
around 1,000 times since the first edition of the book.
10. The 2012 "end of the world" craze greatly spoiled the efforts to promote a rational approach to
global risks.

11. The start of the Large Hadron Collider helped to raise questions about the risks of scientific experiments, but
the truth was lost in quarrels between opinions for and against it. I mean the works of Adrian Kent and
Anders Sandberg about small risks with large consequences.
12. In 2014 the situation in Ukraine came close to a war between Russia and the West. The peculiarity
of this situation is that it could deteriorate in small steps, and there is no natural barrier like the taboo on
using nuclear weapons in a nuclear war. It has already resulted in a new cold war, an arms race and a
nuclear race.
13. Peak oil has not happened, mostly because of shale oil and shale gas. Again intellect proved to be
more powerful than the limits of resources.


Part 2. Typology of x-risks


Chapter 4. The map of all known global risks

I like to create full exhaustive lists, and I could not stop myself from creating a list of
human extinction risks. Soon I reached around 100 items, although not all of them
are really dangerous. I decided to convert them into something like a periodic table,
i.e. to sort them by several parameters in order to help predict new risks.
For this map I chose two main variables: the basic mechanism of the risk and the
historical epoch during which it could happen. Any such map should also be based on
some kind of future model, and I chose Kurzweil's model of exponential
technological growth, which leads to the creation of super-technologies in the middle
of the 21st century. The risks are also graded according to their probability: main,
possible and hypothetical. I plan to attach to each risk a wiki page with its
explanation.
I would like to know which risks are missing from this map. If your ideas are too
dangerous to publish openly, PM me. If you think that any mention of your idea
will raise the chances of human extinction, just mention its existence without the
details.
I think that a map of x-risks is necessary for their prevention. I offered prizes for
improving the previous map, which illustrates possible prevention methods for x-risks,
and it really helped me to improve it. But I do not offer prizes for improving this map,
as it may encourage people to be too creative in thinking about new risks.

http://immortality-roadmap.com/x-risks%20map15.pdf
lesswrong discussion: http://lesswrong.com/lw/mdw/a_map_typology_of_human_extinction_risks/
In the following chapters I will go into detail about all the risks mentioned.

Block 1 Natural risks


Chapter 5. The risks connected with natural catastrophes

Universal catastrophes

Catastrophes which would change the whole Universe as such, on a scale equal to the Big Bang,
are theoretically possible. From statistical reasoning their probability is less than 1% in the
nearest billion years, as was shown by Bostrom and Tegmark. However, the validity of the
reasoning of Bostrom and Tegmark depends on the validity of their premise, namely that
intelligent life in our Universe could have arisen not only now but also several billion years
ago. This suggestion is based on the fact that the heavy elements necessary for the existence of life
appeared just a few billion years after the emergence of the Universe, long before the
formation of the Earth. Obviously, however, the degree of reliability which we can attribute
to this premise is much less than 100 billion to 1, as we do not have direct proof of it, namely
traces of early civilizations. Moreover, the apparent absence of earlier civilizations (Fermi's
paradox) lends a certain credibility to the opposite idea, namely that mankind has arisen
improbably early. Possibly, the existence of heavy elements is not the only
necessary condition for the emergence of intelligent life; there may be other conditions,
for example, that the frequency of flashes of nearby quasars and hypernovae has considerably
decreased (and the density of these objects really does decrease as the Universe expands and
the hydrogen clouds are exhausted). Bostrom and Tegmark write: "One might
think that since life here on Earth has survived for nearly 4 Gyr (Gigayears), such
catastrophic events must be extremely rare. Unfortunately, such an argument is flawed,
giving us a false sense of security. It fails to take into account the observation selection
effect that precludes any observer from observing anything other than that their own
species has survived up to the point where they make the observation. Even if the
frequency of cosmic catastrophes were very high, we should still expect to find ourselves
on a planet that had not yet been destroyed. The fact that we are still alive does not even
seem to rule out the hypothesis that the average cosmic neighborhood is typically sterilized
by vacuum decay, say, every 10000 years, and that our own planet has just been
extremely lucky up until now. If this hypothesis were true, future prospects would be
bleak."
And though Bostrom and Tegmark go on to reject the assumption of a high frequency of
"sterilizing catastrophes", based on the late time of the Earth's existence, we cannot
accept their conclusion, because, as we said above, the premise on which it is based is
unreliable. This does not mean, however, the inevitability of imminent extinction as a result of a
universal catastrophe. Our only source of knowledge about possible universal
catastrophes is theoretical physics, since, by definition, such a catastrophe never happened
during the life of the Universe (except for the Big Bang). Theoretical physics generates a large
number of untested hypotheses, and in the case of universal catastrophes they may be
essentially untestable. We also note that, proceeding from today's understanding,
we cannot prevent a universal catastrophe, nor protect ourselves from it (though we can provoke
it - see the section about dangerous physical experiments). Let us now list the
universal catastrophes considered possible from the point of view of some theorists:
possible - from the point of view of some theorists - universal catastrophes:
1. Disintegration of false vacuum. We already discussed problems of false vacuum in
connection with physical experiments.
2. Collision with object in multidimensional space - brane. There are assumptions,
that our Universe is only object in the multidimensional space, named brane (from a word
"membrane"). The Big Bang is a result of collision of our brane with another brane. If there
will be one more collision it will destroy at once all our world.
3. The Big Rupture. Recently open dark energy results, as it is considered, to more
and more accelerated expansion of the Universe. If speed of expansion grows, in one
moment it will break off Solar system. But it will be ten billions years after modern times, as
assumes theories. (Phantom Energy and Cosmic Doomsday. Robert R. Caldwell, Marc
Kamionkowski, Nevin N. Weinberg. http://xxx.itep.ru/abs/astro-ph/0302506)
4. Transition of residual dark energy in a matter. Recently the assumption has been
come out, that this dark energy can suddenly pass in a usual matter as it already was in
time of the Big Bang.
5. Other classic scenario of the death of the universe are heat-related deaths rise
in entropy and alignment temperature in the universe and the compression of the Universe
through gravitational forces. But they again away from us in the tens of billions of years.
6. One can assume the existence of certain physical process that makes the
Universe unfit for habitation after a certain time (as it was unfit for habitation because of
intense radiation of nuclei of galaxies - quasars - billions of early years of its existence). For
example, such a process can be evaporation of primordial black holes through Hawking
radiation. If so, we exist in a narrow interval of time when the universe is inhabitable - just
as Earth is located in the narrow space of habitable zone around the Sun, and Sun - in a
narrow field of the galaxy, where the frequency of its rotation synchronized with the rotation
of the branches of the galaxy, making it does not fall within those branches and is not
subjected to a supernova.
8. If our world has to some extent arisen from anything by absolutely unknown to us
way, what prevents it to disappear suddenly also?

Geological catastrophes
Geological catastrophes kill millions of times more people than falling asteroids;
however, proceeding from modern understanding, they are limited in scale.
Nevertheless, the global risks connected with processes inside the Earth surpass the space risks.
Possibly, there are mechanisms of release of energy and poisonous gases from the
bowels of the Earth which we simply have not encountered owing to the effect of observational selection.

Eruptions of supervolcanoes
The probability of the eruption of a supervolcano of comparable intensity is much greater than the
probability of an asteroid impact. However, modern science can neither prevent nor even
predict this event. (In the future it may become possible to gradually release pressure from
magma chambers, but this is in itself dangerous, as it would require drilling through their roofs.) The
main damaging force of a supereruption is volcanic winter. It is shorter than nuclear winter, as
volcanic ash particles are heavier than soot, but there can be much more of them. In this case the
volcanic winter can lead to a new stable state: a new glacial age.
A large eruption is accompanied by the emission of poisonous gases, including sulphur compounds. In
a very bad scenario this can give a considerable poisoning of the atmosphere. This poisoning would not
only make it of little use for breathing, but would also result in universal acid rains which would
burn vegetation and destroy crops. Big emissions of carbon dioxide and
hydrogen are also possible.
Finally, volcanic dust is dangerous to breathe, as it clogs the lungs. People can easily
provide themselves with gas masks and gauze bandages, but it is not clear that these would
suffice for cattle and pets. Besides, volcanic dust simply covers huge
surfaces with a thick layer, and pyroclastic flows can extend over considerable distances. Finally,
explosions of supervolcanoes generate tsunamis.
All this means that people will most likely survive a supervolcano eruption, but it will with
considerable probability send mankind to one of the postapocalyptic stages. Mankind once
appeared on the verge of extinction because of the volcanic winter caused by the
eruption of the volcano Toba 74,000 years ago. However, modern technologies of food storage
and bunker construction should allow a considerable group of people to survive a volcanic
winter of such scale.
In antiquity there were enormous effusive eruptions of volcanoes which flooded
millions of square kilometres with molten lava: in India on the Deccan plateau in the days of
the extinction of the dinosaurs (possibly provoked by the fall of an asteroid on the
opposite side of the Earth, in Mexico), and also on the East Siberian platform. There is a doubtful
assumption that the strengthening of the processes of hydrogen degassing on the Russian
plain is a harbinger of the appearance of a new magmatic centre. There is also a doubtful
assumption about the possibility of a catastrophic splitting of the Earth's crust along the lines of oceanic
rifts and powerful explosions of water steam under the crust.


An interesting question is whether the overall inner heat of the Earth grows
through the disintegration of radioactive elements, or, on the contrary, decreases due to radiative
cooling. If it increases, volcanic activity should increase over hundreds of millions of
years. (Asimov writes in the book "A Choice of Catastrophes", about the glacial ages: "From
volcanic ashes in ocean sediments it is possible to conclude that volcanic activity in the
last 2 million years was approximately four times more intensive than in the previous 18
million years.")

Falling of asteroids
The fall of asteroids and comets is often considered as one of the possible causes of the extinction
of mankind. And though such collisions are quite possible, the chances of total extinction as a result
of them are often exaggerated. Experts think an asteroid would need to be about 37 miles (60 km)
in diameter to wipe out all complex life on Earth. However, the frequency of asteroids of such size
hitting the Earth is extremely low, approximately once every billion years. In comparison, the
asteroid that wiped out the dinosaurs was about 6 mi (10 km) in diameter, which is a volume about
200 times less than a potential life-killer.
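The factor of about 200 follows from cubing the ratio of the diameters:

    d_life_killer_km = 60
    d_dinosaur_killer_km = 10
    volume_ratio = (d_life_killer_km / d_dinosaur_killer_km) ** 3
    print(volume_ratio)  # 216, i.e. roughly 200 times more volume (and mass, at equal density)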
The asteroid Apophis has approximately a 3 in a million chance of impacting the Earth in
2068, but, being only about 1,066 ft (325 m) across, it is not a threat to the future of life. In a worst-case
scenario, it could impact in the Pacific Ocean and produce a tsunami which kills several hundred
thousand people.
2.2 million years ago a comet 0.5-2 km in diameter fell between South America and
Antarctica (the Eltanin impact, http://de.wikipedia.org/wiki/Eltanin_(Asteroid)). The wave, 1 km
in height, threw whales onto the Andes. In the vicinity of the Earth there are no asteroids of sizes
which could destroy all people and the whole biosphere. However, comets of such size can come from the Oort
cloud. In the article by Napier et al., "Comets with low reflecting ability and the risk of space collisions", it is
shown that the number of dangerous comets may be essentially underestimated, as the observed
quantity of comets is 1,000 times less than expected; this is connected with the fact that comets,
after several passes around the Sun, become covered by a dark crust, cease to reflect light and become
imperceptible. Such dark comets are invisible by modern means. Besides, the ejection of comets from the
Oort cloud depends on the tidal forces exerted by the Galaxy on the Solar system. These tidal forces
increase when the Sun passes through denser areas of the Galaxy, namely through the spiral
arms and the galactic plane. And just now we are passing through the galactic plane, which means that during the
present epoch comet bombardment is 10 times stronger than on average over the history of the
Earth. Napier connects the previous epochs of intensive comet bombardment with the mass extinctions 65
and 251 million years ago.
The main damaging factor of an asteroid impact would be not only the tsunami wave, but also
"asteroid winter", connected with the ejection of dust particles into the atmosphere. The fall of a
large asteroid can cause deformations in the Earth's crust which will lead to eruptions of
volcanoes. Besides, a large asteroid will cause a worldwide earthquake, dangerous first
of all for a technogenic civilization.
The scenario of intensive bombardment of the Earth by a set of fragments is more
dangerous. In that case the strikes will be distributed more evenly and will require a smaller
quantity of material. These fragments could result from the disintegration of some space body
(see further about the threat of an explosion of Callisto), from the splitting of a comet into a stream of fragments
(the Tunguska meteorite was probably a fragment of comet Encke), from an asteroid hitting
the Moon, or as a secondary damaging factor from the collision of the Earth with a large space
body. Many comets already consist of groups of fragments, and can also break up in the
atmosphere into thousands of pieces. This can also occur as a result of an unsuccessful attempt to deflect
an asteroid by means of nuclear weapons.
The fall of asteroids can provoke the eruption of supervolcanoes if the asteroid hits a
thin part of the Earth's crust or the cover of the magma chamber of a volcano, or if the shock from the
impact disturbs remote volcanoes. The molten iron formed by the fall of an iron asteroid
could play the role of a "Stevenson probe" - if such a thing is possible at all - that is, melt through the Earth's crust
and mantle, forming a channel into the Earth's interior, which is fraught with enormous
volcanic activity. Though usually this did not occur when asteroids fell to the Earth, the lunar
"seas" could have arisen in this way. Besides, outpourings of magmatic rocks could hide the craters left by
such asteroids. Such outpourings are the Siberian trap basalts and the Deccan plateau in India.
The latter is simultaneous with two large impacts (Chicxulub and the Shiva crater). It is possible to
assume that shock waves from these impacts, or a third space body whose crater
has not survived, provoked this eruption. It is not surprising that several large
impacts occur simultaneously. For example, comet nuclei can consist of several
separate fragments: comet Shoemaker-Levy, which ran into Jupiter in 1994,
left a dotted trace on it, because it had already broken up into fragments by the moment of collision.
Besides, there can be periods of intensive comet formation, when the Solar system
passes near another star, or as a result of collisions of asteroids in the asteroid belt.
Much more dangerous are air explosions of meteorites some tens of metres in diameter,
which can cause false alarms in early warning systems for nuclear attack, or hits of
such meteorites on areas where missiles are based.

Pustynsky in his article comes to the following conclusions: "According to the estimates made in
the present article, the prediction of a collision with an asteroid is not guaranteed at present and is a matter of chance. It
is impossible to exclude that a collision will occur completely unexpectedly. For collision
prevention it is necessary to have a lead time on the order of 10 years. The detection of an asteroid some months prior
to collision would allow the evacuation of the population and of nuclear-dangerous plants in the impact zone.
Collision with asteroids of small size (up to 1 km in diameter) will not lead to planet-wide
consequences (excluding, of course, the practically improbable direct hit on an area of concentration of
nuclear materials). Collision with larger asteroids (approximately from 1 to 10 km in diameter,
depending on the speed of collision) is accompanied by a most powerful explosion, the full destruction of
the fallen body and the emission into the atmosphere of up to several thousand cubic kilometres of rock. In its
consequences this phenomenon is comparable with the largest catastrophes of terrestrial origin,
such as explosive volcanic eruptions. Destruction in the impact zone will be total, and the planet's
climate will change sharply and settle down only after some years (but not decades or
centuries!). The exaggeration of the threat of global catastrophe is shown by the fact that during the
history of the Earth it has survived a multitude of collisions with similar asteroids, and this has not
provably left an appreciable trace in its biosphere (in any case, far from always). Only collision with larger
space bodies (diameter more than ~15-20 km) could make a more appreciable impact on the biosphere of the planet.
Such collisions occur less often than once in 100 million years, and we do not yet have
techniques allowing us even approximately to calculate their consequences."
So, the probability of the destruction of mankind as a result of an asteroid impact in the XXI century
is very small.

Asteroid threats in the context of technological development


It is easy to notice that the direct risks of collision with an asteroid decrease with
technological development. First of all, they decrease due to more accurate
measurement of their probability, that is, due to more and more accurate detection of
dangerous asteroids and measurement of their orbits. (If, however, the assumption
that we live during an episode of cometary bombardment is confirmed, the risk assessment will increase
100 times against the background level.) Second, they decrease due to the growth of our
ability to deflect asteroids.
On the other hand, the effects of asteroid strikes are becoming larger, not only
because the density of population is growing, but because of the growing
connectivity of the global system, as a result of which damage in one spot can backfire on the entire
planet.
In other words, although the probability of collision is decreasing, the indirect risks related to
the asteroid danger are increasing.
The main indirect risks are as follows:
A) The destruction of hazardous industries at the impact site, for example a nuclear power plant. The whole mass of the station in such a case would be evaporated, and the release of radioactivity would be higher than in Chernobyl. In addition, additional nuclear reactions might occur because of the strong compression of the reactor as the asteroid destroys it. The chances of a direct asteroid hit on a nuclear plant are small, but they grow with the number of plants.
B) There is a risk that even a small group of meteors, moving at a certain angle toward a certain place on the Earth's surface, could trigger an early-warning system and cause an accidental nuclear war. An air burst of a small asteroid (a few meters in size) would have the same consequences. The first scenario is more likely for the superpowers, whose missile-attack warning systems have gaps (for example, the Russian Federation cannot track the full trajectory of every missile), while the second is more likely for regional nuclear powers (India and Pakistan, North Korea, etc.), which cannot track missiles but are able to react to a single explosion.
C) Technology for moving asteroids will in the future create the hypothetical possibility of directing asteroids not only away from the Earth but also toward it. Even if an asteroid impact is accidental, there will be rumors that it was sent on purpose. Yet hardly anyone will actually direct asteroids at the Earth, since such an action is easy to detect, the accuracy is low, and it would have to be started decades before the impact.
D) Safe deflection of asteroids will require the creation of space weapons, which may be nuclear, laser or kinetic. Such weapons could be used against the Earth or against the satellites of a potential enemy. Although the risk of their use against the Earth is small, they still create a greater potential for damage than falling asteroids.
E) Destroying an asteroid with a nuclear explosion would increase its lethal force through its fragments, that is, a larger number of explosions over a larger area, as well as through radioactive contamination of the debris.
Modern technical means can deflect only relatively small asteroids, which do not pose a global threat. The real danger is dark cometary bodies several kilometers across, moving along elongated elliptical orbits at great speed.
However, in the future (perhaps as soon as 2030-2050), space could be quickly and cheaply scanned (and transformed) by self-replicating robots based on nanotechnology. They could build huge telescopes in space able to detect all dangerous bodies in the solar system. It would be enough to land a microrobot on an asteroid: it would multiply there, take the asteroid apart and build an engine that changes its orbit. Nanotechnology would also help create self-sustaining human settlements on the Moon and other celestial bodies. This suggests that the problem of the asteroid hazard may become irrelevant within a few decades.
Thus, the problem of preventing a collision of the Earth with asteroids in the coming decades may only divert resources from other global risks.


First, because we still cannot deflect the objects that could really lead to the complete extinction of mankind.
Second, because by the time a system for nuclear destruction of asteroids is created (or shortly thereafter), it will become obsolete: nanotechnology could allow quick and cheap exploration of the solar system by the middle of the 21st century, and perhaps earlier.
Third, because as long as the Earth is divided into warring states, an asteroid deflection system would itself become a weapon in the event of war.
Fourth, because the probability of human extinction from an asteroid impact in the narrow period of time when an asteroid deflection system is already deployed but powerful nanotechnology has not yet been created is extremely small. This interval may be about 20 years, say from 2030 to 2050, and the chance of a 10-kilometer body falling during this time, even if we assume that we live in a period of cometary bombardment with intensity 100 times higher than average, is about 1 in 15,000 (based on an average rate of one such impact per 30 million years; a sketch of this estimate follows after the sixth point below). Moreover, if we consider the dynamics, only by the end of this period will we be able to deflect the really dangerous objects, and perhaps even later, since the larger the asteroid, the more large-scale and long-term the deflection project required. Although 1 in 15,000 is still an unacceptably high risk, it is commensurate with the risk of the use of space-based weapons against the Earth.
Fifth, anti-asteroid protection diverts attention from other global problems, because human attention (and even mass-media attention) and financial resources are limited. This is because the asteroid danger is very easy to understand: it is easy to imagine the impact, easy to calculate its probability, and it is understandable to the general public. There is no doubt about its reality, and it is clear how to protect ourselves against it. (For example, the probability of a volcanic catastrophe comparable to an asteroid impact of the same energy is, by various estimates, 5 to 20 times higher, but we have no idea how it could be prevented.) This distinguishes it from other risks that are difficult to imagine and cannot be quantified, but which may carry a probability of extinction of tens of percent: the risks of AI, biotech, nanotech and nuclear weapons.
Sixth, if we are talking about relatively small bodies like Apophis, it may be cheaper to evacuate the area of the future impact than to deflect the asteroid. And since the impact area would most likely be ocean, anti-tsunami measures would be required.
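The 1-in-15,000 figure from the fourth point above can be checked with a few lines of Python. This is a minimal sketch using only the text's own assumptions (a 30-million-year mean interval between 10-km impacts, a hundredfold intensification during a cometary bombardment, a 20-year window); none of these numbers are measured values.

```python
# Rough check of the "1 in 15,000" estimate from the text (all inputs are
# the author's assumptions, not measured values).
mean_interval_years = 30_000_000   # assumed average time between 10-km impacts
bombardment_factor = 100           # assumed intensification during cometary bombardment
window_years = 20                  # vulnerable window, e.g. 2030-2050

rate_per_year = bombardment_factor / mean_interval_years
p_impact = rate_per_year * window_years   # small-probability approximation
print(f"P(10-km impact in {window_years} yr) ~ {p_impact:.6f} ~ 1 in {1 / p_impact:,.0f}")
# -> about 0.000067, i.e. roughly 1 in 15,000
```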
Still, I do not call for abandoning anti-asteroid protection, because our first need is to find out whether we are living in a period of cometary bombardment. In that case, the probability of a 1-km body impact within the next 100 years could be on the order of 6 percent. (This is based on hypothetical impacts in the last 10,000 years, such as the so-called Clovis comet, http://en.wikipedia.org/wiki/Younger_Dryas_impact_event, whose traces may be the roughly 500,000 crater-like formations called Carolina bays, http://en.wikipedia.org/wiki/Carolina_bays, and the large crater near New Zealand attributed to 1443, http://en.wikipedia.org/wiki/Mahuika_crater, etc.) First of all, effort should go into monitoring dark comets and analyzing fresh craters.
The ideas that we live during a cometary bombardment episode are presented by a group of scientists, the Holocene Impact Working Group: https://en.wikipedia.org/wiki/Holocene_Impact_Working_Group
Napier also wrote that we could be strongly underestimating the number of dark comets: http://lib.convdocs.org/docs/index-124751.html?page=30 The number of such comets we have found is 100 times less than expected.
If we improve our observation techniques, we could show that the probability of an extinction-level impact in the 21st century is negligible. If we prove this, there will be no need for large-scale space defense systems, which could be dangerous in themselves. A smaller-scale system may be useful to stop regional-effect impactors like the Chelyabinsk asteroid.
We have already demonstrated a reduced probability of impact by creating a large catalog of near-Earth objects.
A very important instrument here is the infrared telescope WISE. If its observations are correct, it has now found 90 percent of planet-killing asteroids, which amounts to a 90 percent reduction of their risk in the near-term perspective.
In 2005 Congress gave the agency until 2020 to catalogue 90 percent of the total population of midsized NEOs at or above 140 meters in diameter, objects big enough to devastate entire regions on Earth. NASA has already catalogued 90 percent of the NEOs that could cause planetary-scale catastrophe (those with a diameter of one kilometer or more), but is unlikely to meet the 2020 deadline for cataloguing midsize NEOs.
But in 2016 some questions arose about the validity of these data: http://www.scientificamerican.com/article/for-asteroid-hunting-astronomersnathan-myhrvold-says-the-sky-is-falling1/
If the infrared observations are correct, they greatly reduce the chances of a large family of dark comets (though 99 percent of them spend most of the time beyond Mars, which makes them difficult to find).
In a 2015 article Napier discusses the dangers of centaurs, comets which sometimes come from the outer Solar System, disintegrate and produce a period of bombardment: https://www.ras.org.uk/images/stories/press/Centaurs/Napier.Centaurs.revSB.pdf
Comet Encke could be a fragment of a larger disintegrated comet. Napier writes: "Analysis of the ages of the lunar microcraters (zap pits) on rocks returned in the Apollo programme indicate that the near-Earth interplanetary dust (IPD) flux has been enhanced by a factor of about ten over the past ~10 kyr compared to the long-term average. The effects of running through the debris trail of a large comet are liable to be complex, and to involve both the deposition of fine dust into the mesosphere and, potentially, the arrival of hundreds or thousands of megaton-level bolides over the space of a few hours. Incoming meteoroids and bolides may be converted to micron-sized smoke particles (Klekociuk et al. 2005), which have high scattering efficiencies and so the potential to yield a large optical depth from a small mass. Modelling of the climatic effects of dust and smoke loading of the atmosphere has focused on the injection of such particulates in a nuclear war. Such work has implications for atmospheric dusting events of cosmic origin, although there are significant differences, of course. Hoyle & Wickramasinghe (1978) considered that the acquisition of ~10^14 g of comet dust in the upper atmosphere would have a substantial effect on the Earth's climate. Such an encounter is a reasonably probable event during the active lifetime of a large, disintegrating comet in an Encke-like orbit (Napier 2010). Apart from their effects on atmospheric opacity, a swarm of Tunguska-level fireballs could yield wildfires over an area of order 1% of the Earth's surface."

Zone of destruction depending on the force of explosion


Here we consider the destructive action of an explosion resulting from an asteroid impact (or from any other cause). A detailed analysis with similar conclusions can be found in Pustynsky's article.
The destruction zone grows only slowly with the force of the explosion, and this is true both for asteroids and for super-powerful nuclear bombs. Though the energy of the impact falls proportionally to the square of the distance from the epicenter, in a huge explosion it falls much faster: first, because of the curvature of the Earth, which shields whatever is beyond the horizon (this is why nuclear explosions are most effective in the air rather than on the ground), and second, because the ability of matter to transfer a shock wave elastically is limited from above by a certain threshold, so all energy above it is not transferred but turns into heat near the epicenter. For example, in the ocean there cannot be a wave higher than its depth, and since the explosion epicenter is a point (unlike the epicenter of an ordinary tsunami, which is a rupture line), the wave height will then decrease linearly with distance. The surplus heat formed in the explosion is either radiated into space or remains as a lake of molten rock at the epicenter. The Sun delivers to the Earth light energy of the order of 1000 gigatons (10^22 joules) per day, so the thermal contribution of a super-explosion to the overall temperature of the Earth is insignificant. (On the other hand, the mechanism for spreading the heat of the explosion would not be streams of heated air but the cubic kilometers of fragments thrown out by the explosion, with a total mass comparable to the mass of the asteroid but lower energy; many of them would have speeds close to orbital (first cosmic) velocity and would therefore fly on ballistic trajectories, as intercontinental missiles do. Within an hour they would reach all corners of the Earth, and though, acting as kinetic weapons, they would not strike every point of the surface, on re-entry they would release huge quantities of energy, heating the atmosphere over the whole area of the Earth, possibly to the ignition temperature of wood, which would aggravate the situation further.)
We can roughly assume that the destruction zone grows in proportion to the fourth root of the force of the explosion (the exact exponents have been determined empirically by the military from test data and lie between 0.33 and 0.25, depending on the yield, etc.). Each ton of meteorite mass yields approximately 100 tons of TNT equivalent of energy, depending on the collision speed, which is usually several tens of kilometers per second. (On this basis a stone asteroid of 1 cubic km would release about 300 gigatons. The density of comets is much lower, but they can break up in the air, strengthening the strike, and besides, they move on orbits perpendicular to ours at much higher speeds.) Taking the radius of complete destruction from a 1-megaton hydrogen bomb as 10 km, we can obtain destruction radii for asteroids of different sizes, assuming that the destruction radius scales as the fourth root of the explosion force. For example, for an asteroid of 1 cubic km the radius would be about 230 km. For an asteroid 10 km in diameter it would be about 1300 km. For a 100-km asteroid it would be a destruction radius of the order of 7000 km. For this radius of guaranteed destruction to exceed half of the circumference of the Earth (20,000 km), that is, to cover the whole Earth with certainty, the asteroid would have to be about 400 km in size. (If we assume instead that the destruction radius grows as the cube root, the diameter of an asteroid destroying everything would be about 30 km. The real value lies between these two figures (30-400 km); Pustynsky gives an independent estimate of 60 km.)
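The figures above can be reproduced with a short script. This is only a sketch of the fourth-root scaling described in the text: the 10-km reference radius for a 1-megaton bomb, the 100 tons of TNT per ton of impactor and the rock density are the text's own rough assumptions, and the results come out slightly below the quoted numbers because a sphere 1 km in diameter contains only about half a cubic kilometer of rock.

```python
import math

TNT_TON_J = 4.184e9          # joules per ton of TNT (only for reference)
REF_YIELD_MT = 1.0           # reference explosion: 1-megaton hydrogen bomb
REF_RADIUS_KM = 10.0         # its assumed radius of complete destruction (from the text)
ROCK_DENSITY = 2.7e3         # kg/m^3, stony asteroid (assumed)
TNT_PER_TON = 100.0          # tons of TNT per ton of impactor, for tens of km/s (from the text)

def destruction_radius_km(diameter_km: float, exponent: float = 0.25) -> float:
    """Radius of complete destruction, scaling as yield**exponent."""
    volume_m3 = math.pi / 6 * (diameter_km * 1e3) ** 3
    mass_tons = ROCK_DENSITY * volume_m3 / 1e3
    yield_mt = mass_tons * TNT_PER_TON / 1e6          # megatons of TNT
    return REF_RADIUS_KM * (yield_mt / REF_YIELD_MT) ** exponent

for d in (1, 10, 100, 400):
    print(f"{d:4d} km asteroid -> ~{destruction_radius_km(d):,.0f} km destruction radius")
# With the cube-root law (exponent=1/3), covering the whole Earth already
# requires only a body roughly 30-40 km across, as mentioned in the text.
```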
Though these calculations are extremely approximate, they show that even the asteroid associated with the extinction of the dinosaurs did not strike the whole territory of the Earth, nor even the whole continent where it fell. And the extinction, if it was connected with the asteroid (the causes are now thought to have a complex structure), was caused not by the strike itself but by the subsequent effect: an "asteroid winter" connected with dust carried through the atmosphere. A collision with an asteroid can also cause an electromagnetic pulse, as in a nuclear bomb, due to the fast movement of plasma. Besides, it is interesting to ask whether thermonuclear reactions could occur during a collision with a comet if its speed is close to the maximum possible, about 100 km/s (a comet on a head-on course, the worst case), since at the point of impact there could be temperatures of millions of degrees and huge pressures, as in the implosion of a nuclear bomb. Even if the contribution of these reactions to the energy of the explosion were small, they could produce radioactive contamination.
A strong explosion would also create strong chemical pollution of the entire atmosphere, at least by nitrogen oxides, which would form rains of nitric acid. And a strong explosion would fill the atmosphere with dust, creating the conditions for a nuclear winter.
From the above it follows that a nuclear superbomb would be terrible not because of the force of its explosion but because of the quantity of radioactive fallout it would produce. It is also clear that the terrestrial atmosphere acts as the most powerful factor in spreading these effects.

Solar flares and luminosity increase


What we know about the Sun gives no grounds for anxiety. The Sun cannot explode. Only processes unknown to us, or extremely improbable ones, could lead to a flare (coronal mass ejection) that would strongly scorch the Earth in the XXI century. Other stars, however, do have flares millions of times stronger than solar ones. Changes in the luminosity of the Sun do influence the Earth's climate, as is suggested by the coincidence of the Little Ice Age in the XVII century with the Maunder minimum of sunspots. Perhaps the ice ages are also connected with luminosity fluctuations.
The gradual increase in the Sun's luminosity (about 10 percent per billion years) will in any case lead to the boiling of the oceans, taking other warming factors into account, within the next billion years (that is, much earlier than the Sun becomes a red giant, let alone a white dwarf). However, compared with the 100-year interval we are investigating, this process is insignificant (unless it combines with other processes leading to irreversible global warming; see below).
There are suggestions that as hydrogen burns out in the central part of the Sun, which is already occurring, not only the Sun's luminosity will grow (luminosity grows due to growth of its size rather than of its surface temperature), but also the instability of its burning. Perhaps the last ice ages are connected with this reduction in the stability of burning. The following metaphor makes this clear: when there is a lot of firewood in a fire, it burns brightly and steadily, but when most of the firewood has burned through, it starts to die down a little and then flares up brightly again when it finds an unburnt branch.
A reduction in the concentration of hydrogen in the center of the Sun could provoke convection, which normally does not occur in the Sun's core, so that fresh hydrogen would arrive in the core. Whether such a process is possible, whether it would be smooth or catastrophic, and whether it would take years or millions of years, is difficult to say. Shklovsky assumed that as a result of such convection the Sun's temperature falls every 200 million years for a 10-million-year period, and that we live in the middle of such a period. What is dangerous is the end of this process, when fresh fuel finally reaches the core and the luminosity of the Sun increases. (However, this is a marginal theory, since one of the basic problems that generated it, the solar neutrino problem, has since been resolved.)
It is important to underline, however, that according to our physical understanding the Sun cannot explode as a supernova or a nova.
At the same time, to interrupt intelligent life on the Earth, it would be enough for the Sun to warm up by 10 percent over 100 years (which would raise the temperature on the Earth by 10-20 degrees without a greenhouse effect, but with the greenhouse effect taken into account it would most likely be above the critical threshold of irreversible warming). Such slow and rare changes in the temperature of solar-type stars would be difficult to notice by astronomical observation of sun-like stars, since the necessary accuracy of equipment has only recently been achieved. (Besides, a logical paradox of the following kind is possible: sun-like stars are, by definition, stable stars of spectral class G7; it is not surprising that by observing them we find that these stars are stable.)
So, one variant of global catastrophe is that, as a result of certain internal processes, the luminosity of the Sun steadily increases by a dangerous amount (and we know that sooner or later this will occur). At the moment the Sun is on an ascending century-scale trend of activity, but no special anomalies in its behavior have been noticed. The probability that this happens in the XXI century is vanishingly small.
The second variant of global catastrophe connected with the Sun is that two improbable events occur together: a very large flare on the Sun, and the emission of this flare being directed at the Earth. Concerning the probability distribution of such an event, it is possible to assume that the same empirical law operates here as for earthquakes and volcanoes: a 20-fold growth in the energy of an event leads to a 10-fold decrease in its probability (a Gutenberg-Richter-style law of recurrence); a sketch of this scaling follows below. In the XIX century a flare was observed that was, by modern estimates, 5 times stronger than the strongest flare of the XX century. Possibly, once in tens or hundreds of thousands of years, flares occur on the Sun that are similar in rarity and scale to terrestrial eruptions of supervolcanoes. Nevertheless, these are extremely rare events. Large solar flares, even if they are not directed at the Earth, could slightly increase the solar luminosity and lead to additional heating of the Earth. (Usual flares contribute no more than 0.1 percent.)
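A sketch of what this recurrence rule implies, taking the "20 times the energy, 10 times rarer" relation literally (the 100-year reference period and the energy ratios are illustrative assumptions, not solar statistics):

```python
import math

# Sketch of the recurrence law stated in the text: a 20-fold increase in event
# energy corresponds to a 10-fold drop in frequency (Gutenberg-Richter style).
b = math.log(10) / math.log(20)      # ~0.77, slope of the assumed power law

def recurrence_years(energy_ratio, base_period_years=100):
    """Expected waiting time for a flare `energy_ratio` times stronger than
    the strongest flare seen once per `base_period_years` (illustrative)."""
    return base_period_years * energy_ratio ** b

for k in (1, 5, 100, 10_000):
    print(f"{k:6d} x stronger -> roughly once per {recurrence_years(k):,.0f} years")
# e.g. a flare ~5x the strongest 20th-century event about once in a few hundred
# years, and a 10,000x flare only about once in ~100,000 years, consistent with
# the text's comparison to supervolcano-like rarity.
```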
At the moment mankind is incapable of affecting processes on the Sun, and this looks much more difficult than influencing volcanoes. Ideas of dumping hydrogen bombs into the Sun to initiate a thermonuclear reaction look unpersuasive (though such ideas have been expressed, which says something about the tireless search of the human mind for a Doomsday weapon).
There is a rather precisely calculated scenario of the influence on the Earth of the magnetic component of a solar flare. In the worst case (which depends on the force of the magnetic impulse and its orientation, which should be opposite to the Earth's magnetic field), such a flare would create strong induced currents in long-distance electric power transmission lines, burning out transformers in substations. Under normal conditions replacement of transformers takes 20-30 years, and if all of them burned out there would be nothing to replace them with, since manufacturing a similar quantity of transformers would take many years, which would be difficult to organize without electricity. Such a situation would hardly result in human extinction, but it is fraught with a global economic crisis and wars, which could start a chain of further deterioration. The probability of such a scenario is difficult to estimate, as we have had electric networks for only about a hundred years.

Gamma ray bursts


Gamma-ray bursts are intense short streams of gamma radiation coming from deep space. Gamma-ray bursts are apparently radiated in narrow beams, so their energy is more concentrated than in ordinary stellar explosions. Possibly, strong gamma-ray bursts from nearby sources caused several of the mass extinctions tens and hundreds of millions of years ago. It is supposed that gamma-ray bursts occur in collisions of black holes and neutron stars or in collapses of massive stars. A nearby gamma-ray burst could destroy the ozone layer and even ionize the atmosphere. However, in the nearest neighborhood of the Earth there are no visible suitable candidates either for sources of gamma-ray bursts or for supernovae. (The nearest candidate for a gamma-ray burst source, the star Eta Carinae, is far enough away, on the order of 7000 light years, and it is unlikely that the axis of its future inevitable explosion will be directed at the Earth, since gamma-ray bursts propagate as narrow jets. However, for the potential hypernova star WR 104, which lies at almost the same distance, the axis is directed almost toward the Earth. This star will explode within the next several hundred thousand years, which means the chance of a catastrophe from it in the XXI century is less than 0.1%, and taking into account the uncertainty of its rotation parameters and of our knowledge about gamma-ray bursts, it is even less.) Therefore, even taking into account the effect of observation selection, which in some cases increases the expected frequency of future catastrophes up to 10 times compared with the past (see my article "Anthropic principle and Natural catastrophes"), the probability of a dangerous gamma-ray burst in the XXI century does not exceed thousandths of a percent. Mankind could survive even a serious gamma-ray burst in various bunkers.
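The order of magnitude of the WR 104 estimate above can be reproduced as follows; the explosion window and the beaming factor are assumed round numbers, not measured parameters of the star.

```python
# Rough version of the WR 104 estimate in the text (illustrative assumptions only).
explosion_window_yr = 300_000     # "several hundred thousand years" until the star explodes
century_yr = 100

p_explodes_this_century = century_yr / explosion_window_yr
print(f"P(explosion in the 21st century) ~ {p_explodes_this_century:.4%}")   # ~0.03%, i.e. <0.1%

# The burst must also actually be beamed at the Earth; an assumed beaming
# probability of a few percent pushes the overall chance down to thousandths
# of a percent, as stated in the text.
p_beamed = 0.05                   # assumed order-of-magnitude beaming factor
print(f"P(dangerous burst from WR 104) ~ {p_explodes_this_century * p_beamed:.5%}")
```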
Estimating the risk of gamma-ray bursts, Boris Stern writes: "We take a moderate case of energy release of 10^52 erg and a distance to the burst of 3 parsecs, 10 light years, or 10^19 cm; within such limits there are tens of stars. At such a distance, within a few seconds, 10^13 erg will be deposited on each square centimeter of the planet that lies in the path of the gamma rays. This is equivalent to the explosion of a nuclear bomb on each hectare! The atmosphere does not help: though the energy will be deposited in its top layers, a considerable part will instantly reach the surface in the form of light. Clearly, all life on half of the planet will be instantly exterminated, on the second half a little later through secondary effects. Even if we take a distance 100 times greater (that is the thickness of the galactic disk and a hundred thousand stars), the effect (a nuclear bomb per square with a side of 10 km) will be a heavy blow, and here it is already necessary to estimate seriously what will survive and whether anything will survive at all." Stern believes that a gamma-ray burst in our Galaxy happens on average once per million years. A gamma-ray burst from a star like WR 104 could cause intense destruction of the ozone layer on half of the planet. Possibly, a gamma-ray burst was the cause of the Ordovician mass extinction 443 million years ago, when 60% of species of living beings were lost (and a considerably larger share by number of individuals, since for the survival of a species the preservation of only a few individuals is enough). According to John Scalo and Craig Wheeler, gamma-ray bursts have a substantial impact on the biosphere of our planet approximately every five million years.
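Stern's fluence figure is easy to reproduce. The sketch below assumes isotropic emission (real bursts are beamed, which only makes a burst pointed at us worse) and uses his stated energy and distance.

```python
import math

# Reproducing Stern's estimate: 1e52 erg released at ~10 light years (1e19 cm).
E_erg = 1e52          # assumed energy release of the burst
d_cm = 1e19           # ~3 parsec
fluence = E_erg / (4 * math.pi * d_cm**2)    # erg per cm^2, isotropic case
print(f"fluence ~ {fluence:.1e} erg/cm^2")   # ~8e12, i.e. ~1e13 as Stern states

# One hectare = 1e8 cm^2; 1 kiloton of TNT = 4.184e19 erg
kt_per_hectare = fluence * 1e8 / 4.184e19
print(f"~{kt_per_hectare:.0f} kt TNT per hectare")   # roughly a Hiroshima-scale bomb per hectare
```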
Even a distant gamma-ray burst or other high-energy cosmic event can be dangerous through radiation exposure of the Earth, and not only through direct radiation, which the atmosphere largely blocks (though avalanches of high-energy particles from cosmic rays do reach the terrestrial surface), but also through the formation of radioactive atoms in the atmosphere, which would lead to a scenario similar to the one described in connection with the cobalt bomb. Besides, gamma radiation causes oxidation of atmospheric nitrogen, creating the opaque poisonous gas nitrogen dioxide, which forms in the upper atmosphere and can block sunlight and cause a new Ice Age. There is a hypothesis that the neutrino radiation arising in supernova explosions can in some cases lead to mass extinction, since neutrinos scatter elastically off heavy atoms with higher probability, and the energy of this scattering is sufficient to break chemical bonds, so neutrinos would cause DNA damage more often than other kinds of radiation of much greater energy. (J.I. Collar. Biological Effects of Stellar Collapse Neutrinos. Phys.Rev.Lett. 76 (1996) 999-1002 http://arxiv.org/abs/astro-ph/9505028)
The danger of a gamma-ray burst is in its suddenness: it begins without warning from invisible sources and propagates at the speed of light. In any case, a gamma-ray burst can strike only one hemisphere of the Earth, as it lasts only a few seconds or minutes.
Activation of the core of the Galaxy (where there is a huge black hole) is also a very improbable event. In distant young galaxies such cores actively absorb matter, which spirals in through an accretion disk and radiates intensely. This radiation is very powerful and can also interfere with the emergence of life on planets. However, the core of our Galaxy is very large and can therefore swallow stars almost at once, without tearing them apart, and hence with less radiation. Besides, it is quite observable in the infrared (the source Sagittarius A*), though it is hidden by a thick dust layer in the optical range, and near the black hole there is no considerable quantity of matter ready for absorption, only one star in an orbit with a period of 5 years, and even that can keep orbiting for a very long time. And the main thing is that it is very far from the Solar system.
Besides distant gamma-ray bursts, there are soft gamma-ray bursts connected with catastrophic processes on special neutron stars, magnetars. On August 27, 1998, a flare on a magnetar led to an instant decrease in the height of the Earth's ionosphere by 30 km; however, this magnetar was at a distance of 20,000 light years. No magnetars are known in the vicinity of the Earth, but finding them may not be simple.

Supernova stars
A real danger to the Earth would be a close supernova explosion at a distance of 25 light years or less. But in the vicinity of the Sun there are no stars that could become a dangerous supernova. (The nearest candidates, Mira and Betelgeuse, are at distances of hundreds of light years.) Besides, the radiation of a supernova is a rather slow process (it lasts months), and people could have time to hide in bunkers. Finally, only if the dangerous supernova were strictly in the equatorial plane of the Earth (which is improbable) could it irradiate the entire terrestrial surface; otherwise one of the poles would escape. See Michael Richmond's review "Will a Nearby Supernova Endanger Life on Earth?" http://www.tass-survey.org/richmond/answers/snrisks.txt A rather close supernova could also be a source of cosmic rays, which would lead to a sharp increase in cloud cover on the Earth, connected with the increase in the number of condensation nuclei for water. This could lead to a sharp cooling of the climate for a long period. (Nearby Supernova May Have Caused Mini-Extinction, Scientists Say http://www.sciencedaily.com/releases/1999/08/990803073658.htm)

Super-tsunami
Ancient human memory keeps an enormous flood as the most terrible catastrophe. However, there is not enough water on the Earth for the ocean level to rise above the mountains. (Reports about the recent discovery of underground oceans are somewhat exaggerated; in fact it is a question only of rocks with raised water content, at a level of 1 percent.) The average depth of the world ocean is about 4 km, and the limiting maximum height of a wave is of the same order, if we discuss the possibility of the wave itself rather than whether causes capable of creating a wave of such height are possible. That is less than the height of the high-mountain plateaus in the Himalayas, where people also live. Variants in which such a wave would be possible include a huge tidal wave arising if a very massive body flew close to the Earth, or if the axis of rotation of the Earth were displaced or its speed of rotation changed. All these variants, though they appear in various "horror stories" about doomsday, look impossible or improbable.
So, it is very improbable that a huge tsunami would destroy all people, since submarines, many ships and planes would escape. However, a huge tsunami could destroy a considerable part of the population of the Earth, sending mankind into a post-apocalyptic stage, for several reasons:
1. The energy of a tsunami, as a surface wave, decreases proportionally to 1/R if the tsunami is caused by a point source, and almost does not decrease if the source is linear (as in an earthquake along a fault); see the toy sketch after this list.
2. Losses to the transmission of energy in the wave are small.
3. A considerable share of the population of the Earth, and a huge share of its scientific, industrial and agricultural potential, is directly at the coast.
4. All oceans and seas are connected.
5. The idea of using a tsunami as a weapon already arose in the USSR in connection with the idea of creating gigaton bombs.
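A toy illustration of the difference stated in item 1 of the list above; this is purely schematic bookkeeping of the stated scaling, not a model of real tsunami propagation.

```python
# Toy illustration of item 1 above: how tsunami energy per unit of wave front
# falls off with distance R from the source (schematic only, not a wave model).
def point_source(R_km, E0=1.0):
    return E0 / R_km        # energy spreads around a growing circle -> ~1/R

def line_source(R_km, E0=1.0):
    return E0               # a long fault radiates a nearly plane wave -> ~constant

for R in (10, 100, 1000):
    print(f"R = {R:4d} km   point source: {point_source(R):.3f}   line source: {line_source(R):.3f}")
# A point source (bomb, asteroid, landslide) weakens ~100x between 10 and 1000 km;
# a line source (fault rupture) arrives almost undiminished.
```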
The good news here is that the most dangerous tsunamis are generated by linear natural sources, movements of geological faults, while the sources of tsunami most accessible to artificial generation are point sources: explosions of bombs, falls of asteroids, collapses of mountains.

Super-Earthquake
We could call a super-earthquake a hypothetical large-scale quake leading to the complete destruction of human-built structures over the entire surface of the Earth. No such quakes have happened in human history, and the only scientifically solid scenario for them seems to be a large asteroid impact.
Such an event could not result in human extinction by itself, as there would still be ships, planes, and people in the countryside. But it would unequivocally destroy the whole technological civilization. To do so it would need an intensity of around 10 on the Mercalli scale: http://earthquake.usgs.gov/learn/topics/mercalli.php
It would be interesting to assess the probability of worldwide earthquakes that could destroy everything on the Earth's surface. Plate tectonics as we know it cannot produce them. But the distribution of the largest earthquakes could have a long and heavy tail, which might include worldwide quakes.
So, how could it happen?
1) An asteroid impact could surely result in a worldwide earthquake. I think that a 1-mile asteroid is enough to create a worldwide earthquake.
2) A change in the buoyancy of a large land mass might result in a whole continent uplifting, perhaps by miles. (This is just my conjecture, not a proven scientific fact, so this possibility needs further assessment.) A smaller-scale event of this type happened in 1957 during the Gobi-Altay earthquake, when a whole mountain ridge moved: https://en.wikipedia.org/wiki/1957_Mongolia_earthquake
3) Unknown processes in the mantle sometimes result in large deep earthquakes: https://en.wikipedia.org/wiki/2013_Okhotsk_Sea_earthquake
4) Very hypothetical changes in the Earth's core might also result in worldwide earthquakes, if the core somehow collapsed because of a change in the crystal structure of its iron, or because of the explosion of a (hypothetical) natural uranium nuclear reactor inside it. Passing through clouds of dark matter might activate the Earth's core, as it could be heated by the annihilation of dark matter particles, as was suggested in one recent study: http://www.sciencemag.org/news/2015/02/did-dark-matter-killdinosaurs Such warming of the Earth's core would result in its expansion and might trigger large deep quakes.
5) A superbomb explosion. Blockbuster bombs in WW2 were designed to create mini-quakes as their main killing effect, and they exploded after penetrating the ground. Large nukes could be used the same way, but a super-earthquake requires energy that exceeds the current power of nukes by several orders of magnitude. Many superbombs might be needed to create a superquake.
6) The Earth cracking in the area of oceanic rifts. I have read suggestions that oceanic rifts expand not gradually but in large jumps. These mid-oceanic rifts create new oceanic floor: https://en.wikipedia.org/wiki/Mid-Atlantic_Ridge The evidence for this is large steps in the ocean floor in the zone of oceanic rifts. Boiling of water trapped in a rift and in contact with magma might also contribute to an explosive, zip-style rupture of the rift. But this idea may come from fringe catastrophism, so it should be taken with caution.
7) Supervolcano explosions. Large-scale eruptions like kimberlite pipe explosions would also produce an earthquake felt over the whole Earth, though not uniformly. They would have to be much stronger than the Krakatoa explosion of 1883. Large explosions of natural explosives like TNT (https://en.wikipedia.org/wiki/Trinitrotoluene) at a depth of about 100 km have been suggested as a possible mechanism of kimberlite explosions.
Superquake effects:
1. A superquake would surely come with a megatsunami, which would cause most of the damage. Such a supertsunami may be miles high in some areas and scenarios. The tsunamis could have different etiologies; for example, resonance may play a role, or a change in the speed of rotation of the Earth.
2. Ground liquefaction (https://en.wikipedia.org/wiki/Soil_liquefaction) might produce ground waves, that is, a kind of surface wave on certain kinds of soil (this is my own idea, which needs more research).
3. Supersonic impact waves and high-frequency vibration. A superquake could come with unusual wave patterns, which normally dissipate in soil or do not appear at all. It could be a killing sound of more than 160 dB, or supersonic shock waves which reflect from the surface but destroy solid structures by spalling, the same way anti-tank munitions do: https://en.wikipedia.org/wiki/High-explosive_squash_head
4. Other volcanic events and gas releases. Methane deposits in the Arctic would be destabilized, and methane, a strong greenhouse gas, would erupt to the surface. Carbon dioxide would be released from the oceans as a result of shaking (the same way that shaking a soda can produces bubbles). Other gases, including sulfur compounds and CO2, would be released by volcanoes.
5. Most dams would fail, resulting in flooding.
6. Nuclear facilities would melt down. See Seth Baum's discussion: http://futureoflife.org/2016/07/25/earthquake-existentialrisk/#comment-4143
7. Biological weapons would be released from facilities.
8. Nuclear warning systems would be triggered.
9. All roads and buildings would be destroyed.
10. Large fires would break out.
11. As the natural ability of the Earth to dissipate seismic waves would be saturated, the waves would reflect inside the Earth several times, resulting in a very long and repeating quake.
12. The waves (from a surface-located event) would focus on the opposite side of the Earth, as may have happened after the Chicxulub asteroid impact, which coincides with the Deccan traps on the opposite side of the Earth and caused comparable destruction there.
13. A large displacement of mass might result in a small change in the speed of rotation of the Earth, which would contribute to the tsunamis.
14. Secondary quakes would follow, as energy would be released from tectonic tensions and mountain collapses.
Large, non-global earthquakes could also become precursors of global catastrophes in several ways. The following podcast by Seth Baum is devoted to this possibility: http://futureoflife.org/2016/07/25/earthquake-existentialrisk/#comment-4147
1) Destruction of biological facilities like the CDC, which holds smallpox samples and other viruses.
2) Nuclear meltdowns.
3) Economic crisis or slowing of technological progress in case of a large earthquake in San Francisco or another important area.
4) Start of a nuclear war.
5) X-risk prevention groups are disproportionately concentrated in San Francisco and around London; they are more concentrated than the possible sources of risks. So in the event of a devastating earthquake in SF, our ability to prevent x-risks could be greatly reduced.

Polarity reversal of the magnetic field of the Earth


We live in a period of weakening, and probably reversal, of the magnetic field of the Earth. In itself a reversal of the magnetic field will not result in the extinction of people, since reversals have repeatedly occurred in the past without appreciable harm. During a reversal the magnetic field could fall to zero or become oriented toward the Sun (a pole would lie on the equator), which would lead to an intense influx of charged particles into the atmosphere. The simultaneous combination of three factors, the fall of the Earth's magnetic field to zero, exhaustion of the ozone layer and a strong solar flare, could result in the death of all life on Earth, or at least in the crash of all electric systems, which is fraught with the fall of technological civilization. And the crash itself is not so terrible as what would happen in its course with nuclear weapons and all other technologies.
Nevertheless, the magnetic field is decreasing slowly enough (though the speed of the process is growing), so it will hardly reach zero in the nearest decades. Another catastrophic scenario is that a change of the magnetic field is connected with changes in the streams of magma in the core, which might somehow influence global volcanic activity (there are data on correlation between periods of volcanic activity and periods of pole reversal). A third risk is a possibly wrong understanding of the reasons for the existence of the Earth's magnetic field.
There is a hypothesis that the growth of the solid inner core of the Earth has made the Earth's magnetic field less stable, so that it undergoes reversals more often, which is consistent with the hypothesis that the "protection" we infer from the anthropic principle is weakening.

Emergence of a new illness in nature


It is extremely improbable that a single illness will appear capable of destroying all people at once. Even in the case of a mutation of bird flu or bubonic plague, many people would survive or never catch the disease. However, as the number of people grows, the number of "natural reactors" in which a new virus can be cultivated grows as well. Therefore it is impossible to exclude the chance of a large pandemic in the spirit of the "Spanish" flu of 1918. Though such a pandemic cannot kill all people, it can seriously damage the level of development of society, lowering it to one of the post-apocalyptic stages. Such an event can happen only before powerful biotechnologies appear, as they will be able to create medicines against it quickly enough; at the same time, biotechnologies will eclipse the risks of natural illnesses through the much greater speed with which artificial diseases can be created. A natural pandemic is also possible at one of the post-apocalyptic stages, for example after a nuclear war, though in that case the risks of the application of biological weapons would prevail. For a natural pandemic to become really dangerous to all people, a set of essentially different deadly agents would have to appear simultaneously, which is improbable naturally. There is also a chance that a powerful epizootic, like colony collapse disorder of bees (CCD), the African fungus on wheat (Uganda mold UG99), bird flu and the like, will disrupt the supply system of people to such an extent that it results in a world crisis fraught with wars and a decrease in the level of development. The appearance of a new illness would strike not only the population, but also the connectivity which is an important factor in the existence of a unified planetary civilization. Growth of the population and increase in the volume of identical agricultural crops increase the chance of the accidental appearance of a dangerous virus, as the speed of the "search" increases. From this it follows that there is a certain limit to the size of an interconnected population of one species, beyond which new dangerous illnesses will arise every day. Among real-life illnesses it is necessary to note two:
Bird flu. As has already been said repeatedly, it is not bird flu itself that is dangerous, but a possible mutation of the H5N1 strain making it transmissible from human to human. For this, in particular, the attachment fibers on the surface of the virus would have to change so that it attaches not deep in the lungs but higher up, where there are more chances for the virus to get out as cough droplets. Possibly this is a rather simple mutation. Though there are different opinions on whether H5N1 is capable of mutating this way, history already contains precedents of deadly flu epidemics. The worst estimate of the number of possible victims of a mutated bird flu was 400 million people. And though this does not mean the full extinction of mankind, it would almost certainly send the world into a certain post-apocalyptic stage.
AIDS. This illness in its modern form cannot lead to the full extinction of mankind, though it has already sent a number of African countries into a post-apocalyptic stage. There are interesting arguments by Supotinsky about the nature of AIDS and about how epidemics of retroviruses repeatedly culled the population of hominids. He also assumes that HIV has a natural carrier, probably a microorganism. If AIDS were transmitted like the common cold, the fate of mankind would be sad. However, even now AIDS is deadly in almost 100 percent of cases, and develops slowly enough to have time to spread.
We should also note new strains of microorganisms that are resistant to antibiotics, for example the hospital infection of golden staphylococcus and drug-resistant tuberculosis. The process by which various microorganisms gain resistance to antibiotics is ongoing, and such organisms spread more and more widely, which could at some point produce a cumulative wave of many resistant illnesses (against the background of weakened human immunity). Certainly, it is possible to count on biological supertechnologies defeating them, but if there is a certain delay in the appearance of such technologies, the fate of mankind is not good. Revival of smallpox, plague and other illnesses is possible, but separately each of them cannot destroy all people. By one hypothesis, Neanderthals died out because of a version of mad cow disease, that is, an illness caused by a prion (an autocatalytic form of protein misfolding) and spread by means of cannibalism, so we cannot exclude the risk of extinction from natural illness for people either.
Finally, the story that the virus of the "Spanish" flu was recovered from burial sites, its genome read and published on the Internet, looks absolutely irresponsible. Later, at the demands of the public, the genome was removed from open access. But there was also a case when this virus was dispatched by mistake to thousands of laboratories around the world for equipment testing.

Marginal natural risks


Further we will mention global risks connected with natural events whose probability in the XXI century is extremely small, and, moreover, whose very possibility is not generally accepted. Though I believe these events are, in general, nearly impossible, I think they should still be taken into consideration, and that a separate category should be created for them in our list of risks, so that, following the precautionary principle, we keep a certain vigilance regarding the appearance of new information able to confirm these assumptions.

Hypercanes

Kerry Emanuel from MIT put forward the hypothesis that in the past the Earth's atmosphere was much less stable, resulting in mass extinctions. If the temperature of the ocean surface were to increase to 15-20 degrees above normal, which is possible as a result of sharp global warming, a falling asteroid or an underwater eruption, it would give rise to a so-called hypercane: a huge storm with wind speeds of approximately 200-300 meters per second, the size of a continent, a long lifetime and a pressure in the center of about 0.3 atmosphere. Moving away from its place of origin, such a hypercane would destroy all life on land, and at the same time a new hypercane would form in its place over the warm ocean site. (This idea is used in John Barnes's novel "Mother of Storms".)
Emanuel has shown that when an asteroid with a diameter of more than 10 km falls into a shallow sea (as happened 65 million years ago, when an asteroid fell near Mexico, which is associated with the extinction of the dinosaurs), a heated region of water about 50 km across could form, which would be enough to give rise to a hypercane. A hypercane ejects a huge amount of water and dust into the upper atmosphere, which could lead to dramatic global cooling or warming.
http://en.wikipedia.org/wiki/Great_Hurricane_of_1780
http://en.wikipedia.org/wiki/Hypercane
Emanuel, Kerry (1996-09-16). "Limits on Hurricane Intensity". Center for Meteorology and Physical Oceanography, MIT. http://wind.mit.edu/~emanuel/holem/node2.html#SECTION00020000000000000000
Did storms land the dinosaurs in hot water? http://www.newscientist.com/article/mg14519632.600-did-storms-land-the-dinosaurs-in-hot-water.html

Unknown processes in the core of the Earth


There are assumptions that the source of terrestrial heat is a natural nuclear reactor on uranium, several kilometers in diameter, at the center of the planet. Under certain conditions, V. Anisichkin assumes, for example in a collision with a large comet, it could go supercritical and cause an explosion of the planet, which possibly caused the explosion of Phaeton, from which part of the asteroid belt may have been generated. The theory is obviously disputable, since even the existence of Phaeton is not proven; on the contrary, it is believed that the asteroid belt was formed from independent planetesimals. Another author, R. Raghavan, assumes that the natural nuclear reactor in the center of the Earth has a diameter of 8 km and could cool down and cease to create terrestrial heat and the magnetic field.
If, by geological measures, certain processes have already ripened, it is much easier to pull the trigger and start them, so human activity could wake them. The distance to the boundary of the terrestrial core is about 3000 km, while the distance to the Sun is 150,000,000 km. Tens of thousands of people perish every year from geological catastrophes, and nobody from solar ones. Directly under us there is a huge cauldron of viscous lava impregnated with compressed gases. The largest extinctions of living beings correlate well with epochs of intensive volcanic activity. Processes in the core probably caused such terrible phenomena in the past as trap volcanism. At the boundary of the Permian period, 250 million years ago, 2 million cubic km of lava poured out in Eastern Siberia, which exceeds the volumes of eruptions of modern supervolcanoes a thousandfold. It led to the extinction of 95% of species.
Processes in the core are also connected with changes in the magnetic field of the Earth, the physics of which is not yet very clear. V.A. Krasilov, in the article "Model of biospheric crises. Ecosystem reorganizations and the evolution of the biosphere", assumes that periods of invariance, and then periods of variability, of the Earth's magnetic field precede enormous trap eruptions. Now we live in a period of variability of the magnetic field, but not one following a long pause. The periods of variability of the magnetic field last tens of millions of years, being replaced by no less long periods of stability. So, under a natural course of events, we have millions of years before the next act of trap volcanism, if it happens at all. The basic danger here is that people, by any deep penetration into the Earth, could push these processes, if they have already ripened to a critical level.
In the liquid terrestrial core, the most dangerous thing is the gases dissolved in it. They are capable of bursting out to the surface if they find a channel. As heavy iron settles downward, it is chemically reduced (at the expense of heat), and more and more gases are liberated, driving the process of degassing of the Earth. There are assumptions that the powerful atmosphere of Venus arose rather recently as a result of intensive degassing of its interior. A certain danger is represented by the temptation to obtain free energy from the Earth's interior by pumping out heated magma. (Though if this were done in places not connected with plumes, it should be safe enough.) There is an assumption that spreading of the oceanic floor from the zones of mid-ocean rifts occurs not smoothly but in jerks, which, on the one hand, are much rarer than earthquakes in subduction zones (which is why we have not observed them), but are much more powerful. The following metaphor is pertinent here: the rupture of a balloon is a much more powerful process than its crumpling. The thawing of glaciers leads to the unloading of lithospheric plates and to the strengthening of volcanic activity (for example, in Iceland, by a factor of 100). Therefore the future thawing of the Greenland ice sheet is dangerous.
Finally, there are bold assumptions that in the center of the Earth (and also of other planets and even stars) there are microscopic (on astronomical scales) relic black holes which arose at the time of the Big Bang. See A.G. Parhomov's article "On possible effects connected with small black holes". Under Hawking's theory, relic holes should evaporate slowly, but with increasing speed closer to the end of their existence, so that in the last seconds such a hole produces a flash with energy equivalent to approximately 1000 tons of mass (and 228 tons in the last second), which is approximately 20,000 gigatons of TNT equivalent; this is roughly equal to the energy of a collision of the Earth with an asteroid 10 km in diameter (a rough check of this conversion is sketched below). Such an explosion would not destroy the planet, but would cause an earthquake of huge force over the whole surface, probably sufficient to destroy all structures and throw civilization back to a deeply post-apocalyptic level. However, people would survive, at least those who were in planes and helicopters at that moment. A microscopic black hole in the center of the Earth would undergo two processes simultaneously: accretion of matter and energy loss through Hawking radiation, which could be in balance; however, a shift of the balance in either direction would be fraught with catastrophe, either explosion of the hole, or absorption of the Earth, or its destruction through a stronger release of energy during accretion. I remind the reader that there are no facts confirming the existence of relic black holes, and this is only an improbable assumption which we consider proceeding from the precautionary principle.
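The conversion quoted above is easy to check with E = mc². The asteroid comparison below uses assumed round values (stony density, 20 km/s), so it should be read only as an order-of-magnitude cross-check.

```python
import math

# Checking the energy figures quoted above with E = m*c^2 (rough sketch;
# the 1000-ton figure is Parhomov's, the asteroid parameters are assumed).
c = 3.0e8                       # speed of light, m/s
GT_TNT_J = 4.184e18             # joules in one gigaton of TNT

m_evaporated = 1.0e6            # kg, ~1000 tons radiated in the final flash
E_flash = m_evaporated * c**2
print(f"final flash: ~{E_flash / GT_TNT_J:,.0f} Gt TNT")        # ~21,500 Gt, i.e. ~20,000 Gt

# Comparison: kinetic energy of a 10-km stony asteroid at ~20 km/s
m_asteroid = 2.7e3 * math.pi / 6 * (10e3)**3                    # kg
E_asteroid = 0.5 * m_asteroid * (20e3)**2
print(f"10-km asteroid: ~{E_asteroid / GT_TNT_J:,.0f} Gt TNT")  # same order of magnitude
```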

Sudden degassing of the gases dissolved in the world ocean


Gregory Ryskin published in 2003 the article "Methane-driven oceanic eruptions and mass extinctions", in which he considers the hypothesis that disturbances of the metastable state of gases dissolved in water, first of all methane, were the cause of many mass extinctions. With growing pressure the solubility of methane grows, so at depth it can reach considerable concentrations. But this state is metastable: if the water is stirred, a chain reaction of degassing will begin, as in an opened bottle of champagne. The energy released could exceed the energy of all nuclear arsenals on Earth by a factor of 10,000. Ryskin shows that in the worst case the mass of released gases could reach tens of billions of tons, which is comparable to the mass of the entire biosphere of the Earth. The release of gases would be accompanied by powerful tsunamis and by the burning of the gases. It could result either in cooling of the planet due to the formation of soot, or, on the contrary, in an irreversible warming, as the released gases are greenhouse gases. Necessary conditions for the accumulation of dissolved methane in ocean depths are anoxia (absence of dissolved oxygen, as, for example, in the Black Sea) and the absence of mixing. Decomposition of methane hydrates on the seabed could also promote the process. To cause catastrophic consequences, Ryskin thinks, degassing of even a small area of the ocean is enough. The sudden degassing of Lake Nyos, which in 1986 claimed the lives of 1700 people, is an example of a catastrophe of this sort. Ryskin notes that the question of the present state of accumulation of dissolved gases in the modern world ocean demands further research.
Such an eruption would be relatively easy to provoke by lowering a pipe into the water and starting to pump water up, which would launch a self-reinforcing process. This could also happen accidentally when drilling the deep seabed. A large quantity of hydrogen sulfide has accumulated in the Black Sea, and there are also anoxic areas there.
Gregory Ryskin. Methane-driven oceanic eruptions and mass extinctions. Geology 31, 741-744, 2003. http://pangea.stanford.edu/Oceans/GES205/methaneGeology.pdf


Explosions of other planets of the Solar system


There are other assumptions about the causes of possible explosions of planets, besides the explosions of uranium reactors in the centers of planets suggested by Anisichkin, namely special chemical reactions in electrolyzed ice. E.M. Drobyshevsky, in the article "Danger of explosion of Callisto and the priority of space missions" (Zhurnal Tekhnicheskoy Fiziki, 1999, vol. 69, no. 9, http://www.ioffe.ru/journals/jtf/1999/09/p10-14.pdf), assumes that such events regularly occur in the icy satellites of Jupiter, and that they are dangerous to the Earth through the formation of a huge meteoric stream. Electrolysis of the ice occurs as a result of the movement of the celestial body containing it through a magnetic field, which induces powerful currents. These currents decompose water into hydrogen and oxygen, which leads to the formation of an explosive mixture. He states the hypothesis that in all the satellites these processes have already come to an end, except Callisto, which could blow up at any moment, and suggests directing considerable means to the research and prevention of this phenomenon. (It should be noted that in 2007 comet Holmes blew up, and nobody knows why; electrolysis of its ice during its flight near the Sun is possible.)
I would note that if the Drobyshevsky hypothesis is correct, the very idea of a research mission to Callisto and deep drilling of its interior in search of electrolyzed ice is dangerous, because it could trigger an explosion.
In any case, whatever caused the destruction of another planet or a large satellite in the Solar system, this would represent a long-lasting threat to terrestrial life through the fall of fragments. (For a description of one hypothesis about such an influx of fragments, see: "An asteroid breakup 160 Myr ago as the probable source of the K/T impactor", http://www.nature.com/nature/journal/v449/n7158/abs/nature06070.html)

Cancellation of the "protection" provided to us by the Anthropic principle


I consider this question in detail in the article "Natural catastrophes and the Anthropic principle". The essence of the threat is that intelligent life on the Earth most likely emerged near the end of the period of stability of the natural factors necessary for its maintenance. Or, in short: the future is not similar to the past, because we see the past through the filter of observation selection. An example: a certain person has won at roulette three times in a row, betting on a single number. Using inductive logic, he comes to the false conclusion that he will keep winning. However, if he knew that 30,000 other people had been playing alongside him and that all of them had been eliminated, he could come to the more correct conclusion that with odds of 35 to 36 he will lose in the next round. In other words, his period of stability, consisting of a series of three wins, has ended.
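A minimal simulation (my own illustration, not taken from the cited article) makes the observation-selection effect concrete: among many players who bet on a single number every round, the few survivors of three wins have exactly the same 1/36 chance of winning the fourth round as anyone else, even though their personal history looks like a guarantee.

```python
import random

# Toy model of observation selection: we look only at the "survivors"
# who happened to win the first three rounds.
random.seed(0)
PLAYERS = 10_000_000      # illustrative population of players (my assumption)
WIN_P = 1 / 36            # chance of a single-number bet winning

survivors = 0             # players who won rounds 1-3
fourth_round_winners = 0  # of those, how many also win round 4

for _ in range(PLAYERS):
    if all(random.random() < WIN_P for _ in range(3)):
        survivors += 1
        if random.random() < WIN_P:
            fourth_round_winners += 1

print("survivors of three straight wins:", survivors)   # ~ PLAYERS / 36**3 ~ 214
if survivors:
    # Noisy with so few survivors, but its expected value is 1/36 ~ 0.028.
    print("their chance of winning round 4:", fourth_round_winners / survivors)
```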
For intelligent life to form on the Earth, a unique combination of conditions had to operate for a long time (uniform luminosity of the Sun, absence of nearby supernovae, absence of collisions with very large asteroids, etc.). However, it does not follow at all that these conditions will continue to operate forever. Accordingly, in the future we can expect that these conditions will gradually disappear. The speed of this process depends on how improbable and unique the combination of conditions was that allowed intelligent life to emerge on the Earth (as in the roulette example: the more improbable a series of three wins in a row is, the higher the probability that the player will lose in the fourth round; on a wheel with 100 divisions, the chance of surviving the fourth round would fall to 1 in 100). The more improbable the combination, the sooner it will end. This follows from the effect of elimination: if in the beginning there were, let us say, billions of planets around billions of stars where intelligent life could start to develop, then as a result of elimination intelligent life formed only on the Earth, while the other planets dropped out, like Mars and Venus. However, the intensity of this elimination is unknown to us, and the effect of observation selection prevents us from learning it, since we can find ourselves only on a planet where life has survived and intelligence could develop. But the elimination continues at the same rate.
To an external observer this process would look like a sudden and causeless deterioration of many of the vital parameters supporting life on the Earth. Considering this and similar examples, it is possible to assume that this effect can increase the probability of sudden natural catastrophes capable of wiping out life on the Earth, but by no more than a factor of 10. (No more, because then restrictions similar to those described in the article by Ćirković and Bostrom, who consider this problem in relation to cosmic catastrophes, come into play. However, the real value of these restrictions for geological catastrophes requires more precise research.) For example, if the absence of super-huge volcanic eruptions flooding the whole surface of the Earth is a lucky coincidence, and normally they should occur once in 500 million years, then the chance of the Earth being in its current fortunate position would be about 1 in 256, and the expected remaining time of existence of life would be on the order of 500 million years.
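A short calculation may help unpack where the "1 in 256" figure comes from; this is my reconstruction of the implied arithmetic, not something stated explicitly in the text.

```python
import math

# Assumptions (from the example above): life needed ~4 billion years of calm,
# and surface-flooding super-eruptions "normally" happen once per 500 Myr.
life_span = 4.0e9      # years of uninterrupted habitability
interval = 0.5e9       # assumed mean interval between such eruptions
n = life_span / interval                 # 8 independent 500-Myr periods

p_coin_flip = 0.5 ** n                   # each period treated as a 50/50 coin flip
p_poisson = math.exp(-n)                 # Poisson model: zero events where 8 are expected

print(n, p_coin_flip, 1 / p_coin_flip)   # 8.0  0.0039  256.0
print(p_poisson, 1 / p_poisson)          # ~3.4e-4  ~2981 -> an even luckier coincidence
```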
We will return to the discussion of this effect in the chapter on indirect estimates of the probability of global catastrophe at the end of the book. The important methodological consequence is that we cannot use, with respect to global catastrophes, any reasoning of the form "it will not happen in the future because it did not happen in the past". On the other hand, a tenfold deterioration of the chances of natural catastrophes reduces the expected time of existence of conditions for life on the Earth from a billion years to a hundred million years, which still gives only a very small contribution to the probability of extinction in the 21st century.
A frightening confirmation of the hypothesis that we most likely live at the end of a period of stability of natural processes is the article by R. Rohde and R. Muller in Nature about the cycle of extinctions of living beings with a period of 62 (+/-3) million years, given that 65 million years have already passed since the last major extinction. That is, the time of the next cyclic extinction event is already long overdue. We also note that if the proposed hypothesis about the role of observation selection in the underestimation of the frequency of global catastrophes is true, it means that intelligent life on the Earth is an extremely unusual event in the Universe, and that with high probability we are alone in the observable Universe. In this case we need not fear extraterrestrial invasion, and we also cannot draw any conclusions about the frequency of self-destruction of advanced civilizations in connection with Fermi's paradox (the silence of space). As a result, the net contribution of the stated hypothesis to our estimate of the probability of human survival may be positive.

Debunked and false risks from media, science fiction and fringe science or old theories
Nemesis
Gases from comets
Rogue black holes
Neutrinos warming the Earth's core
There are also a number of theories that were either proposed by various researchers and have since been refuted, or that circulate in the yellow press and in popular consciousness and are based on honest mistakes, lies and misunderstanding, or that are associated with certain belief systems. We should, however, allow a tiny chance that some of these theories prove correct.
1. A sudden change in the direction and/or speed of rotation of the Earth, causing catastrophic earthquakes, floods and climate change. A change in the shape of the Earth associated with the growth of the polar caps might cause the rotation axis to cease to be the axis with the lowest moment of inertia, flipping the Earth like a "Dzhanibekov nut" (the tennis-racket effect). Or it could happen as a result of changes in the Earth's moment of inertia associated with the restructuring of its interior, or as a result of a collision with a large asteroid.
2. Theories of a great deluge, based on the Biblical legend.
3. An explosion of the Sun in six years, supposedly predicted by a Dutch astronomer.
4. Collision of the Earth with a wandering black hole. As far as is known, there are no black holes in the vicinity of the Sun, because they could be detected through the accretion of interstellar gas onto them and through gravitational distortion of light from more distant stars. Moreover, the "sucking" ability of a black hole is no different from that of a star of similar mass, so a black hole is no more dangerous than a star. But collisions with stars, or at least dangerous close approaches, occur rarely, and all such approaches are millions of years away. Because black holes in the galaxy are far less numerous than stars, the chances of a collision with a black hole are even smaller. We cannot, however, exclude a collision of the Solar system with a single rogue planet, but this is highly unlikely and a relatively harmless event.

Weakening of stability and human interventions

The contribution of the probability shift caused by the cancellation of the protection given by the Anthropic principle to the total probability of extinction in the 21st century is apparently small. Namely, if the Sun will maintain a comfortable temperature on the Earth not for 4 billion years but only for 400 million, then in the 21st century this still gives only ten-thousandths of a percent of catastrophe probability, if we distribute the probability of the Sun's failure uniformly in time (0.0004%). However, the weakening of stability implied by the Anthropic principle means, first, that processes become less stable and more inclined to fluctuations (which is well known with respect to the Sun, which will burn more and more brightly and non-uniformly as it exhausts its hydrogen), and second, which seems more important, that they become more sensitive to possibly small human influences. It is one thing to pull on a slack elastic band, and quite another to pull on an elastic band stretched to its breaking point.
For example, if a certain supervolcano eruption has ripened, many thousands of years may still pass before it occurs, but a borehole a few kilometers deep could be enough to break the stability of the cover of the magma chamber. As the scale of human activity grows in all directions, the chances of stumbling into such an instability increase. It could be an instability of the vacuum, of the terrestrial lithosphere, or of something else we are not even thinking about.

Block 2 Anthropogenic risks


Chapter 6. Global warming
TL;DR: The small probability of runaway global warming requires the preparation of urgent unconventional prevention measures, namely sunlight dimming.
Abstract:
The most expected scenario of limited global warming of several degrees C in the 21st century will not result in human extinction, as even the thawing after the Ice Age in the past did not have such an impact.
The main question about global warming is the possibility of runaway global warming and the conditions in which it could happen. Runaway warming means warming of 30 C or more, which would make the Earth uninhabitable. It is an unlikely event, but it could result in human extinction.
Global warming could also create some context risks, which would change the probability of other global risks.
I will not go into all the details about the nature of global warming and the established ideas about its prevention, as these have extensive coverage in Wikipedia (https://en.wikipedia.org/wiki/Global_warming and https://en.wikipedia.org/wiki/Climate_change_mitigation).
Instead I will concentrate on heavy-tail risks and less conventional methods of global warming prevention.
The map provides a summary of all known methods of GW prevention and also of ideas about the scale of GW and the consequences of each level of warming.

The map also shows how prevention plans depend on the current level of technologies. In short, the map has three variables: level of tech, level of urgency in GW prevention, and scale of the warming.
The following post consists of a wall of text and the map, which are complementary: the text provides in-depth details about some ideas, and the map gives a general overview of the prevention plans.
The map: http://immortality-roadmap.com/warming3.pdf
Uncertainty
The main feature of climate theory is its intrinsic uncertainty. This uncertainty is not about climate
change denial; we are almost sure that anthropogenic climate change is real. The uncertainty is about
its exact scale and timing, and especially about low-probability tails with high consequences. In risk analysis we can't ignore these tails, as they bear the most risk. So I will focus mainly on the tails, but this in turn requires a focus on more marginal, contested or unproven theories.
These uncertainties are especially large if we make projections for 50-100 years from now; they are
connected with the complexity of the climate, the unpredictability of future emissions and the
chaotic nature of the climate.
Clathrate methane gun
An unconventional but possible global catastrophe scenario, accepted by several researchers, is a greenhouse catastrophe known as the runaway greenhouse effect. The idea is well covered in Wikipedia: https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis
Currently, large amounts of methane clathrate are present in the Arctic, and since this area is warming more quickly than other regions, the gases could be released into the atmosphere. https://en.wikipedia.org/wiki/Arctic_methane_emissions
Predictions relating to the speed and consequences of this process differ. Mainstream science sees the methane cycle as a dangerous but slow process which could eventually result in a 6 C rise in global temperature, which seems bad but survivable. It would also take thousands of years.
This has happened once before, during the late Paleocene: the Paleocene-Eocene Thermal Maximum (PETM, https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum), when the temperature jumped by about 6 C, probably because of methane. Methane-driven global warming is just one of ten hypotheses explaining the PETM. But during the PETM global methane clathrate deposits were around 10 times smaller than they are at present, because the ocean was warmer. This means that if the clathrate gun fires again, it could have much more severe consequences.
But some scientists think that it may happen quickly and with stronger effects, which would result in runaway global warming, because of several positive feedback loops. See, for example, the blog http://arctic-news.blogspot.ru/
There are several possible positive feedback loops which could make methane-driven warming stronger:
1) The Sun is now brighter than before because of stellar evolution. The increase in the Sun's luminosity will eventually result in runaway global warming in a period 100 million to 1 billion years from now. The Sun will become thousands of times more luminous when it becomes a red giant. See more here: https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans
2) After a long period of a cold climate (ice ages), a large amount of methane clathrate accumulated in the Arctic.
3) Methane is a short-lived atmospheric gas (around seven years). So the same amount of methane would result in much more intense warming if it is released quickly, compared with a scenario in which its release is scattered over centuries (see the sketch after this list). The speed of methane release depends on the speed of global warming. Anthropogenic CO2 increases very quickly and could be followed by a quick release of the methane.
4) Water vapor is the strongest greenhouse gas, and more warming results in more water vapor in the atmosphere.
5) Coal burning resulted in large global dimming (https://en.wikipedia.org/wiki/Global_dimming), and the current switch to cleaner technologies could stop the masking of global warming.
6) The ocean's ability to dissolve CO2 falls with a rise in temperature.
7) The Arctic has the biggest temperature increase due to global warming, with a projected growth of 5-10 C, and as a result it will lose its ice shield, which will reduce the Earth's albedo and result in higher temperatures. The same is true for permafrost and snow cover.
8) Warmer Siberian rivers bring their water into the Arctic ocean.
9) The Gulf Stream will bring warmer water from the Gulf of Mexico to the Arctic ocean.
10) The current period of a calm, spotless Sun will end and result in further warming.
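A minimal sketch of why release speed matters (my own illustration; the only input taken from the text is the roughly seven-year atmospheric lifetime of methane): with first-order decay, the peak atmospheric burden from a fast pulse is far higher than from the same total release spread over centuries.

```python
import math

LIFETIME = 7.0        # years, approximate atmospheric lifetime of methane
TOTAL = 1.0           # total release, arbitrary units
HORIZON = 300         # years simulated
DT = 0.1              # time step, years

def peak_burden(release_years):
    """Peak atmospheric burden if TOTAL is emitted evenly over release_years."""
    rate = TOTAL / release_years
    burden = peak = 0.0
    t = 0.0
    while t < HORIZON:
        emission = rate * DT if t < release_years else 0.0
        burden = burden * math.exp(-DT / LIFETIME) + emission
        peak = max(peak, burden)
        t += DT
    return peak

print(peak_burden(1.0))    # fast pulse: peak ~0.93 of the total release
print(peak_burden(200.0))  # slow release: peak ~0.035, roughly TOTAL * LIFETIME / 200
```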
Anthropic bias
One unconventional reason for global warming to be more dangerous than we are used to thinking is anthropic bias.
1. We tend to think that we are safe because no runaway global warming event has ever happened in the past. But we could only observe a planet where this never happened. Milan Ćirković and Nick Bostrom wrote about this, so the real rate of runaway warming could be much higher. See here: http://www.nickbostrom.com/papers/anthropicshadow.pdf
2. Also, we humans tend to find ourselves in a period when climate changes are very strong, because of climate instability. This is because human intelligence, as a universal adaptation mechanism, was more effective in a period of instability. So climate instability helps to breed intelligent beings. (This is my idea and may need additional proof.)
3. But if runaway global warming is long overdue, this would mean that our environment is more sensitive even to smaller human actions (compare it with an over-pressured balloon and a small needle). In this case the amount of CO2 we currently release could be such an action. So we could underestimate the fragility of our environment because of anthropic bias. (This is my idea and I wrote about it here: http://www.slideshare.net/avturchin/why-anthropic-principle-stopped-to-defend-us-observation-selection-and-fragility-of-our-environment)
The timeline of possible runaway global warming
We could call runaway global warming a "Venusian scenario" because, thanks to the greenhouse effect, the surface temperature of Venus is over 400 C, despite the fact that, owing to its high albedo (0.75, caused by white clouds), it receives less solar energy than the Earth (albedo 0.3).
A greenhouse catastrophe can consist of three stages:
1. Warming of 1-2 degrees due to anthropogenic CO2 in the atmosphere, and passage of a trigger point. We don't know where the tipping point is; we may have passed it already, or conversely we may be underestimating natural self-regulating mechanisms.
2. Warming of 10-20 degrees because of methane from gas hydrates and the Siberian bogs, as well as the release of CO2 currently dissolved in the oceans. The speed of this self-amplifying process is limited by the thermal inertia of the ocean, so it will probably take about 10-100 years. This process can be arrested only by sharp hi-tech interventions, like an artificial nuclear winter and/or eruptions of multiple volcanoes. But the more warming occurs, the less becomes the ability of civilization to stop it, as its technologies will be damaged. On the other hand, the later global warming happens, the higher the level of technology that can be used to stop it.
3. Moist greenhouse. Water vapor is a major contributor to the greenhouse effect, which results in an even stronger and quicker positive feedback loop. A moist greenhouse will start if the average temperature of the Earth reaches 47 C (currently 15 C), and it will result in runaway evaporation of the oceans, leading to surface temperatures of 900 C (https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans). All the water on the planet will boil, resulting in a dense water vapor atmosphere. See also here: https://en.wikipedia.org/wiki/Runaway_greenhouse_effect
Prevention
If we survive until a positive Singularity, global warming will not be an issue. But if strong AI and other super-technologies don't arrive before the end of the 21st century, we will need to invest a lot in its prevention, as civilization could collapse before the creation of strong AI, which would mean that we will never be able to use all of its benefits.
I have a map, which summarizes the known ideas for global warming prevention and adds some
new ones for urgent risk management. http://immortality-roadmap.com/warming2.pdf
The map has two main axes: our level of tech progress and the size of the warming which we want to prevent. But its most important variable is the ability of humanity to unite and act proactively. In short, the plans are:
No plan: do nothing, and just adapt to warming.
Plan A: cutting emissions and removing greenhouse gases from the atmosphere. Requires a lot of investment and cooperation; long-term action and remote results.
Plan B: geo-engineering aimed at blocking sunlight. Not much investment, and unilateral action is possible. Quicker action and quicker results, but involves risks in the case of switching off.
Plan C: emergency actions for Sun dimming, like an artificial volcanic winter.
Plan D: moving to other planets.
All plans could be executed using current tech levels and also at a high tech level through the use of
nanotech and so on.
I think that climate change demands that we go directly to plan B. Plan A is cutting emissions, and it's not working, because it is very expensive and requires cooperation from all sides. Even then it will not achieve immediate results, and the temperature will still continue to rise for many other reasons.
Plan B is changing the opacity of the Earth's atmosphere. It could be a surprisingly low-cost exercise and could be implemented locally. There are suggestions to release something as simple as sulfuric acid into the upper atmosphere to raise its reflectivity.
"According to Keiths calculations, if operations were begun in 2020, it would take 25,000 metric
tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of
sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric
tons of it each year, at an annual cost of $700 million, would be required to compensate for the
83

increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program
would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft."
https://www.technologyreview.com/s/511016/a-cheap-and-easy-plan-to-stop-global-warming/
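A quick back-of-the-envelope check of the quoted figures (the arithmetic is mine; the inputs are taken from the quote, and the world-GDP comparison is my own added assumption for scale) shows the implied unit cost and per-plane workload, which is part of why this plan is described as surprisingly cheap.

```python
# Inputs from the quoted Technology Review article; the arithmetic is mine.
tons_per_year_2040 = 250_000       # metric tons of sulfuric acid per year by 2040
annual_cost_2040 = 700_000_000     # dollars per year
jets_2040 = 11

print(annual_cost_2040 / tons_per_year_2040)   # ~2800 dollars per delivered ton
print(tons_per_year_2040 / jets_2040 / 365)    # ~62 tons per jet per day
print(annual_cost_2040 / 75e12)                # ~1e-5 of an assumed ~$75 trillion world GDP
```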
There are also ideas to recapture CO2 using genetically modified organisms, iron seeding in the oceans, and dispersal of the carbon-capturing mineral olivine.
The problem with the sun-blocking approach is that it can't be stopped: as Seth Baum wrote, a smaller catastrophe could disrupt such engineering, with the consequent immediate return of global warming with a vengeance. http://sethbaum.com/ac/2013_DoubleCatastrophe.html
There are other ways of preventing global warming. Plan C is creating an artificial nuclear winter through a volcanic explosion or by starting large-scale forest fires with nukes. This idea is even more controversial and untested than geo-engineering.
A regional nuclear war is capable of putting 5 million tons of black carbon into the upper atmosphere; average global temperatures would drop by 2.25 degrees F (1.25 degrees C) for two to three years afterward, the models suggest. http://news.nationalgeographic.com/news/2011/02/110223-nuclear-war-winter-global-warming-environment-science-climate-change/ Nuclear explosions in deep forests may have the same effect as attacks on cities in terms of soot production.
Fighting between Plan A and Plan B
So we are not even close to being doomed by global warming, but we may have to change the way we react to it.
While cutting emissions is important, it will probably not work within a 10-20 year period, so quicker-acting measures should be devised.
The main risk is abrupt runaway global warming. It is a low-probability event with the highest consequences. To fight it we should prepare rapid response measures.
Such preparation should be done in advance, which requires expensive scientific experiments. The main problem here is (as always) funding, and regulators' approval. The impact of sulfur aerosols should be tested. Complicated mathematical models should be evaluated.
Counter-arguments are the following: "Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough of such measures to say that they are safe? Of course, if we believe that history will end anyway within decades or centuries because of singularity, long-term effects of such measures may not matter so much. Another big issue with changing insolation is that it doesn't solve ocean acidification. No state actor should be allowed to start geo-engineering until they at least take simple measures to reduce their emissions." (Comments from a LessWrong discussion about GW.)
Currently it all looks like a political fight between Plan A (cutting emissions) and Plan B (geo-engineering), where Plan A is winning in terms of approval. It has been suggested that Plan B should not be implemented, because an increase in the warming would demonstrate a real need to implement Plan A (cutting emissions).
Regulators did not approve even the smallest experiments with sulfur shielding in Britain. Iron ocean seeding also has regulatory problems.
But the same logic works in the opposite direction. China and the coal companies will not cut emissions, because they want to press policymakers to implement plan B. It looks like a prisoner's dilemma between the two plans.
The difference between the two plans is that plan A would return everything to its natural state, while plan B is aimed at creating instruments to regulate the planet's climate and weather.
In the current global political situation, cutting emissions is difficult to implement because it requires collaboration between many rival companies and countries. If several of them defect (most likely China, Russia and India, which rely heavily on coal and other fossil fuels), it will not work, even if all of Europe were solar powered.
A transition to a zero-emission economy could happen naturally within about 20 years, as electric transportation and solar energy become widespread.
Plan C should be implemented if the situation suddenly changes for the worse, with the temperature jumping 3-5 C in one year. In this case the only option we would have is to bomb the Pinatubo volcano to make it erupt again, or probably even several volcanoes. A volcanic winter would give us time to adopt other geo-engineering measures.
I would also advocate a mixture of both plans, because they work on different timescales. Cutting emissions and removing CO2 using the current level of technologies would take decades to have an impact on the climate. But geo-engineering has a reaction time of around one year, so we could use it to cover the bumps in the road.
Especially important is the fact that if we completely stop emissions, we would also stop the global dimming caused by coal burning, which could result in a roughly 3 C global temperature jump. So stopping emissions may itself produce a temperature jump, and we need a protection system for this case.
In all cases we need to survive until stronger technologies develop. Using nanotech or genetic engineering we could solve the warming problem with less effort. But we have to survive until that time.
It seems to me that the idea of cutting emissions is overhyped and solar radiation management is "underhyped" in terms of public opinion and funding. By changing that imbalance we could achieve more common good.
An unpredictable climate needs a quicker regulation system
The management of climate risks depends on their predictability, and it seems that this predictability is not very high. The climate is a very complex and chaotic system.
It may react unexpectedly in response to our own actions. This means that long-term actions are less
favorable. The situation could change many times during their implementation.
Quick actions like solar shielding are better for managing poorly predictable processes, as we can see the results of our actions and quickly cancel them or strengthen them if we don't like the results.

Context risks influencing the probability of other global risks


Global warming has some context risks:
1. It could slow tech progress.
2. It could raise the chances of war (this has probably already happened in Syria because of drought, http://futureoflife.org/2016/07/22/climate-change-is-the-most-urgent-existential-risk/), and exacerbate conflicts between states about how to share resources (food, water etc.) and about the responsibility for risk mitigation. All such context risks could lead to a larger global catastrophe. See also the book Climate Wars: https://www.amazon.com/Climate-Wars-Fight-Survival-Overheats/dp/1851688145
3. Another context risk is that global warming captures almost all the public attention available for global risk mitigation, so other, more urgent risks may get less attention.
4. Impaired cognition. Rising CO2 levels could also impair human intelligence and slow tech progress, as CO2 levels near 1000 ppm are known to have negative effects on cognition.
5. Warming may also result in large hurricanes (hypercanes). They can appear if the sea temperature reaches 50 C, and they would have wind speeds of 800 km/h, which is enough to destroy any known human structure. They would also be very stable and long-lived, thus influencing the atmosphere and creating strong winds all over the world. The highest sea temperature currently is around 30 C. http://en.wikipedia.org/wiki/Hypercane
Warming and other risks
Many people think that runaway global warming constitutes the main risk of global catastrophe. Another group thinks it is AI, and there is no dialog between these two groups.
The level of warming which is survivable strongly depends on our tech level. Some combinations of temperature and humidity are non-survivable for human beings without air conditioning: if the temperature rises by 15 C, half of the population will be in a non-survivable environment (http://www.sciencedaily.com/releases/2010/05/100504155413.htm), because very humid and hot air prevents cooling by perspiration and feels like a much higher temperature. With the current level of tech we could fight it, but if humanity falls back to a medieval level, it would be much more difficult to recover in such conditions.
In fact we should compare not the magnitude but the speed of global warming with the speed of tech progress. If the warming is quicker, it wins. If we have very slow warming but even slower progress, the warming still wins. In general I think that progress will outrun warming, and we will create strong AI before we have to deal with serious global warming consequences.
War could also happen if one country, say the USA, attempts geo-engineering and another country (China or Russia) regards it as a climate weapon which undermines its agricultural productivity. Scientific study and preparation are probably the longest part of geo-engineering; they could and should be done in advance, and they should not provoke war. Then, if a real necessity for geo-engineering appears, all the needed technologies will be ready.
Different predictions

Multiple people predict extinction due to global warming, but they are mostly labeled as alarmists and ignored. Some notable predictions:
1. David Auerbach predicts that in 2100 warming will be 5 C, and that combined with resource depletion and overcrowding it will result in global catastrophe. http://www.dailymail.co.uk/sciencetech/article-3131160/Will-child-witness-end-humanity-Mankind-extinct-100-years-climate-change-warns-expert.html
2. Sam Carana predicts that warming will be 10 C in the 10 years following 2016, and that extinction will happen by 2030. http://arctic-news.blogspot.ru/2016/03/ten-degrees-warmer-in-a-decade.html
3. Conventional predictions of the IPCC give a maximum warming of 6.4 C at 2100 in the worst-case emission scenario and with the worst climate sensitivity: https://en.wikipedia.org/wiki/Effects_of_global_warming#SRES_emissions_scenarios
4. The consensus of scientists is that the climate tipping point will be around 2200: http://www.independent.co.uk/news/science/scientists-expect-climate-tipping-point-by-2200-2012967.html
5. If humanity continues to burn all known carbon sources, it will result in a 10 C warming by 2300. https://www.newscientist.com/article/mg21228392-300-hyperwarming-climate-could-turn-earths-poles-green/ The only scenario in which we are still burning fossil fuels by 2300 (but are not extinct and not a solar-powered supercivilization running nanotech and AI) is a series of nuclear wars or other smaller catastrophes which permit the existence of regional powers that repeatedly smash each other into ruins and then rebuild using coal energy, something like a global nuclear Somalia.
6. Kopparapu says that if current IPCC temperature projections of a 4 degrees K (or Celsius) increase by the end of this century are correct, our descendants could start seeing the signatures of a moist greenhouse by 2100: "Earth on the edge of runaway warming", http://arctic-news.blogspot.ru/2013/04/earth-is-on-the-edge-of-runaway-warming.html
We should give more weight to less mainstream predictions, because they describe the heavy tails of possible outcomes. I think that it is reasonable to estimate the risk of extinction-level runaway global warming in the next 100-300 years at 1 percent, and to act as if it is the main risk from global warming.


The map of (runaway) global warming prevention

The map (http://immortality-roadmap.com/warming3.pdf) is organized along a time axis: low-tech realization in the first half of the 21st century, and high-tech realization in the second half. Its content, in outline form:

No plan: local adaptation to the changing climate (air conditioning, relocation of cities, new crops, irrigation). This could work for small changes of around 2-4 C, but will not work in case of runaway warming.

Plan A. Limitation of greenhouse gases:
- Cutting CO2 emissions: switching from coal to gas, switching to electric cars, more fuel-efficient cars, transition to renewable and solar energy which does not produce CO2 emissions, lower consumption.
- Reduction of emissions of other greenhouse gases: use of lasers to destroy CFC gases; destruction of atmospheric methane by releasing hydroxyl groups or methanophile organisms.
- Capturing CO2 from the atmosphere: reforestation; stimulation of plankton by seeding the ocean with iron; spraying the common mineral olivine, which reacts chemically with CO2 and absorbs it; GMO organisms capable of capturing CO2.
- High-tech versions: waiting until the creation of strong AI; using nanotech and biotech to create new modes of transport and energy sources; using AI to create a comprehensive climate model and calculate impacts; nanorobots using carbon from the air for self-replication; nanomachines in the upper layers of the atmosphere trapping harmful gases.

Plan B. Use of geo-engineering for the reflection of sunlight. Risks: switching it off could result in an immediate rebound of global warming, and it weakens the incentives for cutting emissions.
- Reflection of sunlight in the stratosphere: stratospheric aerosols based on sulfuric acid. Risks: destruction of the ozone layer, acid rain. The expected price may be close to zero if cheap plane fuel can be used.
- Increase of the albedo of the Earth's surface (https://en.wikipedia.org/wiki/Reflective_surfaces_(geoengineering)): reflective construction technologies, crops with high albedo, foam on the ocean, covering deserts with reflective plastic, huge airships, reflective thin film in the upper atmosphere.
- Increase of cloud albedo over the sea by injecting sea-water-based condensation nuclei (https://en.wikipedia.org/wiki/Cloud_reflectivity_modification); about 1500 ships could do it, and the spray itself would reach the upper atmosphere. Risks: the water turns into vapor, which is itself a greenhouse gas; clouds at the wrong altitude can lead to heating. See also https://en.wikipedia.org/wiki/Solar_radiation_management#Weaponization
- Space solutions: mirrors in space; spraying moon dust (explosions on the Moon could create a cloud of dust); construction of factories on the Moon producing umbrella satellites; lenses or smart dust at the L1 point; deflection of an asteroid into the Moon so that the impact creates a dust cloud in lunar orbit which screens solar radiation; self-replicating robots building mirrors in space.

Plan C. Urgent measures to stop global warming: artificial explosions of volcanoes to create an artificial volcanic winter; an artificial nuclear winter created by explosions in taiga forests and in coal beds. Plan C could be realized within half a year with the help of already existing nuclear weapons.

Plan D. Escape: escape into high mountains like the Himalayas, or to Antarctica; high-tech air-conditioned refuges; space stations; colonization of other planets; uploading into AI.

Types of warming and their consequences:
- New Ice Age: large economic problems; millions of people would die.
- No warming: useless waste of money, time and attention on fighting global warming.
- Warming according to IPCC predictions (2-4 C by the end of the 21st century): large economic problems; hurricanes, famine, sea level change; millions of people would die.
- Catastrophic warming of 10-20 C: large parts of the land become uninhabitable, people move to the poles and into the mountains; civilizational collapse.
- Catastrophic warming with a new climate equilibrium at 55 C (40 C of warming): small groups of people could survive on very high mountains, like the Himalayas.
- Venusian runaway warming (mild), with mean temperature above 100 C: some forms of life survive on mountain tops.
- Venusian runaway warming (strong, with all water evaporating), with mean temperature above 1600 C: life on Earth irreversibly ends.

Chapter 7. The anthropogenic risks which are not connected with new technologies
Exhaustion of resources
The problems of exhaustion of resources, population growth and environmental pollution form a systemic problem, and in this quality we will consider it further. Here we will consider only whether each of these factors separately could lead to the extinction of mankind.
A widespread opinion is that technogenic civilization is doomed because of the exhaustion of readily available hydrocarbons. In any case, this in itself will not result in the extinction of all mankind, since earlier people lived without oil. However, there will be serious problems if oil ends sooner than society has time to adapt to it, that is, if it ends quickly. Coal stocks are considerable, and the technology of making liquid fuel from coal was actively applied in Hitler's Germany. Huge stocks of methane hydrate lie on the sea floor, and effective robots could extract them. And wind energy, conversion of solar energy and the like are, with existing technologies, on the whole sufficient to maintain the development of civilization, though a certain decrease in the standard of living is possible, and in the worst case a considerable decrease in population, but not full extinction.
In other words, the Sun and the wind contain energy which exceeds the requirements of mankind thousands of times, and on the whole we understand how to extract it. The question is not whether we will have enough energy, but whether we will have time to put the necessary capacities into operation before the shortage of energy undermines the technological possibilities of civilization under an adverse scenario.
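A rough order-of-magnitude check of the "thousands of times" claim (the input numbers are standard textbook estimates, my assumption, not figures taken from this book):

```python
# Rough order-of-magnitude comparison; the inputs are common textbook estimates.
solar_intercepted_tw = 173_000   # solar power intercepted by the Earth, terawatts
wind_dissipated_tw = 900         # rough total wind power dissipated in the atmosphere, TW
human_primary_tw = 18            # world primary energy consumption, ~18 TW (~570 EJ/yr)

print(solar_intercepted_tw / human_primary_tw)   # ~10,000x current human consumption
print(wind_dissipated_tw / human_primary_tw)     # ~50x, before any practical losses
```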
It may seem to the reader that I underestimate the problem of resource exhaustion, to which a large number of books (Meadows, Parkhomenko), studies and Internet sites (in the spirit of www.theoildrum.com) are devoted. Actually, I do not agree with many of these authors, as they start from the premise that technical progress will stop. Let us pay attention to recent research in the field of energy supply: in 2007 the USA began industrial production of solar cells at a cost of less than 1 dollar per watt, which is half the cost of energy from a coal power station, not counting fuel. The quantity of wind energy which can be extracted from ocean shoals in the USA is 900 gigawatts, which covers all the electricity requirements of the USA. Such a system would give a uniform stream of energy because of its large size. The problem of storing surplus electric power is solved by pumped storage at hydroelectric power stations and by the development of powerful accumulators and their distribution, for example, in electric cars. A large amount of energy can be extracted from sea currents, especially the Gulf Stream, and from underwater deposits of methane hydrates.
Besides, the end of resource exhaustion lies beyond the forecast horizon which is established by the rate of scientific and technical progress. (But the moment of the change of tendency, Peak Oil, is within this horizon.)
One more variant of global catastrophe is poisoning by the products of our own activity. For example, yeast in a bottle of wine grows exponentially and is then poisoned by the products of its own decomposition (alcohol), and dies out to the last cell. This process could also take place with mankind, but it is not known whether we can pollute and exhaust our habitat so much that this alone would lead to our complete extinction. Besides energy, people need the following resources:
- Materials for manufacturing: metals, rare-earth elements, etc. Many important ores may be exhausted by 2050. However, materials, unlike energy, do not disappear, and with the development of nanotechnology full recycling of waste becomes possible, as well as extraction of necessary materials from sea water, where a large quantity of, for example, uranium is dissolved, and even transportation of the necessary substances from space.
- Food. According to some data, the peak of food production has already passed: soils are disappearing, urbanization swallows the fertile fields, the population grows, fish stocks are running out, the environment is polluted by waste and poisons, there is not enough water, and pests spread. On the other hand, a transition to an essentially new industrial type of food production is possible, based on hydroponics, that is, the cultivation of plants in water, without soil, in closed greenhouses, which protects crops from pollution and parasites and is completely automated (see Dmitry Verkhoturov's and Kirillovsky's article "Agrotechnologies of the future: from the plowed field to the factory"). Finally, margarine and, probably, many other necessary components of food can be produced from oil at chemical plants.
- Water. It is possible to provide drinking water by desalination of sea water; today it costs about a dollar per ton, but the great bulk of water goes to crop cultivation, up to a thousand tons of water per ton of wheat, which makes desalination unprofitable for agriculture (see the rough calculation after this list). But with a transition to hydroponics, water losses to evaporation will sharply decrease, and desalination may become profitable.
- Place for life. Despite the fast growth of the population of the Earth, it is still far from the theoretical limit.
- Pure air. Already now there are air conditioners which clear the air of dust and raise its oxygen content.
- Exceeding the global hypsithermal limit.
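The rough calculation referenced in the water item above (the $1 per ton and 1,000 tons figures are from the text; the wheat price is my own illustrative assumption):

```python
# Inputs from the text: ~$1 per ton of desalinated water, ~1,000 tons of water
# per ton of wheat. The wheat price is an illustrative assumption of mine.
water_cost = 1.0              # dollars per ton of desalinated water
water_per_ton_wheat = 1000.0  # tons of irrigation water per ton of wheat
wheat_price = 200.0           # assumed market price, dollars per ton of wheat

irrigation_bill = water_cost * water_per_ton_wheat
print(irrigation_bill, irrigation_bill / wheat_price)  # $1000 of water, ~5x the crop's value
```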
Artificial Wombs

The technological revolution introduces the following factors into population growth:
- An increase in the number of beings to which we attribute rights equal to those of humans: monkeys, dolphins, cats, dogs.
- Simplification of the birth and upbringing of children: possibilities of reproductive cloning, creation of artificial mothers, household robot assistants, etc.
- The appearance of new mechanisms claiming human rights and/or consuming resources: cars, robots, AI systems.
- Possibilities of life extension and even revival of the dead (for example, by cloning from preserved DNA).
- Growth of the "normal" consumption level.

Crash of the biosphere


If people master genetic technologies, this will make it possible both to arrange a crash of the biosphere on an improbable scale and to find resources for its protection and repair. It is possible to imagine a scenario in which the whole biosphere is so contaminated by radiation, genetically modified organisms and toxins that it becomes incapable of meeting mankind's need for food. If this occurs suddenly, it will put civilization on the edge of economic collapse. However, a sufficiently advanced civilization could set up food production in a certain artificial biosphere, like greenhouses. Hence, a crash of the biosphere is dangerous only if civilization subsequently rolls back to an earlier stage, or if the crash of the biosphere itself causes this rollback.
But the biosphere is a very complex system in which self-organized criticality and a sudden collapse are possible. A well-known story is the extermination of sparrows in China and the subsequent problems with food because of the invasion of pests. Another example: corals are now perishing worldwide because sewage carries bacteria which harm them.


Chapter 8. Artificial Triggering of Natural Catastrophes


There are a number of natural catastrophes which could potentially be triggered by
the use of powerful weapons, especially nuclear weapons. These include: 1) initiation of a
supervolcano eruption, 2) dislodging of a large part of the Cumbre Vieja volcano on La
Palma in the Canary Islands, causing a megatsunami, 3) possible nuclear ignition of a gas
giant planet or the Sun. The first is the most dangerous and plausible, the second is
plausible but not dangerous to the whole of mankind, and the third is currently rather
speculative, but in need of further research. Besides these three risks, there is also the risk
of asteroids or comets being intentionally redirected to impact the Earth's surface and the
risk of destruction of the ozone layer through an auto-catalyzing reaction. These five risks
and some closely related others will be examined in this chapter.
We begin with the Cumbre Vieja risk, as it is the most studied among these
possibilities and serves as an illustrative example. According to heavily contested claims,
there may be a block with a volume between 150 km3 and 500 km3 on the Cumbre Vieja volcano on La Palma in the Canary Islands which, if dislodged, would cause waves of 3-8 m (10-26 ft, in the case of a 150 km3 collapse) to 10-25 m (33-108 ft, in the case of a 500 km3 collapse) to hit the North American Atlantic seaboard, causing massive destruction, with waves traveling up to 25 km (16 mi) inland [1]. All the assumptions underlying this scenario have been heavily questioned, including the most basic, that there is an unstable block to begin with [2,3]. According to the researchers (Ward & Day) who made the original claims, there is a 30 km (17 mi) fissure along Cumbre Vieja. Quoting Ward and Day: "The unstable block above the detachment extends to the north and south at least 15 km." The evidence for this detachment is not based on any obvious surface feature but rather on an analytic argument by Day et al. which incorporates evidence such as vent activity. As such, the statement has been questioned by critics, who say that the evidence does not imply such a large detachment and that at most the detachment is 3 km (1.8 mi) in length.
We leave detailed discussion of the Cumbre Vieja volcano to the referenced works; our point is to establish an archetype for the category of artificially triggered natural risk. If the volcano exploded, or a nuclear bomb were detonated in the right place, it could send a large chunk of land from La Palma into the ocean, creating a mega-tsunami that would sink ships, impact the East Coast and cost many lives. Saying with confidence that this would certainly not occur is not possible now, because the case has not been investigated thoroughly enough. Unfortunately, the other risks which we
discuss in this chapter have been analyzed even less. It is important to highlight them,
however, so that future research can be prompted.
Yellowstone Supervolcano Eruption
A less controversial claim than the status of the Cumbre Vieja volcano is that there is
a huge, pressurized magma chamber beneath Yellowstone National Park in Wyoming.
Others have gone on to suggest, off the record, that it would blow if its cap were destroyed
by a nuclear weapon. No geologists have publicly addressed the possibility, but it is entirely
consistent with their statements on the pressure of the magma chamber and its depth [4]. The magma chamber is 80 km (50 mi) long and 40 km (24 mi) wide, and has 4,000 km3 (960 cu mi) of underground volume, of which 10-30% is filled with molten rock. The top of the chamber is 8 km (5 mi) below the surface, the bottom about 16 km (10 mi) below. That
means that anything which could weaken or annihilate the 8 km (5 mi) cap could release
the hyper-pressurized gases and molten rock and trigger a supervolcanic eruption, causing
the loss of millions of lives.
Before we review the effects of a supervolcano eruption, it is worth considering the
depth penetration of nuclear explosions. During Operation Plowshare, an experiment in the
peaceful use of nuclear weapons, craters 100 m (320 ft) deep were created. Simplistically
speaking, this means that 80 similar weapons would be needed to burrow all the way down
to the Yellowstone Caldera magma chamber. Realistically speaking, fewer would be
needed, since deep nuclear explosions cause collapses which reduce overhead pressure.
Our estimate is that just 10 ten-megaton nuclear explosions or fewer would be sufficient to
connect the magma chamber to the surface. If there are solid boundaries between the
explosion cavities, they could be bridged by a drilling machine. That would release the
pressure and allow the magma to explode to the surface. You might wonder what sort of
people would have the motivation to do such a thing. America's enemies, for one, but there
are other possibilities. We explore the general case in a later chapter.
For now, let's review the effects of a supervolcano eruption. Like many of the risks
discussed in this book, they would not be likely to kill all of humanity, but only a few
hundreds of millions of people. It's in combination with other risks that the supervolcano

risk becomes a threat to the human species in general. Regardless, it would be cataclysmic.
Specifically: a supervolcano eruption on the scale of Yellowstone's prior events would
eject about 1,000 km3 (250 cu mi) of molten rock into the sky, creating ash plumes which
cover as much as two-thirds of the United States in a layer of ash a foot thick, making it
uninhabitable for decades. 80,000 people would be killed instantly, and hundreds of
millions more would die over the subsequent months due to lack of food and water in the
wake of the eruption. Ash would kill all plants on the surface, causing the death of any
animals not able to survive on detritus or fungus. The average world temperature would
drop by several degrees, causing catastrophic crop failures and hundreds of millions of
deaths by starvation. The situation would be similar to that after a nuclear war, with the
ameliorating factor that volcanic aerosols would persist in the atmosphere for less time
than nuclear aerosols, due to greater average particle size. Therefore, a volcanic winter
would be shorter than a nuclear winter, although the acute effects would be far worse.
Nuclear war would barely affect many inland areas, but a Yellowstone supereruption would
cover them uniformly in ash. As a world power, the United States would be done for.
A foot-thick layer of ash pretty much puts an end to all human activity. USGS geologist Jake Lowenstern is quoted as saying that a Yellowstone supereruption would deposit a layer of ash at least 10 cm (4 in) thick for a radius of 500 miles around Yellowstone [5]. This is somewhat less than the 10-foot-thick layer of ash claimed in disaster documentaries and in other sensationalist sources, but still more than enough to heavily disrupt activity
across a huge area. The effects would be similar to the catastrophe outlined in the nuclear
weapons chapter, but even more severe. Naturally, water would still be available from wells
and streams, but all crops and much game would be ruined, leaving people completely
dependent upon canned and other stored food. The systems that provide power would be
covered in dust, requiring weeks to months to repair. More likely than not, civil order would
completely collapse across the affected area, making repairs to power systems difficult and
piecemeal. Of course, if the victims don't prepare enough canned food, they can always
eat one another, which is the standard (and rarely spoken of) outcome in this kind of
disaster scenario.


Still, little of this directly matters in the context of this book, since such an event, while
severe, does not threaten humanity as a whole. Although worldwide temperatures would
drop by a few degrees, and the continental US would be devastated, the world population
as a whole would survive and live on, guaranteeing humanity's future. It's still worth noting
this scenario because 1) there may be multiple supervolcanoes worldwide which could be
triggered simultaneously, which either individually or concurrently with nuclear weapons
could cause a volcanic/nuclear winter so severe that no one survives it, 2) it is an
exacerbating factor which could add to the tension of a World War or similar scenario. A
strike on a supervolcano would be deadlier than a strike on a major city, and could
correspondingly raise the stakes of any international conflict. Threatening to attack a
supervolcano with a series of ICBMs could be a potential blackmailing strategy in the
darkest hours of a World War.
Asteroid Bombardment
A risk which has more potential to be life-ending than a supervolcano eruption is that
of intentionally-directed asteroid bombardment, which could be quite dangerous to the
planet indeed. Directing an asteroid of sufficient size towards the Earth would require a
tremendous amount of energy, orders of magnitude greater than mankind's current total
annual energy consumption, but it could eventually be done, perhaps with the tools
described in the nanotechnology chapter.
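A rough sketch of the energy scale involved (the asteroid parameters and delta-v are my own illustrative assumptions, chosen only to show the order of magnitude, not to reproduce any published deflection study):

```python
import math

# Illustrative assumptions: a Ganymed-sized body and a delta-v large enough
# to change its orbit substantially.
diameter = 33_000.0        # m, roughly the size of 1036 Ganymed
density = 2_500.0          # kg/m^3, typical rocky asteroid
delta_v = 1_000.0          # m/s, assumed orbit-changing velocity change

mass = density * math.pi / 6 * diameter ** 3        # ~4.7e16 kg
energy = 0.5 * mass * delta_v ** 2                  # ~2.4e22 J

world_energy_per_year = 5.7e20                      # J/yr, ~18 TW of primary power
print(energy / world_energy_per_year)               # ~40 years of world energy use
```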
The Earth's orbit already places it in some danger of being hit by a deadly asteroid. 65 million years ago, an asteroid between 5 km (3 mi) and 15 km (9 mi) across impacted the Earth, causing the extinction of the dinosaurs. There is an asteroid, 1950 DA, 1 km (0.6 mi) in diameter, which scientists say has a 0.3 percent chance of impacting Earth in 2880 [6]. The dinosaur-killer impact was so severe that its blast wave ignited most of the forests in North America, destroying them and making fungus the dominant species for several years after the impact [7]. Despite this, some human-sized species survived, including various turtles and
alligators. On the other hand, many human-sized and larger species were wiped out,
including all non-avian dinosaurs. More research is needed to determine whether a
dinosaur-killer-class asteroid would be likely to wipe out humanity in its entirety, taking into
account our ability to take refuge in bunkers with food and water for decades or possibly


even centuries at a time. Detailed studies have not been done, and we were only able to
locate one paper on the topic [8].
In the geological record, there are asteroid impacts up to half the size of the
Chicxulub impactor which wiped out the dinosaurs, which are known not to have caused
mass extinctions. The Chicxulub impactor, on the other hand, caused a mass extinction
that destroyed about three-quarters of all living plant and animal species on Earth. It seems
fair to assume that an asteroid needs to be at least as large as the Chicxulub impactor to
have a chance of wiping out humanity, and probably significantly larger.
Sometimes, asteroids with a diameter greater than 10 km (6 mi) are called life-killer asteroids, though this is an exaggeration. At least one asteroid of this size has impacted the Earth during the last 600 million years and not wiped out multicellular life, though it did burn down the entire biosphere. The two largest impact craters known, with a
diameter of 300 and 250 km (186 and 155 mi) respectively, correspond to impacts which
occurred before the evolution of multicellular life. The Chicxulub crater, with a diameter of
180 km (112 mi), is the third-largest impact crater on Earth which is definitively known, and
the only known major asteroid to hit the planet after the rise of complex, multicellular life.
Craters of similar size from the last 600 million years cannot be definitively identified, but
that does not mean that such an impact has not occurred. The crater could very well have
been on part of the Earth's surface that has since been subsumed beneath a continental
plate, or could be camouflaged by surface features. There is one possible crater of even
larger size, the Shiva crater, which, if real (it is highly contested) is 600 km (370 miles)
long by 400 km (250 mi) wide, and may correspond to the impact of a 40 km (25 mi) sized
object. If this were confirmed, it would substantially increase the necessary size for a
human-killer asteroid, but since it is highly contested, it does not count, and we ought to
assume that an asteroid just modestly larger than the Chicxulub impactor, say near the top of its probable size range, 15 km (9 mi), could potentially wipe out the human species, just as it did the dinosaurs. Comets have somewhat greater speed than asteroids, due to their
origin farther out in the solar system, and can be correspondingly smaller than asteroids
but still do equivalent damage. It is unknown whether the Chicxulub impactor was an
asteroid or a comet. To put it in perspective, an asteroid 10 km (6 mi) across is similar in size to Mt. Everest, but with greater mass (due to its spherical, rather than conical, shape).
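A small geometric check of this claim (my own arithmetic): a cone with the same base diameter and the same height as a sphere contains exactly half the sphere's volume, so a roughly conical mountain of Everest's dimensions holds much less rock than a 10-km sphere.

```python
import math

d = 10_000.0   # meters: asteroid diameter, also taken as the cone's base diameter and height
r = d / 2

sphere = 4 / 3 * math.pi * r ** 3      # volume of a 10-km sphere, ~5.2e11 m^3 (~524 km^3)
cone = 1 / 3 * math.pi * r ** 2 * d    # cone with the same base radius and height
print(sphere / cone)                   # exactly 2.0: the sphere holds twice the rock
```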

Restricting our search to asteroids of size 15 km (9 mi) or larger, we might wonder


where objects of this size may be found. Among 1,006 potentially hazardous asteroids (PHOs) classified by NASA, the largest is 4.75 × 2.4 × 1.95 km, with a mass of 5.0 × 10^13 kg. This is rather large, but not large enough to be categorized as a potential humanity-killer, according to our analysis. (Furthermore, this object has a relatively low density, which would make its impact even slighter than its size alone would suggest.) Moving on to near-Earth objects (NEOs), asteroids which orbit in Earth's neighborhood but are not at risk of impact, more than 10,713 are categorized by NASA. Over 1,000 are estimated to have a diameter greater than 1 km (0.6 mi). The largest, 1036 Ganymed, has an estimated diameter of 32-34 km (20-21 mi), more than enough to qualify as a potential humanity-killer. Like all impactors of similar size, it would eject a huge amount of rock and dust into the upper atmosphere, which would heat up into molten rock upon reentry, broiling the surface upon its return. At a distance of 800 km (~500 mi), the ejecta would take roughly 8 minutes to arrive. According to one source, the impact would result in an impact winter with a temperature drop of 13 Kelvin after 20 days, rebounding by about 6 K after a year, at which point one-third of the Northern Hemisphere would be covered in ice [9]. In addition, a sufficiently large
impact would create a mantle plume at the antipodal point (opposite side of the planet),
which would cause a supervolcanic eruption and a volcanic winter all on its own. The
combination of an impact and a supervolcanic eruption could cause a temperature drop
even greater than 13 Kelvin. Impactors greater than 3 km (1.2 mi) in diameter are likely to
create global firestorms, probably burning up a majority of the world's dense forests [10, 11].
The smoke from the forest fires would cause the impact winter to last even longer, creating
even thicker ice sheets at lower latitudes which would be even more omnicidal. Still,
according to our best guess, a decade of food and underground refuges would be enough
for humans to survive an impactor of equivalent size to the Chicxulub impactor. We are not aware of any studies of the civilizational outcome of a Ganymed-sized impactor, but it is fair to say the result would be very bad, though thankfully such an impact is very unlikely.
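As a sanity check on the figure of roughly 8 minutes quoted above, the Python sketch below (our own simplified calculation, not taken from the cited source) treats the ejecta as simple ballistic projectiles launched at 45 degrees over a flat Earth.

import math

g = 9.81                          # m/s^2
distance = 800e3                  # m from the impact site (the figure quoted above)
theta = math.radians(45)          # minimum-energy launch angle in the flat-Earth limit

# Ballistic range R = v^2 * sin(2*theta) / g, solved for the launch speed v
v = math.sqrt(distance * g / math.sin(2 * theta))
flight_time = 2 * v * math.sin(theta) / g

print(f"Launch speed: {v / 1000:.1f} km/s")          # ~2.8 km/s
print(f"Flight time:  {flight_time / 60:.1f} min")   # ~6.7 min; Earth curvature and steeper
                                                     # lofting push this toward ~8 minutes

The simplified result lands in the same range as the quoted figure, which is all such a back-of-envelope check can show.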
There are several reasons it seems unlikely that a single asteroid impact could wipe
out humanity. The foremost is that, unlike the dinosaurs, some minority of us could seal ourselves in caves with a pleasant climate and live for dozens if not hundreds of years on stored food alone. If supplied with a nuclear reactor, a metal shop, a large supply of light
bulbs, a reliable water source, and means of disposing of waste, people could even grow
plants underground and extend their stay for the life of the reactor, which could be 100
years or longer. A thick ice sheet forming on top of the bunker could kill all life inside, by
denial of oxygen, but such ice sheets are unlikely to form in the tropics, where billions of
people live today. Perhaps a greater risk would be a series of artificially engineered
asteroid impacts, 50-100 years apart, in a campaign designed to continue for thousands of years. It seems
more likely that an unnatural scenario such as that could actually wipe out all human life,
but also seems correspondingly more difficult to engineer. It could become possible during
the late 21st century and beyond, however.
Deliberately redirecting an asteroid whose orbit keeps it relatively far from the Earth, say 0.3
astronomical units (AU) in the case of 1036 Ganymed, would require a tremendous amount
of energy, even by the standards of high energy density MNT-built machinery. Redirecting
an object of that size by a substantial amount would require many tens of thousands of
years and a corresponding number of orbits. If someone had an objective to destroy a
target on Earth, it would be much simpler to blow it up with a nuclear weapon or direct
sunlight from a space mirror to vaporize it rather than drop an asteroid on it. It would be
even far easier to pick up a mountain, launch it off the surface of the Earth, orbit it around
the Earth until it picked up sufficient energy, then drop it on the Earth, rather than
redirecting a distant object. For this reason, a man-engineered asteroid impact seems like
an unlikely risk to humanity for the foreseeable future (tens of thousands of years), and an
utterly negligible one for the time frame under consideration, the 21st century. Of course,
the probability of a natural impact of this size in the next century is minute.
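To give a feel for the energies involved, the following back-of-envelope Python sketch estimates the kinetic energy needed to redirect an object the size of 1036 Ganymed. The stony density and the delta-v of 1 km/s are our own assumptions, used only as a crude stand-in for "substantially" changing the orbit.

import math

diameter = 33e3        # m, roughly the 32-34 km quoted above
density = 2700.0       # kg/m^3, assumed stony composition
delta_v = 1000.0       # m/s, assumed stand-in for a substantial orbit change

radius = diameter / 2
mass = (4.0 / 3.0) * math.pi * radius ** 3 * density
energy = 0.5 * mass * delta_v ** 2        # kinetic energy of the velocity change

MEGATON = 4.184e15                        # joules per megaton of TNT
print(f"Ganymed mass (est.): {mass:.2e} kg")
print(f"Energy for 1 km/s:   {energy:.2e} J (~{energy / MEGATON:.1e} Mt of TNT)")
# ~10^22 J, millions of megatons -- far beyond any arsenal, which is why only slow,
# multi-orbit techniques acting over very long times could plausibly do the job.

With these inputs the required energy is millions of megatons of TNT, consistent with the claim that redirection would take enormous energy or enormous time.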
Runaway Autocatalytic Reactions
Very few people know that if the concentration of deuterium (a heavy isotope of hydrogen; water made with it is known as heavy water) in the world's oceans were just 22 times higher than it is, it would be possible to ignite a self-sustaining nuclear reaction that would vaporize the seas and the Earth's crust along with them. The oceans contain one deuterium atom for every 6,247 hydrogen atoms, and the critical threshold for a self-sustaining nuclear chain reaction is one atom per 300 [12]. It may
be that there are deposits of ice or heavy water with the required concentration of
deuterium. Heavy water has a slightly higher melting point than normal water and thus
concentrates during natural processes. Even a cube of heavy-water ice just 100 m (330 ft)
on a side, small in terms of geologic deposits, could release energy equivalent to many
gigatons of TNT if ignited, greater than the largest nuclear bombs ever detonated. The
reason we do not observe such events in other star systems may be that artificial nuclear explosions are needed to trigger them, or that the required concentrations of deuterium do not exist naturally. More research on the topic is needed. There is even a theory that the Moon formed during a natural nuclear explosion from concentrated uranium at the core-mantle boundary [13]; if so, we may not yet have observed such an event in other star systems simply because it is sufficiently uncommon.
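Two of the numbers above can be checked with simple arithmetic. The Python sketch below, which uses our own assumed constants for the D-D reaction and for the density of heavy-water ice, verifies the roughly 20-fold enrichment factor between seawater and the claimed critical ratio, and the "many gigatons" estimate for a 100 m cube of heavy-water ice under the assumption that its deuterium burns completely.

AVOGADRO = 6.022e23
MEV = 1.602e-13                # joules per MeV
GIGATON = 4.184e18             # joules per gigaton of TNT

# 1) How much would seawater deuterium need to be enriched to reach the claimed threshold?
ocean_ratio = 1 / 6247.0       # deuterium atoms per hydrogen atom in the ocean
critical_ratio = 1 / 300.0     # claimed self-sustaining threshold
print(f"Enrichment factor needed: {critical_ratio / ocean_ratio:.0f}x")   # ~21x

# 2) Energy in a 100 m cube of heavy-water ice, assuming every deuteron burns via D-D fusion
volume = 100.0 ** 3            # m^3
density = 1000.0               # kg/m^3, rough figure for ice (assumed)
molar_mass_d2o = 0.020         # kg/mol
deuterons = 2 * (volume * density / molar_mass_d2o) * AVOGADRO
energy_per_reaction = 3.65 * MEV      # rough average of the two D-D branches (assumed)
energy = (deuterons / 2) * energy_per_reaction

print(f"Energy release: {energy:.1e} J  (~{energy / GIGATON:.0f} gigatons of TNT)")

The result is on the order of thousands of gigatons, far above the roughly 0.05 gigatons of the largest bomb ever detonated.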
It should be possible to go looking for higher-than-normal concentrations of deuterium
in Arctic ice deposits. These studies should be pursued out of an abundance of caution. If
such deposits are found, this would provide more evidence for the usefulness of space
stations distant from the Earth as an insurance policy against geogenic human extinction
triggered by a nuclear chain reaction in geologic deposits. Other runaway autocatalytic reactions may also be possible, threatening structures such as the ozone layer. These possibilities should be studied in greater detail by geochemists. Dangerous
thresholds of deuterium may exist in the icy bodies of the outer solar system, on Mars, or
among the asteroids. The use of nuclear weapons in space should be restricted until these
objects are thoroughly investigated for deuterium levels.
Gas Giant Ignition
The potential ignition of gas giants is of sufficient importance that it merits its own
section. The first reaction to such an idea is that it sounds utterly crazy. There is a reflexive
search for a rationalization, a quick rebuttal, to put such an implausible idea out of our
heads. Problematically, however, many of the throwaway rebuttals do not hold up under scrutiny, and if we are going to rule out this possibility decisively, more work needs to be done [14]. Specifically, we are talking about a nuclear chain reaction being
triggered by a nuclear bomb detonated in a deuterium-rich cloud layer of a planet like
Jupiter, Saturn, Uranus, or Neptune. Even a deuterium-rich pocket only several kilometers in diameter might, if ignited, be enough to set off a planet-wide reaction that would sterilize the solar system.
For ignition to be a danger, there needs to be a pocket in a gas giant where the level
of deuterium per normal hydrogen atom is at least 1:300. That is all it takes. A nuclear
explosion could then theoretically start a self-reinforcing nuclear chain reaction. If all the
deuterium in the depths of Jupiter were somehow ignited, it would release energy
equivalent to roughly 3,000 years of the Sun's luminosity, released over a few tens of seconds, enough to melt the first few kilometers of the Earth's crust and penetrate much deeper with x-rays. Surviving this would be extremely difficult, though machine-based life forms buried deeply underground could possibly do it. If ignition turns out to be possible, the threat of blowing up a gas giant could be used as a blackmail tool by someone seeking to force all of humanity to do what they want.
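The 3,000-year figure can be reproduced to within an order of magnitude with a back-of-envelope estimate. The Python sketch below uses our own assumed values for Jupiter's hydrogen mass fraction and for the energy released per deuteron if the D-D chain burns through to helium-4; it is illustrative only.

JUPITER_MASS = 1.898e27       # kg
H_MASS_FRACTION = 0.74        # assumed hydrogen mass fraction of Jupiter
D_PER_H = 26e-6               # deuterium abundance quoted above
PROTON_MASS = 1.67e-27        # kg
MEV = 1.602e-13               # J
ENERGY_PER_D = 7.2 * MEV      # assumed yield per deuteron for D-D burning through to helium-4
SOLAR_LUMINOSITY = 3.83e26    # W
YEAR = 3.156e7                # s

hydrogen_atoms = JUPITER_MASS * H_MASS_FRACTION / PROTON_MASS
deuterons = hydrogen_atoms * D_PER_H
total_energy = deuterons * ENERGY_PER_D

years = total_energy / (SOLAR_LUMINOSITY * YEAR)
print(f"Total energy: {total_energy:.1e} J")
print(f"Equivalent to roughly {years:.0f} years of the Sun's output")
# ~2,000 years with these inputs -- the same order of magnitude as the figure above

With these assumptions the answer comes out near 2,000 years of solar output, which is consistent with the order of magnitude quoted in the text.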
The average deuterium concentration of Jupiter's atmosphere is low, about 26 deuterium atoms per million hydrogen atoms, similar to what is thought to be the primordial ratio of deuterium
created shortly after the Big Bang. Although this is a small amount, there may be chemical
and physical processes which concentrate deuterium in parts of the atmosphere of Jupiter.
Natural solar heating of ice in comets leads to isotope separation and greater local
concentrations, for instance. The scientific details of how isotope separation can occur, and what isotope concentrations are reached in the interiors of gas giants, are poorly understood.
If the required deuterium concentrations exist, there are additional reasons to be
concerned that a runaway reaction could be initiated in a gas giant. One is the immense
pressure and opacity of the gas giants beneath the cloud layer. Some 10,000 km beneath Jupiter's cloud tops, the pressure is about 1 million bar. For comparison, the pressure at the core
of the Earth is 3 million bar. Deep in the bowels of gas giants, hydrogen changes to a
different phase called metallic hydrogen, where it is so compressed that it behaves as an
electrical conductor, and would be considerably more opaque than normal hydrogen. The
pressure and opacity would help contain any nuclear reaction and ensure it gets going
before fizzling out. Another factor to consider is that a gas giant has plenty of fusion fuel
which has never been involved in nuclear reactions, elements like helium-3 and lithium.
This makes it potentially more ignitable than a star.
There are runaway fusion reactions which occur in astronomical bodies in nature. The most obvious is a thermonuclear (Type Ia) supernova, in which a white dwarf ignites and fuses a large amount of fuel very quickly. Another example is the helium flash, in which a red giant with a degenerate helium core crosses a critical threshold and a large fraction of the helium in its core fuses in a matter of seconds. This briefly raises the power output of the star's core to about 10^11 solar luminosities, comparable to the luminosity of an entire galaxy. Jupiter, which is about a thousand
times less massive than the Sun, would still sterilize the solar system if any substantial portion of its helium fused. Helium makes up roughly 10 percent of Jupiter's atoms, or about a quarter of its mass.
The arguments for and against the likelihood of planetary ignition are complicated,
and require some knowledge of nuclear physics. For this reason, we will conclude
discussion of the topic here and point to some key references. Besides the risk of planetary
ignition, there has also been some discussion of the possibility of solar ignition, but this has
been more limited. As with planetary ignition, it is tempting to dismiss the possibility
without considering the evidence, which is a danger.
Deep Drilling and Geogenic Magma Risk
Returning to the realm of Earth, it may be possible to dig very deeply, creating an artificial supervolcano even more energetic than Yellowstone. We should always remember that 1,792 mi (2,885 km) beneath us is a molten, pressurized, liquid core containing dissolved gases. If even a tiny fraction of that energy could be released to the surface, it could wipe out all life. This is especially concerning in light of recent proposals to send a probe to the core-mantle boundary. Although the liquid iron in the core would be too
heavy to rise to the surface on its own, the gases in the core could eject it through a
channel, like opening the cork of a champagne bottle. If even a small channel were flooded
with pressurized magma, it could make the fissure larger by melting its walls all the way to
the top, until it became large enough to eject a substantial amount of liquid iron.
A scientist at Caltech has devised a proposal for sending a grapefruit-sized
communications probe to the Earth's core, by creating a crack and pouring in a huge
amount of molten iron. This would slowly melt its way downwards, taking about a week to
travel the 1,792