The Singularity is the technological creation of smarter-than-human intelligence. Several technologies are often mentioned as heading
in this direction: Artificial Intelligence, direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, and
ultra-high-resolution scans of the brain followed by computer emulation. Some of these technologies seem likely to arrive much earlier
than the others, but there are nonetheless several independent technologies all heading in the direction of the Singularity – several differ-
ent technologies which, if they reached a threshold level of sophistication, would enable the creation of smarter-than-human intelligence.
A future that contains smarter-than-human minds is genuinely different in a way that goes beyond the usual visions of a future filled
with bigger and better gadgets. Vernor Vinge originally coined the term “Singularity” in observing that, just as our model of physics
breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to
model a future that contains entities smarter than human.
Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology
can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more ef-
fective at building still smarter minds. This loop appears most clearly in the example of an Artificial Intelligence improving its own
source code, but it would also arise, albeit initially on a slower timescale, from humans with direct brain-computer interfaces creating
the next generation of brain-computer interfaces, or biologically augmented humans working on an Artificial Intelligence project.
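As a purely illustrative toy model – ours, with an arbitrary gain parameter, not drawn from any particular analysis – the shape of this feedback loop can be sketched in a few lines of Python: if each generation of minds improves the next in proportion to its own ability, the result is geometric growth rather than steady linear progress.

    # Illustrative toy model of the feedback loop, not a prediction.
    # 'gain' is arbitrary; only the shape of the curve matters here.
    def feedback(ability=1.0, gain=0.5, generations=10):
        history = [ability]
        for _ in range(generations):
            ability += gain * ability  # smarter minds build still smarter minds
            history.append(ability)
        return history

    print(feedback())  # 1.0, 1.5, 2.25, 3.375, ... – geometric growth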
Some of the stronger Singularity technologies, such as Artificial Intelligence and brain-computer interfaces, offer the possibility of
faster intelligence as well as smarter intelligence. Ultimately, speeding up intelligence is probably comparatively unimportant next to
creating better intelligence; nonetheless the potential differences in speed are worth mentioning because they are so huge. Human neu-
rons operate by sending electrochemical signals that propagate at a top speed of 150 meters per second along the fastest neurons. By
comparison, the speed of light is 300,000,000 meters per second, two million times greater. Similarly, most human neurons can spike
a maximum of 200 times per second; even this may overstate the information-processing capability of neurons, since most modern
theories of neural information-processing call for information to be carried by the frequency of the spike train rather than individual
signals. By comparison, speeds in modern computer chips are currently at around 2GHz – a ten millionfold difference – and still
increasing exponentially. At the very least it should be physically possible to achieve a million-to-one speedup in thinking, at which
rate a subjective year would pass in 31 physical seconds. At this rate the entire subjective timespan from Socrates in ancient Greece to
modern-day humanity would pass in under twenty-two hours.
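These comparisons are straightforward arithmetic and can be checked directly from the figures quoted above; a short Python sketch:

    # Checking the speed figures quoted in the text.
    signal_speed = 150            # m/s, fastest human neural signals
    light_speed = 300_000_000     # m/s
    print(light_speed // signal_speed)   # 2,000,000 – "two million times greater"

    spike_rate = 200              # Hz, a typical neuron's maximum firing rate
    chip_speed = 2_000_000_000    # Hz, a ~2GHz chip
    print(chip_speed // spike_rate)      # 10,000,000 – "ten millionfold"

    speedup = 1_000_000           # the posited million-to-one speedup
    year = 365.25 * 24 * 3600     # physical seconds in one year
    print(year / speedup)                # ~31.6 s per subjective year
    print(2400 * year / speedup / 3600)  # Socrates to today: ~21 hours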
Humans also face an upper limit on the size of their brains. The current estimate is that the typical human brain contains something
like a hundred billion neurons and a hundred trillion synapses. That’s an enormous amount of sheer brute computational force by com-
parison with today’s computers – although if we had to write programs that ran on 200Hz CPUs we’d also need massive parallelism to
do anything in realtime. However, in the computing industry, benchmarks increase exponentially, typically with a doubling time of one
to two years. The original Moore’s Law says that the number of transistors in a given area of silicon doubles every eighteen months;
today there is Moore’s Law for chip speeds, Moore’s Law for computer memory, Moore’s Law for disk storage per dollar, Moore’s
Law for Internet connectivity, and a dozen other variants.
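Whatever the exact doubling time, the compounding is dramatic; a minimal sketch of the arithmetic, using the eighteen-month figure quoted above:

    # Growth factor after a number of years at a given doubling time.
    def growth(years, doubling_time=1.5):
        return 2 ** (years / doubling_time)

    print(growth(10))  # ~101x in a decade
    print(growth(30))  # ~1,048,576x – a millionfold in thirty years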
By contrast, the entire five-million-year evolution of modern humans from primates involved a threefold increase in brain capac-
ity and a sixfold increase in prefrontal cortex. We currently cannot increase our brainpower beyond this; in fact, we gradually lose
neurons as we age. (You may have heard that humans only use 10% of their brains. Unfortunately, this is a complete urban legend; not
just unsupported, but flatly contradicted by neuroscience.)
An Artificial Intelligence would be different. Some discussions of the Singularity suppose that the critical moment in history is not
when human-equivalent AI first comes into existence but a few years later when the continued grinding of Moore’s Law produces AI
minds twice or four times as fast as human. This ignores the possibility that the first invention of Artificial Intelligence will be fol-
lowed by the purchase, rental, or less formal absorption of a substantial proportion of all the computing power on the then-current
Internet – perhaps hundreds or thousands of times as much computing power as went into the original Artificial Intelligence.
But the real heart of the Singularity is the idea of better intelligence or smarter minds. Humans are not just bigger chimps; we are
better chimps. This is the hardest part of the Singularity to discuss – it’s easy to look at a neuron and a transistor and say that one is
slow and one is fast, but the mind is harder to understand. Sometimes discussion of the Singularity tends to focus on faster brains
or bigger brains because brains are relatively easy to argue about compared to minds; easier to visualize and easier to describe. This
doesn’t mean the subject is impossible to discuss; section III of the Singularity Institute’s “Levels of Organization in General
Intelligence” does take a stab at discussing some specific design improvements on human intelligence, but that involves a specific
theory of intelligence, which we don’t have room to go into here.
“To any thoughtful person, the singularity idea, even if it seems wild, raises a gigantic, swirling cloud of profound and vital
questions about humanity and the powerful technologies it is producing. Given this mysterious and rapidly approaching
cloud, there can be no doubt that the time has come for the scientific and technological community to seriously try to figure
out what is on humanity’s collective horizon. Not to do so would be hugely irresponsible.”
- Douglas Hofstadter
However, that smarter minds are harder to discuss than faster brains or bigger brains does not show that smarter minds are harder to
build – deeper to ponder, certainly, but not necessarily more intractable as a problem. It may even be that genuine increases in smart-
ness could be achieved just by adding more computing power to the existing human brain – although this is not currently known. What
is known is that going from primates to humans did not require exponential increases in brain size or thousandfold improvements in
processing speeds. Relative to chimps, humans have threefold larger brains, sixfold larger prefrontal areas, and 95% similar DNA; given
that the human genome has 3 billion base pairs, this implies that at most twelve million bytes of extra “software” transforms chimps into
humans. And there is no suggestion in our evolutionary history that evolution found it more and more difficult to construct smarter and
smarter brains; if anything, hominid evolution has appeared to speed up over time, with shorter intervals between larger developments.
But leave aside for the moment the question of how to build smarter minds, and ask what “smarter-than-human” really means. And as
the basic definition of the Singularity points out, this is exactly the point at which our ability to extrapolate breaks down. We don’t know
because we’re not that smart. We’re trying to guess what it is to be a better-than-human guesser. Could a gathering of apes have predict-
ed the rise of human intelligence, or understood it if it were explained? For that matter, could the 15th century have predicted the 20th
century, let alone the 21st? Nothing has changed in the human brain since the 15th century; if the people of the 15th century could not
predict five centuries ahead across constant minds, what makes us think we can outguess genuinely smarter-than-human intelligence?
Because we have a past history of people making failed predictions one century ahead, we’ve learned, culturally, to distrust such
predictions – we know that ordinary human progress, given a century in which to work, creates a gap which human predictions cannot
cross. We haven’t learned this lesson with respect to genuine improvements in intelligence because the last genuine improvement to
intelligence was a hundred thousand years ago. But the rise of modern humanity created a gap enormously larger than the gap between
the 15th and 20th century. That improvement in intelligence created the entire milieu of human progress, including all the progress
between the 15th and 20th century. It is a gap so large that on the other side we find, not failed predictions, but no predictions at all.
Smarter-than-human intelligence, faster-than-human intelligence, and self-improving intelligence are all interrelated. If you’re smarter,
that makes it easier to figure out how to build fast brains or improve your own mind. In turn, being able to reshape your own mind
isn’t just a way of starting up a slope of recursive self-improvement; having full access to your own source code is, in itself, a kind of
smartness that humans don’t have. Self-improvement is far harder than optimizing code; nonetheless, a mind with the ability to rewrite
its own source code can potentially make itself faster as well. And faster brains also relate to smarter minds; speeding up a whole mind
doesn’t make it smarter, but adding more processing power to the cognitive processes underlying intelligence is a different matter.
But despite the interrelation, the key moment is the rise of smarter-than-human intelligence, rather than recursively self-improving
or faster-than-human intelligence, because it’s this that makes the future genuinely unlike the past. That doesn’t take minds a million
times faster than human, or improvement after improvement piled up along a steep curve of recursive self-enhancement. One mind
significantly beyond the humanly possible level would represent a Singularity. That we are not likely to be dealing with “only one”
improvement does not make the impact of one improvement any less.
Combine faster intelligence, smarter intelligence, and recursively self-improving intelligence, and the result is an event so huge that
there are no metaphors left. There’s nothing remaining to compare it to.
The Singularity is beyond huge, but it can begin with something small. If one smarter-than-human intelligence exists, that mind will
find it easier to create still smarter minds. In this respect the dynamic of the Singularity resembles other cases where small causes can
have large effects; toppling the first domino in a chain, starting an avalanche with a pebble, perturbing an upright object balanced on
its tip. (Human technological civilization occupies a metastable state in which the Singularity is an attractor; once the system starts
to flip over to the new state, the flip accelerates.) All it takes is one technology – Artificial Intelligence, brain-computer interfaces, or
perhaps something unforeseen – that advances to the point of creating smarter-than-human minds. That one technological advance is
the equivalent of the first self-replicating chemical that gave rise to life on Earth.
If you travelled backward in time to witness a critical moment in the invention of science, or the creation of writing, or the evolution
of Homo sapiens, or the beginning of life on Earth, no human judgement could possibly encompass all the future consequences of that
event – and yet there would be the feeling of being present at the dawn of something worthwhile. The most critical moments of history
are not the closed stories, like the start and finish of wars, or the rise and fall of governments. The story of intelligent life on Earth is
made up of beginnings.
“Only a small community has concentrated on general intelligence. No one has tried to make a thinking machine and then
teach it chess — or the very sophisticated oriental board game Go. [...] The bottom line is that we really haven’t progressed
too far toward a truly intelligent machine. We have collections of dumb specialists in small domains; the true majesty of
general intelligence still awaits our attack. [...] We have got to get back to the deepest questions of AI and general intel-
ligence and quit wasting time on little projects that don’t contribute to the main goal.”
- Marvin Minsky
Imagine traveling back in time to witness a critical moment in the dawn of human intelligence. Suppose that you find an alien
bystander already on the scene, who asks: “Why are you so excited? What does it matter?” The question seems almost impossible
to answer; it demands a thousand answers, or none. Someone who valued truth and knowledge might answer that this was a critical
moment in the human quest to learn about the universe – in fact, the beginning of that quest. Someone who valued happiness might
answer that the rise of human intelligence was a necessary precursor to vaccines, air conditioning, and the many other sources of hap-
piness and solutions to unhappiness that have been produced by human intelligence over the ages. There are people who would answer
that intelligence is meaningful in itself; that “It is better to be Socrates unsatisfied than a fool satisfied; better to be a man unsatisfied
than a pig satisfied.” A musician who chose that career believing that music is an end in itself might answer that the rise of human
intelligence mattered because it was necessary to the birth of Bach; a mathematician could single out Euclid; a physicist might cite
Newton or Einstein. Someone with an appreciation of humanity, beyond the individual humans, might answer that this was a critical
moment in the relation of life to the universe – the beginning of humanity’s growth, of our acquisition of strength and understanding,
eventually spreading beyond Earth to the rest of the galaxy and the universe.
The beginnings of human intelligence, or the invention of writing, probably went unappreciated by the individuals who were present
at the time. But such developments do not always take their creators unaware. Francis Bacon, one of the critical figures in the inven-
tion of the scientific method, made astounding claims about the power and universality of his new mode of reasoning and its ability
to improve the human condition – claims which, from the perspective of a 21st-century human, turned out to be exactly right. Not all
good deeds are unintentional. It does occasionally happen that humanity’s victories are won not by accident but by people making the
right choices for the right reasons.
Why is the Singularity worth doing? The Singularity Institute for Artificial Intelligence can’t possibly speak for everyone who cares
about the Singularity. We can’t even presume to speak for the volunteers and donors of the Singularity Institute. But it seems like a
good guess that many supporters of the Singularity have in common a sense of being present at a critical moment in history; of hav-
ing the chance to win a victory for humanity by making the right choices for the right reasons. Like a spectator at the dawn of human
intelligence, trying to answer directly why superintelligence matters chokes on a dozen different simultaneous replies; what matters is
the entire future growing out of that beginning.
But it is still possible to be more specific about what kinds of problems we might expect to be solved. Some of the specific answers
seem almost disrespectful to the potential bound up in superintelligence; human intelligence is more than an effective way for apes to
obtain bananas. Nonetheless, modern-day agriculture is very effective at producing bananas, and if you had advanced nanotechnol-
ogy at your disposal, energy and matter might be plentiful enough that you could produce a million tons of bananas on a whim. In a
sense that’s what nanotechnology is – good-old-fashioned material technology pushed to the limit. This only raises the question of “So
what?”, but the Singularity advances on this question as well; if people can become smarter, this moves humanity forward in ways
that transcend the faster and easier production of more and more bananas. For one thing, we may become smart enough to answer the
question “So what?”
In one sense, asking what specific problems will be solved is like asking Benjamin Franklin in the 1700s to predict electronic circuit-
ry, computers, Artificial Intelligence, and the Singularity on the basis of his experimentation with electricity. Setting an upper bound
on the impact of superintelligence is impossible; any given upper bound could turn out to have a simple workaround that we are too
young as a civilization, or insufficiently intelligent as a species, to see in advance. We can try to describe lower bounds; if we can see
how to solve a problem using more or faster technological intelligence of the kind humans use, then at least that problem is probably
solvable for genuinely smarter-than-human intelligence. The problem may not be solved using the particular method we were thinking
of, or the problem may be solved as a special case of a more general challenge; but we can still point to the problem and say: “This is
part of what’s at stake in the Singularity.”
If humans ever discover a cure for cancer, that discovery will ultimately be traceable to the rise of human intelligence, so it is not
absurd to ask whether a superintelligence could deliver a cancer cure in short order. If anything, creating superintelligence only for the
sake of curing cancer would be swatting a fly with a sledgehammer. In that sense it is probably unreasonable to visualize a signifi-
cantly smarter-than-human intelligence as wearing a white lab coat and working at an ordinary medical institute doing the same kind
of research we do, only better, in order to solve cancer specifically as a problem. For example, cancer can be seen as a special case of
the more general problem “The cells in the human body are not externally programmable.” This general problem is very hard from
our viewpoint – it requires full-scale nanotechnology to solve the general case – but if the general problem can be solved it simultane-
ously solves cancer, spinal paralysis, regeneration of damaged organs, obesity, many aspects of aging, and so on. Or perhaps the real
problem is that the human body is made out of cells or that the human mind is implemented atop a specific chunk of vulnerable brain
– although calling these problems raises philosophical issues not discussed here.
Singling out “cancer” as the problem is part of our culture’s particular outlook and technological level. But if cancer or any gen-
eralization of “cancer” is solved soon after the rise of smarter-than-human intelligence, then it makes sense to regard the quest for
the Singularity as a continuation by other means of the quest to cure cancer. The same could be said of ending world hunger, curing
Alzheimer’s disease, or placing on a voluntary basis many things which at least some people would regard as undesirable: illness, de-
structive aging, human stupidity, short lifespans. Maybe death itself will turn out to be curable, though that would depend on whether
the laws of physics permit true immortality. At the very least, the citizens of a post-Singularity civilization should have an enormously
higher standard of living and enormously longer lifespans than we see today.
What kind of problems can we reasonably expect to be solved as a side effect of the rise of superintelligence; how long will it take to
solve the problems after the Singularity; and how much will it cost the beneficiaries? A conservative version of the Singularity would
start with the rise of smarter-than-human intelligence in the form of humans whose minds or brains have been enhanced
by purely biological means. This scenario is more “conservative” than a Singularity which takes place as a result of brain-computer
interfaces or Artificial Intelligence, because all thinking is still taking place on neurons with a characteristic limiting speed of 200 op-
erations per second; progress would still take place at a humanly comprehensible speed. In this case, the first benefits of the Singular-
ity probably would resemble the benefits of ordinary human technological thinking, only more so. Any given scientific problem could
benefit from having a few Einsteins or Edisons dumped into it, but it would still require time for research, manufacturing, commercial-
ization and distribution.
Human genius is not the only factor in human science, but it can and does speed things up where it is present. Even if intelligence
enhancement were treated solely as a means to an end, for solving some very difficult scientific or technological problem, it would
still be worthwhile for that reason alone. The solution might not be rapid, even after the problem of intelligence enhancement had been
solved, but that assumes the conservative scenario, and the conservative scenario wouldn’t last long. Some of the areas most likely to
receive early attention would be technologies involved in more advanced forms of superintelligence: broadband brain-computer in-
terfaces or full-fledged Artificial Intelligence. The positive feedback dynamic of the Singularity – smarter minds creating still smarter
minds – doesn’t need to wait for an AI that can rewrite its own source code; it would also apply to enhanced humans creating the next
generation of Singularity technologies.
The Singularity creates speed for two reasons: First, positive feedback – intelligence gaining the ability to improve intelligence
directly. Second, the shift of thinking from human neurons to more readily expandable and enormously faster substrates. A brain-
computer interface would probably offer a limited but real version of both capabilities; the external brainpower would be both fast and
programmable, although still yoked to an ordinary human brain. A true Artificial Intelligence, or a human scanned completely into a
sufficiently advanced computer, would have total self-access.
At this point one begins to deal with superintelligence as the successor to current scientific research, the global economy, and in fact
the entire human condition, rather than as a superintelligence plugging into the current system as an improved component. Often people
instinctively and automatically adopt an “Us Vs. Them” view of this situation – the instinct that people who are different are therefore
on a different side – but if humans and superintelligences are playing on the same team, it would be straightforward for the most ad-
vanced mind at any given time to offer a helping hand to anyone lagging behind. There is no technological reason why humans alive
at the time of the Singularity could not participate in it directly. In our view this is the chief potential benefit of the Singularity to exist-
ing humans; not technologies handed down from above but a chance to become smarter and participate directly in creating the future.
In history up until now, it has taken less and less time for major changes to occur. Life first arose around three and a half billion years
ago; it was only eight hundred and fifty million years ago that multi-celled life arose; only sixty-five million years since the dinosaurs
died out; only five million years since the hominid family split off within the primate order; and less than a hundred thousand years
since the rise of Homo sapiens sapiens in its modern form. Agriculture was invented ten thousand years ago; Socrates lived two and a
half thousand years ago; the printing press was invented five hundred years ago; the computer was invented around sixty years ago.
You can’t set a speed limit on the future by looking at the pace of past changes, even if it sounds reasonable at the time; history shows
that this method produces very poor predictions. From an evolutionary perspective it is absurd to expect major changes to happen in a
handful of centuries, but today’s changes occur on a cultural timescale, which bypasses evolution’s speed limits. We should be wary of
confident predictions that transhumanity will still be limited by the need to seek venture capital from humans or that Artificial Intelli-
gences will be slowed to the rate of their human assistants (both of which I have heard firmly asserted on more than one occasion).
We can’t see in advance the technological pathway the Singularity will follow, since if we were that smart ourselves we’d already
have done it. But it’s possible to toss out broad scenarios, such as “A smarter-than-human AI absorbs all unused computing power on
the then-existent Internet in a matter of hours; uses this computing power and smarter-than-human design ability to crack the protein
folding problem for artificial proteins in a few more hours; emails separate rush orders to a dozen online peptide synthesis labs, and in
two days receives via FedEx a set of proteins which, mixed together, self-assemble into an acoustically controlled nanodevice which
can build more advanced nanotechnology.” This is not a smarter-than-human solution; it is a human imagining how to throw a
magnified, sped-up version of human design abilities at the problem. There are admittedly initial difficulties facing a superfast mind
in a world of slow human technology. Even humans, though, could probably solve those difficulties, given hundreds of years to think
about it. And we have no way of knowing that a smarter mind can’t find even better ways.
“There’s this stupid myth out there that AI has failed, but AI is everywhere around you every second of the day. People just
don’t notice it. You’ve got AI systems in cars, tuning the parameters of the fuel injection systems. When you land in an
airplane, your gate gets chosen by an AI scheduling system. Every time you use a piece of Microsoft software, you’ve got an
AI system trying to figure out what you’re doing, like writing a letter, and it does a pretty damned good job. Every time you
see a movie with computer-generated characters, they’re all little AI characters behaving as a group. Every time you play a
video game, you’re playing against an AI system.”
- Rodney Brooks
If the Singularity involves not just a few smarter-than-usual researchers plugging into standard human organizations, but the transition of
intelligent life on Earth to a smarter and rapidly improving civilization with an enormously higher standard of living, then it makes sense
to regard the quest to create smarter minds as a means of directly solving such contemporary problems as cancer, AIDS, world hunger,
poverty, et cetera. And not just the huge visible problems; the huge silent problems are also important. If modern-day society tends to drain
the life force from its inhabitants, that’s a problem. Aging and slowly losing neurons and vitality is a problem. In some ways the basic
nature of our current world just doesn’t seem very pleasant, due to cumulative minor annoyances almost as much as major disasters. This
may usually be considered a philosophical problem, but becoming smarter is something that can actually address philosophical problems.
The transformation of civilization into a genuinely nice place to live could occur, not in some unthinkably distant million-year fu-
ture, but within our own lifetimes. The next leap forward for civilization will happen not because of the slow accumulation of ordinary
human technological ingenuity over centuries, but because at some point in the next few decades we will gain the technology to build
smarter minds that build still smarter minds. We can create that future and we can be part of it.
If there’s a Singularity effort that has a strong vision of this future and supports projects that explicitly focus on transhuman technolo-
gies such as brain-computer interfaces and self-improving Artificial Intelligence, then humanity may succeed in making the transition
to this future a few years earlier, saving millions of people who would otherwise have died. The planetary death rate is around
fifty-five million people per year (UN statistics): 150,000 lives per day, 6,000 lives per hour. These deaths are not just premature but
perhaps actually unnecessary. At the very least, the amount of lost lifespan is far more than modern statistics would suggest.
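The per-day and per-hour figures follow directly from the annual one:

    # Breaking down the annual death rate quoted above.
    deaths_per_year = 55_000_000
    print(deaths_per_year / 365)       # ~150,700 lives per day
    print(deaths_per_year / 365 / 24)  # ~6,280 lives per hour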
There are also dangers for the human species if we can’t make the breakthrough to superintelligence reasonably soon. Albert Ein-
stein once said: “The problems that exist in the world today cannot be solved by the level of thinking that created them.” We agree
with the sentiment, although Einstein may not have had this particular solution in mind. In pointing out that dangers exist it is not
our intent to predict a dystopian future; so far, the doomsayers have repeatedly been proven wrong. Humanity has faced the future
squarely, rather than running in the other direction as the doomsayers wished, and has thereby succeeded in avoiding the oft-predicted
disasters and continuing to higher standards of living. We avoided disaster by inventing technologies which enable us to cope with
complex futures. Better, more sustainable farming technologies have enabled us to support the increased populations produced by
modern medicine. The printing press, telegraph, telephone, and now the Internet enable humanity to apply its combined wisdom to
problem-solving. If we’d been forced to move into the future without these technologies, disaster probably would have resulted. The
technology humanity needs to cope with the coming decades may be the technology of smarter-than-human intelligence. If we have to
face challenges like basement laboratories creating lethal viruses or nanotechnological arms races with just our human intelligence, we
may be in trouble.
Finally, there is the integrity of the Singularity itself to safeguard. This is not necessarily the most difficult part of the challenge,
compared to the problem of creating smarter-than-human intelligence in the first place, but it needs to be considered.
It is possible that the integrity of a human-originating Singularity needs no safeguarding; that any human from Gandhi to Stalin, if
enhanced sufficiently far beyond human intelligence, would end up being wiser and more moral than anyone alive today. It’s also pos-
sible that a mistake in enhancement – hacking the brain incorrectly – could do terrible damage, catastrophic damage, or even irrecov-
erable damage.
An analogous problem exists for Artificial Intelligence, where the task is not enforcing servitude on the AI or coming up with a per-
fect moral code to “hardwire”, but rather transferring over the features of human cognition that let us conceive of a morality improving
over time (see the Singularity Institute’s section on Friendly Artificial Intelligence for more information, online at www.singinst.org).
Safeguarding the integrity of the Singularity is another reason for facing the challenge of the Singularity squarely and deliberately.
Safe human intelligence enhancement is an art that does not presently exist. Likewise the art of ethical Artificial Intelligence. In both
cases, we can best safeguard the integrity of the Singularity by confronting the Singularity intentionally and with full awareness of the
responsibilities involved.
Despite the enormous scale of the Singularity, sparking the Singularity – creating the first smarter-than-human intelligence – is a problem
of science and technology. The Singularity is something that we can actually go out and do – and do correctly, or alternatively screw
up. It is not a philosophical way of describing something that inevitably happens to humanity. The sweep of human progress and our
technological economy create the potential for the Singularity (just as it takes the entire framework of science to create the potential
for a cancer cure), but it also takes a deliberate effort to run the last mile and fulfill that potential. If someone asks you if you’re inter-
ested in donating to AIDS research, you might reply that you believe that cancer research is relatively underfunded and that you are
donating there instead; you would probably not say that by working as a stockbroker you support the world economy in general and
thereby contribute as much to humanity’s progress toward an AIDS cure as anyone. In that sense, sparking the Singularity is no differ-
ent from any other grand challenge – someone has to do it.
The Singularity Institute’s mid-term mission is solving the problem of reflectivity – creating an AI that thinks about how to think.
Just as Artificial Intelligence is probably the premier unsolved problem of modern science, reflectivity is the premier unsolved problem
in Artificial Intelligence. Humans seem to get a tremendous amount of mileage out of thinking about how to think, but modern AI sys-
tems can’t seem to handle this at all. Even the theoretical foundations, the basic math of probability and decision theory, break down
when they try to describe an AI modifying the part of itself that modifies itself. A formal theory of reflectivity, we believe, is how you
would go about building a rigorously safe, self-modifying Friendly Artificial Intelligence.
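One way to see the difficulty concretely – our gloss, not the Institute’s own formalism – is that classical decision theory prescribes, in LaTeX notation,

    a^{*} = \arg\max_{a \in A} \sum_{o \in O} P(o \mid a)\, U(o)

where the probability model P and the utility function U are held fixed. A self-modifying AI has actions that rewrite the code implementing P, U, and the maximization itself, and the fixed-model formalism gives no account of evaluating a change to the very rule by which changes are evaluated.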
This is the foundational work that can be done now, with limited funding; human intelligence enhancement would require far more fund-
ing, being on the scale of medical research rather than computer science research (and would also run into severe problems getting approv-
al from ethics boards or the FDA). This is foundational work that must be done now; new basic math takes years or decades to develop.
At this moment in time, there is a tiny handful of people who realize what’s going on and are trying to do something about it. It is not
quite true that if you don’t do it, no one will, but the pool of other people who will do it if you don’t is smaller than you might think. If
you’re fortunate enough to be one of the few people who currently know what the Singularity is and would like to see it happen – even
if you learned about the Singularity just now – we need your help because there aren’t many people like you. This is the one place
where your efforts can make the greatest possible difference – not just because of the tremendous stakes, though that would be far
more than enough in itself, but because so few people are currently involved.
The Singularity Institute is a 501(c)(3) tax-exempt nonprofit which exists to unite the efforts of the Singularity aware: to accelerate
the arrival of the Singularity in order to hasten its human benefits; to close the window of vulnerability that exists while humanity can-
not increase its intelligence along with its technology; to protect the integrity of the Singularity by ensuring that those projects which
finally implement the Singularity are carried out in full awareness of the implications and without distraction from the responsibilities
involved; and to code the damn AI, because someone has to do it. That’s our dream. Whether it actually happens depends on whether
enough people take the Singularity seriously enough to do something about it – whether humanity can scrape up the tiny fraction of its
resources needed to face the future deliberately and firmly.
We can do better. The future doesn’t have to be the dystopia promised by doomsayers. The future doesn’t even have to be the flashy
yet unimaginative chrome-and-computer world of traditional futurism. We can become smarter. We can step beyond the millennia-old
messes created by human-level intelligence. Humanity can solve its problems – both the huge visible problems everyone talks about
and the huge silent problems we’ve learned to take for granted. If the nature of the world we live in bothers you, there is something
rational you can do about it.
Don’t be a bystander at the Singularity. You can direct your effort at the point of greatest impact – the beginning.
For more information about the Singularity, see the website of the Singularity Institute for Artificial Intelligence at: singinst.org
“One consideration that should be taken into account when deciding whether to promote the development of superintel-
ligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day
have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce
or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or
terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may
avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to
face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall
risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.”
- Nick Bostrom
Tyler Emerson here, curator of the Singularity Summit and executive director of the Singularity Institute for Artificial Intelligence
(SIAI). I want to tell you about SIAI: what we do, who we are, why we exist, what we want to do, and why you should care.
Please set aside some time to read this, and let me know what you think, regardless of your views: emerson@singinst.org.
What do we do?
In the coming decades, humanity will likely create a powerful AI. We exist to confront this urgent challenge, both the opportunity and the risk.
That may sound strange to some of you. AI researchers over the past 50 years have often over-promised and under-delivered. You
should be skeptical, certainly, but hopefully not closed to new evidence. In our judgment, there are signs that AI systems of a new gen-
eration of power may come about in the next few decades – systems that understand their own behavior and work to improve them-
selves. This would create new technologies with tremendous value in education, medicine, science, engineering, and many other areas
of human endeavor. However, there may also be substantial dangers – risks that would not be plainly apparent or simple to prevent.
We have three principal aims:
1) Create the science to analyze the likely consequences of this nascent technology.
2) Create a clear vision of a desirable future.
3) Develop a roadmap for guiding the development of this technology to bring about that desirable future.
And three major goals:
1) Foster scientific research on safe AI through research and development, fellowships, grants, and science education.
2) Further the understanding of its implications for society through educational outreach, such as our Singularity Summit.
3) Advance education among students to develop an interdisciplinary community of talented young scientists studying safe AI.
Our present work:
1) In-house research (summarized later)
2) Research fellowships (e.g. Eliezer Yudkowsky)
3) Singularity Summit and monthly salon dinners
4) Educational outreach and awareness-building
5) OpenCog Initiative (early planning stage)
6) AI Impact Initiative (early planning stage)
7) Research grants (once seed funding secured)
the only paid non-researcher; Director of Outreach Bruce Klein volunteers. Ray Kurzweil, author of The Singularity Is Near, is an SIAI
Director. Our advisors include Peter Thiel, president of Clarium and director at Facebook; Dr. Barney Pell, CEO of Powerset; Dr. Nick
Bostrom, director of the Oxford Future of Humanity Institute; Dr. Stephen Omohundro, president of Self-Aware Systems; and others.
We have a few hundred small donors and a few dozen volunteers spread around the world at this time.
Research Program
Our mission is to create a framework for the development of safe AI. One of our paths toward this is research and development.
We have three aims with our research program:
1) Understand the problems underlying the creation of safe AI with powerful general intelligence.
2) Pursue in-house theoretical and experimental research to work on the foundational problems of safe AI.
3) Provide the AI community at large with conceptual, mathematical, and software tools to move their work toward
positive outcomes.
Our aims are explicitly different from those of contemporary academic and industry AI research communities, in two key ways:
1) Our focus on general intelligence (e.g. reflective thought), rather than narrow AI software, such as chess-playing or
fraud detection.
2) Our focus on beneficial use and safety, which must be foundational, not tacked on at the end of the theory and
development process.
See the first appendix to learn more about our research interests and our full program proposal, as well as the following from Eliezer.
we can consider whether you may be capable of original discoveries. You need to be a math talent who can think in code. You should
have extraordinarily high fluid intelligence, since you will need to learn, unlearn, and do it fast. You should also be emotionally stable.
All else is negotiable!
For further details, please visit: http://www.singinst.org/aboutus/opportunities
Remarkable Slideshow
The ideas of the Singularity and Friendly AI are powerful, but more importantly, relevant today. I am a benefactor and
advisor to the Singularity Institute for Artificial Intelligence because they are making unique contributions to these critical
areas of knowledge.
– Peter Thiel, Founder and President, Clarium Capital
Our ideas have yet to be presented in a way that resonates with a diverse group of people. We need to tell an accurate, compelling
story that connects intellectually and emotionally with hundreds of thousands of people around the world. Have you seen Al Gore’s
film documentary, An Inconvenient Truth? Whether or not you were persuaded by his case for climate change, it is reasonable to say that his
arguments were strengthened by his extraordinary visuals – whether the beautiful graphics, humorous videos, or remarkable photos.
Funding allowing, we would work with Duarte Design (in the Bay Area) on this project, or a similar group specializing in visual story-
telling. We are especially interested in Duarte since they created Gore’s stunning slideshow visuals and helped him craft his message.
Once created, the Singularity Institute would begin giving this slideshow around the country, focusing especially on top universities,
since many more students need to be inspired to study these subjects, and ultimately devote their careers to them. This would help build
a community of smart young scientists who have the multidisciplinary knowledge needed to work on the relevant problems; and help
build our organizational infrastructure to support them. The compelling presentation and an accompanying website would also create a
stronger case for the feasibility, desirability, and immediate relevance of our mission and goals, which is essential to increasing support.
AI Impact Initiative
The AI Impact Initiative would foster a framework for the safe development of advanced AI. This technology has the potential to im-
pact every aspect of human life. We are in a critical window of opportunity where we have powerful but temporary leverage to influence
the outcome. Only a small group of scientists is aware of the core issues, and it is essential to get a broader range of thinkers involved.
Funding allowing, we would organize an initial meeting to analyze the central issues, with a multidisciplinary group that brought a
broad perspective, spanning computer science, security, economics, evolutionary biology, cognitive science, political theory, deci-
sion theory, physics, philosophy, and ethics. Our near-term goal would be to create documents that clearly expressed the central issues,
and disseminate them to students, scholars, and scientists. Our long-term goal would be to lay the foundation for a new multidisci-
plinary science to study these issues. This would involve creating expository materials, building an international network of scientists
and scholars, organizing workshops, and creating a comprehensive report to provide direction for future research and development.
definitions of optimization, criteria of intelligence within a Bayesian framework, and hidden downsides of randomized algorithms in
Artificial Intelligence.
Theoretical Research
Research Area 1: Mathematical Theory of General Intelligence
Our research in this area would focus on using algorithmic information theory and probability theory to formalize the notion of
general intelligence. Important work in this area has been done by Marcus Hutter (a pioneer in universal artificial intelligence theory),
Jürgen Schmidhuber (codirector of the Dalle Molle Institute for Artificial Intelligence in Switzerland), Shane Legg (Ph.D. student
at the Dalle Molle Institute for Artificial Intelligence), and others, as well as by our team; but this work has not yet been connected
with pragmatic AGI designs. Meeting this challenge would be one of our major goals going forward. Specific focus areas within this
domain include:
1) Mathematical Formalization of the “Friendly AI” Concept. Proving theorems about the ethics of AI systems relies on possessing
an appropriate formalization of the notion of ethical behavior on the part of an AI. This formalization is a difficult research
question unto itself.
2) Implications of Algorithmic Information Theory for the Predictability of Arbitrarily Intelligent AIs. In 2006, Shane Legg made an
interesting, but ultimately failed attempt to prove algorithmic information theoretic limitations on the possibility of guaranteeing
ethical behavior on the part of future AIs. This line of research, however, has significant potential for future exploration.
3) Formalizing the Concept of General Intelligence. Shane Legg and Marcus Hutter published a paper in 2006 on a formal
definition of general intelligence. Their work is excellent but can be extended; e.g., to connect their ideas with practical
intelligence tests for AGIs. (Their formal measure is reproduced after this list.)
4) Reflective Decision Theory: Extending Statistical Decision Theory to Strongly Self-Modifying Systems. Statistical decision
theory, as it stands, tells us little about software systems that regularly make decisions to modify their own source code. This
deficit must be remedied if we wish to formally understand self-modifying AIs, their potential dangers, and how to ensure their
long-term safety and positive use.
5) Dynamics of Goal Structures Under Self-Modification. Under what conditions would an AGI system’s internal goal structure
remain invariant as the system self-modifies? Even if one of the system’s top-level goals were goal-system invariance, that alone
would not guarantee invariance. Further conditions are needed, but the nature of these conditions has not been seriously
investigated. This is a deep mathematical issue in the dynamics of computational intelligence, with critical implications for the
development of safe AGIs.
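For reference, the Legg-Hutter measure mentioned in item 3 can be written, in LaTeX notation, as

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

where E is a class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_{\mu}^{\pi} is agent \pi’s expected cumulative reward in \mu: an agent is rated by how well it performs across all computable environments, with simpler environments weighted more heavily.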
specialized. One obstacle to progress in AGI is the lack of appropriate tools. Each team must develop its own, which is time-consum-
ing and distracts attention from the actual creation of AGI designs and systems. One of the key roles the Singularity Institute could
play going forward would be in the creation of robust tools for AGI development, to be utilized in-house and by the AGI research
community at large.
1) AGISim, a 3D Simulation World for Interacting with AGI Systems. AGISim is an open-source project in alpha release. It is
usable, but still needs further coding work. A related task, of significant use to robotics researchers, would be the precise
simulation of existing physical robots within AGISim. AGISim also could play a key role in some of the AGI IQ/ethics evaluation
tasks to be described later.
2) Lojban: A Language for Communicating with Early-Stage AGIs. Lojban is a constructed language with hundreds of speakers,
based on predicate logic. Thus, it is particularly suitable for communication between humans and AGIs. A Lojban parser exists,
but needs to be modified to make it output logic expressions, which would then allow Lojban to be used to converse with logic-
based AGI systems. This would allow communication with a variety of AI systems in a human-usable yet relatively unambiguous
way, which would be valuable for instructing AGI systems, including ethical behavior instruction. (A toy illustration of this mapping follows the list.)
3) Translating Mizar to KIF. Mizar is a repository of mathematical knowledge, available online but in a complex format that is
difficult to feed into AI theorem-proving systems. In six months, a qualified individual could translate Mizar to KIF, a standard
predicate logic format, which would enable its use within theorem-proving AI systems, a crucial step toward AGI systems that can
understand themselves and the algorithms utilized within their source code.
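As a toy illustration of the mapping item 2 envisions – ours, since the real parser’s output format remains to be designed – note that a Lojban bridi names a relation (the selbri) and fills its numbered argument places (the sumti), which corresponds directly to a predicate-logic term:

    # Hypothetical sketch only: render a Lojban bridi as a logic expression.
    # The selbri becomes the predicate; the sumti fill its argument places.
    def bridi_to_logic(selbri, *sumti):
        return f"{selbri}({', '.join(sumti)})"

    print(bridi_to_logic("prami", "mi", "do"))        # prami(mi, do) – "I love you"
    print(bridi_to_logic("klama", "mi", "le zarci"))  # klama(mi, le zarci) – "I go to the store"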
Research Area 5: Design and Creation of Safe Software Infrastructure
Some key areas of tool development are not adequately addressed by any current open-source project, for example, the creation of
programming languages and operating systems possessing safety as built-in properties. Of course, Singularity Institute researchers
would not be able to complete such large, complex projects on their own, but the institute could potentially play a leadership role by
articulating detailed designs, solving key conceptual problems, and recruiting external partners to assist with engineering and testing.
1) Programming Languages Combining Efficiency with Provability of Program Correctness. In the interest of AGI safety, it would
be desirable if AGI software programs could be proved to correctly implement the software designs they represented. However,
there is no language that supports proof-based program correctness checking and is sufficiently efficient in terms of execution to
be pragmatically useful for AGI purposes. Such a programming language framework would require major advances in
programming language theory.
2) Safe Computer Operating Systems. Is it feasible to design a provably correct OS? In principle, yes, but it would require a
programming language that combined efficiency with provable correctness, as well as several interconnected breakthroughs in OS
theory. Creating a version of Unix in a programming language that supported provable correctness would be a start, but there
would be many issues to address. This research would require close collaboration between a mathematician and an experienced
operating systems programmer.
rigorous assessment than any approach now available. We consider it important that work in this area start soon, so that “ethics test-
ing” becomes accepted as a standard part of AGI R&D.
1) Recognizing Situational Entailment Challenge. We want to extend the “Recognizing Textual Entailment” challenge by
defining a “Recognizing Situational Entailment” challenge, in which AI systems are challenged to answer simple English
questions about “simulation world movies” that they are shown. The movies would be generated using the AGISim framework.
An annual workshop to address this challenge could be organized as part of a recognized AI conference.
2) Development of a Suite of Benchmark Learning Tasks within AGISim. Within the context of the AGISim world, we would
develop a set of tasks on which any AGI system could be tested, e.g. playing tag, imitating behaviors, imitating structures built
from blocks, etc. Having a consistent set of benchmark tasks for comparing different AGI approaches is important for
coordination of progress in the field. (A minimal interface sketch follows this list.)
3) Development of a Suite of Benchmark Ethics Tests within AGISim. Just as one could test intelligence through AGISim scenarios,
one could also test ethics, by placing the AGI in situations where it must interact with other agents, assessing the ethical sensitivity
of its behaviors. Testing within such scenarios should become a standard part of assessing the nature of any new AGI architecture.
4) Porting of Human IQ Tests to AGIs. To what extent are human IQ tests overly human-centric? Could we create variants of the IQ
tests administered to humans that are more appropriate for AIs? It may be that different variants must be created for different AIs,
e.g. based on the nature of the AI’s embodiment and sensory organs. Investigating the variation of IQ questions, based on the
nature of the intelligent system being tested, would be one way to probe the core of intelligence.
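A minimal sketch of what a uniform task interface might look like – every name below is hypothetical, not AGISim’s actual API:

    # Hypothetical benchmark-task interface; all names are illustrative.
    from abc import ABC, abstractmethod

    class BenchmarkTask(ABC):
        """One suite task, e.g. playing tag or imitating a block structure."""

        @abstractmethod
        def reset(self):
            """Set up the simulation world; return the first observation."""

        @abstractmethod
        def step(self, action):
            """Apply an action; return (observation, reward, done)."""

    def evaluate(task, agent, trials=100):
        """Average reward per trial, so different AGI systems can be
        compared on identical tasks."""
        total = 0.0
        for _ in range(trials):
            obs, done = task.reset(), False
            while not done:
                obs, reward, done = task.step(agent.act(obs))
                total += reward
        return total / trials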
By SIAI Research Fellow Eliezer Yudkowsky and SIAI Advisor Dr. Nick Bostrom