Hans Moravec - A Caveat for SETI (from Mind Children)

SETI, an acronym for the Search for Extra-Terrestrial Intelligence,
is a field of study whose potential is so intellectually exciting that it
proceeds steadily despite the lack of any hard evidence that its quarry
exists. At its
leading edge are impressive spectrum-analyzing receivers connected to
radio telescopes that can tune in and examine millions of frequency
channels at the same time. Systems able to do this while also looking in
thousands of distinct directions at once have already been proposed, all
in an effort to find a needle in a haystack—an artificial message in a
universe naturally noisy in radio frequencies.
But if we managed to receive and decode such a message, should
we act on its instructions? The discussion of this question usually centers
on the intent of the senders. They may be benign and, like the Peace
Corps, be doing well by doing good. They may be traders trying to open
new markets, to much the same effect, at least until it comes time to
negotiate the price. They may simply be looking for pen pals. They
may have dark designs on the rest of the universe and be seeking to
inexpensively eliminate some of the more gullible competition. Or,
their motives may be totally incomprehensible. Simply examining the
message is not enough; it is not, in general, possible to deduce the effect
of complicated instructions without actually carrying them out. A
message with nasty intent would surely be disguised, by master
deceivers, to look benign. In Fred Hoyle and John Elliot's classic novel
A for Andromeda and also in Carl Sagan's Contact, an interstellar
message contains plans for a mysterious machine of unknown purpose.
In both books the characters decide, after some debate, to go ahead with
construction despite the risks. In Contact, a major argument is that the
origin of the message, the star Vega, is so close to our solar system that
the senders could rapidly arrive here physically, should their intentions
be malign. Building the machine would be unlikely to make us any
worse off in the long run. If the message were benign, however, it
would represent an opportunity not to be missed.
This chapter's notion of an information parasite suggests greater
caution, should SETI ever detect an artificial message. A rogue message
from no one in particular to no one in particular (perhaps a corruption of
some ancient legitimate interstellar telegram) could survive and thrive
like a virus, using technological civilizations as hosts. It might be as
simple as, "Now that you have received and decoded me, broadcast me
in at least ten thousand directions with ten million watts of power. Or
else." It would be a cosmic chain letter and a cosmic joke, except to the
message itself which, like any living creature, would be making a living
by doing what it does. Since we cannot be sure the "or else" is not
backed by real authors with a peculiar sense of right and wrong, we may
decide to play safe and pass the message on as it requests. Perhaps we
did not hear it very well; maybe it said a hundred million watts; maybe it
mutated. Now envisage a universe populated by millions of such
messages, evolving and competing for scarce, gullible civilizations.
The survivability of such a message could be enhanced if it carried
real information. Perhaps it would contain blueprints for a machine that
promises to benefit its hosts. It would be only fair if part of the
machine's action was to rebroadcast copies of the message itself, or to
demand new information from its hosts to be added to the message to
make it more attractive to future recipients. Like bees carrying pollen for
the sake of flowers in return for nectar for themselves, the technological
host civilizations would have a symbiotic relationship with such
messages, which might be criss-crossing the galaxy trading in useful
ideas. But the analogy suggests darker possibilities. Some carnivorous
plants attract bees with nectar, only to trap them. The message may
promise a benefit, but when the machine is built it may show no
self-restraint and fiendishly co-opt all of its host's resources in its
message sending, leaving behind a dead husk of a civilization. It is not too hard to
imagine how such a virulent form of a free-living message might
gradually evolve from more benign forms. A "reproduction effort
parameter" in the message (too subtle for the victims to catch and alter)
may get garbled in transmission, with the higher settings resulting in
more aggressive and successful variants.
The Fermi paradox is an observation by the famous physicist
Enrico Fermi, who achieved the first controlled nuclear chain reaction
under the auspices of the Manhattan Project: if technological
civilizations have even a slight probability of evolving, their presence
should be visible throughout the universe. Our own history and
prospects suggest that we will soon blossom into the universe ourselves,
leaving it highly altered in our wake. In less than a million years we may
have colonized the galaxy. Given the great age of the universe, a few
civilizations that arose before us should have had plenty of time to alter
many galaxies. The sky should be filled with the cosmic equivalent of
roaring traffic and flashing neon signs. But instead we perceive a great
silence.
There are several possible explanations. Evolutionary biologists
make a plausible, though not watertight, argument which notes that at
each stage of our evolution there were an immense number of
evolutionary lines which did not head toward high technology, as
compared with the single one that did. By this argument, we are the
product of a sequence of very improbable accidents, a series unlikely to
have been repeated in its entirety anywhere else. We may be the first and
only technological civilization in the universe. But there are other
explanations for the great silence. At the height of the cold war, a
leading one was that high technology leads rapidly to self-destruction by
nuclear holocaust or worse. But in every single case? Another possibility
is that advanced civilizations inevitably evolve into forms that leave the
physical universe untouched—perhaps they transmute into an invisible
form or escape to somewhere more interesting. I discuss such a
possibility in the next chapter.
A frightening explanation is that the universe is prowled by stealthy
wolves that prey on fledgling technological races. The only civilizations
that survive long would be ones that avoid detection by staying very
quiet. But wouldn't the wolves be more technically advanced than their
prey, and if so what could they gain from their raids? Our autonomous-
message idea suggests an odd answer. The wolves may be simply
helpless bits of data that, in the absence of civilizations, can only lie
dormant in multimillion-year trips between galaxies or even inscribed on
rocks. Only when a newly evolved, country bumpkin of a technological
civilization stumbles and naively acts on one does its eons-old
sophistication and ruthlessness, honed over the bodies of countless past
victims, become apparent. Then it engineers a reproductive orgy that
kills its host and propagates astronomical numbers of copies of itself
into the universe, each capable only of waiting patiently for another
victim to arise. It is a strategy already familiar to us on a small scale, for
it is used by the viruses that plague biological organisms.