
Volodymyr Golubyev

Terry Soule
CS101
16 December 2009

Rise of the Machines

In relative terms, it was not very long ago that swapping punch cards in and out of computers was an everyday chore. We live in a world where computing has become so pervasive that it is almost impossible to imagine it otherwise, and yet there was a time when landmarks we now take for granted were not even on the horizon. The computer has shrunk from the size of an average basement to a pocket-sized gadget and has been refined by ever more sophisticated engineering, so it would seem that the sky is the limit for modern-day programmers. However, because a curiously different kind of problem is at work, one landmark remains to be achieved: artificial intelligence.
Before any points can be made at all, it is important to first demystify the meaning of artificial intelligence. In recent times, AI has been a hot topic among science fiction authors and Hollywood directors alike. More often than not, the depicted self-aware machines are malevolent and wreak havoc on unsuspecting people, much like in the cult classic "The Terminator". In a culture where media is so prevalent, it is worth taking a step back and asking whether this depiction is accurate; of course, it is not. Still, it is easy to see how the very mention of AI can create a powerfully charged image in the minds of those exposed to such depictions. Simply put, the public is afraid.
What is artificial intelligence, then, really? Given the overwhelming complexity of the concept at hand, there is no single clear answer. Encyclopedia Britannica suggests that human intelligence consists of a number of diverse abilities such as learning, reasoning, problem solving, perception, and using language. It stands to reason that an artificial intelligence should also be capable of these things, and that it should simulate them competently rather than simply mimic them. Thus an agreeable definition of AI could be a system that competently learns, reasons, solves problems, perceives, and uses language.
So why does such a system not exist today? Some of the difficulties follow directly from the definition above: learning and reasoning are ambiguous, intangible subjects that are very difficult to identify, formalize, and turn into machine code. In some ways machine perception is already superior to human perception (although from a practical standpoint the hardware still lags behind), but matching the depth of human analysis is also highly problematic: preparing a machine for every possible event is impossible, and only a machine that learns could take full advantage of its perception. Creative use of language is another thing that is difficult to achieve with a machine. In evolutionary terms, humans inherited motives for communication from their most distant ancestors (Clifford). Man-made computers have no such motives; obscure error messages are about as far as they will go, and even that is not voluntary.
Alan Turing, an influential British computer scientist of the 20th century, developed, among other things, a standard of sorts: the Turing machine, a representation of the very basics of what computers are capable of. A thought experiment establishing the foundations of computing, the Turing machine is a hypothetical device that also points to some of the practical limitations of computing. Apart from the difficulty of formalizing decidedly human concepts like learning and reason, there is the simpler matter of actually putting an artificial intelligence to some sort of test to prove its genuineness. If a computer and its AI can be thought of as a Turing machine, a Turing test would need to be passed.
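To make the idea concrete, the following is a minimal sketch of a Turing machine simulator in Python. The state names, tape symbols, and the sample transition table (a toy machine that flips the bits of its input) are illustrative assumptions invented for this example, not anything taken from Turing's own work.

# A minimal one-tape Turing machine simulator (illustrative sketch).
BLANK = "_"

def run_turing_machine(transitions, tape, state="start", halt_state="halt", max_steps=1000):
    """Run a machine whose transitions map (state, symbol) to
    (new_state, symbol_to_write, move), with move being -1, 0, or +1."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(head, BLANK)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Example machine: walk right, inverting every bit, and halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", BLANK): ("halt", BLANK, 0),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001_

Even this toy machine illustrates the point of the formalism: everything the device does is reducible to a finite table of mechanical rules, which is exactly why it is useful for reasoning about what computers can and cannot do.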
In keeping with the Turing machine, the Turing test is an unambiguous intelligence test that aims to determine whether an AI appears intelligent to a human engaging it in conversation via text. Not that there aren't roadblocks here either: a "true" artificially intelligent machine would have to be a competent, learning entity, and thus appear human, rather than just a very well scripted chat-bot with no capacity to learn or reason, not to mention the fact that analog computers are fundamentally incapable of some tasks that humans can perform (The Turing Test).
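To illustrate the contrast drawn above, the sketch below shows the kind of well scripted but non-learning chat-bot in question. The canned phrases and replies are invented for the example and do not come from any particular program.

# A deliberately naive scripted "chat-bot": every reply is a canned pattern
# match, nothing is learned between exchanges, so a persistent interrogator
# quickly finds the seams.
CANNED_REPLIES = {
    "hello": "Hello! How are you today?",
    "how are you": "I'm doing well, thank you for asking.",
    "what is your name": "My name is not important.",
}

def scripted_reply(message):
    """Return a canned reply if a known phrase appears, otherwise deflect."""
    text = message.lower()
    for phrase, reply in CANNED_REPLIES.items():
        if phrase in text:
            return reply
    # No matching script: deflect the question back at the interrogator,
    # a trick famously used by early chat-bots such as ELIZA.
    return "Why do you say that?"

print(scripted_reply("Hello there"))              # scripted greeting
print(scripted_reply("Explain your last answer")) # generic deflection

A bot like this may survive a few exchanges, but it cannot follow up, remember, or reason about what was said, which is precisely the gap a sustained Turing test is designed to expose.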
An AI machine seemingly capable of learning, reasoning, problem solving, perception, and language, and of passing the Turing test on top of that, would raise some very interesting questions in a realm apparently not so distant from computer science: philosophy. If with enough computing power those things become possible, where does one draw the line between a human and a machine? The somewhat archaic notion of divine design would be shaken when faced with a thinking, living computer, in every sense of those words. Are our own neuronal networks, albeit slightly superior, so much different that we "outclass" analog computer minds? Is our seeming intellectual superiority justifiable, or is it simply arrogance? These become important questions, and for the most part they are unpleasant ones, which most people have avoided and only a brave few have faced throughout human history. The truth of the matter is, there is no clear answer.
The current situation is that the difficulties of creating a legitimate AI are overwhelming, while the prospects, with the technology already more or less at hand, remain obscure and puzzling, knee-deep in philosophy. With AI steadily becoming a more realistic goal, it is not too much of a stretch to consider that we ourselves are, perhaps, unimaginably sophisticated, self-perpetuating Turing machines. Or maybe we are a class of our own, and computers will never match the creativity of the conditioned free will that humans possess. What is clear is that artificial intelligence is the next landmark in computing, one that will make us reconsider some of our most basic assumptions about the world.

Works Cited
1. "Artificial intelligence (AI)." Encyclopædia Britannica. 2009. Encyclopædia Britannica
Online. 17 Dec. 2009 <http://www.britannica.com/EBchecked/topic/37146/artificial-
intelligence>.
2. Clifford Nass, Li Gong. Speech Interfaces from an Evolutionary Perspective: Social
Psychological Research and Design Implications. Stanford University. Department of
Communication. 17 Dec 2009. <http://www-
siepr.stanford.edu/programs/SST_Seminars/Evolution_and_Speech.Final1.pdf>
3. "The Turing Test." Stanford Encyclopedia of Philosophy. 17 Dec 2009.
<http://plato.stanford.edu/entries/turing-test/>
