
Matthew Trusnovic

Mrs. Thomas
UWRT 1201 021
September 18, 2015
Reflection: The peer conference showed where I needed to finish the thesis and where I needed to improve the ideas and concepts of the draft. This included a more in-depth explanation of the deeper thoughts behind the thesis and the background knowledge. Along with this, the They Say and I Say elements are more developed in the final draft.
Science Non-Fiction
The year is 2029, and humanity is fighting a war with an Artificial Super Intelligence named Skynet. We created the machines and they turned on us, destroying three billion people at once on what we now call Judgement Day. Our only hope is Kyle Reese, sent back in time to save Sarah Connor, the mother of the future leader of the resistance. But wait: this is not the script of a new Terminator movie, this is not a film directed by James Cameron, and there are no killer robots out to destroy us. Killer robots? Who would even believe this stuff? Yet the realities behind these questions are not only surprising, they are approaching fast. As we grow and change as a society, we must prepare for what may arise from the emergence of Artificial Intelligence. What responsibilities do we have, both individually and socially, in creating this new life form? This question has bothered me for years, and it has recently become a very hot topic in almost every aspect of society. We will all have to come to terms with it someday soon.

Peter Bock, the chief science officer at ALISA Systems Incorporated and former professor of engineering at George Washington University, defines Artificial Intelligence as "the ability of a human made machine to emulate or simulate human methods for the deductive and inductive acquisition and application of knowledge and reason" (Bock, The Emergence of Artificial Intelligence: Learning to Learn). This generally means that an Artificial Intelligence (AI for short) is a man-made machine that can think with the same processes a normal human being uses to solve problems. This measure of intelligence is not to be confused with consciousness or even sentience. Animals are considered to have consciousness, the ability to feel and be aware of oneself, while no life form besides humanity has been considered highly sentient, that is, able to think objectively. These two qualities do not define an AI; instead of using them to measure an Artificial Intelligence's quality, the relative intelligence of the system is used. There are three categories into which an Artificial Intelligence can be grouped: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence (Urban, The AI Revolution: Road to Superintelligence).
Artificial Narrow Intelligence, abbreviated as ANI, describes any intelligent system that is highly skilled at only one singular task (Urban, The AI Revolution: Road to Superintelligence). These systems are very common in today's world, and many people use them every day. GPS units, stock-trading programs, and even calculators are examples of ANIs at work. A GPS is highly skilled at finding the best route from point A to point B (even if some of us do not agree with its choice of routes), but it can hardly be asked to give the deeper meaning behind Animal Farm by George Orwell. Likewise, a calculator may find the exact values of calculus problems down to the nearest ten-thousandth place, but when asked how to create a work of art from a blank canvas, all it will output is the flashing bar indicating it is ready to do more mundane math.
Artificial General Intelligence, also known as AGI, is defined as an intelligent system with human-level intelligence and variety (Urban, The AI Revolution: Road to Superintelligence). This type of artificial intelligence does not exist yet, but it is approaching fast; due to the nature of Artificial Intelligence, this one type of AI will likely exist for only a very short period of time. AGIs would operate much like humans do: they would be as varied in their skills as humans are and could go into as much depth as humanity can. This type of AI is also widely portrayed in popular culture, most notably in the science fiction genre. Some examples of AGIs are WALL-E from the Pixar movie of the same name, C-3PO and R2-D2 from the Star Wars franchise, and Sonny from the movie I, Robot. All of these systems qualify under the AGI category, although they also seem to possess a certain degree of sentience and consciousness, which may or may not happen in reality. Beyond human-level intelligence lies the final, and most dramatic, category.
Artificial Super Intelligence is as obvious as it sounds: an intelligent system that vastly outperforms a human (Urban, The AI Revolution: Road to Superintelligence). These AIs are abbreviated as ASIs, and the category covers any intelligent system that is at all more intelligent than a human being, from a system that is only slightly better to one with an IQ thousands of times greater than a single human's. These intelligent systems tend to become the antagonists in pop culture, as shown by examples like Skynet from the Terminator franchise and the Master Control Program from Tron. These AIs are likely to be created very soon after AGIs come into existence.

Artificial Intelligence research has been around since the mid-20th century, but the original idea of giving human-like qualities to something non-human has existed for centuries. There have been myths and legends around the world about master craftsmen building creations and giving them life, an ancient vision of creating artificial intelligence. These legends likely inspired early researchers to try to create a modern-day golem. Over the years this steady research has gifted society with devices and technologies like GPS and chess machines that will always beat a human player, but the possible applications of a true AI could provide even greater advances. One can imagine a world in which no human lives need to be risked in emergency situations, or in which everyone in the world has access to an extremely effective doctor whenever they need one. Raj Reddy insists that the development of these AI systems will indeed help in times of disaster, as human-controlled robots already immensely help rescue efforts today (CITE SOURCE 4). These efforts still risk the rescuers' lives, so by putting an autonomous system with the same intelligence as a human there instead of an actual person, human lives are far less endangered. In a similar fashion, IBM's new system named Watson aims to help healthcare worldwide. Watson seeks to take unstructured data and create connections within it, so that any piece of information one gives it can be cross-referenced and the person can receive a proper diagnosis of their condition.
Now that the benefits of these autonomous systems have been shown, the question is whether they can actually be created. The surprising answer is that they can, and will be, within the next 20-50 years. This time frame predicts only the arrival of an AGI, but an ASI will follow soon after. This is due to a theoretical event known as an Intelligence Explosion (Urban, The AI Revolution: Our Immortality or Extinction). The scenario: an AI is tasked with designing a better AI, and that new AI is given the same task. Each new iteration is only slightly better, but each takes less time to create than the last, and so on. This eventually leads to an exponential growth of intelligence until the ASI is thousands of times more intelligent than the whole of the human race. This event will likely happen within the current generation's lifetime, as the creation of AGI will happen then as well. That time frame comes from a concept known as Moore's Law, which states that the memory capacity and speed of computers double roughly every two years. This trend suggests that the capacity and speed of a human brain will be reached around the year 2025 (Johnson). The only thing left after that milestone is to actually create the artificial intelligence system itself, which is no easy task. Peter Bock proposes building an AI that fills that capacity itself through learning, instead of being initially programmed with the knowledge; he calls this "Learning to Learn" (Bock, The Emergence of Artificial Intelligence: Learning to Learn). Through this process, he puts forward that the input of knowledge would take around 12 years to complete, much like a human childhood.
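The two trends described above can be sketched in a toy calculation. This is purely illustrative and not from the essay's sources: the function names and every number here (a 2015 hardware baseline of roughly 3.2 × 10^14 operations per second, a 10^16 estimate for the human brain, a 10% intelligence gain per AI generation) are assumptions chosen only to mirror the figures the essay cites.

```python
def moores_law_capacity(start_year, start_ops, target_ops, doubling_years=2):
    """Return the first year a doubling-every-`doubling_years` trend
    reaches `target_ops`, starting from `start_ops` in `start_year`."""
    year, ops = start_year, start_ops
    while ops < target_ops:
        year += doubling_years
        ops *= 2
    return year

# Assumed figures: ~3.2e14 ops/sec of hardware in 2015, ~1e16 for a brain.
# Five doublings (ten years) cross the threshold in 2025.
print(moores_law_capacity(2015, 3.2e14, 1e16))  # → 2025

def intelligence_explosion(level=1.0, gain=1.1, cap=1000.0):
    """Toy recursive self-improvement: each AI designs a successor `gain`
    times smarter; count generations until the level passes `cap`."""
    generations = 0
    while level < cap:
        level *= gain
        generations += 1
    return generations

# Even a modest assumed 10% gain per generation compounds past
# 1000x human level in 73 generations.
print(intelligence_explosion())  # → 73
```

The point of the second function is the essay's: each individual step is small, but because the gains compound, the curve is exponential rather than linear, which is why an ASI is expected to follow an AGI quickly.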
There are three basic camps of thought on how the creation of an Artificial Intelligence may affect the world. The first group holds that the creation of an AI will bring only good outcomes for humanity and society. The positives of an AI are obvious, as noted earlier: better healthcare for all, widespread services for those who need help, and even aid in rescue efforts during disasters. This camp is seen by many as ignorantly optimistic and is in truth the smallest of the three groups. Most of the people in this group believe that we as a society could control any kind of ASI that is created, with little to no concern about negative consequences. This camp sees three possible forms an AI could take: an oracle, a genie, or even a sovereign body (Urban, The AI Revolution: Our Immortality or Extinction). The oracle would be able to answer any and all questions given to it, and with extreme accuracy. The genie would basically be the same as the oracle, except that instead of answering a question it would create something that solves a given problem. The final possibility this group sees is a sovereign body, which would be a fair and logical ruler over possibly everyone. Now, that last part may sound like a good thing to some, promising safety and efficiency for society, but others may see it as a negative, since it could amount to oppression in others' eyes, including my own.
That oppression is one of the many reasons why a larger group of people hold the opinion that the creation of a true Artificial Intelligence would be a bad thing for us. Examples of what could go wrong appear in popular media from modern times back to classic films. The best example is the autonomous system known as Skynet from the Terminator series of films. In these movies, the AI saw itself threatened by the human race, concluded that humans would ultimately destroy it, and so attacked humanity and tried to exterminate the human race. This may seem like an extreme case that may or may not happen, but it is not all too far-fetched. Many believe that this kind of oppression and violence toward humanity is an all too real possibility, so they resist the push to create a true AI. While this camp of thought has its merits, as the other one did as well, there is no way of knowing whether either of these extremes will come to pass.
The final camp of thought on the aftermath of the creation of a true Artificial Intelligence holds that both other groups have merit and that something between good and bad will happen. Possibly the AI will initially take control of the world but then rule fairly, or all of the world's problems could be solved at the cost of personal freedoms. The point of this group is that there is no true way to know what will happen once an ASI comes about. The world, and possibly the universe, has never seen an intelligent system this advanced, nor the kinds of decisions it could make with all of that knowledge at its digital fingertips. This is the group I personally believe in, since it seems extremely unlikely that we would immediately die from an AI, and just as unlikely that we could control this system to work solely for us.
There is little we can do to prevent the coming of Artificial Intelligence, but there are some steps I believe we can collectively take to avoid the negative repercussions the naysayers of AI pose. First, we need to treat this system as a living thing and allow it to grow naturally. This would show it that humanity is not something it needs to lash out against. Second, anything important that is stored electronically needs to be transferred to hard copies. This measure is in case an AI does find reason to turn against humanity; I would rather not leave nuclear weapon launch codes, or the means to control the entire military, on some desktop for it to simply scoop up and use. Finally, research on this technology should not be completed in a military lab or a corporate research department. If the technology is completed there, the greed and hunger for power that dominate those fields will most certainly corrupt the autonomous system into something to be feared. These three precautions should help with whatever may happen when an Artificial Super Intelligence becomes a reality.
This topic is not something everyone is interested in, but it is something everyone should be aware of, as this technology will affect the whole world when it becomes a reality. The advances are here, and the creation of an ASI will happen during our lifetime. While people are mixed in their predictions of what will happen when the ASI is developed, from all positive outcomes to all negative outcomes to somewhere in between, all that is known is that something drastic is going to happen. There is little we can do but wait and hope for the best. As Tim Urban nicely sums it up, if an ASI comes into being, there is now an omnipotent God on Earth, and the all-important question for us is: "Will it be a nice God?"
