
Running head: DEVELOPING ARTIFICIAL INTELLIGENCE

Literature Review: Developing Artificial Intelligence


Luis Baeza
University of Texas at El Paso


Abstract
The use and development of artificial intelligence to enhance the quality of humans' lives
is a subject that raises several issues and concerns. The purpose of this literature review is
not to give a personal opinion on the issue but to present the different perspectives that artificial
intelligence researchers and society hold. First, the definition and different types of artificial
intelligence will be addressed. Then the effectiveness of the Turing Test in determining whether a
machine is intelligent will be reviewed. The last two sections of the essay will present
concerns and controversies involving the benefits, potential risks, and ethics of developing
artificial intelligence.


Artificial intelligence is a subject that generates a great deal of discussion. It is an ongoing issue that
has been around since the 1950s. There are many controversies surrounding the development of
artificial intelligence and how it might affect the future of humankind. It is an issue that affects
every individual, whether positively or negatively, in fields like technology, society, and
science. Some people believe that this computer revolution and the growth of artificial
intelligence will ensure humanity's welfare, while others think that developing artificial
intelligence further will bring more dangers than benefits. Thus, this literature review will
help readers understand and analyze what artificial intelligence is aiming to accomplish, and how
the different perspectives on the subject support their ideas, by reviewing the following questions:
What is artificial intelligence? Can artificial intelligence tests, such as the Turing Test, be
effective in proving a computer is as smart as a human? What are the risks and benefits of
developing artificial intelligence to greater levels? And how will the areas of ethics and the
moral status of intelligent machines be affected by the development of artificial intelligence?
What is Artificial Intelligence?
A general definition of artificial intelligence (AI) is any kind of intelligence
exhibited by an artifact created by humans; that is, any human-made artifact that can perform
tasks and display characteristics similar to those of human thinking. Looking at artificial intelligence
as a field of study, Stuart J. Russell and Peter Norvig (2009), authors of the book Artificial
Intelligence: A Modern Approach, defined artificial intelligence as the study of agents that exist
in an environment and perceive and act. The field seeks to develop machines that
can behave and think like humans by using reason. Differing definitions of artificial intelligence
from other researchers led Russell and Norvig to conclude that they were all
centered on two main points: thought processes and reasoning (Russell & Norvig, 2009, p. 4). And


from these two branches, four possible goals of artificial intelligence (p. 4) emerge:
creating systems that think like humans, systems that act like humans, systems that think
rationally, and systems that act rationally. Russell and Norvig also explain that artificial
intelligence is trying to understand intelligent entities (p. 3): the aim is to build artificial
intelligence so that humanity can learn more about itself and reach a mutual understanding with
artificial intelligence.
Having settled the goal of artificial intelligence as a field of study, it is also important to
look at it from another perspective: the actual products of AI. As a video made by The School of
Life (2015) establishes, there are three types of artificial intelligence: (1) Artificial Narrow
Intelligence, (2) Artificial General Intelligence, and (3) Artificial Superintelligence. Artificial
Narrow Intelligence is what some people call "weak AI." It is the kind of artificial intelligence
that uses "complex algorithms" (The School of Life, 2015) to perform specific tasks; a system
that is intelligent only in the sense that it can perform what it was programmed to do. For example, if a
machine is programmed to drive a car, it will do specifically that and no more; it will not be able
to handle other forms of transportation, like airplanes, because it simply will not know
how. Basically, every type of artificial intelligence that exists in current technology is weak AI;
some are programmed to do more complex jobs than others, but they all fall into this category. The
second kind of AI is Artificial General Intelligence, often referred to as "strong AI." This kind of
AI equals human cognitive processing and thinking; it is reached when there is no telling the
difference between a human and a computer program in a blind conversation, as in
the Turing Test suggested by Alan Turing in 1950 (Russell & Norvig, 2009, p. 5). According
to The School of Life (2015), some experts doubt strong AIs will ever be created, but others are
positive their development is a few decades away and that those under 35 years old will most likely


experience it. It is presumed that when machines reach human mental capabilities, they will also have
human characteristics such as self-learning, reasoning, self-correction and repair, and
perception of the world; they will have what AI professionals call "recursive self-improvement"
(The School of Life, 2015). This is when the third level of artificial intelligence, Artificial
Superintelligence, will be reached. Artificial Superintelligence is described as any AI that
"exceeds any human levels of intelligence even slightly" (The School of Life, 2015). Creating
systems with recursive self-improvement means that they will become more and more intelligent
with each cycle and will eventually surpass their creators (humans). Developing artificial intelligence
that goes beyond human levels is what many people fear, as they think machines could take over
humanity and eventually eradicate it, as is seen in science-fiction movies.
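To make the idea of recursive self-improvement concrete, consider the following toy model (a hypothetical illustration written for this review, not code from any of the cited sources). It treats a system's capability as a single number that also sets how fast the system can improve itself, so growth compounds instead of staying linear:

```python
# Toy model of "recursive self-improvement" (hypothetical illustration):
# the system's current capability also determines the size of its next
# self-improvement step, so capability compounds across generations.

def recursive_self_improvement(capability: float, generations: int) -> list[float]:
    """Return the capability level after each self-improvement cycle."""
    history = [capability]
    for _ in range(generations):
        # Each cycle improves the system in proportion to how capable it
        # already is (an arbitrary 10% gain per cycle for illustration).
        capability += 0.10 * capability
        history.append(capability)
    return history

if __name__ == "__main__":
    levels = recursive_self_improvement(capability=1.0, generations=30)
    print(f"start: {levels[0]:.2f}, after 30 cycles: {levels[-1]:.2f}")
    # start: 1.00, after 30 cycles: 17.45 -- compounding, not linear, growth
```

Under these assumptions the system roughly doubles its capability every seven cycles, which is the intuition behind the claim that a self-improving AI would quickly leave human-level intelligence behind.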
Can artificial intelligence tests, such as the Turing Test, be effective in proving a computer is
as smart as a human?

As mentioned in the previous section, the Turing Test carries the name of the original
creator of this test, Alan Turing. Turing proposed this test in 1950 with the purpose of
identifying whether or not a computer is intelligent. As seen in Figure 1, the test consists of a
blind conversation between a computer system and a human, overseen by a human judge. The AI
tries to imitate human intelligence, and if the judge is not able to recognize whether he is
speaking to a machine or a normal person after repeating the test several times, then the AI is
considered to be at a human intellectual capacity.

[Figure 1. Visual representation of the Turing Test.]

This test is one of the most popular in the field of artificial intelligence, recognized by many as
viable but dismissed by others who say it is not sufficient proof that a computer might
be intelligent. Alan Turing himself did not directly argue that his test was plausible but rather
claimed it is a good approximation to testing intelligence (Savova & Peshkin, 2007).
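The protocol of the test is simple enough to sketch in code. The following is only an illustration of the test's structure, written for this review: respond_human, respond_machine, and the judge function are hypothetical placeholders, not implementations from any of the reviewed sources.

```python
import random

# Minimal sketch of the Turing Test protocol. The two respond_* functions
# and the judge are hypothetical placeholders for the hidden participants
# and the human interrogator.

def respond_human(prompt: str) -> str:
    return "Honestly, it depends on the day."      # stand-in human reply

def respond_machine(prompt: str) -> str:
    return "Honestly, it depends on the day."      # stand-in chatbot reply

def judge_guesses_machine(transcript: list[tuple[str, str]]) -> bool:
    # Placeholder judge: faced with indistinguishable replies it can only
    # guess at random, which is the situation a passing machine creates.
    return random.choice([True, False])

def run_trials(questions: list[str], n_trials: int = 30) -> float:
    """Repeat the blind conversation and return the fraction of machine
    trials in which the machine was judged to be human."""
    fooled, machine_trials = 0, 0
    for _ in range(n_trials):
        is_machine = random.choice([True, False])  # hide who is answering
        respond = respond_machine if is_machine else respond_human
        transcript = [(q, respond(q)) for q in questions]
        if is_machine:
            machine_trials += 1
            if not judge_guesses_machine(transcript):
                fooled += 1                        # machine passed as human
    return fooled / max(machine_trials, 1)

if __name__ == "__main__":
    rate = run_trials(["What did you do today?", "Do you ever get bored?"])
    print(f"machine judged human in {rate:.0%} of its trials")
```

The repetition matters: a machine "passes" only if, across many blind conversations, the judge cannot reliably tell it apart from the human.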
Virginia Savova and Leonid Peshkin (2007) support the idea that the Turing Test is a
sufficient condition for acknowledging intelligence. To support this idea, Savova and Peshkin raise
new objections, and review previous ones, against the two main arguments opposing the Turing Test. The
first opposing argument is the "Chinese Room" thought experiment proposed by Searle
in 1980, which says that intelligence is not only the ability to manipulate certain information but
also the ability to relate it to senses and experiences in the real world, and that intelligence presupposes
"an internal capacity of generalization" (Savova & Peshkin, 2007). Savova and Peshkin agree that
the Turing Test does not include multi-sensory real-world experience, but their objection offers
the example of blind and/or deaf people: they have different sensory capabilities than a person who
can see or hear, yet that does not interfere when comparing intellectual levels.


The second opposing argument is given by Ned Block's "Aunt Bertha" thought experiment. In this
experiment, Block argues that the Turing Test can be passed by what Savova and Peshkin
refer to as a look-up table: a computer loaded with large amounts of information about a human,
enough to answer any conversation of finite length (Savova & Peshkin, 2007). They counter this
argument by stating that the Turing Test can be constructed in a way that does not allow
look-up-table machines to pass it, by addressing issues such as the length of the test.
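Their point about test length can be made concrete with simple arithmetic. The back-of-the-envelope calculation below uses illustrative numbers chosen for this review (they are not figures from Savova and Peshkin) to show how fast Block's look-up table grows with the length of the conversation:

```python
# Back-of-the-envelope size of Ned Block's "look-up table" machine
# (illustrative numbers only). If the judge may type any of V distinct
# sentences at each turn, a table that pre-stores a reply for every
# possible conversation of T turns needs on the order of V**T entries.

VOCAB_SENTENCES = 10_000   # assume 10,000 plausible judge utterances per turn
TURNS = 10                 # a short ten-exchange test

print(f"10-turn test: {VOCAB_SENTENCES ** TURNS:.2e} entries")        # 1.00e+40
print(f"20-turn test: {VOCAB_SENTENCES ** (2 * TURNS):.2e} entries")  # 1.00e+80

# The commonly cited estimate for the number of atoms in the observable
# universe is around 1e80, so even a 20-turn test already exhausts any
# physically buildable table -- which is why Savova and Peshkin argue the
# test can be constructed so that look-up-table machines cannot pass it.
```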
Passing the Turing Test has been and continues to be one of the primary goals of many
artificial intelligence developers seeking to create a truly intelligent machine. Currently, there exist
many computer systems that conduct conversations through text or audio and can be easily
accessed through the internet. Several of these computer systems, also known as chatbots,
compete in Turing Test contests every year to prove they are capable of tricking judges into thinking
they are having a conversation with a human. In 2014, Eugene Goostman, a chatbot developed in
Russia that simulates a 13-year-old Ukrainian boy, was claimed to have passed the Turing Test. In the
contest, Eugene convinced 10 out of 30 judges that he was human, but the fact that Eugene
represents a Ukrainian boy already excuses occasional grammatical errors or incoherent sentences,
since English is supposedly his second language (Bartlett, 2014). This
leaves many professionals in doubt as to whether Eugene truly passed the test. According to
James Bartlett (2014), "[t]he problem for poor Eugene is that the range and complexity of human
conversations are so vast and so nuanced that no artificial intelligence can yet truly pass the
Turing Test." This suggests that Eugene was just another look-up table, like the one from Ned
Block's argument, and that it certainly has not passed the Turing Test yet.
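It is worth seeing how such chatbots typically work. The minimal ELIZA-style sketch below is hypothetical (it is not the actual Eugene Goostman program): keyword rules plus canned deflections can survive a short exchange, which is exactly why the range and nuance of a longer conversation defeat them.

```python
import random

# Minimal ELIZA-style chatbot sketch (hypothetical; not the real Eugene
# Goostman program). Keyword rules plus canned deflections can carry a
# short exchange, which is why the length of the test matters.

RULES = {
    "how are you": "I am fine, thank you. And you?",
    "where": "I live in Odessa. It is a big city in Ukraine.",
    "old": "I am thirteen years old, why do you ask?",
}

DEFLECTIONS = [
    "Sorry, my English is not so good. Can you say it differently?",
    "That is an interesting question. What do you think about it?",
]

def reply(message: str) -> str:
    lowered = message.lower()
    for keyword, canned in RULES.items():
        if keyword in lowered:
            return canned                      # matched a keyword rule
    return random.choice(DEFLECTIONS)          # deflect anything unrecognized

if __name__ == "__main__":
    for question in ["How are you?", "Where do you live?", "What is irony?"]:
        print(f"Judge: {question}\nBot:   {reply(question)}")
```

Note how the persona (a teenager writing in a second language) does double duty: it makes the canned deflections socially plausible, which is precisely the objection raised against Eugene's result.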


Whether tests like the Turing Test are effective, and whether current computers can pass them,
are not the only questions raised by developing artificial intelligence; there are many
speculations and controversies about what the future of humanity and AI will be.
What are the risks and benefits of developing artificial intelligence to greater levels?
The development of artificial intelligence could bring many benefits to the human race, but it
also presents many potential risks. When talking about benefits, it is evident that artificial
intelligence can be applied in many fields, such as science, healthcare, and economics, as well as other
particular areas, helping humanity perform ever more complex jobs. Potential
risks are also an important issue, as some say further development could mean the end of humankind.
One of the most important benefits of artificial intelligence lies in healthcare
and medicine. Creating computers that can identify and do research on incurable
diseases far faster than humans would mean finding cures for feared diseases such as cancer
or HIV in a matter of months. Andre LeBlanc, an expert in information technology, explains in a
TEDx Talks presentation on AI that future AI will be virtual rather than robotic, contrary to what
many people imagine. He said, "If you wanted to find a cure for cancer,
what more effective way to do it than to test on a simulated human being a billion times with a
certain drug?" (LeBlanc, 2015). In other words, intelligent computers could perform tasks in a virtual
reality, tasks impossible for humans to accomplish, in order to find these cures. Artificial intelligence
could also help people with disabilities, like those who suffer from paraplegia and other
conditions, through mechanisms that can replace human body parts.
Another benefit of creating strong AIs is found in economics. In the stock market
specifically, artificial intelligence can be used to analyze and
understand information and then make decisions that generate large profits for the companies


using it, without those companies moving a finger; the computer does it for them. And this is
something that is already happening; as LeBlanc (2015) stated, "right now there is one company in the US, they do
five hundred million automated trades per month. This is a robot that takes tons of information,
makes decisions every millisecond, and makes hundreds of millions of dollars every year; there
is no way that a human could compete with that." Not only could AIs be made to work in the
stock market but also in industry, by making new kinds of machinery to enhance production.
The School of Life (2015) said that "[t]he immediate priority of superintelligence will be to help
us to create free energy, in turn dramatically reducing the price for almost everything. We would
soon be in the era that Google's chief futurologist Ray Kurzweil describes as Abundance." What
this means is that the price of products people use in their everyday lives will drop drastically,
the way the price of mobile data dropped in past years, leading to an era in which working for
money will not be necessary (The School of Life, 2015).
Of course, not everything that comes from the development of this technology will be
beneficial to humanity. There are several potential risks that put many in doubt as to whether
artificial intelligence should be developed further. One of the risks is not being able to
control superintelligent machines or robots. This is not a problem feared only by
uninformed or skeptical individuals who believe robots will take over the world as seen in
science-fiction movies; it is also feared by famous scientists and researchers like Bill Gates
and Stephen Hawking. And although uninformed and skeptical people base their
opinions on movies or TV shows, that does not necessarily mean the stories presented in these media
are not a possible outcome. According to Nick Bilton (2014), a technology columnist and
reporter, Stephen Hawking, one of the smartest people on earth, wrote that successful A.I.
"would be the biggest event in human history. Unfortunately, it might also be the last," showing


the concern about humans trying to control beings far more intelligent than themselves. When the
point is reached at which machines are capable of creating a society of their own, there is a
major risk that they will no longer need humans and will eventually aim to replace them in
every field possible.
Another danger presented by artificial intelligence is the potential for bugs, glitches, or faults
in the system. As Bilton (2014) states, "[i]magine how a medical robot, originally programmed
to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who
are genetically prone to the disease." Problems could be encountered not only in medicine and
healthcare, where people's lives are threatened, but also in the
economy. Going back to one of the benefits mentioned before, if an AI programmed to make
critical decisions in the stock market, or machinery designed to build expensive products, is not
working properly, companies could lose millions of dollars.
Other potential problems include the use of artificial intelligence for warfare
purposes and robots taking over human jobs. Military use of this technology is one of the
issues that scares people the most; countries would find themselves in an arms race to build the
best killing machine, and this arms race is already underway (Bilton, 2014).
It was mentioned earlier that having superintelligent machines work for humans would bring
humanity to the era of Abundance; while this might be a good thing, it is also feared that this time
period will not be what many scientists expect it to be, that it will actually be a period in
which companies replace human labor with machines that can perform better than people at
lower cost.
An online research survey was conducted to understand society's perspective on
developing artificial intelligence. The results showed that the majority of respondents (86.3%)


had at least some understanding of the subject and that, surprisingly, despite the issues that
advanced AI may cause in the future, 33 out of 46 respondents either agreed or strongly agreed
that the development of artificial intelligence should be continued. All of these issues about
science, medicine, and robots replacing humans not only raise questions about how humanity
might be threatened but also raise questions in the field of ethics and morality.
How will the areas of ethics and the moral status of intelligent machines be affected by the
development of artificial intelligence?
With the development of advanced AI that can equal human levels of behavior and
thinking, there will be both negative and positive impacts on the ethical lives of humankind.
There are different opinions on whether intelligent machines that are able to think and act
like humans should have a moral status. Many factors and principles might
determine the moral status of a specific AI, such as robot-to-human interaction and comparisons of a
machine's usage and purpose to those of other creatures that possess intelligence, such as animals.
John P. Sullins (2011) states that in order to evaluate the moral status of any autonomous
machine-like technology, there are three characteristics that must be taken into consideration:
whether the robot is significantly self-directed, whether it shows intentional behavior, and
whether it acts from a position of responsibility. This means that for an intelligent
computer to be considered a moral agent, it needs to be able to operate on its own, not controlled
by anyone; and when acting on its own, the machine needs to be aware of its actions and act
accordingly, so that it has "a responsibility to some other moral agent(s)" (Sullins, 2011). Here
Sullins claims that it is not necessary for a machine to have personhood in order to be attributed
with moral agency, only these characteristics.
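Sullins's checklist can be summarized schematically, as in the sketch below. This is only an illustration of the logic of his three jointly necessary conditions, with field names invented for this review; it is not a claim that moral agency is actually machine-checkable.

```python
from dataclasses import dataclass

# Schematic encoding of Sullins's three criteria for machine moral agency.
# The field names are invented for illustration; the point is only that
# all three conditions must hold together, and personhood is not one of them.

@dataclass
class RobotAssessment:
    significantly_autonomous: bool   # operates without direct human control
    intentional_behavior: bool       # its acts are best explained as deliberate
    holds_responsibility: bool       # it owes a duty to some other moral agent(s)

def is_moral_agent(robot: RobotAssessment) -> bool:
    """All three of Sullins's conditions are jointly required."""
    return (robot.significantly_autonomous
            and robot.intentional_behavior
            and robot.holds_responsibility)

if __name__ == "__main__":
    vacuum_robot = RobotAssessment(True, False, False)
    print(is_moral_agent(vacuum_robot))   # False: autonomy alone is not enough
```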


However, Nick Bostrom, a philosopher at the University of Oxford, presents a different
idea about this issue. His argument is that in order for an AI to have moral status it must have
personhood, more specifically, sentience and sapience (Bostrom & Yudkowsky, 2011). Sentience can be
described as the ability to feel pain and suffer, while sapience is described as the ability to be
self-aware and to reason (Bostrom & Yudkowsky, 2011). This leads to the comparison of machines to
animals and even to humans with disabilities. For example, animals are attributed only with
sentience, which gives them moral status, even if it is lower than that of humans. On the other hand,
people with disabilities, such as diseases that prevent them from reasoning, are most commonly
recognized as having both sentience and sapience. This suggests that an AI that can feel pain and
reason like a human should have the same moral status as any other person. Bostrom supports
this idea with various arguments and principles, one of them being the "Principle of Substrate
Non-Discrimination," which states that "if two beings have the same functionality and the same
conscious experience, and differ only in the substrate of their implementation, then they have the
same moral status" (Bostrom & Yudkowsky, 2011).
Looking at another perspective, there are other people who think AIs should not be
considered moral agents at all. Joanna J. Bryson (2010) argues that robots should only be designed to
serve humans, not to be compared to them; that is, robots should be used as slaves. In fact, this
argument applies even before AI reaches human intelligence; the idea is that AI developers
should not aim to create computers that can feel or be aware of being used. Bryson (2010) states that
"[r]obot owners have no obligations, but ensuring that they do not is the responsibility of robot
builders." Having robots as servants means that the developers of artificial intelligence
should take full responsibility for making sure the owners of these servants do not feel obligated to
act ethically toward the robots. Bryson (2010) also argues that a person must never be found in a situation


where he or she has to choose between saving a human and saving a robot; in doing so,
humanity would only be devaluing itself.
Conclusion
Artificial intelligence is an issue that evidently has a great impact on humanity.
Tests like the Turing Test already recognize modern AIs as smart, but there is still
a long way to go before a machine can be called truly intelligent. With the aim of developing
artificial intelligence to the level of human intelligence, and beyond it to superintelligent computers,
come many controversies concerning humankind's welfare. While it might be true that greater levels of
artificial intelligence could bring many benefits and improve the quality of life, they could also come
with many potential risks that would put humanity in danger. Not only are problems regarding benefits
and risks raised, but also issues involving the ethics of creating new intelligent beings and
the moral status these will have in society. Artificial intelligence is being developed at
exponential rates, and superintelligent machines can be expected to arrive in the near future.


References
Bartlett, J. (2014, June 21). No, Eugene didn't pass the Turing Test - but he will soon. The
Telegraph. Retrieved from
http://blogs.telegraph.co.uk/technology/jamiebartlett/100013858/no-eugene-didnt-pass-the-turing-test-but-he-will-soon/
Bilton, N. (2014). Artificial Intelligence as a threat. The New York Times. Retrieved from
http://www.nytimes.com/2014/11/06/fashion/artificial-intelligence-as-a-threat.html?_r=0
Bostrom, N., & Yudkowsky, E. (2011). The Ethics of Artificial Intelligence. Retrieved from
http://www.nickbostrom.com/ethics/artificial-intelligence.pdf
Bryson, J. J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with
artificial companions: Key social, psychological, ethical and design issues (pp. 63-74).
Amsterdam/Philadelphia: John Benjamins Publishing Company.


LeBlanc, A. [TEDx Talks]. (2015, January 12). Artificial Intelligence and the future [video file].
Retrieved from https://www.youtube.com/watch?v=xH_B5xh42xc
Russell, S. J., & Norvig, P. (2009). Artificial intelligence: A modern approach. Upper Saddle
River, NJ: Prentice Hall/Pearson Education.
Savova, V., & Peshkin, L. (2007). Is the Turing Test good enough? The fallacy of resource-unbounded
intelligence. In M. M. Veloso (Ed.), IJCAI (pp. 545-550).
The School of Life. (2015, August 17). Artificial intelligence [video file]. Retrieved from
https://www.youtube.com/watch?v=9TRv0cXUVQw
Sullins, J. P. (2011). When is a robot a moral agent? In M. Anderson & S. L. Anderson (Eds.),
Machine ethics (pp. 151-161). Cambridge University Press.
