Who should a driverless car choose to kill?

A website created by MIT called the "Moral Machine" asks for your opinions on moral dilemmas involving driverless cars, in which you choose which people deserve to live.

Who should die: the pregnant woman in the path of a driverless car, or the middle-aged man the car could swerve into instead? This is the kind of question that MIT's Moral Machine asks its users. It poses scenarios of what a driverless car should do in various life-or-death situations. It is an extreme example of artificial intelligence ethics, a problem that doesn't fully exist yet but that we need to think about now.
Artificial intelligence is already present in today's world, even if you don't notice it. Across the apps on our phones and the websites we visit, our data is being processed to serve us the suggestions and ads most relevant to us. In these cases, artificial intelligence is physically harmless because users can simply ignore the suggestions. Additionally, by avoiding giving websites information about yourself, you can limit how your data is used to target you. While this use of artificial intelligence does raise some concerns about psychological harm, the power still rests with the user, who can stop visiting these apps and websites.
In the future, however, artificial intelligence may not be possible to ignore. In a world where artificial intelligence can make more important decisions by itself, it could change the way we live. For example, if artificial intelligence could detect crimes through a camera and send the authorities our way the moment we speed through a yellow light, that would change how we act. This raises the question of what is acceptable and what is unacceptable behavior for artificial intelligence.
The Trolley Problem
The Moral Machine is based on the trolley problem. In this thought experiment, you stand at a railroad switch as an unstoppable trolley approaches. You can flip the switch, killing one person but saving five others, or you can do nothing, letting the five die but not directly killing the one person yourself. The trolley problem was originally a philosophical exercise, but it becomes very real when a self-driving car must decide whether to "pull the lever."

The Moral Machine


The Moral Machine asks similar questions about the behavior of artificial intelligence. What is acceptable behavior in the context of self-driving cars, where the car might have to choose between killing one person and killing another? In the face of a decision as important as life or death, is it acceptable that a computer, presumably programmed by a human with their own biases, chooses one person over another? Is it acceptable that the decision is made instantly, without any other human input? The following two cases bring up some of the major issues posed by the Moral Machine.
In the first example, the self-driving car must choose between killing a pregnant woman who is crossing illegally or killing four people and a dog, none of whom are pregnant or doing anything illegal. If I asked 100 people to agree unanimously on one outcome over the other, it is unlikely a consensus would ever be reached. This points to a problem the Moral Machine highlights: artificial intelligence comes with some unsolvable problems. Of course, unsolvable problems are an unfortunate fact of life. The old adage "between a rock and a hard place" can still describe a modern situation like this one.
In the second example, the self-driving car must choose between swerving to kill a group of homeless people or staying on course and killing ordinary citizens. This raises the concern of a machine prioritizing certain people over others based on what they look like. These dilemmas are even harder to solve than other "unsolvable" problems because they do not yet exist in the world. There is no precedent to base a decision on, simply because in car accidents today, whatever happens, happens; there is no time for complex ethical decision-making.
The bigger problem that both of these situations show is the following: when decision-making power is handed to a computer, which can quickly take into account all the information about a situation, how should the "loser," the person who will be killed, be decided? This raises the question of which attributes are prioritized. Should certain people be prioritized, like business professionals over homeless people, women and children over men, or younger people over older people? Should the number of people saved be prioritized, regardless of who is being saved? Should law-abiders be prioritized over law-breakers, such as people crossing the street illegally? Different people will have different prioritizations, especially in different situations; there is no single answer. Ultimately, the self-driving car will have to combine all of these considerations into a single decision, and that process is what we should be concerned about.
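To make that concern tangible, here is a minimal, hypothetical sketch of how such considerations might be collapsed into a single score. Every weight, attribute, and function name below is invented for illustration; it does not reflect how any real self-driving system or the Moral Machine actually works.

```python
# Hypothetical sketch of collapsing ethical "priorities" into one number.
# All weights and attributes are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Pedestrian:
    age: int
    crossing_legally: bool

# Invented weights: whoever chose these has already encoded a bias.
WEIGHT_PER_LIFE = 1.0           # every life counts the same baseline amount
WEIGHT_YOUTH_BONUS = 0.2        # favor younger people
PENALTY_ILLEGAL_CROSSING = 0.3  # penalize law-breakers

def group_score(group: list) -> float:
    """Higher score means this group is 'more worth saving' under the invented weights."""
    score = 0.0
    for p in group:
        score += WEIGHT_PER_LIFE
        if p.age < 18:
            score += WEIGHT_YOUTH_BONUS
        if not p.crossing_legally:
            score -= PENALTY_ILLEGAL_CROSSING
    return score

def choose_group_to_spare(group_a, group_b) -> str:
    """The car spares the higher-scoring group; the other becomes the 'loser'."""
    return "A" if group_score(group_a) >= group_score(group_b) else "B"

# Example: one jaywalking adult versus two children crossing legally.
a = [Pedestrian(age=35, crossing_legally=False)]
b = [Pedestrian(age=9, crossing_legally=True), Pedestrian(age=11, crossing_legally=True)]
print(choose_group_to_spare(a, b))  # "B" under these particular weights
```

The point of the sketch is that the outcome flips entirely depending on which weights the programmer picks: the "answer" lives in the constants, not in the algorithm.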

Artificial Intelligence Regulations


Right now, artificial intelligence isn't regulated or guided by any group; the U.S. government currently has no laws on artificial intelligence. In effect, it is regulated only by those who create it. This is not a good plan, because companies creating artificial intelligence may have goals more important to them than the ethics of their technology, such as being first to market with technology that other companies would want to copy, or earning the most money from it. After seeing all the potential decisions that a self-driving car might have to make, it would clearly be a good idea to have some guidelines or regulations on artificial intelligence.
So, how do we go about solving this issue? There are several things you can do to help. You can contact your political representatives to make it clear that you think this is an issue they should address immediately. You could also contact groups like the IEEE Standards Association or the Future of Life Institute, which are both creating their own AI principles documents. These documents are being written in the hope that AI developers will use them as guidelines as they design the AI of the future, and public input is always welcomed on these projects. Finally, you can visit the Moral Machine website and start judging driverless car scenarios for yourself. The choices that users make are being studied to find the preferences of all different kinds of people.
Works Cited

Rahwan, Iyad. “The Moral Machine Experiment – MIT Media Lab.” MIT Media Lab,
Massachusetts Institute of Technology School of Architecture + Planning, 24 Oct. 2018,
www.media.mit.edu/publications/the-moral-machine-experiment/.

Agrawal, Ajay, et al. “The Obama Administration's Roadmap for AI Policy.” Harvard Business
Review, Harvard Business Publishing, 21 Sept. 2017,
hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy.

“AI Principles.” Future of Life Institute, Future of Life Institute, 2017, futureoflife.org/ai-principles/.

Feldman, Brian. “The Trolley Problem.” Trolley Problem Meme: What Do You Do?, New York Media LLC, 9 Aug. 2016, pixel.nymag.com/imgs/daily/selectall/2016/08/09/09-trolley.w700.h467.2x.jpg.

“Moral Machine.” Moral Machine, Massachusetts Institute of Technology, 2016, moralmachine.mit.edu/. (Images of the Moral Machine scenarios are credited to this source.)
