MIT's "Moral Machine" website asks for your opinions on moral dilemmas involving driverless cars, in which you choose which people deserve to live.
Who should die: the pregnant woman in the path of a driverless car, or the middle-aged man the car could swerve into instead? This is the kind of question that MIT's Moral Machine asks its users, posing scenarios about what a driverless car should do in different life-or-death situations. It is an extreme example of a problem in artificial intelligence ethics, one that does not fully exist yet but that we need to think about now.
Artificial intelligence is already present in today's world, even if you don't notice it. Across the apps on our phones and the websites we visit, our data is processed to generate the suggestions and ads most relevant to us. In these cases, artificial intelligence is physically harmless because users can simply ignore the suggestions, and by withholding personal information from websites, they can limit how their data is used to target them. While this use of artificial intelligence does raise some concerns about psychological harm, the power still rests with users to stop visiting these apps and websites.
In the future, however, artificial intelligence may not be possible to ignore. In a world where artificial intelligence gains the ability to make important decisions on its own, it could change the way we live. For example, if artificial intelligence could detect infractions through a camera and send the authorities our way the moment we speed through a yellow light, that could well change how we act. This raises the question of what is acceptable and what is unacceptable behavior for artificial intelligence.
The Trolley Problem
The Moral Machine is based on the trolley problem. In this scenario, you stand at a railroad switch as an unstoppable trolley approaches. You can flip the switch, killing one person but saving five others, or you can do nothing, letting the five die while sparing the one person you would have killed directly. This was originally a thought experiment, but it becomes very real once self-driving cars are the ones making the decision to "pull the lever."
Works Cited

Rahwan, Iyad. "The Moral Machine Experiment." MIT Media Lab, Massachusetts Institute of Technology School of Architecture + Planning, 24 Oct. 2018, www.media.mit.edu/publications/the-moral-machine-experiment/.

Agrawal, Ajay, et al. "The Obama Administration's Roadmap for AI Policy." Harvard Business Review, Harvard Business Publishing, 21 Sept. 2017, hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy.

Feldman, Brian. "The Trolley Problem." Trolley Problem Meme: What Do You Do?, New York Media LLC, 9 Aug. 2016, pixel.nymag.com/imgs/daily/selectall/2016/08/09/09-trolley.w700.h467.2x.jpg.