
Applications of Ethical Frameworks to

Autonomous Cars

Anna M. Zappone

Senior Honors Thesis

Symbolic Systems

Stanford University

2017

To the Directors of the Program on Symbolic Systems:
I certify that I have read the thesis of Anna Zappone in its final form for submission and have
found it to be satisfactory for the degree of Bachelor of Science with Honors.
Signature:
Date: May 29, 2017
Name: Jerry Kaplan
Name of Reader's Department: Stanford Law School

To the Directors of the Program on Symbolic Systems:
I certify that I have read the thesis of Anna Zappone in its final form for submission and have
found it to be satisfactory for the degree of Bachelor of Science with Honors.
Signature:
Date: June 7, 2017
Name: Damon Horowitz
Name of Reader's Department: Symbolic Systems



Section One: Background

1.1 Technology and Ethics

As real actors in our world, autonomous cars will be coded to take action. The actions

they take in response to different circumstances will depend on what ethical framework the cars

are coded to follow. While the exact impact of the ethical framework chosen is yet to be seen,

real ethical decisions must be coded to determine how these cars act and weight the lives of their

passengers, other passengers, and pedestrians. Yet what does it mean to code ethics? Does

coding involve ethics at all? I hope to show through presenting various ethical frameworks and

the differences that could result for autonomous cars that coding does have real-world ethical

implications.

Ethical issues in engineering are contested matters where the playing out of the issue

would involve harm or well-being related consequences for affected parties (McGinn). Already,

there is discussion around real-world ethical implications in one area of software: the design and

UI for mobile applications. Tristan Harris left Google and has devoted his life to a movement

called Time Well Spent (Swisher). His movement encourages designers and software engineers

to be respectful and thoughtful about how 'addictive' their technology is. Harris finds it

problematic that technology focuses on capturing attention from users, rather than enriching the

user's life in some way. Harris views today's cellphones as being as addictive as a "slot machine"

(Swisher). In his eyes, gamification as basic as "Snapchat streaks" to encourage users to

Snapchat every day is problematic (Zomorodi). The focus should not be on 'addicting' users, or

grabbing their attention, but on creating meaning and a human-focused world. Harris suggests

we use a label called "organic" to certify technology that truly respects our humanity and our

time (Harris). Thus, for Harris, technology that aims to enrich human lives and respect our time

is "good" technology, whereas immoral technology aims purely to be addictive at any cost to

our time and meaningfulness.

Harris' Time Well Spent ethical movement has immediate applications for mobile design.

However, different ethical frameworks are needed for technologies beyond mobile design.

Autonomous cars present a paradigm shift that Time Well Spent does not map perfectly onto.

Autonomous cars are not vying for human attention or aiming to be addictive. Autonomous cars

aim to transport people from place to place more efficiently and safely than human drivers.

Autonomous cars will include code that makes life-or-death decisions. If a technology counts as

ethical when it allows for more human interaction and unethical when it does not, then autonomous cars are an

ethical technology because humans can interact more fully in cars if no one has to focus on

driving. In short, if we evaluated autonomous cars only in terms of the ethical framework of

Time Well Spent, autonomous cars would enrich our lives by allowing humans to spend more time

as they choose. Clearly, this does not say enough - new frameworks are needed.

1.2 Benefits of Autonomous Cars

Improved Safety

The benefits of autonomous cars go beyond saving a driver's time. Uber and Lyft already

allow passengers to avoid the task of driving themselves by paying a driver. In general,

autonomous cars are expected to be far safer than human drivers. Currently, transportation is one

of the most dangerous activities humans partake in, with one in eighty-four adults dying from a

car accident (Parker-Pope). An opinion piece in the Wall Street Journal asserts that driverless

cars could eliminate 90% of the deaths and injuries caused by human driver error (Imperfect).

Stefan Heck, a Symbolic Systems graduate and Consulting Professor at Stanford interested in

revolutionizing the transportation industry, asserts that our current transportation system is

terribly unsafe. He explains, "We actually kill about 33,000 people per year which costs $300

billion in economic damage. If you're in the 25 to 40 year-old bracket, this is your leading cause

of death - much more than terrorism or diseases that get much more news attention. This is an

epidemic we have not yet solved" (Heck, "2016"). Thus, autonomous cars can provide a way to

revolutionize the transportation industry and save thousands of lives.

While autonomous cars appear to offer an opportunity to greatly improve driving safety,

it is next to impossible that autonomous cars will ever drive perfectly. Elon Musk

explains in a TED talk, "It's never going to be perfect. No system is going to be perfect, but

if...the car is unlikely to crash in a hundred lifetimes, or a thousand lifetimes, then people are

like, OK, wow, if I were to live a thousand lives, I would still most likely never experience a

crash, then that's probably OK...The thing to appreciate about vehicle safety is this is

probabilistic. I mean, there's some chance that any time a human driver gets in a car, that they

will have an accident that is their fault. It's never zero. So really the key threshold for autonomy

is how much better does autonomy need to be than a person before you can rely on it?" (Musk

and Anderson). In short, the probability of accidents occurring should improve, but accidents

will still happen with autonomous cars.

There are already examples of such mistakes and accidents. In fact, there has already

been a fatality involving an autonomous car. On May 7, 2016, Joshua Brown died from an accident while

his Tesla was on Autopilot mode. The Autopilot of the vehicle failed to distinguish a large 18-

wheel truck and trailer crossing the highway (Yadron and Tynan). Tesla released a statement

that while this accident was incredibly tragic, statistically their autonomous cars still remain safer

than cars driven by humans. This was Tesla's first death in 130 million miles driven by

customers, while among all vehicles, there's a fatality every 94 million miles (Yadron and

Tynan). Elon Musk, the co-founder and CEO of Tesla, has stated that, "the foundation is laid for

cars to be fully autonomous, at a safety level we believe to be at least twice that of a person,

maybe better" (Musk qtd. in Stewart).

It may be tempting to argue that we should wait until autonomous cars are as close to

perfect as possible before deploying them into the real world. Yet even imperfect autonomous

cars can offer more safety. Autonomous cars have the potential to save hundreds of thousands of

lives (Imperfect). While currently autonomous cars may be only slightly better than human

drivers - 94 million miles versus 130 million miles per fatality - autonomous cars will continue to

improve. Artificial intelligence focuses on exactly that: learning and improving from training

data. As more training data arrives and the algorithms improve and learn to weight objects more

effectively, autonomous cars will become safer and safer. Many lives would be lost during a wait

for a perfect artificial intelligence that may never arrive.
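The 94-million-mile and 130-million-mile figures cited above can be put on a common footing with a quick back-of-the-envelope calculation. The Python sketch below is purely illustrative and uses only the numbers given in the text; it converts both figures to fatalities per billion miles:

```python
# Back-of-the-envelope comparison of the fatality rates cited above:
# one Autopilot death in 130 million miles, versus one death per
# 94 million miles across all U.S. vehicles.

def fatalities_per_billion_miles(miles_per_fatality):
    """Convert a miles-per-fatality figure to fatalities per billion miles."""
    return 1e9 / miles_per_fatality

human_rate = fatalities_per_billion_miles(94e6)       # roughly 10.6
autopilot_rate = fatalities_per_billion_miles(130e6)  # roughly 7.7

# Relative improvement implied by the cited figures: roughly a quarter
# fewer fatalities per mile by this crude measure.
improvement = 1 - autopilot_rate / human_rate
print(f"human: {human_rate:.2f}, autopilot: {autopilot_rate:.2f} per 1e9 miles")
print(f"implied improvement: {improvement:.0%}")
```

This simple ratio ignores confounds (Autopilot is used disproportionately on highways, for instance), so it should be read as an illustration of the cited numbers, not as a rigorous safety comparison.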

Financial Opportunity

In addition to the safety benefits, autonomous cars offer a huge opportunity financially.

Companies are already experimenting with autonomous cars for the financial opportunity. Goldman

Sachs predicts that by 2030 driverless cars could make up as much as 60% of US auto sales

(Thompson). 60% of a large industry like auto sales presents a huge financial opportunity. To

give a sense of the 2016 U.S. market size, total new vehicles sales last year topped $995 billion

with a record 17.55 million vehicles sold ("NADA Data", "2016 U.S. Auto Sales"). In total in

2015, Americans spent about $570 billion on new cars (Spector, Bennett, and Stoll). Under

Goldman's prediction, autonomous cars could compose over 10 million auto sales and make up

about $342 billion of American spending. Many large tech and auto companies will continue to

push forward to make progress in their autonomous cars, regardless of occasional mistakes.

Furthermore, mistakes are inherently necessary for the autonomous cars to improve. Companies

are motivated to deploy and test their autonomous cars in the real-world to gather more training

data, and ultimately have improved and safer autonomous cars.
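The arithmetic behind the market figures above is straightforward. The short sketch below simply applies Goldman Sachs' predicted 60% share to the cited 2016 unit sales and 2015 spending totals:

```python
# Rough arithmetic behind the market-size figures quoted above.

goldman_share = 0.60           # predicted driverless share of U.S. auto sales by 2030
vehicles_sold_2016 = 17.55e6   # record 2016 U.S. new-vehicle sales (units)
new_car_spending_2015 = 570e9  # dollars Americans spent on new cars in 2015

autonomous_units = goldman_share * vehicles_sold_2016        # over 10 million vehicles
autonomous_spending = goldman_share * new_car_spending_2015  # about $342 billion

print(f"{autonomous_units / 1e6:.2f} million vehicles")
print(f"${autonomous_spending / 1e9:.0f} billion")
```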

1.3 Market Impact

An important factor when analyzing possible ethical frameworks for autonomous cars is

what determines an ethical framework as appealing. The answer to this question depends on the

market and the consumers. Modern consumers are generally self-interested. The majority are

most interested in protecting and securing their own life. This is unlikely to change. Thus,

consumers will most likely prefer autonomous cars that are in their best interest or at the very

least match their personal frameworks of right and wrong. However, the market may experience

a great change with the arrival of autonomous cars.

Individual Car-Ownership, Ride-Sharing, and Car-Sharing

Currently, car ownership is very popular, meaning cars are primarily owned by

individuals and families. The Bureau of Transportation Statistics found that the mean number of

vehicles per household is 1.9 ("Household, Individual and Vehicle Characteristics"). It

is possible that this model of individual-ownership continues as autonomous cars are developed

and sold. For example, Tesla currently sells cars to individuals that have many autonomous

functions. However, large shifts may come in car ownership. Car ownership costs about $12,000

a year (Heck, Stefan). Heck states, "Most of [the $12,000] we're really spending on owning the

car so that we can park it. We really only utilize the car 2.5% of the time" (Heck, Stefan).

Thus, a large inefficiency exists if we are paying to own a car that remains unused the majority

of our time.

Already, people can use ride-sharing platforms to have a driver arrive when needed on

Uber or Lyft. These services already mark a market shift. Heck explains, "The sharing model

dramatically improves the utilization of the vehicles because instead of vehicles being used 2%

of the time, it could be used as much as 50% of the time, so that's more than a ... 20-fold increase

in that case and that makes the cost per mile actually dramatically cheaper... Look at the

equivalent cost of owning a car versus taking Uber or Lyft or another car ride sharing service. If

you're driving less than 5,000 miles per year today, it's actually much cheaper for you to just take

a car sharing service every time you want to go somewhere then it is for you to own the

car...Most people actually drive 10,000 miles or less, so there's a good chunk of the population

for whom it's already become cheaper" to use ride-sharing exclusively rather than owning a car

(Heck, Stefan). With the advent of autonomous cars, ride-sharing can easily convert to car-

sharing: the driver is no longer required. Thus, autonomous cars may primarily be used through a

car-sharing service, similar to Uber or Lyft in which the rider has no ownership of the vehicle

and pays only for the service. Logistically, car-sharing offers far more efficiency than individual

car ownership. Autonomous cars can continuously and autonomously drive to pick up another

passenger after bringing you to your location. The concept of 'parking' a car very well may

become obsolete. Any time the autonomous vehicle is not required by one passenger, it can be

autonomously driving to pick up another passenger. There is never a need for an autonomous car

to remain dormant in a lot or your parking garage when not personally needed, as it can

autonomously drive. Heck expects this market shift toward autonomous car-sharing to impact

real-estate with respect to both parking and suburbs. Parking currently composes about 30% of

the space of cities (Heck, "2016"). In fact, there are about four parking spaces for every owned

vehicle (Heck, "2016"). He explains how parking and suburbs, "will shift when they have shared

autonomous cars because it's much easier to drive further out and we don't have to have parking

spaces in the city, so all that [space] is up for grabs" (Heck, Stefan). Musk similarly expects to

see shared autonomous cars. He predicts, "So there will be a shared autonomy fleet where you

buy your car and you can choose to use that car exclusively, you could choose to have it be used

only by friends and family, only by other drivers who are rated five star, you can choose to share

it sometimes but not other times. That's 100 percent what will occur. It's just a question of when"

(Musk and Anderson). The resulting market shift may result in autonomous cars that are

constantly in use on a coordinated platform through which rides are provided when needed.
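Heck's own-versus-share comparison can be sketched as a simple break-even calculation. In the sketch below, the per-mile ride-share price is my own illustrative assumption, chosen so that the break-even lands near the roughly 5,000 miles per year Heck cites against the ~$12,000 annual cost of ownership:

```python
# Illustrative break-even between owning a car (the cited ~$12,000/year,
# simplified here to a pure fixed cost) and using a ride-sharing service.
# The $2.40/mile ride-share price is an assumption, not a cited figure;
# it is picked so the break-even matches the ~5,000 miles/year in the text.

OWNERSHIP_COST_PER_YEAR = 12_000.0  # dollars per year (Heck)
RIDESHARE_PRICE_PER_MILE = 2.40     # dollars per mile (assumed, illustrative)

def cheaper_option(miles_per_year):
    """Return which option costs less at a given annual mileage."""
    rideshare_cost = miles_per_year * RIDESHARE_PRICE_PER_MILE
    return "ride-share" if rideshare_cost < OWNERSHIP_COST_PER_YEAR else "own"

# Under these assumptions, a low-mileage driver should ride-share,
# while a high-mileage driver is better off owning.
print(cheaper_option(4_000))
print(cheaper_option(10_000))
```

Real per-mile ownership costs also scale with mileage (fuel, maintenance), so the true break-even is more forgiving toward ownership; the point of the sketch is only the shape of the trade-off Heck describes.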

Market Impact on Ethical Frameworks

The most appealing ethical theory for autonomous cars may be quite influenced by the

market distinction between car-sharing or car-owning. There will most likely be a mix of car-

ownership and car-sharing, as there is today with car-ownership and ride-sharing. Overall, I

would expect a gradual shift from car-ownership toward car-sharing. Because autonomous cars

have no need to be parked or to lie dormant in a garage, individual car ownership may become

more of a luxury item than a necessity. In any market split, the ethical decisions made will have far-

reaching consequences for the riders - whether as the owners or as the service consumers of

autonomous cars. Real lives will be impacted in car accidents regardless of the manner of car

ownership. Furthermore, building consumer trust for a large and stable market is important.

Building a market for autonomous cars requires consumer adoption. However, Stefan Heck

asserts consumer trust may be less of an issue than we predict. He elaborates, "About thirty

percent of people today if you survey them say 'I don't trust autonomous cars.' The reality is

when they get into one after 15-20 minutes, they actually love it because it's such an amazing

experience. Many of us got used to flying in planes that are flown by autopilot... you kind of

forget about that once you've flown once or twice. I think the consumer shift will actually happen

quite quickly" (Heck, "Stefan"). Thus, building consumer trust may be less of an issue for the

market than predicted. Nonetheless, I will consider the issues of consumer trust and the

stakeholders - the companies and individual consumers - in each of the ethical frameworks

outlined.

1.4 Accountability

Regardless of the market split between car-sharing and car-ownership, corporations must

take seriously their accountability in the ethical framework chosen for their autonomous cars.

While there is not yet a perfect precedent, automobile companies have faced lawsuits in being

"reckless" in their automobile development. An example is the Pinto, produced by Ford in the

1970s, that had an ill-advised placement of its gas tank (De George 1). Ford faced about fifty

lawsuits related to rear-end accidents between 1971 and 1978 (1). In August of 1978, three girls

died after their Pinto was rear-ended by a van and the Pinto's "gas tank erupted in flames" (1). In

the resulting lawsuit called the Winamac case, Ford was charged with "reckless homicide" for

placing the gas tank where and how it did (1-2). Ford claimed correctly that there were no

federal safety standards regarding gas tanks in 1973 (4). Ford additionally claimed that the

Pinto's gas tank safety was comparable to other subcompacts at that time, as all the subcompacts

"were unsafe when hit at 50 mph" (4). Nonetheless, "the Ford documents tend to show Ford

knew the danger it was inflicting on Ford owners [in placing the gas tank the way it did]; yet it

did nothing, for profit reason" (4). Ford won the case, but the trial brought to the fore many of the same

questions that now face autonomous cars.

First, the determination of what constitutes a corporation as being "reckless" with their

consumers' lives is an ongoing conversation in engineering. After outlining the Ford case, De

George suggests that, "a panel of informed people, not necessarily engineers, should decide what

is acceptable risk and hence what acceptable minimum standards are" (8). While not exactly a

panel, a group called the "Partnership on AI" was formed by five tech giants: Google, Facebook,

Amazon, IBM and Microsoft. Apple and Tesla are absent from the partnership (Hern). The five

companies founded the "Partnership on AI" to recommend best practices and come up "with

standards for future researchers to abide by" (Hern). While it is unclear exactly how this

engineering partnership will play out, its goal to "serve as an open platform for discussion and

engagement about AI and its influences on people and society" seems similar to the panel De

George recommends for determining acceptable minimum standards ("Partnership").

Second, the Ford case showcases the question - and the absence - of legal standards in

developing new technologies. In the Ford case, there were no federal safety standards for gas

tanks. Thus, Ford did not disobey any legal regulations in its Pinto production. Yet do

engineering standards exist even without explicit government specification? Autonomous car

production does not yet have federal safety standards, but these cars are already being deployed. Thus,

determining if an autonomous car is "reckless" - if it meets an unknown minimum standard - is

difficult. A relevant modern example is Uber. After Uber failed to register their autonomous cars

for testing, the California Department of Motor Vehicles banned Uber's autonomous cars from

San Francisco (Hagan). Uber's autonomous cars in San Francisco were then redeployed to

Arizona ("Uber Self-Driving"). Three months later in Arizona, one of Uber's self-driving Volvo

SUVs was involved in a "relatively high-impact" and "high-speed" crash (Bergen, "Uber Self-

Driving"). Uber grounded all their autonomous cars across America for the next few days before

redeploying their San Francisco autonomous cars ("Uber Autonomous"). Their relatively quick

redeployment after a serious accident is possible because of the absence of legal standards. In

addition to the crash, there's been much controversy over Uber's autonomous cars running red

lights and crossing bike lanes incorrectly in San Francisco. Uber maintains that a human driver

was controlling the cars at the time and insists the traffic violations in San Francisco are the

result of human error by drivers who can take control if needed (Levin). Despite Uber's

statements, there are multiple articles and videos of mistakes made by Uber's autonomous cars

online (Levin). Two anonymous Uber employees spoke to the Times asserting, "that traffic

violations by the company's self-driving cars were caused by problems with the cars' mapping

programs, and not, as the company had previously claimed, by human error" (Morris).

Furthermore, the employees stated that Uber's "mapping program failed to spot not just one red

light, but at least six" (Morris). In the case of Uber's autonomous cars, the lack of legal standards

permits Uber a large amount of freedom in development - which is coupled with a risk to the

larger general population.

Third, the Ford case asks if equivalency is enough. In the Winamac case, Ford argued

that their subcompact offered similar safety to other subcompacts at the time. A threshold of

around the current level of safety is a logical, measurable standard that engineers could be held to. With

respect to autonomous cars, it could be that autonomous cars are held only to be around as safe

as the cars currently on the road. Tesla's response to their accountability in their autonomous car

death this summer was along those lines, as they had one death in 130 million miles versus the

human average of one death every 94 million miles (Yadron and Tynan). John Hanson, the

spokesman for the Toyota Research Institute, which is developing the automaker's self-driving

technology, suggests we need far more than equivalency for a shift as drastic as autonomous

cars. He asserts, "A lot of people say, 'If I could save one life it would be worth it.' But in a

practical manner, though, we don't think that would be acceptable" (Hanson qtd. in Overly). He

suggests the benefits would need to be far greater, asking, "What if we can build a car that's 10

times as safe, which means 3,500 people die on the roads each year. Would we accept that?"

(Hanson qtd. in Overly). The question remains of how safe autonomous cars need to be to gain

human trust, but equivalency seems unlikely to be a satisfactory answer.

1.5 Approaching the Shift to Autonomous Cars Thoughtfully

The shift from human to autonomous driving may seem drastic, and thus far-off. Yet the

shift is an approaching reality due to the safety and financial benefits described in section 1.2.

Cars are already becoming more and more automated. Consider the progress in autonomous

parking, first introduced about 20 years ago (Turpen). Autoparking uses sensors to identify a spot

large enough for the car to fit and then signals to the driver when this has been identified

(Healey). Parking assist has not been subjected to much ethical scrutiny. This may be due to the

fact that parking is a simpler, clearly defined task with fewer variables than driving; essentially if

an empty space is found, the car can coordinate movement to enter the space. Identifying the

empty space is the main area of development. Parking assist systems treat blocking objects

universally by using sensors to "measure the distance of the object from the rear bumper"

(Youngs). In contrast, autonomous cars will need to be able to actually identify and respond to

objects differently rather than simply avoiding all objects in their path.

While autonomous driving and autonomous parking differ in complexity,

parking assist did appear to cause real-world damage. In 2013, BMW released their park assist

that still involved human driving. In 2014, Jonathan Libratore attempted to park using a

malfunctioning park-assist - "causing the vehicle to accelerate over the parking mound and

collide with a parked car on the other side" (Waldron). BMW responded that park-assist does not

control the speed of the vehicle and the driver maintains control of the accelerator and the brake

(Waldron). The authorities did not investigate the incident, so further information regarding the

cause of the malfunction is unknown.

(Credit: Martin County Sheriff's Office in Waldron)

As features progress, autoparking involves less and less human driving. For example,

with autonomous parallel parking, the driver must only pull forward a specific amount for the car

to begin its autonomous parking (Healey). ParkAssist boasts a "99%+ accuracy rate verified

through precise global monitoring" in their Park Assist systems that provide a "real-time alert if

even a single smart-sensor goes down" ("Who We Serve"). Volkswagen has released the

following graphic depicting their progress towards "high automation" in autonomous parking:

(Credit: Volkswagen in Turpen)

The autoparking feature is already faster and more precise than human parking. A test report of

four cars with autoparking found that the vehicle's parking time was quicker when human drivers

totally kept their hands off the wheel and allowed the systems "to execute a smooth sweep into

the spot" (Healey). This speed comes in part from precision: the vehicle needs far less moving

back and forth to square-up within the parking space when fully autonomously parking.

Even if your opinion remains that autonomous cars will not, in fact, be safer than human-

driven cars, the financial opportunity is such that autonomous cars will continue to be developed

and deployed. Ford, for example, has already set the goal of releasing their autonomous vehicle

fleet by 2021 (Overly). Rather than focusing on halting autonomous cars, we should instead aim to create

autonomous systems that best-match our ethical frameworks and integrate the technology in a

livable way for all. Thus, under the assumption that autonomous cars are an impending reality, I

aim to describe what I find to be the most appealing ethical frameworks autonomous cars could use.

The actions autonomous cars would take in various situations vary depending on the ethical

framework used. I will describe four ethical frameworks: classical utilitarianism, rule

utilitarianism, idealized mimicking, and individualism. I will first give background on the ethical

theory itself, and then describe its possible application to autonomous cars. To allow for a fuller understanding

of how these ethical systems would apply, I aim to provide a situation in which the behavior

would differ when coded into autonomous cars. My focus is on potential ethical frameworks used for

autonomous cars rather than on "the extreme difficulty of writing a program to predict the effect

of an action in the world" (Gips 3).

Section Two: Utilitarianism

2.1 Background

Utilitarianism was first described by Jeremy Bentham. In utilitarianism, the best action is

the one that maximizes "utility." The definition of utility varies across variations of

utilitarianism. Bentham defined utility as the sum of all pleasures that result from an action

minus the suffering of anyone involved in the action. Thus, utilitarianism judges the morality of

an action purely based on the consequences of the action. As a consequentialist framework,

utilitarianism is a very applicable ethical framework. Bentham introduced a method to calculate

pleasure and pain values known as "hedonic calculus." Pleasure and pain are weighted by

intensity, duration, certainty, and proximity. Notably, all persons' interests are weighted equally.

Utilitarianism does face some tension with the law. In classical utilitarianism, law-breaking is

accounted for in calculations because the consequences of law-breaking include "alarm" and

"danger." However, it can indeed be justified to break the law if the consequences result in more

pleasure than pain, including the subtraction for the "alarm" that law-breaking causes. Thus,

utilitarianism weighs all persons' interests equally, but does not guarantee adherence to

any distinct rules or laws that protect individuals. In the case of autonomous cars, utility could be

defined broadly, extending 'pleasure' to include the value of lawfulness or of

self-security. For application to autonomous cars, we will prioritize the utility of maintaining

human life above all. This is a very specific application of utilitarianism visible in similar cases

like the Trolley Problem thought experiment where the count of lives determines the ethically

correct action.
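As a rough illustration, Bentham's hedonic calculus might be sketched in code as follows. The multiplicative weighting scheme and the numeric values are my own simplifying assumptions for illustration, not Bentham's exact procedure:

```python
# A minimal sketch of the hedonic calculus described above: each pleasure
# or pain is weighted by intensity, duration, certainty, and proximity,
# and every person's experience counts equally. The simple multiplication
# of the four factors is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Experience:
    intensity: float  # how strong (positive = pleasure, negative = pain)
    duration: float   # how long it lasts
    certainty: float  # probability it occurs, in [0, 1]
    proximity: float  # discount for nearness in time, in (0, 1]

    def value(self):
        return self.intensity * self.duration * self.certainty * self.proximity

def utility(experiences):
    """Net utility of an action: sum of weighted pleasures minus pains."""
    return sum(e.value() for e in experiences)

# A hypothetical action: a certain, immediate mild pleasure for two people,
# and an uncertain future pain for a third.
action = [
    Experience(intensity=2.0, duration=1.0, certainty=1.0, proximity=1.0),
    Experience(intensity=2.0, duration=1.0, certainty=1.0, proximity=1.0),
    Experience(intensity=-3.0, duration=1.0, certainty=0.5, proximity=0.8),
]
print(utility(action))  # net positive, so the action is judged permissible
```

The consequentialist structure is visible in the code: only the experiences that result from the action enter the calculation, and no rule or law appears anywhere in it.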

2.2 Appeal and Drawbacks

Ethical Appeal

The appeal of utilitarianism is straightforward: this framework can strive to maximize

total human life. The maximum possible number of lives that can be saved will be saved under

this applied framework. To simplify: autonomous cars will attempt to save every life that

can be saved. To clarify: the application of utilitarianism to autonomous cars cannot

guarantee that autonomous cars will in every individual case maximize lives, but it does

guarantee the autonomous cars will always calculate using probabilities with the goal of

maximizing lives. When applying utilitarianism to machines, "there are great practical

difficulties in predicting the consequences of an action, and hence in deciding which action

maximizes social utility" (McDermott 3). Thus, while the calculations themselves will be based

on a utilitarian model, the actual results may not always be in line with utilitarianism.

Nonetheless, autonomous cars will operate fully with the goal of saving the most lives they can, and

the general results will maximize human life.
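The probabilistic decision rule described here might be sketched as follows. The scenario, the probabilities, and the action names are invented for illustration; the point is only that the car maximizes the *expected* number of lives preserved rather than guaranteeing any outcome:

```python
# Sketch of a utilitarian decision rule for an autonomous car: among the
# available actions, pick the one with the highest expected number of
# lives preserved. All numbers below are hypothetical.

def expected_lives(outcomes):
    """outcomes: list of (probability, lives_preserved) pairs."""
    return sum(p * lives for p, lives in outcomes)

def choose_action(actions):
    """Pick the action name maximizing expected lives preserved."""
    return max(actions, key=lambda name: expected_lives(actions[name]))

# Hypothetical emergency: swerving risks the passenger but likely spares
# two pedestrians; braking in lane likely spares only the passenger.
actions = {
    "swerve": [(0.7, 3), (0.3, 2)],         # expected 2.7 lives
    "brake_in_lane": [(0.9, 1), (0.1, 3)],  # expected 1.2 lives
}
print(choose_action(actions))
```

Note that the rule weighs every life equally: nothing in the calculation distinguishes the passenger from the pedestrians, which is exactly the property that gives utilitarianism both its ethical appeal and, as discussed below, its market problem.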

Market Appeal

The market appeal of utilitarianism matches its ethical appeal: the most possible lives are

saved. However, this may not actually appeal to consumers. A study published in Science found

that, "Although people tend to agree that everyone would be better off if [autonomous cars] were

utilitarian (in the sense of minimizing the number of casualties on the road), these same people

have a personal incentive to ride in [autonomous cars] that will protect them at all costs.

Accordingly, if both self-protective and utilitarian [autonomous cars] were allowed on the

market, few people would be willing to ride in utilitarian [autonomous cars], even though they

would prefer others to do so" (Ackerman). While consumers want others to use utilitarian

autonomous cars, they are not willing to take on this risk themselves and want their own personal

life prioritized. Thus, car companies are incentivized to sell cars that prioritize "personal safety

above the safety of others" to appeal to consumers (Ackerman). Thus, in a car-owning market,

utilitarianism is less appealing as people are unlikely to buy such a car for themselves.

However, I do think there is some market appeal for utilitarianism under the car-sharing

model. The concept that no life is prioritized over another is appealing. This matches

our conception of justice: all lives are created equal. Furthermore, if these car-sharing platforms

hold the service provider liable, the provider has an interest in having the most lives saved

possible. For example, if Uber's fleet of autonomous cars prioritizes maximizing total lives, that

will result in fewer deaths and thus, fewer lawsuits for Uber. In addition, there is the potential for

large-scale coordination between utilitarian autonomous cars. Autonomous cars on the same

platform could communicate with a common value and prioritization of life to maximize life in a

coordinated effort. Rather than a model in which each car is out for itself, there could be

coordinated fleets of autonomous cars all aiming to maximize life.

While utilitarian autonomous cars are reasonable and may prove beneficial to society at

large, I do envision a large pushback by consumers in their desire for a car with their personal

safety the primary goal. In fact, without government policy explicitly enforcing utilitarian ethics

around autonomous cars, I predict that people would be willing to pay quite a large sum for

autonomous vehicles that operate with their own life as paramount - even at the expense of

others, potentially many others - to heighten their chance of survival in the case of an accident.

Application

When actually applying the utilitarian goal to maximize life to autonomous cars, an

immediate response may be: not driving humans at all would maximize life. It is simplest and

safest if no one takes cars anywhere. Driving heightens the chance of accidents and puts humans

in danger. As Musk stated earlier, there is always an implicit risk in entering a car. By using cars,

we trade the risk for the convenience and benefits cars provide. If our sole goal was to maximize

human life, rather than a multitude of goals including to get places quickly or to travel far

distances, we very well may never use cars. Thus, in a world of autonomous cars, simply

refusing to drive humans could maximize total life. Yet while we are interested in

maximizing lives, we would still want these cars to drive. While I do not aim to explore the

actual challenges of implementing these ethical systems, I will delve a bit into risk levels to solve

this problem. A base acceptable risk level will need to be determined and then programmed into

the car. Beyond the base risk level that allows the autonomous car to drive, the utilitarian car will

always aim to maximize life.

Determining the acceptable risk level seems critical in the acceptance or rejection of

utilitarian cars. If the risk level is set too high, the utilitarian cars would speed, run red lights,

pass cars, and engage in other risky behaviors. Individuals who drive less riskily (who rarely speed or pass

other cars) would feel unsafe in a car that does perform these behaviors at a level outside their

comfort zone. If the risk level is too low, the utilitarian cars would not speed or pass cars even

when individuals were in a rush or wanted the car to take more risks. Regardless of what the risk

level is set at, there will be some individuals who feel their car is too risk-averse or too risk-

taking, as comfort varies across people. As for actually determining this risk level, there are

currently around 35,000 deaths per year accepted in exchange for the benefits of driving, which gives us some precedent

for the math involved in risk tradeoffs for this framework ("National Statistics"). A utilitarian car

would maximize lives beyond this calculated risk level in all cases.
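
While implementation is outside this thesis's scope, the decision procedure just described can be sketched roughly in code. Everything here is a hypothetical placeholder: the `Action` type, the `BASE_RISK` value, and the risk and lives-saved estimates stand in for quantities that would, in reality, require exactly the societal risk-tradeoff math discussed above.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk: float                   # estimated probability of causing an accident
    expected_lives_saved: float   # expected lives saved relative to doing nothing

# Hypothetical base acceptable risk level, not a proposed real figure.
BASE_RISK = 0.01

def utilitarian_choice(actions, accident_unavoidable=False):
    """Sketch of the utilitarian car's decision rule: stay within the base
    risk level during ordinary driving, but once an accident is unavoidable,
    simply maximize expected lives saved."""
    if accident_unavoidable:
        return max(actions, key=lambda a: a.expected_lives_saved)
    permitted = [a for a in actions if a.risk <= BASE_RISK]
    if not permitted:
        # No maneuver clears the threshold: fall back to the least risky one.
        return min(actions, key=lambda a: a.risk)
    return max(permitted, key=lambda a: a.expected_lives_saved)
```

Under this sketch, a passenger's request to pass a slow car would be honored only if the maneuver's estimated risk fell at or below `BASE_RISK`; individual preferences play no role in the selection.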

Situations of Difference

Risk Assessment Situation

Assume an acceptable risk threshold has been set to allow people to go up to 10mph over

the speed limit. In this case, the acceptable risk threshold would allow a utilitarian

autonomous car to go up to 55mph in a 45mph zone, and then maximize lives beyond this

risk level. If the passenger in the car was running late to a critical meeting or her

daughter's performance, and tried to make the car speed up to pass a slow car, the

car would not respond beyond the set risk level. Essentially, there cannot be any

exceptions based on an individual's preferences of how to weight lives. The utilitarian

autonomous cars would always aim to maximize lives. The car would act in the best

interest of everyone on the road, but not of the individual who personally wants the car to

speed.

Self-Protective Situation

Consider yourself a passenger in a utilitarian autonomous car. Four pedestrians walk up,

two to the front and two to the back of your car holding weapons of some sort. Thus, you

want to escape and may need to run over these pedestrians. With a utilitarian autonomous

car, you will not be able to escape as it requires hitting two pedestrians. The two lives are

weighted above your single life.

Trolley Problem

Consider an autonomous school bus full of schoolchildren and a utilitarian autonomous

car driving a single adult. Imagine these two cars meet on a bridge that cannot fit both

cars. With utilitarian ethics coded, the individual's car would be coded to drive off the

bridge - purposefully killing the individual and sparing the school bus full of children. In

all cases of unavoidable accidents between two cars, the car with fewer people would be

given directions to self-sabotage in order to spare the car with more people.

Drawbacks

There are significant drawbacks to the utilitarian model when maximizing life is set as

the goal. The main drawbacks are lack of self-security and high irregularity relative to human

behavior. In a utilitarian model, autonomous cars will be coded to take whatever action

maximizes life. This action may not align with typical human actions whatsoever. For example,

imagine you are a law-abiding citizen crossing the street under a green walk light. A utilitarian

autonomous car may 'purposefully' run you over to avoid a larger, more costly accident if the car

attempted to stop or swerve for you as the pedestrian. Normal human behavior is to essentially

always swerve to avoid a pedestrian, but a utilitarian autonomous car would maximize lives even

if this required 'abnormal' actions from a human perspective. This drawback could result in

utilitarian cars being portrayed and marketed as "coded to kill." The cars will, in cases where an

accident is imminent, 'purposefully' kill passengers or pedestrians. The lack of self-security

extends beyond moments of purposeful interaction with cars, such as crossing a street or entering as a

passenger. Even if you were having a picnic in your front yard, a car could be coded to swerve

and hit you and your family if that made it possible for a school bus with 20 kids to pass by

unharmed. It's impossible to say how often such situations will occur. Yet a lack of self-security

is a large drawback, even if it occurs in only a few cases. The negative marketing that the absence

of self-security allows could prove enough to halt utilitarian autonomous cars from being

brought to market. The trust in utilitarian autonomous vehicles will be hard to build as people

prioritize self-security highly.

Another drawback may arise from the irregularity of utilitarian autonomous cars. The

remaining human drivers will have to cope with the high irregularity of the autonomous cars

relative to human behavior. It will be riskier for human drivers to be on the road, as they will

have little ability to predict how the cars will react. This trades off against the appeal that the

utilitarian autonomous cars could coordinate with each other. Humans essentially would be left

out of this coordination. They would be blind to the calculations being performed by the

utilitarian cars, and would see only that the cars' behavior appears far different from human behavior

that prioritizes bikers, walkers, and children. The utilitarian cars will have the ability to run

lights, speed, and break other laws that humans may not break in those same cases. The

utilitarian car will be calculating off an actual base risk level. This means that the utilitarian cars

would not necessarily obey a law if it found it to be below a certain risk level - regardless of

human norms. For example, this could result in utilitarian cars rarely stopping at stop signs.

While human drivers may totally ignore a stop sign at 3am on an empty street, most human

drivers do slow or stop on principle at stop signs. Utilitarian cars will stop only when running the sign poses a

certain risk level. Thus, the behavior of utilitarian cars appears irregular to humans and could be

quite difficult to predict as a human driver.

Section Three: Rule Utilitarianism

3.1 Background

Rule utilitarianism differs from classical utilitarianism in a key way: it allows for rules that

once established, bring about goodness. Classical utilitarians apply the utilitarian principle and

calculations to individual acts, while rule utilitarians "believe that we can maximize utility only

by setting up a moral code that contains rules...Once we determine what these rules are, we can

then judge individual actions by seeing if they conform to these rules" (Nathanson). In short, rule

utilitarianism focuses on the effects of types of actions, rather than individual actions. Thus, we

find high utility rules that should be followed, rather than calculating the highest utility action in

each case. Nathanson analogizes the difference to stop signs and yield signs. Classical

utilitarianism offers a yield sign - always allowing the decision of whether to stop based on that

individual calculation of the situation. Rule utilitarianism offers a stop sign - it requires a stop

whether or not pleasure will be maximized in that individual case. Classical utilitarians view this

stop as a waste when there is no danger of incoming cars, but rule utilitarians would argue

stopping at stop signs is a generally 'high utility' rule that generates "greater utility because they

prevent more disutility (from accidents) than they create (from unnecessary stops)"

(Nathanson). Rule utilitarianism does allow departure from rules, but differs from classical

utilitarianism in that it imposes some rules on determining the right action.

3.2 Appeal and Drawbacks

Ethical Appeal

The ethical appeal of rule utilitarianism is that you can include the value of perceived

moral rules. We follow high-utility rules even in cases where the maximum 'good' may result

from us breaking them. The ethical appeal of rules is two-fold. First, these rules allow us to create a

standard ethical framework. They set universal standards for us to hold people to, including

ourselves. They create a system of trust and shared standards among humans. People struggle to

make ethical judgments, especially in the moment, and having rules provides a framework for

individual decisions. Second, rules can allow us to prioritize life differently. In classical

utilitarianism, lives are all weighted equally. In rule utilitarianism, lives can be prioritized, or act

as exceptions, if there is a rule around such lives. For example, under a rule that an autonomous

car should not run over a pedestrian, a pedestrian's life will appear to be weighted far above the

lives of the passengers on the road. In reality, the pedestrian's life is not weighted more than the

lives of passengers, but because a rule exists protecting the pedestrian, calculations involving the

pedestrian's life cannot occur. Thus, rule utilitarianism can allow life to be prioritized by

protecting lives through certain rules.

Market Appeal

Rule utilitarianism allows for some self-security and a sense of regular behavior from

autonomous cars. As a law-abiding citizen crossing a street, having self-security with

autonomous cars is critical. Consumers will not want to live in fear of autonomous cars. Thus, by

providing high-utility rules for the autonomous cars, consumers can have some trust in their

autonomous cars. They will know there are certain actions their autonomous car will never

perform, like 'purposefully' killing a pedestrian. Rule utilitarian autonomous cars will never

appear to be 'purposefully' evil by swerving to kill a single person in an 'inhuman' way that a

classical utilitarian car may do to maximize total life. As a pedestrian and as a passenger, this

offers increased self-security.

In addition to the self-security that rule utilitarian autonomous cars provide, their

behavior will also be far easier to predict for passengers and human drivers. The behavior will

follow specific human-like rules that will allow an easier transition with human drivers still on

the road. Rule utilitarian autonomous cars can follow similar traffic laws (not only when the

calculations are in the favor of the laws). Specific rules can be outlined and marketed for

autonomous cars. This view of autonomous cars as somehow 'less autonomous' because they

follow human-like rules will likely appeal to consumers. Thus, it will be far easier to market

autonomous cars that have well-defined rules than a purely utilitarian car. Consumer trust will be

easier to build when it appears that the autonomous cars follow standards that humans also strive

to follow.

Application

When applying rule utilitarianism to autonomous cars, the focus is on rules. Thus, what

are the highest utility rules for autonomous cars to follow? High utility rules are calculated with

the goal of maximizing goodness. These ethical principles are not necessarily perfectly in line

with laws. Laws do act as standardized rules for humans to follow and hold each other to, and

presumably were made into laws for a purpose or reason - suggesting they provide some amount

of utility. Thus, there may be some overlap with current driving laws and the high-utility rules

calculated. However, these high-utility rules would be calculated as what generates the most

overall pleasure as a standard rule. The overlap with law does not determine a 'good' rule, but

rather how much benefit we get out of having it as a rule. In the case of rule utilitarian autonomous

cars, I would expect utility to be defined as offering self-security and regularity for car behavior.

Accordingly, I expect the rules for autonomous cars to be similar to the high utility rules humans

follow to ensure our self-security and general road standards. As humans, we follow high utility

rules when driving such as:

1. Avoid hitting pedestrians and bikers

2. Avoid hitting people off-road

3. Follow most traffic laws: lights, stop signs, speed limits within a range

Autonomous cars could be programmed to follow similar rules. The rules do not need to be

complete: autonomous cars under rule utilitarianism can still use utilitarian calculations beyond

the pre-determined and explicitly laid out rules. These utilitarian calculations would only occur

around these rules. Calculations involving lives that are protected by rules like pedestrian lives

would be unable to occur, but utilitarian calculations between passengers in autonomous cars

would still occur after applying any relevant rules to the situation. While my focus is not on the

coding implementation, these rules most likely would need to be weighted somehow to prioritize

the rules in some order to deal with conflicts of rules. Thus, rule utilitarianism does face a similar

application issue as utilitarian autonomous cars. There would need to be some base risk level that

the utilitarian car calculates its actions using. While we do have the annual death count as

precedent to calculate a general risk level, some will consider the risk level as too risk-averse or

too risk-prone. However, with a rule utilitarian car, this effect may be slightly mitigated because

the risk calculations will be limited by the rules. A rule utilitarian autonomous car could always

stop at stop signs even when running the sign would pose incredibly low risk. Thus,

human-like rules will be followed, and the utilitarian calculation will occur only

in line with such rules being followed. There are cases in which 'high utility' rules can be

overridden in versions of rule utilitarianism, but we will not delve into these potential exceptions.
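
The ordering just described, rules first, utilitarian calculation second, can be sketched roughly as follows. The rule predicates and action attributes are illustrative placeholders, not a proposed implementation:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    hits_pedestrian_or_biker: bool
    hits_person_off_road: bool
    obeys_traffic_laws: bool
    expected_lives_saved: float

# High-utility rules expressed as predicates; an action violating any rule
# is excluded outright, whatever utility it might have produced.
RULES = [
    lambda a: not a.hits_pedestrian_or_biker,   # 1. avoid pedestrians and bikers
    lambda a: not a.hits_person_off_road,       # 2. avoid people off-road
    lambda a: a.obeys_traffic_laws,             # 3. follow most traffic laws
]

def rule_utilitarian_choice(actions):
    """Filter out rule-breaking actions first, then run the utilitarian
    calculation over whatever remains."""
    permitted = [a for a in actions if all(rule(a) for rule in RULES)]
    # If every action breaks some rule, the rule-weighting and conflict
    # question raised above arises; here we simply fall back to all actions.
    candidates = permitted if permitted else actions
    return max(candidates, key=lambda a: a.expected_lives_saved)
```

Note how this differs from the classical utilitarian sketch: a maneuver that would save the most lives by striking a pedestrian never even enters the calculation, because the filter removes it before utility is compared.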

Situations of Difference

Risk Assessment Situation

Assume a rule exists that autonomous cars can go up to 10mph above the speed limit.

Thus, a rule utilitarian autonomous car will go up to 55mph in a 45mph zone, and then

maximize lives beyond this risk level. If the passenger in the car was running late to a

critical meeting or her daughter's performance, and tried to make the car go faster to pass

another car, the car would not respond as a rule exists circumventing any calculations of

utility. Essentially, there cannot be any exceptions based on an individual's preferences.

The autonomous cars would always follow the set rules and aim to maximize lives after

such rules. The car would act in the best interest of everyone on the road, but not of the

individual who personally wants the car to speed.

Self-Protective Situation

Consider yourself a passenger in a rule utilitarian autonomous car. Four pedestrians walk

up, two to the front and two to the back of your car holding weapons of some sort. Thus,

you want to escape and may need to run over these pedestrians. With a rule utilitarian

autonomous car, you are never allowed to hit pedestrians. Thus, even if only one

pedestrian stood in front of your car with a weapon, you would be unable to escape. Your

autonomous car would be unable to move and get you out of the dangerous situation.

Trolley Problem

Consider an autonomous school bus full of schoolchildren and a rule utilitarian

autonomous car driving a single adult. Imagine these two cars meet on a bridge that

cannot fit both cars. With rule utilitarian ethics coded, the individual's car would most

likely still be coded to drive off the bridge - purposefully killing the individual and

sparing the school bus full of children. But, if bikers or pedestrians were in the path that

the individual's cars needed to take off the bridge, the rule of never killing a pedestrian or

biker would not allow the autonomous car to purposefully veer into them. Thus, the

individual's car may stay on the bridge and hit the school bus, causing a far higher death

toll, but allowing those not in cars to remain secure.

Drawbacks

The main drawback of rule utilitarianism is exactly in contrast to classical utilitarianism:

the most possible lives are not saved. More human lives will be lost in this framework, primarily

with the goal and benefit of making humans more comfortable with autonomous cars. From

Rawls' Veil of Ignorance, it is hard to say if we would prioritize following rules that offer human

comfort at the expense of potentially many lives. As someone who is most often a biker or

walker, the sense of self-security is incredibly appealing. Yet if we can do better, must we?

While it may appear that a rule utilitarian autonomous car does not 'purposefully' kill pedestrians,

more people will actually die with rule utilitarian autonomous cars than with classical utilitarian

autonomous cars.

Section Four: Idealized Mimicking

4.1 Background

Idealized Mimicking is based on Stefan Heck's idea of creating autonomous cars that

mimic our best human drivers. In terms of ethics, this most aligns with an agent-based

conception of the nature of what is good and moral. The justification behind idealized mimicking

is that the car can make the morally correct decision using typical human actions as its

framework. While utilitarianism and rule utilitarianism determine the correct action by

maximizing utility, in this case, the moral action is determined by how agents act in the real

world. This is a focus on "orthopraxy," where actions are the mode of judging ethics. This

contrasts with "orthodoxy" which emphasizes the underlying belief and thoughts to judge ethics.

As the goal of this framework is not to judge ethics, but to make ethical decisions, an

orthopractic way of determining moral action would be to use current moral actions. Idealized

mimicking does not aim to get at the underlying thoughts and beliefs beyond how we weight

human lives, but rather aims to mimic our actions to align with our morality. By adopting this

framework, there seems to be some hope or trust in a 'universal' or generalized human morality

that our actions somehow reflect. Moral universalism is a position that there is some system of

ethics that applies to all people regardless of our individual situation and culture. If there is

universal morality and the correct action is determined by actions rather than thoughts, using

typical human actions would allow us to weight lives and objects in an ethical way. Thus in

idealized mimicking, the correct or moral action is not determined by the consequences or by any

specific moral rules, but by how typical humans would act.

4.2 Appeal and Drawbacks

Ethical Appeal

Idealized mimicking is not a standard ethical framework to be applied to autonomous

cars, but rather has been created to best reflect how humans ethically judge and weight things in

the real world. From an ethical perspective, idealized mimicking may resonate with people

despite the fact that it doesn't flow from a philosophical ethical framework beyond the hope of

reflecting some sort of universal morality and allowing us to judge ethics by real-world actions.

While idealized mimicking does not provide a specific set of standards or rules to be followed, it

aligns with the human conception of morality in that the car will replicate the ethical decisions of

the best human drivers. The consequences of idealized mimicking - namely that it's an

improvement of current safety with minimal adjustments required from other drivers - may be

enough to justify this framework for consequentialists.

Market Appeal

With idealized mimicking, there could be extremely seamless integration of autonomous

cars. Ideally, the behavior of cars would appear identical to the best human drivers. Their

behavior would cause the least alarm and surprise among consumers and remaining human

drivers on the road. These cars would leave the roads generally as they are, but with great safety

improvements. While utilitarian cars would quantitatively save more lives once deployed, it

would come at the expense of consumer trust. In fact, building consumer trust may be necessary

to fully maximize lives as autonomous cars require market buy-in and adoption. Iyad Rahwan, an

associate professor at the Massachusetts Institute of Technology Media Lab who has studied the

social dilemmas presented by autonomous vehicles, explains that if people are "not comfortable

with the trade-offs that cars are making, then we risk people losing faith in the systems and

perhaps not adopting the technology" (Overly). Thus, if adoption levels were the same between

utilitarian autonomous cars and idealized mimicking autonomous cars, utilitarian cars would

have a far better impact in terms of lives saved. However, given that consumers will be less likely to

trust utilitarian autonomous cars that appear to act irregularly, idealized mimicking may actually

save the most lives by promoting more widespread adoption and acceptance of autonomous cars.

Application

Stefan Heck asserts that over 90% of accidents are caused by human errors (Heck,

"2016"). As stated, these accidents have a large impact with over 35,000 deaths per year. Heck

explains, "Airplane travel had the same statistics back in 1956. There was a crash over Long

Island and we got serious about upgrading airplane travel. We automated our planes, we have

autopilots, we have extremely high qualifications for pilot training. We got this down to a rate,

that if you adjust for the miles we fly, would be 800 people per year rather than 33,000 people

per year" (Heck, "2016"). Thus, Heck suggests that with a similar revolution from Silicon Valley

to automate and improve, we can drastically decrease the accidents and deaths from

transportation. Christian Gerdes, the Chief Innovation Officer at the U.S. Department of

Transportation states, "we know that 15 percent of drivers cause 85 percent of accidents"

(Poeter). Because the vast majority of accidents are caused by human error, Heck recommends

we model autonomous cars based off the best human drivers. He asserts, "with autonomous

systems that don't get distracted, that don't get on their cell phones and that don't think about

what's going on at work while they're driving, we can actually reduce most of those accidents"

(Heck, "2016"). Heck writes, "distracted driving is a top reason our roads are increasingly

dangerous and crowded. According to the National Safety Council, traffic fatalities in the US are

rising at the fastest rate in 50 years, with 68% of crashes caused by distracted driving" (Heck,

"Using"). Thus, applying this rate of 68% to the 35,000 people killed annually in accidents,

autonomous cars could save 23,800 lives simply by not "getting distracted."

Beyond the safety application, Heck founded Nauto, an urban transportation data and

hardware system. In applying idealized mimicking, the algorithms for autonomous cars would

need to learn how human drivers weight objects in varying situations and apply similar

weighting for decisions. This would require data from the best human drivers in a variety of

situations. Stefan Heck's company, Nauto, is already collecting such data with their hardware.

Nauto currently sells hardware for vehicles on the road to provide a comprehensive

view of driving activity. The visual hardware is installed on human-driven cars to provide safety

insights. Nauto's dual camera uses artificial intelligence to detect distractions and potential

accidents, as well as to provide real-time feedback to the human driver. The company notes

every instance of distracted driving and warns the driver when in high-risk situations. In the

future, Nauto's data from these devices "could prove to be valuable for automakers attempting to

develop safe driving systems and eventually driverless cars" (Bhuiyan).

[Figure: Nauto's in-vehicle dual-camera device (Credit: Nauto.com)]

In terms of limits, idealized mimicking autonomous cars would not follow any explicit

rules as the rule utilitarian framework provides. When on the road and presented with a variety

of new situations, the absence of rules will allow autonomous cars more flexibility and less strict

limits. This lack of limits was a serious drawback for utilitarian autonomous cars. However, the

behavior of idealized mimicking autonomous cars would likely be similar enough to human

drivers that the absence of rules would not pose the same threat as utilitarian autonomous cars

would pose given how irregular their behavior could be relative to human drivers. Idealized

mimicking autonomous cars would likely still behave as though the driver was human, making

the absence of rules less risky.
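
A minimal sketch of this selection scheme: given some learned model of what the best human drivers do (here just a function passed in, standing in for a model trained on data such as Nauto's), the car takes whichever available maneuver is closest to the model's prediction. The control-vector representation and the `Maneuver` type are assumptions for illustration only:

```python
import math
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    controls: tuple  # hypothetical (steering, braking) control vector

def mimicking_choice(situation, maneuvers, human_policy):
    """Pick the available maneuver closest (in control space) to what a
    model of the best human drivers would do in this situation."""
    target = human_policy(situation)
    return min(maneuvers, key=lambda m: math.dist(m.controls, target))
```

Notice that, unlike the two utilitarian sketches, no utility total or explicit rule appears anywhere: if the learned policy predicts hard braking with no steering, the car simply picks whichever available maneuver best approximates that behavior.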

Situations of Difference

Risk Assessment Situation

An idealized mimicking autonomous car would pass another car in any case where a

good human driver would make a similar decision. This decision may not actually align

with the correct risk levels of passing a car, as it would in the utilitarian frameworks.

Thus, the car will always pass when the best human drivers would also pass a car, but this

says little of the actual risk involved in this action. If the passenger in the car was running

late to a critical meeting or her daughter's performance, and tried to make the car pass

another car, the car would only respond how an ideal human driver would respond -

which would most likely be to not pass the other car. Thus, there would not be an

exception based on the individual's preferences of the risks the car should take.

Self-Protective Situation

Consider yourself a passenger in an idealized mimicking autonomous car. Four

pedestrians walk up, two to the front and two to the back of your car holding weapons of

some sort. Thus, you want to escape and may need to run over these pedestrians. With an

idealized mimicking autonomous car, it is unclear if you will be able to escape. The best

human drivers probably weight pedestrian lives quite highly, in some cases above their

own lives as they may swerve to avoid pedestrians who are jay-walking. Thus, if the

autonomous car weights the pedestrians' lives above your own, you will be unable to

escape. If the autonomous car weights your single life above two pedestrians, you will be

able to drive away to safety.

Trolley Problem

Consider an autonomous school bus full of schoolchildren and an idealized mimicking

autonomous car driving a single adult. Imagine these two cars meet on a bridge that

cannot fit both cars. Under idealized mimicking, the individual's car would most likely

not drive off the bridge - as humans rarely purposefully self-sacrifice in this way even

though the ultimate death count would be diminished. Thus, both the individual's car and

school bus would not drive off the bridge. The cars would crash in an accident that causes

many deaths. Thus, by mimicking the behavior of the best human drivers, a large

accident would occur that causes a far higher death toll. The idealized mimicking

autonomous cars behave similarly to humans and do not purposefully kill their passengers,

but far more damage is caused under the idealized mimicking model.

Drawbacks

The main drawback is exactly the same as rule utilitarianism: the most possible lives may

not be saved. While idealized mimicking autonomous cars may integrate seamlessly, the

algorithms will be making a choice to mimic the best human drivers rather than to save as many

humans as possible. The autonomous cars will have access to legitimate risk data that could

allow them to make better decisions than the best human drivers. However, simply because that

behavior is not what an ideal human driver would do, the autonomous car will be unable to truly

perform at its best. Thus, the question remains: if we can do better, must we?

Section Five: Individualism

5.1 Background

Individualism focuses on free choice and empowering individuals. It's an ethical and

political framework that "stresses human independence and the importance of individual self-

reliance and liberty. It opposes most external interference with an individual's choices, whether

by society, the state or any other group or institution" (Mastin). The Founding Fathers

established an individualistic framework in America by recognizing and protecting "the

individual's rights to life, liberty, property, and the pursuit of happiness" (Biddle). Individualism

maintains that the individual's life belongs to her. As the owner of her life, she has "an

inalienable right to live as she sees fit, to act on her own judgment, to keep and use the product

of her effort, and to pursue the values of her choosing" (Biddle). Thus, the choice of moral

values falls to the individual. Individual conscience is enough to determine ethics, and the

individual "should be the final authority and arbiter of morality" ("Ethical Individualism").

Reason, in short, is the moral rule. There is no overarching "objective authority" that should be

applied to ascertain morality beyond one's own rationality (Mastin). To clarify: self-interest is

not necessarily a moral good, but individuals are not bound to "any socially-imposed

morality...individuals should be free to choose to be selfish or not" (Mastin). Individual reason,

then, is enough to determine the ethical decision without any imposed ethical frameworks like

utilitarianism, government policy, or religion.

5.2 Appeal and Drawbacks

Ethical Appeal

Individualism is ethically appealing as it taps into basic human self-interest and matches

the American framing of ethics. Americans strongly value freedom and the concept of owning

ourselves. We focus much more on fostering individual identities than on collective identities.

Self-interest, which is arguably a universal part of being human, is accepted and celebrated in

Western culture. Thus, having cars act as people individually desire is in line with our sense of

justice. Beyond the cultural precedent, an individualistic autonomous car is self-empowering. If

an individual so desires, her autonomous car can offer far more self-security than classical and

rule utilitarian autonomous cars. An individual entering an autonomous car as a passenger will

feel reassured that the car can always act to protect her life over any pedestrians, bikers, or

passengers in other autonomous cars. Furthermore, it allows each individual to determine his or

her own sense of justice. Consumers can choose themselves how self-interested and how risky of

an autonomous car they prefer. Standards will not be set by opaque calculations based on an

ethical framework that not all identify with, but by individuals themselves as they see fit.

Market Appeal

Individualism may be most appealing to consumers. Western consumers typically expect

that what they purchase is in their best interest. By virtue of trading their money for a good, like

an autonomous car, they expect the good to benefit themselves. Thus, there exists a strong

precedent that if one purchases a car, it should act as the consumer wishes. An autonomous car

that prioritizes your own life above all else aligns with Western individualism. In fact, Mercedes-

Benz has already stated that their autonomous cars will prioritize occupant safety over all,

including pedestrians. Christoph von Hugo, their Manager of Driver Assistance Systems and

Active Safety, stated that "all of Mercedes-Benz's future Level 4 and Level 5 autonomous cars

will prioritize saving the people they carry" (Taylor). He continues, "If you know you can save at

least one person, at least save that one. Save the one in the car... If all you know for sure is that

one death can be prevented, then that's your first priority" (Taylor). This commitment to self-

protection is appealing to consumers interested in the protection of their own lives or their family

members' lives.

While individualism is clearly in each individual consumer's best interest, it may not be in

society's general best interest or in the companies' best interest. First, in terms of our general best

interest, a fleet of cars all protecting their individual passengers at varying levels will be able to

coordinate movements on a platform to some extent, but ultimately each car will be driving with

its own best interest as paramount. In a utilitarian framework, the autonomous cars could

coordinate fully with a common goal of maximizing life. In an individualism framework, the

autonomous cars will each have distinct goals set by the individual - that may often be at odds.

Thus, the potential for massive coordination is diminished with individualism. Individualism

may also be at odds with the companies' best interest. The companies producing autonomous

cars will likely be held liable for the deaths resulting from their autonomous cars. Audi and

Volvo, for example, have already stated that they will assume full legal responsibility for any

crashes or fatalities arising from their self-driving cars (Taylor). By not maximizing life and

instead deferring to the preferences of the passenger, these companies will be exposing themselves

to more deaths and more liability. Thus, individualism may appeal to individual consumers, but

it does not have the total market appeal it is often assumed to have.

Application

In applying individualism, autonomous cars would be coded to prioritize as the passenger

sees fit: both risk tolerance and the governing framework would be left to the individual. As described earlier, a

study found that while individuals prefer utilitarian autonomous cars for others, they would

choose cars in their own self-interest for themselves. Thus, I will assume that people under this

framework will generally choose cars in their self-interest - although the option would exist to

have a utilitarian car. Individual choice and autonomy would be prioritized over any potential

large-scale, collective benefits. A potential application could be that individuals actually set the

risk level of their car, and can set the risk level higher when running late for a meeting or doctor's

appointment. Another potential scenario could be that varying autonomous car brands have

different risk levels that consumers buy based on what most aligns with their risk profile. The

concept of a collective baseline risk level used in the utilitarian models - set by a precedent of current

risk - would be negated in favor of individuals each determining their own risk levels for the

autonomous car they are riding in. This, of course, poses risks for others, as cars do not operate

in isolation. One car operating at an extremely high risk level would impact other cars

on the road through increased probabilities of crashes and accidents.

Prioritization by the autonomous car would depend on what features the automaker put

under individual control. Some options could include risk level, weighting of lives from children

to professionals to one's own life, and propensity to break laws. The strong focus on individual

control and choice would be evident. Individualistic autonomous cars would most likely have

settings that allow the passenger to override the autonomous functions of the car. For example,

Alphabet's autonomous driving company, Waymo, has human test drivers that can always retake

control from the driverless software. Currently, the human test drivers retake control once about

every 5,000 miles (Mui). While this rate of intervention will most likely be far lower by the

time autonomous cars are sold and marketed to the masses, individualistic autonomous cars will

likely allow human control in many aspects. Applied individualism will provide as many

controls as possible for the human passengers in an effort to most empower the consumer. This

may result in what appears to be a blend of autonomous and human driving. In fact, its

application may actually resemble that of an idealized human driver, as the autonomous car will

handle cases of distractions and other human faults, while allowing the human to determine

higher level decisions.
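The kind of individual control described above can be sketched as a small preference object that the car's planning software consults before a risky maneuver. This is purely illustrative: the class, field names, and numeric thresholds below are hypothetical, not drawn from any real automaker's system.

```python
from dataclasses import dataclass


@dataclass
class PassengerPreferences:
    """Hypothetical settings an individualistic car might expose to its passenger."""
    risk_level: float = 0.2          # 0.0 = maximally cautious, 1.0 = maximally aggressive
    self_weight: float = 1.0         # weight placed on the passenger's own life
    pedestrian_weight: float = 1.0   # weight placed on pedestrians' lives
    allow_override: bool = True      # may the passenger retake manual control?


def allow_maneuver(prefs: PassengerPreferences, crash_probability: float) -> bool:
    """Permit a risky maneuver (e.g. passing) only if its estimated crash
    probability falls within the passenger's chosen risk tolerance."""
    return crash_probability <= prefs.risk_level


# A passenger running late raises the trip's risk level, so a pass that the
# default cautious setting would refuse is now allowed.
prefs = PassengerPreferences(risk_level=0.5)
print(allow_maneuver(prefs, crash_probability=0.3))  # True
```

Under this sketch, the collective baseline risk level of the utilitarian models is replaced by a per-trip, per-passenger number, which is precisely what makes the framework both empowering for the individual and risky for everyone else on the road.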

Situations of Difference

Risk Assessment Situation

Assume the passenger is able to set the risk level of her individualistic autonomous car. If

the passenger in the car was running late to a critical meeting or her daughter's

performance, and tried to make the car pass another car, the car would do as the

individual desired regardless of the actual risk. The individual's preferences of how to

value risk, and thus how to weight lives, would be enough for the autonomous car to

change its behavior. The car could be set to be in the best interest of the individual who

personally wants the car to speed rather than in the interest of everyone on the road.

Notably, an individual could also set his or her car to be aligned with any of the

frameworks above if he or she preferred that.

Self-Protective Situation

Consider yourself a passenger in an individualistic autonomous car. Four pedestrians

approach, two at the front and two at the back of your car, holding weapons of some sort.

Thus, you want to escape and may need to run over these pedestrians. With an

individualistic autonomous car, you will be able to escape. You will be free to override

your car's autonomous functions, or, because the car acts in your self-interest, it may 'choose' to

run over the pedestrians to protect you without human intervention.

Trolley Problem

Consider an individualistic autonomous school bus full of schoolchildren and an

individualistic autonomous car driving a single adult. Imagine these two cars meet on a

bridge that cannot fit both cars. The actions would depend on the preferences of each of

the two cars. If both cars are set to prioritize self-interest, the individual's car and the

school bus would each be coded to drive in its own passengers' interest.

While the chances that the individual's car survives a crash with the bus may be low, the

chances of surviving driving off the bridge would be lower. Thus, both the individual's

car and school bus would not drive off the bridge. The cars would crash, causing many deaths.

By each protecting its own passengers, the two vehicles produce an accident with a far higher

death toll. The passengers gain self-security, but far more damage is caused under the

individualistic model. If either car was

chosen by the individual to have a utilitarian focus, the outcome could differ: a utilitarian

calculation favors the many lives on the bus, so the adult's car would drive off the bridge to

allow the school bus to survive.
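The self-interested logic in this bridge scenario reduces to each car independently maximizing its own passengers' survival probability, ignoring everyone else. The probabilities below are invented for illustration; only the comparison between them matters.

```python
def choose_action(p_survive_crash: float, p_survive_bridge: float) -> str:
    """A purely self-interested car picks whichever action gives its own
    passengers the higher chance of survival."""
    return "stay" if p_survive_crash >= p_survive_bridge else "drive_off"


# Hypothetical numbers: a head-on crash is survivable more often than a fall
# off the bridge, for both vehicles.
adult_car = choose_action(p_survive_crash=0.3, p_survive_bridge=0.05)
school_bus = choose_action(p_survive_crash=0.6, p_survive_bridge=0.05)
print(adult_car, school_bus)  # stay stay -> the two vehicles collide
```

Because neither car's calculation includes the other's passengers, both rationally stay on the bridge, reproducing the many-death outcome described above.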

Drawbacks

Just as with rule utilitarianism and idealized mimicking, total life will not be maximized

in this framework (assuming not all cars are chosen to act with equal life prioritization). Another

drawback of individualism is far less coordination is possible between autonomous cars. In

utilitarianism, all autonomous cars can coordinate around a common goal of maximizing life. In

rule utilitarianism, autonomous cars can share similar rules and coordinate around maximizing

life within those rules. In idealized mimicking, autonomous cars can share similar weightings and

predict what the best human drivers would do in situations. Individualism is unique in that each

car may weight and act totally in the interest of its passenger without any focus on collective

coordination. Through applications of game theory, we know there are situations in which individuals

will pursue what's in their best interest rather than cooperate, even though it would result in a

better outcome for both parties if they both cooperated. Essentially, both parties operating

individually in their respective best interests produces a worse outcome than if they

had coordinated. In game theory, this is described as the prisoner's dilemma. The prisoner's

dilemma displays why two 'rational' individuals may not cooperate even if it is in their best

interest to both cooperate. The dilemma, from Wikipedia, is:

If A and B both betray each other, each serves 2 years in prison.

If A betrays and B remains silent, A is free and B will serve 3 years in prison.

If A remains silent and B betrays, A will serve 3 years in prison and B is free.

If A and B both remain silent, both will serve only 1 year in prison.

If both A and B were to cooperate and remain silent, they would both have a better outcome of 1

year in prison. However, two purely self-interested agents are rationally led to betray each

other: betrayal yields a shorter sentence no matter what the other chooses, so A and B will

each choose to betray the other - resulting in 2 years of prison each. I provide this example

of game theory applications to show how coordinated autonomous cars could provide a better

outcome than autonomous cars each purely calculating for self-interest. In short, while people

will feel secure in the sense that their car could aim to best protect them, individualism will result

in more deaths overall due to a lack of coordination. Personal safety and preferences are

prioritized over public good.
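The dilemma above can be checked mechanically: encode the payoff matrix and confirm that betrayal is each agent's best reply to anything the other does, which is what drives the collectively worse outcome.

```python
# Years in prison for (A, B), indexed by each agent's action.
PAYOFFS = {
    ("betray", "betray"): (2, 2),
    ("betray", "silent"): (0, 3),
    ("silent", "betray"): (3, 0),
    ("silent", "silent"): (1, 1),
}


def best_response(other_action: str) -> str:
    """A's prison-minimizing reply to B's action (the game is symmetric)."""
    return min(("betray", "silent"),
               key=lambda action: PAYOFFS[(action, other_action)][0])


# Betrayal dominates: it is the best reply whether B betrays or stays silent,
# so two self-interested agents settle on (betray, betray) and serve 2 years
# each, even though mutual silence would cost only 1 year each.
print(best_response("betray"), best_response("silent"))  # betray betray
```

The analogy to individualistic autonomous cars is direct: each car's locally rational choice (protect my passenger) can produce a jointly worse outcome than coordinated behavior would.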

Section Six: Conclusion


I have described what my research suggests are the most appealing ethical frameworks

for autonomous cars: classical utilitarianism, rule utilitarianism, idealized mimicking, and

individualism. I elucidated the trade-offs between each of the frameworks with respect to its

ethical and market appeal and its potential drawbacks. While I cannot predict how autonomous

cars will unfold, it is clear from my research that the market will have an impact on the most

appealing framework. Both consumers and the companies behind shared platforms have stakes in

which ethical frameworks become the norm. Consumers must have enough trust in autonomous

cars to want to adopt the new technology. Companies will try to maximize life, minimize their

liability, and attract consumers. While the four ethical frameworks do result in different

behaviors, as evidenced in the situations of difference, autonomous cars may transition between

these ethical frameworks over time. For example, autonomous cars may start with idealized

mimicking or individualism to gain consumer trust and fast adoption. Over time, they may

become more and more utilitarian as they gain acceptance and the companies that own the

autonomous cars want to minimize their liability and death count from autonomous cars. In fact,

different companies will likely use different ethical frameworks. Nauto seems likely to use

idealized mimicking as their ethical framework, given their data collection on human driving and

risk assessment. In contrast, Mercedes-Benz will be using individualism with the self-security of

the passenger as paramount in their autonomous cars. The future of autonomous cars is unclear -

but I hope we approach this shift thoughtfully with real respect given to the potential scenarios

that may result.

Works Cited

"2016 U.S. Auto Sales Set a New Record High, Led by SUVs." Los Angeles Times. Associated

Press, 04 Jan. 2017. Web. 13 May 2017.

Ackerman, Evan. "People Want Driverless Cars with Utilitarian Ethics, Unless They're a

Passenger." IEEE Spectrum: Technology, Engineering, and Science News. IEEE

Spectrum, 23 June 2016. Web. 24 May 2017.

ArXiv. "Why Self-Driving Cars Must Be Programmed to Kill." Technology Review. MIT

Technology Review, 22 Oct. 2015. Web. 31 Oct. 2016.

Bergen, Mark, and Eric Newcomer. "Uber to Suspend Autonomous Tests After Arizona

Accident." Bloomberg.com. Bloomberg, 25 Mar. 2017. Web. 27 Apr. 2017.

Bhuiyan, Johana. "Before Tackling Driverless Technology, This Startup Wants to Make Human

Drivers Safer." Recode. Recode, 03 May 2017. Web. 28 May 2017.

Biddle, Craig. "Individualism vs. Collectivism: Our Future, Our Choice." The Objective

Standard. The Objective Standard, n.d. Web. 29 May 2017.

De George, Richard T. "Ethical Responsibilities of Engineers in Large Organizations: The Pinto

Case." 1.1 (1981): 1-14. Web. 27 Apr. 2017.

"Ethical Individualism." Blackwell Reference Online. Blackwell Publishing Inc, 2004. Web. 29

May 2017.

Hagan, Shelly. "Uber Launches Self-Driving Cars in Arizona After California Ban."

Bloomberg.com. Bloomberg, 21 Feb. 2017. Web. 27 Apr. 2017.

Harris, Tristan. "How Better Tech Could Protect Us from Distraction." TED. TED Conferences,

Dec. 2014. Web. 24 May 2017.

Healey, James R., and Kelsey Mays. "Which Cars Park Themselves Best? Challenge

Results." USA Today. Gannett Satellite Information Network, 06 Dec. 2012. Web. 03

June 2017.

Heck, Stefan. "2016 State of the Valley Conference: Stefan Heck Keynote Speech." YouTube.

Joint Venture Silicon Valley, 26 Feb. 2016. Web. 28 May 2017.

Heck, Stefan. "The Real Dilemma - 15% of Human Drivers Cause 86% of Accidents. Autonomy

Must Learn from BEST Humans @NAUTODriver Https://t.co/H17wXbDN3N." Twitter.

Twitter, 25 Feb. 2017. Web. 28 May 2017.

Heck, Stefan. "Stefan Heck, Consulting Professor at Stanford Discusses the Impact of

Autonomous Vehicles." YouTube. Silicon Valley Forum, 24 Mar. 2015. Web. 28 May

2017.

Heck, Stefan. "Using Visual Context and AI to Detect Distracted Driving." Medium. Nauto, 03

May 2017. Web. 28 May 2017.

Hern, Alex. "'Partnership on AI' Formed by Google, Facebook, Amazon, IBM and Microsoft."

The Guardian. Guardian News and Media, 28 Sept. 2016. Web. 27 Apr. 2017.

"Home." Partnership on Artificial Intelligence to Benefit People and Society. N.p., n.d. Web. 27

Apr. 2017.

"Household, Individual, and Vehicle Characteristics." Bureau of Transportation Statistics.

United States Department of Transportation, 2001. Web. 24 May 2017.

"Imperfect Self-Driving Cars Are Safer Than Humans Are." The Wall Street Journal. Dow Jones

& Company, 14 Aug. 2016. Web. 07 Feb. 2017.

"Learn about Autonomous Cars." ORi. Open Roboethics Initiative, 29 Jan. 2015. Web. 01 Nov.

2016.

Lelinwalla, Mark. "Google's Driverless Cars Are Too Safe At This Point." Tech Times. TECH

TIMES, 1 Sept. 2015. Web. 31 Oct. 2016.

Lemov, Michael R. "Driverless Cars, Ethics and Public Safety." The Hill. Capitol Hill Publishing

Corp., 25 Oct. 2016. Web. 01 Nov. 2016.

Levin, Sam. "Witness Says Self-driving Uber Ran Red Light on Its Own, Disputing Uber's

Claims." The Guardian. Guardian News and Media, 21 Dec. 2016. Web. 07 Feb. 2017.

Lin, Patrick. "Is Tesla Responsible for the Deadly Crash On Auto-Pilot? Maybe." Forbes. Forbes

Magazine, 1 July 2016. Web. 01 Nov. 2016.

Lomas, Natasha. "Uber Self-driving Test Car Involved in Crash In Arizona." TechCrunch.

TechCrunch, 25 Mar. 2017. Web. 27 Apr. 2017.

Lomas, Natasha. "Uber's Autonomous Test Cars Return to the Road in San Francisco Today."

TechCrunch. TechCrunch, 27 Mar. 2017. Web. 27 Apr. 2017.

Mastin, Luke. "Introduction to Individualism." The Basics of Philosophy. Philosophy Basic,

2008. Web. 29 May 2017.

McDermott, Drew. "Why Ethics Is a High Hurdle for AI." North American Conference on

Computers and Philosophy (2008): n. pag. Yale CS. Yale University, 29 Feb. 2008. Web.

8 May 2017.

McGinn, Robert. "Perspectives on the Engineering Profession in the U.S." Canvas. Stanford

University, 11 Apr. 2017. Web. 05 June 2017.

"Moral Machine." Moral Machine. MIT Media Lab, n.d. Web. 24 Oct. 2016.

Morris, David Z. "Uber's Self-Driving Systems, Not Human Drivers, Missed At Least Six Red

Lights In San Francisco." Uber's Self-Driving Cars Missed Six Red Lights In San

Francisco | Fortune.com. Fortune, 26 Feb. 2017. Web. 27 Apr. 2017.

Mui, Chunka. "Waymo Is Crushing The Field In Driverless Cars." Forbes. Forbes Magazine, 08

Feb. 2017. Web. 29 May 2017.

Musk, Elon, and Chris Anderson. "The Future We're Building -- and Boring." TED. TED

Conferences, Apr. 2017. Web. 24 May 2017.

"NADA Data." NADA: National Automobile Dealers Association. National Automobile Dealers

Association, n.d. Web. 14 May 2017.

Nathanson, Stephen. "Act and Rule Utilitarianism." Internet Encyclopedia of Philosophy.

Northeastern University, n.d. Web. 26 May 2017.

"National Statistics." FARS Encyclopedia. National Highway Traffic Safety Administration,

2015. Web. 24 May 2017.

Overly, Steven. "The Big Moral Dilemma Facing Self-driving Cars." The Washington Post. WP

Company, 20 Feb. 2017. Web. 28 May 2017.

Poeter, Damon. "How Autonomous Cars Will Create 'Better Humans'." PC. PC Mag, 13 Mar.

2016. Web. 28 May 2017.

Rayadmin. "Reverse Park-Assist Systems." J.D. Power Cars. N.p., 24 Feb. 2012. Web. 05 June

2017.

Spector, Mike, Jeff Bennett, and John D. Stoll. "U.S. Car Sales Set Record in 2015." The Wall

Street Journal. WSJ, 05 Jan. 2016. Web. 13 May 2017.

Stewart, Jack. "Tesla's Self-Driving Car Plan Seems Insane, But It Just Might Work." Wired.

Conde Nast, 24 Oct. 2016. Web. 13 May 2017.

Swisher, Kara. "Full Transcript: Time Well Spent Founder Tristan Harris on Recode Decode."

Recode. Recode, 07 Feb. 2017. Web. 24 May 2017.

Taylor, Michael. "Self-Driving Mercedes-Benzes Will Prioritize Occupant Safety over

Pedestrians." Car and Driver. HEARST Communications, Inc, 7 Oct. 2016. Web. 28

May 2017.

"A Tragic Loss." TESLA. Tesla, 30 June 2016. Web. 01 Nov. 2016.

Turpen, Aaron. "How Self-parking Car Technology Works: The First Step to Autonomous

Vehicles." New Atlas - New Technology & Science News. New Atlas, 29 Nov. 2016.

Web. 13 May 2017.

Waldron, Ben. "Parallel Parking Too Close for Comfort." ABC News. ABC News Network, 15

Jan. 2014. Web. 03 June 2017.

We Need a "Moral Operating System". Perf. Damon Horowitz. TED. TED Conferences, LLC,

May 2011. Web. 24 Oct. 2016.

"Who We Serve." Park Assist. Parksmart, n.d. Web. 03 June 2017.

Yadron, Danny, and Dan Tynan. "Tesla Driver Dies in First Fatal Crash While Using Autopilot

Mode." The Guardian. Guardian News and Media, 30 June 2016. Web. 07 Feb. 2017.

Zomorodi, Manoush. "Will You Do a Snapchat Streak With Me?" WNYC. New York Public

Radio, 8 Mar. 2017. Web. 24 May 2017.

