
COVID-19: the newest battleground for privacy versus safety

Introduction
Over the last few months COVID-19 has had the world in its grip. The disease that has had a
tremendous impact on the world started in China. China, with its enormous population, has
taken sweeping measures to counter the virus, and it is using technology on a wide scale.
Disinfecting robots, smart helmets, thermal camera-equipped drones and advanced facial
recognition software are all used in the struggle against COVID-19. 1
People's locations and travel histories are being closely monitored, both by cameras on the
streets and by their phones. If people are deemed a risk based on their data, they are given a
red code in a special application on their phones. Chinese citizens must show this code at airports,
at train, metro and bus stations, and in some cities even to enter supermarkets and public
toilets. China already has a mixed reputation when it comes to the privacy of its
citizens. Critics like Maya Wang, a researcher at Human Rights Watch, fear that China will
keep this system in place once the crisis has been dealt with. 2
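
To make the mechanism concrete, a minimal sketch of how such a color-code rule could work is given below. The criteria, thresholds and field names are hypothetical illustrations only; the inputs and rules of the actual Chinese system are not public.

```python
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    visited_high_risk_area: bool    # hypothetical criterion from location history
    close_contact_with_case: bool   # hypothetical criterion from contact tracing
    days_since_last_exposure: int

def health_code(record: CitizenRecord) -> str:
    """Assign a traffic-light style code from location and contact data.

    Purely illustrative: not a description of the real system.
    """
    if record.close_contact_with_case and record.days_since_last_exposure < 14:
        return "red"      # barred from stations, supermarkets, public toilets
    if record.visited_high_risk_area and record.days_since_last_exposure < 14:
        return "yellow"   # restricted movement
    return "green"        # free to travel

print(health_code(CitizenRecord(True, False, 5)))  # -> "yellow"
```

The point is not these specific rules, but how little data is needed to gate someone's access to public life once such a pipeline exists.
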
Right now, the world might be very happy with these sorts of measures, but they pose moral
dilemmas that can be placed in a larger, worldwide context. These are dilemmas that citizens
all around the world have started questioning since the emergence of the digital age. Carl M.
Cannon, who has written books on this topic, formulated it as follows: "How can the
government protect its citizens without simultaneously trampling on their civil rights and
eviscerating the very concept of privacy?" 3 When Edward Snowden stepped forward in 2013,
this debate about privacy versus safety took off. 4 The context at that point was different: it
concerned terrorist attacks. Although Cannon's question touches the root of the
problem, it might be considered too naive. It suggests that governments always look
out for the best interests of the citizens they are put in place to govern, without any
desire to control other parts of their citizens' lives. History has given several
examples in which this has not been the case. 5 That is why I pose the question slightly more
realistically: what weighs heavier, the health of the citizens of the world or the privacy of the
individual? And how can we still reap the benefits of AI algorithms when they are placed against
privacy? With the emergence of COVID-19, these issues are accelerating and becoming
more prominent. We must start asking these questions now. Philosophical debates in AI
tend to look at futuristic robots and their emotions and decision making. 6 But narrow AI is
concretely influencing our lives today and is a current issue that needs the necessary attention.

1. Jakhar, Pratik. "Coronavirus: China's Tech Fights Back." BBC News, BBC, 3 Mar. 2020, www.bbc.com/news/technology-51717164.

2. Wang, Maya. "China's Neighbors Respect Rights to Combat Coronavirus." 13 Mar. 2020.

3. Cannon, Carl M., et al. "The Personal Privacy vs. Public Security Dilemma." RealClearPolitics, 26 July 2018, www.realclearpolitics.com/articles/2018/07/26/the_personal_privacy_vs_public_security_dilemma.html.

4. Snowden, Edward. Permanent Record. Picador, 2020.

5. Roberts, David. The Totalitarian Experiment in Twentieth Century Europe: Understanding the Poverty of Great Politics. Routledge, 2006.
The direct impetus for this paper may be China's surveillance system, but the discussion will not be
limited to China. Surveillance systems are not confined to one country, and it is therefore
important to look at this problem from a global point of view. To answer the questions above,
I will first look at the contemporary side of the issue, approaching the
question from both a Kantian and a utilitarian view. Then, after putting it into a philosophical
context, the main issues will be raised: what are the dangers if we allow these Big
Brother-style surveillance systems? Thereby, I will show how giving up privacy now to
solve the crisis may have far-reaching consequences for our future privacy. Finally, I
will conclude that algorithms can only be justifiable if they are transparent, and I will suggest that
our only chance of succeeding in enforcing transparency involves the internet.
Individual responsibility vs national enforcement
Why would large surveillance systems be needed to deal with COVID-19 at all? Until this point, certain
Western countries have not been using such systems and instead rely on their citizens'
own responsibility to curb the spread of the virus. This, in turn, confronts citizens with a moral dilemma, as
portrayed in the table below. It should be noted that these weights have no real meaning
in themselves, but they provide a simple way of modelling the current situation. If someone else
stays inside, it is advantageous for me to go outside and continue living "normally" without
curbing my social appetite: I get to continue working, go to the supermarket or to the beach on a sunny
day. However, for the health of the population, it would be best if we both stayed
inside, thereby minimizing the spread of the disease and allowing the virus to be curbed in
the shortest amount of time. Since "continue living like nothing is going on" is a dominant
strategy, we end up in an equilibrium that is not the socially optimal outcome. This is called the
prisoner's dilemma and has been used in several AI ethics cases. 7 8
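
The table referred to above did not survive the page extraction; the following reconstruction is based on the illustrative percentages in footnote 8. Entries give the loss in life quality as "mine / the other person's". The value for staying inside while the other goes outside is not given in the footnote; -35% is assumed here purely to complete the matrix in a way that makes going outside the dominant strategy.

                        Other stays inside        Other goes outside
    I stay inside       -20% / -20%               -35% / -10% (assumed)
    I go outside        -10% / -35% (assumed)     -30% / -30%

A short sketch makes the structure explicit: "go outside" is the best reply whatever the other person does, yet mutual "go outside" is worse for both than mutual "stay inside".

```python
# Illustrative payoffs (losses in life quality, in percent). The -35 entries are
# assumptions made only to complete the matrix described in footnote 8.
payoff = {  # (my action, other's action) -> (my loss, other's loss)
    ("inside", "inside"):   (-20, -20),
    ("inside", "outside"):  (-35, -10),
    ("outside", "inside"):  (-10, -35),
    ("outside", "outside"): (-30, -30),
}

def best_response(other_action: str) -> str:
    """My best reply to the other's action (a smaller loss is better)."""
    return max(["inside", "outside"], key=lambda a: payoff[(a, other_action)][0])

# "Outside" is a best response whatever the other does: a dominant strategy.
assert best_response("inside") == "outside"
assert best_response("outside") == "outside"

# Yet mutual "outside" (-30, -30) is worse for both than mutual "inside" (-20, -20):
# the dominant-strategy equilibrium is not the socially optimal outcome.
print(payoff[("outside", "outside")], "vs", payoff[("inside", "inside")])
```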

Some Western countries enforce the socially optimal outcome by imposing a lockdown on their
citizens. However, a lockdown is not economically feasible for longer periods of time.
Sooner or later, parts of the country will have to open up again, as we are seeing in
China right now. Moreover, some people will simply continue to go outside because they have
vital jobs. Since people will go outside with or without COVID-19, the Chinese government
might choose to track them instead.
It should be noted that the focus of this paper will not be on the moral dilemmas a lockdown
may bring. Instead, I will focus on the potential risks and benefits of using big data and AI
to reduce the spread of COVID-19.

6. Bostrom, Nick, and Eliezer Yudkowsky. "The Ethics of Artificial Intelligence." The Cambridge Handbook of Artificial Intelligence 1 (2014): 316-334.

7. Gogoll, Jan, and Julian F. Müller. "Autonomous Cars: In Favor of a Mandatory Ethics Setting." Science and Engineering Ethics 23.3 (2017): 681-700.
8. In this scenario, staying inside reduces your life quality by 20%, whereas going outside reduces it by only 10% provided the other person stays inside. Because going outside is the better choice for each individual regardless of what the other does (the dominant strategy), everyone goes outside, but this results in a situation that is worse for everybody, since there is a large chance that both will get ill, hence the 30% deterioration. Nash's work on non-cooperative games showed that dominant strategies do not always (and often do not) lead to the optimal outcome for all. This is referred to as a prisoner's dilemma.
Risks and benefits now
Currently, millions of people in China are being heavily monitored and tracked. They are
losing privacy but, as the Chinese government argues, are being kept safe in return. 9 Whether
this is ethical comes down to the classical philosophical debate between utilitarianism and
Kantianism. The viewpoint of utilitarianism dates back to the end of the 18th century.
Bentham (1780) argued that "it is the greatest happiness of the greatest number that is the
measure of right and wrong." He proposed that the ultimate goal of governments should be
to maximize the sum of individual utilities. The advantages of utilitarianism include that it
provides a very clear overview of how society should prioritize in difficult situations: large
problems can be made transparent through the use of weights and numbers. However,
the concrete assignment of these weights and values is difficult in practice. For instance,
how do we put a value on a human life, and how do we assign a value to
one's privacy? Also, at the end of the day, utilitarianism only looks at the outcome
(fewer people who die) and therefore does not consider at a fundamental level whether
something is right or wrong. It is then difficult to take basic human rights into account.
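
To make the weighting problem tangible, the following toy calculation sketches a Bentham-style comparison of two policies. All numbers are invented for the example; the arbitrariness of the weights is exactly the difficulty described above.

```python
# Toy utilitarian comparison of two policies. The weights are arbitrary:
# there is no principled way to price a life against a loss of privacy.
VALUE_OF_LIFE = 1_000_000      # assumed "utility units" gained per life saved
COST_OF_PRIVACY_LOSS = 10      # assumed utility units lost per tracked citizen

def total_utility(lives_saved: int, citizens_tracked: int) -> int:
    """Bentham-style sum of individual utilities under one policy."""
    return lives_saved * VALUE_OF_LIFE - citizens_tracked * COST_OF_PRIVACY_LOSS

mass_surveillance = total_utility(lives_saved=100_000, citizens_tracked=1_400_000_000)
no_surveillance = total_utility(lives_saved=0, citizens_tracked=0)
print(mass_surveillance > no_surveillance)  # True with these weights; change them and it flips
```

Whether the comparison comes out in favour of surveillance depends entirely on the two constants, which is why the utilitarian framing, however clear it looks, does not settle the question.
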
These concerns are dealt with differently by Kantians (Kant himself, along with contractarians like Locke and
Rawls), who believed morality should be concerned with the protection of autonomy. 10
Actions are always universalized, which means that if someone permits an action for themselves,
the world must permit that action for everyone. Difficulties with the Kantian view include that it
can be very narrow and limiting. For example, it suggests one would not be allowed to lie to
an SS officer if one were giving shelter to Anne Frank in World War 2. 11 Kant's reasoning is that if
one lies and Anne Frank meanwhile tries to escape, she might run into the SS outside precisely
because of that lie; according to Kant, the responsibility for Anne Frank's death
would then lie on the liar's shoulders. But if one looks closer at the Anne Frank case, it
instinctively feels wrong to tell the truth. This is because Kantianism does not account well
for context.

In 2010, medical researchers argued that privacy concerns had a negative impact on
healthcare research. 12 They believed that restrictions on the data analysis of large
populations were slowing down research. Now the same argument can be applied to COVID-19.
By tracking a large population and having access to their medical records, governments
might prevent the disease from spreading, or even use the information to find a cure.
Analyzing this from a utilitarian viewpoint, it seems certain that the benefits outweigh the
costs. It is difficult to put a weight on the lives of thousands of humans. However,
according to the Italian prime minister, the world is facing the biggest crisis since World War 2. 13

9. DiDonato, Valentina. "Coronavirus Is the Most Severe Crisis since World War II, Says Italian Prime Minister." CNN, 22 Mar. 2020, cnnphilippines.com/world/2020/3/22/italian-prime-minister-coronavirus-most-severe-crisis-world-war-II.html.

10. Miller, J. Joseph. "The Greatest Good for Humanity: Isaac Asimov's Future History and Utilitarian Calculation Problems." Science Fiction Studies (2004): 189-206.

11. Varden, Helga. "Kant and Lying to the Murderer at the Door... One More Time: Kant's Legal Philosophy and Lies to Murderers and Nazis." (2010).

12. Wartenberg, Daniel, and W. Douglas Thompson. "Privacy versus Public Health: The Impact of Current Confidentiality Rules." American Journal of Public Health 100.3 (2010): 407-412.
If not now, when would privacy ever be subordinate to the lives of humans? This also aligns
with the view of the Chinese population itself. 14
If we agree that the lives of humans outweigh the current loss of privacy, then the AI systems
in China are ethically justifiable according to utilitarianism.

However, if we take a Kantian perspective, it is easy to critique this argument. Using the
universalizability principle, it can be argued that if the world approves of the maxim of the
Chinese government to track its citizens, then this action is universalized: every government
should be able to track its citizens' data whenever it wants. This, however, leads to a
contradiction, because no one would argue that a government should be able to track its citizens all
the time. 15 That view, in turn, might be attacked by saying that this is a special time and that
we should therefore allow the Chinese government to track its citizens. But here lie the real
dangers, which many ethicists point to: if we allow tracking now, there is no guarantee that
governments will stop once the crisis has passed. 16 17 18 It is becoming clear that the
old theories of Kant and Bentham cannot give clear answers to modern privacy issues.

Future risks
If we allow governments to continue using excessive surveillance systems, we could be in real
trouble, Harari points out in his essay on COVID-19. 19 Developments in technology are
moving with ever-increasing speed. If the surveillance systems are the match, then the new
biotech developments are the kerosene-soaked rag and AI is the atomic bomb with the drunk
President holding his finger over the button. 20 Harari identifies one of the problems as being
that no one really knows how they are being surveilled, and he poses a simple yet insightful
thought experiment to prove his point. What if we demanded that each citizen wear a biometric
bracelet that monitors temperature and heart rate? The government, using AI, could quickly
identify who is ill and act upon it. This way we could stop an epidemic within days. The idea
of a swift termination of the epidemic sounds wonderful at first glance. But as Harari points
out, this brings various risks with it. Just like a cough or a rise in body temperature or heart
rate, emotions are simply biological phenomena, and before we know it computers might know
us better than we know ourselves. This would allow governments or organizations to hack a
human being. Hacking someone means understanding what is happening inside them on the
level of the body, the brain and the mind.

13. AFP. "UN Chief Says Coronavirus Worst Global Crisis since World War II." 1 Apr. 2020.

14. Okano-Heijmans, Maaike. "Vergis je niet: COVID-19-crisis is ook een digitale pandemie" ["Make no mistake: the COVID-19 crisis is also a digital pandemic"]. 2020.

15. Buchanan, Allen. "Categorical Imperatives and Moral Principles." Philosophical Studies 31.4 (1977): 249-260.

16. Kharpal, Arjun. "Use of Surveillance to Fight Coronavirus Raises Concerns about Government Power after Pandemic Ends." 26 Mar. 2020.

17. Mozur, Paul, et al. "In Coronavirus Fight, China Gives Citizens a Color Code, with Red Flags." The New York Times, 2 Mar. 2020, www.nytimes.com/2020/03/01/business/china-coronavirus-surveillance.html.

18. Kuo, Lily. "'The New Normal': China's Excessive Coronavirus Public Monitoring Could Be Here to Stay." The Guardian, Guardian News and Media, 9 Mar. 2020, www.theguardian.com/world/2020/mar/09/the-new-normal-chinas-excessive-coronavirus-public-monitoring-could-be-here-to-stay.

19. Harari, Yuval Noah. "The World after Coronavirus." Financial Times, 20 Mar. 2020, www.ft.com/content/19d90308-6858-11ea-a3c9-1fe6fedcca75.

20. Lewis, Michael. The Big Short. Penguin Books Ltd., 2015.
That way the hacker can predict what people will do; by understanding how people feel, one can
manipulate them, or even control or replace them. This has been the central thesis of Harari's writings. 21
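
As a minimal sketch of how trivially such bracelet data could be turned into an automated flag, consider the following. The thresholds and field names are hypothetical; this describes no real system.

```python
from dataclasses import dataclass

@dataclass
class BraceletReading:
    citizen_id: str
    temperature_c: float      # body temperature in degrees Celsius
    resting_heart_rate: int   # beats per minute

def flag_possible_infection(r: BraceletReading) -> bool:
    """Hypothetical rule: flag a citizen when both vitals are elevated.

    The point of Harari's thought experiment is that once this data stream
    exists, nothing restricts it to fever detection: the same signals could be
    mined for emotional reactions, which is where the risk lies.
    """
    return r.temperature_c >= 38.0 and r.resting_heart_rate >= 100

print(flag_possible_infection(BraceletReading("a1", 38.4, 105)))  # -> True
```
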
While this argument might give the reader an Orwellian, 1984-like feeling, these scenarios are
certainly not unthinkable. If anything, we have seen that privacy scandals do not limit
themselves to the corporate world (Cambridge Analytica) but stretch to government
organizations such as the NSA. 22 23 When the Black Mirror episode "Nosedive" aired,
in which everything in one's life is decided by one's social score, many saw it as a dystopian,
futuristic world view. Yet some months later, the news came that China had started
implementing such a social score. A Telegraph reporter consequently quoted one person affected
by the system: "I was banned from most forms of travel; I could only book the lowest classes of seat on the
slowest trains. I could not buy certain consumer goods or stay at luxury hotels, and I was
ineligible for large bank loans." 24

Still, there are some fundamental issues with Harari's argument. First, Harari starts
from the premise that no one knows how they are being surveilled. This, however, will almost
always be an issue with AI algorithms, and it does not mean we should conclude
that these algorithms cannot be used. Rather, as Bostrom and Yudkowsky argue, AI
algorithms should be transparent. For me, here lies the actual solution. Where Bentham and
Kant offer us rigorous frameworks, these are unrealistic and absolute, and Harari, in turn, raises
many issues but does not offer an actual solution. Humans should know when they are being
tracked and why. If citizens know this, they at least know when the tracking is used for good purposes
and can act upon it when it is not. This should, in turn, lead to governments being more
responsible with their citizens' privacy. 25
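
What could such transparency look like in practice? One minimal sketch is a machine-readable public disclosure that must accompany any tracking system, so that citizens can see what is collected, why, and for how long. The fields below are hypothetical illustrations, not an existing standard.

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class TrackingDisclosure:
    """Hypothetical public record describing one surveillance measure."""
    data_collected: List[str]   # e.g. location history, contact events
    purpose: str                # why the data is collected
    legal_basis: str            # which (temporary) rule authorizes it
    retention_until: str        # when the data must be deleted
    oversight_body: str         # who is allowed to audit the system

disclosure = TrackingDisclosure(
    data_collected=["location history", "contact events"],
    purpose="limit the spread of COVID-19",
    legal_basis="temporary public-health emergency decree (illustrative)",
    retention_until="2020-12-31",
    oversight_body="independent data-protection authority (illustrative)",
)

# Published openly, such a record lets citizens verify when tracking is used
# for good purposes and object when it is not.
print(json.dumps(asdict(disclosure), indent=2))
```
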
An obvious response might be that it is difficult to enforce this level of transparency in
Western democracies, let alone in countries like China. However, throughout world
history, seemingly unthinkable things have been achieved. Things like democracy became
possible because technology caught up and the population rallied behind it.
As Harari points out in his book Sapiens, civilizations need
the right instruments and technologies for certain systems to work: you cannot have
communism without trains, and you cannot support liberal democracy in the feudal age. 26 Yet
the internet is empowering more citizens every day. The internet connects
people worldwide and gives citizens open access to the ever-growing global pool
of information. This has fostered both communication and critical thinking.

21. Harari, Yuval N. Homo Deus: A Brief History of Tomorrow. Signal, an imprint of McClelland & Stewart, 2017.

22. Ludlam, Scott. "Don't Waste the Cambridge Analytica Scandal: It's a Chance to Take Control of Our Data." The Guardian, Guardian News and Media, 23 Mar. 2018, www.theguardian.com/commentisfree/2018/mar/23/dont-waste-the-cambridge-analytica-scandal-its-a-chance-to-take-control-of-our-data.

23. Snowden, Edward. Permanent Record. Picador, 2020.

24. Vincent, Alice. "Black Mirror Is Coming True in China, Where Your 'Rating' Affects Your Home, Transport and Social Circle." The Telegraph, Telegraph Media Group, 15 Dec. 2017, www.telegraph.co.uk/on-demand/2017/12/15/black-mirror-coming-true-china-rating-affects-home-transport/.

25. Hollyer, James R., B. Peter Rosendorff, and James Raymond Vreeland. "Democracy and Transparency." The Journal of Politics 73.4 (2011): 1191-1205.

26. Harari, Yuval N. Sapiens: A Brief History of Humankind. Harper Perennial, 2018.
These two features will be vital in empowering humans in the fight for transparent AI. 27 The internet
makes it possible for European citizens to fight for transparency in AI systems as far away as
China. Therefore, we should not give up on AI; in fact, we must focus on making better AI.

A second critique of Harari concerns his overemphasis on humans as machines. To him, emotions
can be explained by an increase in temperature or heart rate; if governments are able to
measure temperatures and heart rates on a large scale, they might be able to track our emotions
and hack into our lives. Yet as Damasio, a prominent neuroscientist, points out in The
Strange Order of Things, emotions are much more complicated. 28 We are nowhere near
tracking emotions from such physiological signals alone. Humans themselves have difficulty
detecting the emotions of others, even though they can draw on an evolutionary system that has been
developed over millions of years. 29 That means we are likely very far away from
governments hacking our emotions and thoughts based on physical features alone, and it
then becomes unfair to let these scenarios weigh on whether governments should be allowed
to infringe on their citizens' privacy.
While I agree with Harari that we should consider these possibilities, they do not
warrant abandoning AI in times of major world crises like the one the world is facing at this
moment.

Conclusion
Throughout this paper, I have tried to answer the following questions: what weighs heavier,
the health of the citizens of the world or the privacy of the individual? And how can we still
reap the benefits of AI algorithms when they are placed against privacy? Trying to answer the
first question from a utilitarian point of view left me frustrated: it remains unclear how
weights should be assigned to the lives of humans in comparison with their privacy. The reader
might agree that in times of crisis, the lives of humans outweigh privacy. But this brings us
back to the Kantian view. How can citizens all over the world be sure that governments will
not keep these systems in place once COVID-19 has been dealt with?
The Kantian view offers a clear answer: if we allow these mass surveillance systems now, we
are claiming they are morally justifiable, and governments should then be allowed to use them
whenever and wherever, also in future circumstances. Although this gives a clear answer,
it is too limiting, because it would mean governments could always use surveillance systems, even
for small issues, and infringe on people's privacy.

Over the last weeks, several authors have weighed in on this topic,
including Harari in his recent essay on COVID-19. Harari claims that even though the world
might currently be content with the use of AI in combating COVID-19, this contentment will
not last once future risks are considered. Suppose, for example, that it leads to
governments being able to track our emotions and perhaps our feelings; after all, as Harari
suggests, these are just biological features like heart rate and temperature, the exact same
things China might collect en masse to fight COVID-19. I am glad Harari raises these
justified concerns. However, I fear the world might be blinded by these, as of yet,
dystopian and futuristic views.

27. Harrison, Tina, Kathryn Waite, and Gary L. Hunter. "The Internet, Information and Empowerment." European Journal of Marketing 40.9-10 (2006): 972-993.

28. Damasio, Antonio R. The Strange Order of Things: Life, Feeling, and the Making of Cultures. Vintage Books, 2019.

29. Damasio, Antonio R. The Strange Order of Things: Life, Feeling, and the Making of Cultures. Vintage Books, 2019.
As mentioned earlier, medical researchers have argued that such privacy concerns are limiting the
world in its medical advancements. The world is facing the biggest
crisis since World War 2, and rather than only looking at problems, we should be looking for
solutions. AI is here to stay and, if used properly, will save and help millions of lives inside
and outside the context of COVID-19. The only way to make these AI systems
responsible is, in my view, to equip them with transparency. Transparent AI will naturally
limit governments in what they do with the data they collect and in what data they collect
at all. I foresee a vital role for the internet in bringing about this wave of transparency.
The internet allows citizens all over the world to fight for the same, universal AI rights. That
is why people all over the world should start empowering themselves by demanding the
desperately needed transparency.
