
International Studies Review (2018) 0, 1–8

ANALYTICAL ESSAY

Lethal Artificial Intelligence and Change: The Future of International Peace and Security
DENISE GARCIA
Northeastern University

The development of artificial intelligence and its use for lethal purposes in war will fundamentally change the nature of warfare, as well as of law enforcement, and thus pose fundamental problems for the stability of the international system. To cope with such changes, states should adopt preventive security governance frameworks based upon the precautionary principle of international law and upon previous cases where prevention brought stability to all countries. Such new global governance frameworks should be innovative, as current models will not suffice. The World Economic Forum has argued that robotics and artificial intelligence are the two areas that will bring the greatest benefits, but also the greatest dangers, to the future; they are also the areas in most urgent need of innovative global governance. Leading scientists working on artificial intelligence have argued that the militarization and use of lethal artificial intelligence would be highly destabilizing. Here I examine twenty-two existing treaties that acted under a "preventive framework" to establish new regimes of prohibition or control of weapons systems that had been deemed destabilizing. These treaties achieved one or more of three goals: they prevented further militarization, made weaponization unlawful, or stopped proliferation through cooperative frameworks of transparency and common rules. My findings make clear that there is a significant emerging norm with regard to all weapons systems: the utilization of disarmament and arms regulations as a tool and mechanism to protect civilians. The development of lethal autonomous weapons systems would severely jeopardize this emerging norm. I show under what conditions lethal autonomous weapons systems will be disruptive for peace and security, and I propose alternative governance structures based upon international law with robust precautionary frameworks.

Keywords: artificial intelligence, autonomous weapons, war, killer robots

Introduction
The development of artificial intelligence is already resulting in major social and
economic changes. To understand change in world politics, the 2017 International
Studies Association Annual Meeting posed the following question in its call for pa-
pers: what are the markers of change? In other words, when and how do we know
change is occurring? The World Economic Forum has argued that the two areas
of development in which we are witnessing profound and fast-paced change are
robotics and artificial intelligence. Technological advances in robotics and artificial intelligence have the capacity to bring considerable benefits as well as dangers to the future; as such, there is an urgent need for innovative global governance in these areas.
I explore the potential rise of "lethal artificial intelligence" through the development of greater autonomy in weapons systems, enabled by advances in artificial intelligence (AI). I argue that the militarization of artificial intelligence will create significant problems for the stability of the international system. States should adopt preventive security governance frameworks grounded in the precautionary principle of international law (Garcia 2016). Precaution should be based on previous cases where prevention brought stability to all countries. Leading artificial intelligence scientists have argued that the militarization of artificial intelligence will be a disruptive change in the way wars are waged. They contend that such weapons represent the third major transformation in warfare: the first major shift was brought about by the invention of gunpowder, and the second by the invention of nuclear weapons.1 The scientists warn that the use of AI weapons will change the future of peace and security in a harmful way, and they recommend that a lethal autonomous arms race be prevented through a ban on offensive autonomous weapons. These scientists' warnings should be a wake-up call; they are keenly aware of what could go wrong (Bieri and Dickow 2009; Sharkey 2010).
It is useful to determine how change occurred in previous areas regarding the
militarization of other technologies and under what conditions states developed
preventive frameworks of regulation or prohibition to avoid disruptive change. I
examine twenty-two existing treaties that worked under a “preventive framework.”
Taken together, the treaties set prohibitions or regulations on targeted weapons sys-
tems. They achieved at least one of the following three objectives: prevented further militarization, made weaponization unlawful, and/or contained proliferation through cooperative frameworks of transparency and shared norms.2
This article proceeds as follows: First, based on evidence from the database that
I created, I observe that there is an emerging norm regarding all weapons systems,
based on the impetus to protect civilians: disarmament and arms regulations should
be utilized as a tool and mechanism to preserve human lives and to prevent un-
necessary harm.3 I then proceed to discuss how the development of lethal artificial intelligence, through the rise of greater autonomy in weapons systems (AI weapons), would hamper this emerging norm, by examining the architecture of peace and security that shapes international relations in the twenty-first century across three domains (O'Connell 2008). I contend that making artificial intelligence lethal will
disrupt the existing United Nations Charter–based legal and political architecture
of peace and security. This edifice of peace and security is the result of the in-
creasing codification of international law through treaties, as well as of the legal-
ization of world politics (Goertz et al. 2016). The first domain is the regulation of
war and the prohibition of the use of force, with the accompanying global norm
of the peaceful settlement of disputes between states and, increasingly, other ac-
tors (Avant et al. 2010). The second domain is composed of the dense network of
global norms that constrain and guide the behavior of states (Bailliet and Larsen
2015), as a result of treaties on transparency, confidence-building, and security
mechanisms, to foster peace and security. International humanitarian law (IHL)
provides a legal framework that regulates the use of force, a critical component of
the first domain. At the same time, the Geneva Conventions and their additional protocols have contributed immensely to the emergence of norms constraining state behavior against civilians (Roff 2015). The third domain is about cooperation in
cultural, economic, social, and environmental matters. These affect all of humanity
and include problems that must be solved collectively. Throughout this analysis, I
examine under what conditions AI weapons will create disruptive change (Suchman
2007) and show alternative governance structures based on international law with
precautionary frameworks to avoid disruptive change. I conclude that even if AI
weapons are able to comply with the principles of IHL, they would have a detrimen-
tal effect on the commonly agreed upon rules of international law that are based
on three domains of peace and security, which I will now discuss.

The Three Domains of Peace and Security


The first domain of peace and security consists in the prohibition of the use of force
in the conduct of international relations. It is a norm first codified in the United
Nations Charter Article 2.4, which is carried out by the charter’s mechanisms for the
peaceful settlement of disputes (through international and regional courts) and by
international organizations.
The revolution catalyzed by AI weapons will place two important questions before the international community (Singer 2009, 14): Shall a lethal AI weapons arms race be prevented before it starts? Shall AI weapons be empowered to kill without human oversight? The answers to these questions represent a predicament for international law: how will this revolution in warfare affect the existing United Nations Charter peace framework? If the use of unmanned aerial vehicles, known as drones (Schwarz 2017), serves as an indicator of things to come, a few countries are already employing them in situations that could be peacefully settled using a law enforcement framework (Kreps and Kaag 2012; Knuckey 2014; Kreps and Zenko 2014). Impediments to the use of force in international relations have been eroded to the point where force is employed without legal justification under existing international law.
AI weapons will lower the threshold for starting wars, and more violence will ensue as a result. The erosion of the norms of the nonuse of military force and the peaceful settlement of disputes will make peace and security precarious. This is especially likely because technologically advanced countries will have an advantage over those that cannot afford AI weapons.
The second domain of efforts to sustain peace and security in the twenty-first century is based on the rules of international human rights law (HRL) and IHL. AI weapons will disrupt the regulation of war and conflict under the rules of the UN Charter.
The development of AI weapons will disrupt the observance of the human rights and IHL legal architectures. For both IHL and HRL, the main requirement is accountability for actions during violent conflict (Bills 2014; Hammond 2015). Taken together, HRL and IHL serve as the basis for the protection of life and the prevention of unnecessary and superfluous suffering (Heyns 2013).
The universal IHL and HRL global norms form a common legal code that has been broadly ratified (Haque et al. 2012; Teitel 2013). It is critical to determine how to protect civilians and whether the development of new weapons and technologies will imperil existing legal frameworks, which are already under strain due to the wars in Syria and beyond.
The combined development of HRL and IHL spawned an era of "Humanitarian Security Regimes": altruistically motivated regimes that protect civilians or control or restrict certain weapons systems. These regimes embrace humanitarian perspectives that seek to prevent civilian casualties, guard the rights of victims and survivors of conflict, and, ultimately, reduce human suffering and prohibit superfluous harm.


The relevance of the concept of humanitarian security regimes to the weaponization of artificial intelligence rests on two factors (Garcia 2015). First, new regimes can form in areas previously considered impervious to change, and they can be motivated by the protection of human security. The 1997 Nobel Peace Prize was awarded to Jody Williams and the International Campaign to Ban Landmines (ICBL) in recognition of the new role played by civil society in creating a human security treaty that prohibits the use, stockpiling, production, and transfer of antipersonnel landmines, a weapon previously in widespread use (Borrie 2009, 2014).
Second, change is possible within the realm of national security (i.e., weapons) as a result of attempts to stem humanitarian tragedies. States may be led to reevaluate what is important to their national interests and may feel duty-bound by a clear humanitarian impetus or by reputational concerns (Gillies 2010) vis-à-vis the weaponization of AI.
The key humanitarian principles, now customary, that have been driving disarma-
ment diplomacy in the last century are the prohibition of unnecessary suffering by
combatants, the outlawing of indiscriminate weapons, and the need to distinguish
between civilians and combatants (Henckaerts and Doswald-Beck 2005). Here it is worth noting that, although the United States continues, at times, to violate these principles, they remain the norm nonetheless, as evidenced in the increasing legal challenges to the killing of civilians; occasional violations do not negate a norm. Humanitarian concerns
have always been part of the equation in multilateral disarmament diplomacy. It is
only in recent years, however, that they have assumed center stage and become the
driving force, evident in the 1997 Ottawa Convention on Landmines and the 2008
Convention on Cluster Munitions. Such concerns can also be said to have been
at the core of the overwhelming international support for the Arms Trade Treaty
(Erickson 2013).
Finally, it is essential to note that global rule-making, once fundamentally anchored in consensus, may indeed be in decline (Krisch 2014). In many areas of international law, consensus remains central; however, there are very good reasons to question its relevance, as it is outdated and no longer reflects the reality and urgency of current challenges. Indeed, global governance may be moving toward a new norm of lawmaking, one based on majority rather than consensus.4 For instance, negotiators of the Arms Trade Treaty abandoned consensus out of frustration with the inability to produce a legally binding document covering legal arms transfers. The treaty represents a significant shift in negotiation tactics: it broke free from the constraints of consensus negotiations, and the text was instead put to a vote in the UN General Assembly (UNGA), where approving such treaties requires only a majority.
The stabilizing legal and political framework that sustains peace and security com-
prises several norms, such as transparency, and mechanisms: confidence-building,
alliances, arms control agreements, nuclear weapons free zones (NWFZs), joint op-
erations, disarmament, conflict resolution, peacekeeping, and reconciliation. New
AI weapons will require a totally new and expansive political and legal structure
(Adler and Greve 2009).
Maintaining transparency at the global level could be difficult with AI weapons, which will be hard to scrutinize due to the algorithms and large datasets they will use (Roff 2014). There is a legal responsibility regarding the creation of new
weapons under Article 36 of the 1977 Additional Protocol to the Geneva Conven-
tions, which states: “In the study, development, acquisition or adoption of a new
weapon, means or method of warfare, a High Contracting Party is under an obliga-
tion to determine whether its employment would, in some or all circumstances, be
prohibited by this Protocol or by any other rule of international law applicable to
the High Contracting Party." Article 36 stipulates that states conduct reviews of new weapons to determine compliance with IHL. However, only a handful of states carry
out weapons reviews regularly, which makes this transparency mechanism insufficient as a tool for creating security frameworks for future arms and technologies.

4 I thank Robin Geiss for this insight.
The third domain of peace and security comprises the initiatives and programs
in cultural, economic, social, and environmental matters that affect all of humanity
and tackle problems that can only be solved collectively. AI has enormous poten-
tial to be used for the common good of humanity. Therefore, this third framework
is based upon the UN Charter, Article 1.3: “To achieve international co-operation
in solving international problems of an economic, social, cultural, or humanitar-
ian character, and in promoting and encouraging respect for human rights and
for fundamental freedoms.” The challenge of AI can be tackled collectively and
peacefully—hence the need for a ban on weaponization.
Recently, states agreed unanimously, under the auspices of the UN, on the
new UN Sustainable Development Goals and on the Paris Agreement on Climate
Change. Taken together, they represent a robust map to holistically solve some of
the worst economic, social, and environmental problems facing humanity today.
The attention and resources of the international community should be drawn to-
ward such initiatives immediately. AI presents a similar opportunity to tackle a com-
mon problem together in a way that has been demonstrated to work before.
UN Charter Article 26 provides the normative prescription against diverting human and financial resources away from social and economic development toward weapons inventions that could be harmful to peace and security.

Prevention to Avoid Disruptive Change


AI weapons present a dangerous challenge to the world and to international
law, but there are steps that the world can take to mitigate some of these major
concerns. The adoption of "preventive security governance" as a strategy could strengthen the capacity to maintain peace and international order. This could be achieved by the codification of new global norms, based on existing international law, that clarify expectations and universally agreed-upon behavior. This is needed because the extant rules will probably not suffice for the challenges ahead (Garcia 2016). The precautionary principle of international law comprises three elements: preventing harm, shifting the burden of proof to the proponents of a potentially harmful activity, and promoting transparent decision-making
action to be taken before harm is done. Artificial intelligence presents such a case:
there is scientific certainty and consensus regarding the danger of its weaponiza-
tion. There is already strong consensus in the international community that not all
weapons are acceptable and that some have the potential to be so harmful that they
should be preemptively banned. Such is the case with the prohibition on blinding
laser weapons, a class of weapons that was banned while still in development. In the
same way, the weaponization of AI can be halted before its full deployment in the
battlefield, and international efforts can instead be focused on its peaceful uses.
Here it is also important to address the incentives that countries would have to
ban these types of weapons, namely that they will be much more widely available
than other types of weapons and that their industries have a stake in preventing
their products from being involved in civilian casualties. The issue of proliferation
is one that should make states more willing to preemptively ban such weapons.
Proliferation of AI weapons will happen faster and at lower cost than conventional
weapons or weapons of mass destruction. Industry, in this case, also has a vested interest in ensuring that its systems are not associated with mass atrocities or illegal forms of warfare. Industry's inclusion in a ban should therefore be welcomed and could very well be a determining factor in its success. An example of this can be seen in the Chemical Weapons Convention, which industry helped negotiate.


States should focus all of their attention on maintaining and strengthening the
architecture of peace and security based upon the UN Charter. Nothing else has
the capacity to bring the international community together at this critical juncture.
Many times before, states have achieved the prohibition of superfluous and un-
necessary armaments. In my research, I have found that individual states can be-
come champions of such causes and unleash real progress in disarmament diplo-
macy. It is the work of such champion states that brought to fruition extraordinary
new international prohibition treaties for landmines and cluster munitions and the
first treaty to set global rules for the transfer of conventional arms. The 1997 treaty
prohibiting mines and the 2008 treaty that banned cluster munitions were success
stories because they prohibited weapons that indiscriminately harmed civilians. The
2013 Arms Trade Treaty represents a novel attempt, at the global level, to imprint
more transparency and accountability on conventional arms transfers. The presence of an "epistemic community" (a group of scientists and activists with a common scientific and professional language and shared views, able to generate credible information) is a powerful tool for mobilizing attention toward action. In the case of AI weapons, the International Committee for Robot Arms Control serves this purpose. The launch of a transnational campaign is another key element for summoning awareness at several levels of diplomatic and global action (Carpenter 2014). The Campaign to Stop Killer Robots is in place and is attracting an unprecedentedly positive response from around the world.

Conclusions
A global AI weapons race will imperil everyone. Nuclear weapons serve as a historic model, alerting us to what can result: an imbalanced system of haves and have-nots and a fragile balance of security. States will be better off preventing the development and deployment of these systems, as such weapons have the potential to proliferate much more widely and rapidly than nuclear weapons ever did. Their spread would leave everyone less secure, since more states and nonstate actors would likely be able to buy or replicate such technologies. As with nuclear weapons, AI weapons would create a new arms race, only this one would be much more widespread, as the technology is cheaper and much easier to develop indigenously.
Preventive security governance frameworks must be put in place with principled
limits on the development of AI weapons that have the potential to violate interna-
tional law (Garcia 2014; Johnson 2004). Such preventive frameworks could promote stability and peace. Previously, states have reaped gains in terms of national
security from preemptive actions to regulate or control dangerous weapons. The
prevention of harm is a moral imperative (Lin 2010, 2012). In the case of AI-enabled
weapons, even if they comply with IHL, they will have a disintegrating effect on the
commonly agreed rules of international law (Dill 2014; O’Connell 2014).
AI weapons will make warfare unnecessarily more inhumane: attribution is necessary to hold war criminals to account, and these weapons will make attribution far harder. Nations today have one of the greatest opportunities in history to promote a better future by devising preventive security frameworks that preemptively prohibit the weaponization of artificial intelligence and ensure that AI is used only for the common good of humanity. This is about a more prosperous future for peace and security.

Acknowledgments
I would like to warmly thank Maria Virginia Olano Velasquez for her invaluable
research assistance for this article and J. Andrew Grant for his guidance.


References
ADLER, EMANUEL, AND PATRICIA GREVE. 2009. “When Security Community Meets Balance of Power: Overlap-
ping Regional Mechanisms of Security Governance." Review of International Studies 35: 59–84.
AVANT, DEBORAH D., MARTHA FINNEMORE, AND SUSAN K. SELL. 2010. Who Governs the Globe? Cambridge: Cam-
bridge University Press.
BAILLIET, CECILIA MARCELA, AND KJETIL MUJEZINOVIC LARSEN. 2015. Promoting Peace through International Law.
Oxford: Oxford University Press.
BIERI, MATTHIAS, AND MARCEL DICKOW. 2009. “Lethal Autonomous Weapons Systems: Future Challenges.”
In Killer Robots: Legality and Ethicality of Autonomous Weapons, edited by Armin Krishnan. Farnham, UK:
Ashgate.
BILLS, GWENDELYNN. 2014. “LAWS unto Themselves: Controlling the Development and Use of Lethal Au-
tonomous Weapons Systems.” George Washington Law Review 83 (1): 176–209.
BORRIE, JOHN. 2009. Unacceptable Harm: A History of How the Treaty to Ban Cluster Munitions Was Won. New
York: UNIDIR.
──. 2014. “Humanitarian Reframing of Nuclear Weapons and the Logic of a Ban.” International Affairs
90 (3): 625–46.
CARPENTER, R. CHARLI. 2014. “Lost” Causes: Agenda Vetting in Global Issue Networks and the Shaping of Human
Security. Ithaca, NY: Cornell University Press.
DILL, JANINA. 2014. Legitimate Targets? Social Construction, International Law and US Bombing. Cambridge:
Cambridge University Press.
DOSWALD-BECK, LOUISE. 2012. “International Humanitarian Law and New Technology.” In American Society
of International Law: Proceedings of the Annual Meeting, 107–16.
ERICKSON, JENNIFER L. 2013. “Stopping the Legal Flow of Weapons: Compliance with Arms Embargoes,
1981–2004.” Journal of Peace Research 50 (2): 159–74.
GARCIA, DENISE. 2014. “The Case Against Killer Robots—Why the United States Should Ban Them.” Foreign
Affairs, May 10. Accessed April 4, 2016. www.foreignaffairs.com/articles/141407/denise-garcia/the-
case-against-killer-robots.
──. 2015. “Humanitarian Security Regimes.” International Affairs 91 (1): 55–75.
──. 2016. “Future Arms, Technologies, and International Law: Preventive Security Governance.” Euro-
pean Journal of International Security 1 (1): 94–111.
GILLIES, ALEXANDRA. 2010. “Reputational Concerns and the Emergence of Oil Sector Transparency as an
International Norm.” International Studies Quarterly 54: 103–26.
GOERTZ, GARY, PAUL F. DIEHL, AND ALEXANDRU BALAS. 2016. The Puzzle of Peace: The Evolution of Peace in the
International System. Oxford: Oxford University Press.
HAQUE, ADIL, LILLIAN MIRANDA, ANNA SPAIN, AND MARKUS WAGNER. 2012. “New Voices I: Humanizing Con-
flict.” In American Society of International Law, Proceedings of the Annual Meeting, 73–84.
HAMMOND, DANIEL N. 2015. “Autonomous Weapons and the Problem of State Accountability.” Chicago
Journal of International Law 15 (2): 652–88.
HENCKAERTS, JEAN-MARIE, AND LOUISE DOSWALD-BECK. 2005. Customary International Humanitarian Law. Cam-
bridge: Cambridge University Press.
HEYNS, CHRISTOF. 2013. Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions. UN
Document A, HRC/23/47.
JOHNSON, REBECCA. 2004. "The NPT in 2004: Testing the Limits." Disarmament Diplomacy 76.
KNUCKEY, SARAH. 2014. Drones and Targeted Killings: Ethics, Law, Politics. New York: IDebate Press.
KREPS, SARAH, AND JOHN KAAG. 2012. “The Use of Unmanned Aerial Vehicles in Asymmetric Conflict: Legal
and Moral Implications.” Polity 44 (2): 260–85.
KREPS, SARAH, AND MICAH ZENKO. 2014. “The Next Drone Wars.” Foreign Affairs 93 (2): 68–80.
KRISCH, NICO. 2014. “The Decay of Consent: International Law in an Age of Global Public Goods.” Ameri-
can Journal of International Law 108: 1–40.
LIN, PATRICK. 2010. “Ethical Blowback from Emerging Technologies.” Journal of Military Ethics 9 (4): 313–
32.
──. 2012. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.
MARESCA, LOUIS, AND STUART MASLEN. 2008. The Banning of Anti-personnel Landmines: The Legal Contribution
of the International Committee of the Red Cross, 1955–1999. Cambridge: Cambridge University Press.
O’CONNELL, MARY ELLEN. 2008. The Power and Purpose of International Law. Oxford: Oxford University Press.
──. 2014. “21st Century Arms Controls Challenges: Drones, Cyber Weapons, Killer Robots, and WMDs.”
Washington University Global Studies Law Review 13 (3): 515–34.
ROFF, HEATHER. 2014. “The Strategic Robot Problem.” Journal of Military Ethics 13 (3): 211–27.


ROFF, HEATHER. 2015. "Lethal Autonomous Weapons and Proportionality." Case Western Reserve Journal of International Law 47: 37–52.
SCHWARZ, ELKE. 2017. "Pursuing Peace: The Strategic Limits of Drone Warfare." An INS Special Forum: Intelligence and Drones. Intelligence and National Security 32 (4): 422–25.
SHARKEY, NOEL. 2010. “Saying ‘No!’ to Lethal Autonomous Targeting.” Journal of Military Ethics 9 (4):
369–83.
SINGER, PETER W. 2009. Wired for War. New York: Penguin Books.
──. 2010. “The Ethics of Killer Applications: Why Is It So Hard to Talk about Morality When It Comes
to New Military Technology?” Journal of Military Ethics 9 (4): 299–312.
SUCHMAN, LUCY. 2007. Human-Machine Reconfigurations: Plans and Situated Actions. 2nd ed. Cambridge: Cambridge University Press.
TEITEL, RUTI. 2013. Humanity’s Law. Oxford: Oxford University Press.
