ANALYTICAL ESSAY
The development of artificial intelligence and its use for lethal purposes
in war will fundamentally change the nature of warfare, as well as law
enforcement, and thus pose fundamental problems for the stability of
the international system. To cope with such changes, states should adopt
preventive security governance frameworks based upon the precautionary
principle of international law and upon previous cases where prevention
brought stability to all countries. Such new global governance frameworks
should be innovative, as current models will not suffice. The World Economic
Forum has argued that the two areas that will bring the greatest benefits,
but also the greatest dangers, in the future are robotics and artificial
intelligence. They are also the areas in most urgent need of innovative
global governance.
Leading scientists working on artificial intelligence have argued that the
militarization and use of lethal artificial intelligence would be highly
destabilizing. Here I examine twenty-two existing treaties that acted under
a “preventive framework” to establish new regimes of prohibition or control
of weapons systems that had been deemed destabilizing. These treaties
achieved one or all of three goals: they prevented further militarization,
made weaponization unlawful, and stopped proliferation through cooperative
frameworks of transparency and common rules. My findings make clear that
a significant norm is emerging with regard to all weapons systems: the use
of disarmament and arms regulation as a tool and mechanism to protect
civilians. The development of lethal autonomous weapons systems would
severely jeopardize this emerging norm. I show under what conditions lethal
autonomous weapons systems will be disruptive to peace and security and
propose alternative governance structures based upon international law with
robust precautionary frameworks.
Introduction
The development of artificial intelligence is already resulting in major social and
economic changes. To understand change in world politics, the 2017 International
Studies Association Annual Meeting posed the following question in its call for pa-
pers: what are the markers of change? In other words, when and how do we know
change is occurring? The World Economic Forum has argued that the two areas
of development in which we are witnessing profound and fast-paced change are
Garcia, Denise. (2018) Lethal Artificial Intelligence and Change: The Future of International Peace and Security. International
Studies Review, doi: 10.1093/isr/viy029
© The Author(s) (2018). Published by Oxford University Press on behalf of the International Studies Association.
All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
1. Future of Life Institute, accessed on 4 March 2016, at http://futureoflife.org/open-letter-autonomous-weapons/.
2. Please contact author for her database of the treaties mentioned.
3. Others have argued similarly, such as Maresca and Maslen (2008) and Doswald-Beck (2012).
behavior against civilians (Roff 2015). The third domain is about cooperation in
cultural, economic, social, and environmental matters. These affect all of humanity
and include problems that must be solved collectively. Throughout this analysis, I
examine under what conditions AI weapons will create disruptive change (Suchman
2007) and show alternative governance structures based on international law with
precautionary frameworks to avoid disruptive change. I conclude that even if AI
weapons are able to comply with the principles of IHL, they would have a detrimen-
tal effect on the commonly agreed upon rules of international law that are based
on three domains of peace and security, which I will now discuss.
weapons to determine compliance with IHL. However, only a handful of states carry
out weapons reviews regularly, which makes this transparency mechanism insuffi-
cient as a tool for creating security frameworks for future arms and technologies.
The third domain of peace and security comprises the initiatives and programs
in cultural, economic, social, and environmental matters that affect all of humanity
and tackle problems that can only be solved collectively. AI has enormous poten-
tial to be used for the common good of humanity. Therefore, this third framework
is based upon the UN Charter, Article 1.3: “To achieve international co-operation
in solving international problems of an economic, social, cultural, or humanitar-
ian character, and in promoting and encouraging respect for human rights and
for fundamental freedoms.” The challenge of AI can be tackled collectively and
peacefully—hence the need for a ban on weaponization.
Recently, states agreed unanimously, under the auspices of the UN, on the
new UN Sustainable Development Goals and on the Paris Agreement on Climate
Change. Taken together, they represent a robust road map for holistically solving
some of the worst economic, social, and environmental problems facing humanity today.
The attention and resources of the international community should be drawn to-
ward such initiatives immediately. AI presents a similar opportunity to tackle a com-
mon problem together in a way that has been demonstrated to work before.
UN Charter Article 26 constitutes the normative prescription for the nondiversion
of human and financial resources away from social and economic development
toward weapons inventions that could be harmful to peace and security.
States should focus all of their attention on maintaining and strengthening the
architecture of peace and security based upon the UN Charter. Nothing else has
the capacity to bring the international community together at this critical juncture.
Many times before, states have achieved the prohibition of superfluous and un-
necessary armaments. In my research, I have found that individual states can be-
come champions of such causes and unleash real progress in disarmament diplo-
macy. It is the work of such champion states that brought to fruition extraordinary
new international prohibition treaties for landmines and cluster munitions and the
first treaty to set global rules for the transfer of conventional arms. The 1997 treaty
prohibiting mines and the 2008 treaty that banned cluster munitions were success
stories because they prohibited weapons that indiscriminately harmed civilians. The
2013 Arms Trade Treaty represents a novel attempt, at the global level, to imprint
more transparency and accountability on conventional arms transfers. The pres-
ence of an “epistemic community”—a group of scientists and activists with common
scientific and professional language and views that are able to generate credible
information—is a powerful tool for mobilizing attention toward action. In the case
of AI weapons, the International Committee for Robot Arms Control serves such
purpose. The launch of a transnational campaign is another key element to sum-
mon awareness at several levels of diplomatic and global action (Carpenter 2014).
The Stop Killer Robots Campaign is in place and is attracting an unprecedented
positive response from around the world.
Conclusions
An AI weapons global race will imperil everyone. Nuclear weapons serve as a his-
toric model, alerting us to what can result: an imbalanced system of haves and have-
nots and a fragile balance of security. States will be better off by preventing the
development and deployment of these systems, as this is an arms race that has
the potential to proliferate much more widely and rapidly than nuclear weapons
ever did. It would therefore leave everyone less secure, since more states and
non-state actors would likely be able to buy or replicate such technologies. As with nuclear
weapons, AI weapons would create a new arms race, only this one would be much
more widespread, as its technology is cheaper and much easier to develop indige-
nously.
Preventive security governance frameworks must be put in place with principled
limits on the development of AI weapons that have the potential to violate interna-
tional law (Garcia 2014; Johnson 2004). Such preventive frameworks could promote
stability and peace. Previously, states have reaped gains in terms of national
security from preemptive actions to regulate or control dangerous weapons. The
prevention of harm is a moral imperative (Lin 2010, 2012). In the case of AI-enabled
weapons, even if they comply with IHL, they will have a disintegrating effect on the
commonly agreed rules of international law (Dill 2014; O’Connell 2014).
AI weapons would make warfare unnecessarily more inhumane: attribution is
necessary to hold war criminals to account, and these weapons would make
attribution far harder. Nations today have one of the greatest opportunities
in history to promote a better future by devising preventive security
frameworks that prohibit the weaponization of artificial intelligence and
ensure that AI is used only for the common good of humanity. This is about a
more prosperous future for peace and security.
Acknowledgments
I would like to warmly thank Maria Virginia Olano Velasquez for her invaluable
research assistance for this article and J. Andrew Grant for his guidance.
References
ADLER, EMANUEL, AND PATRICIA GREVE. 2009. “When Security Community Meets Balance of Power: Overlap-
ping Regional Mechanisms of Security Governance.” Review of International Studies 35: 59–84.
AVANT, DEBORAH D., MARTHA FINNEMORE, AND SUSAN K. SELL. 2010. Who Governs the Globe? Cambridge: Cam-
bridge University Press.
BAILLIET, CECILIA MARCELA, AND KJETIL MUJEZINOVIC LARSEN. 2015. Promoting Peace through International Law.
Oxford: Oxford University Press.
BIERI, MATTHIAS, AND MARCEL DICKOW. 2009. “Lethal Autonomous Weapons Systems: Future Challenges.”
In Killer Robots: Legality and Ethicality of Autonomous Weapons, edited by Armin Krishnan. Farnham, UK:
Ashgate.
BILLS, GWENDELYNN. 2014. “LAWS unto Themselves: Controlling the Development and Use of Lethal Au-
tonomous Weapons Systems.” George Washington Law Review 83 (1): 176–209.
BORRIE, JOHN. 2009. Unacceptable Harm: A History of How the Treaty to Ban Cluster Munitions Was Won. New
York: UNIDIR.
──. 2014. “Humanitarian Reframing of Nuclear Weapons and the Logic of a Ban.” International Affairs
90 (3): 625–46.
CARPENTER, R. CHARLI. 2014. “Lost” Causes: Agenda Vetting in Global Issue Networks and the Shaping of Human
Security. New York: Cornell University Press.
DILL, JANINA. 2014. Legitimate Targets? Social Construction, International Law and US Bombing. Cambridge:
Cambridge University Press.
DOSWALD-BECK, LOUISE. 2012. “International Humanitarian Law and New Technology.” In American Society
of International Law: Proceedings of the Annual Meeting, 107–16.
ERICKSON, JENNIFER L. 2013. “Stopping the Legal Flow of Weapons: Compliance with Arms Embargoes,
1981–2004.” Journal of Peace Research 50 (2): 159–74.
GARCIA, DENISE. 2014. “The Case Against Killer Robots—Why the United States Should Ban Them.” Foreign
Affairs, May 10. Accessed April 4, 2016. www.foreignaffairs.com/articles/141407/denise-garcia/the-
case-against-killer-robots.
──. 2015. “Humanitarian Security Regimes.” International Affairs 91 (1): 55–75.
──. 2016. “Future Arms, Technologies, and International Law: Preventive Security Governance.” Euro-
pean Journal of International Security 1 (1): 94–111.
GILLIES, ALEXANDRA. 2010. “Reputational Concerns and the Emergence of Oil Sector Transparency as an
International Norm.” International Studies Quarterly 54: 103–26.
GOERTZ, GARY, PAUL F. DIEHL, AND ALEXANDRU BALAS. 2016. The Puzzle of Peace: The Evolution of Peace in the
International System. Oxford: Oxford University Press.
HAMMOND, DANIEL N. 2015. “Autonomous Weapons and the Problem of State Accountability.” Chicago
Journal of International Law 15 (2): 652–88.
HAQUE, ADIL, LILLIAN MIRANDA, ANNA SPAIN, AND MARKUS WAGNER. 2012. “New Voices I: Humanizing Con-
flict.” In American Society of International Law, Proceedings of the Annual Meeting, 73–84.
HENCKAERTS, JEAN-MARIE, AND LOUISE DOSWALD-BECK. 2005. Customary International Humanitarian Law. Cam-
bridge: Cambridge University Press.
HEYNS, CHRISTOF. 2013. Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions. UN
Document A, HRC/23/47.
JOHNSON, REBECCA. 2004. “The NPT in 2004: Testing the Limits.” Disarmament Diplomacy 76.
KNUCKEY, SARAH. 2014. Drones and Targeted Killings: Ethics, Law, Politics. New York: IDebate Press.
KREPS, SARAH, AND JOHN KAAG. 2012. “The Use of Unmanned Aerial Vehicles in Asymmetric Conflict: Legal
and Moral Implications.” Polity 44 (2): 260–85.
KREPS, SARAH, AND MICAH ZENKO. 2014. “The Next Drone Wars.” Foreign Affairs 93 (2): 68–80.
KRISCH, NICO. 2014. “The Decay of Consent: International Law in an Age of Global Public Goods.” Ameri-
can Journal of International Law 108: 1–40.
LIN, PATRICK. 2010. “Ethical Blowback from Emerging Technologies.” Journal of Military Ethics 9 (4): 313–
32.
──. 2012. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.
MARESCA, LOUIS, AND STUART MASLEN. 2008. The Banning of Anti-personnel Landmines: The Legal Contribution
of the International Committee of the Red Cross, 1955–1999. Cambridge: Cambridge University Press.
O’CONNELL, MARY ELLEN. 2008. The Power and Purpose of International Law. Oxford: Oxford University Press.
──. 2014. “21st Century Arms Controls Challenges: Drones, Cyber Weapons, Killer Robots, and WMDs.”
Washington University Global Studies Law Review 13 (3): 515–34.
ROFF, HEATHER. 2014. “The Strategic Robot Problem.” Journal of Military Ethics 13 (3): 211–27.
──. 2015. “Lethal Autonomous Weapons and Proportionality.” Case Western Reserve Journal of
International Law 47: 37–52.
SCHWARZ, ELKE. 2017. “Pursuing Peace: The Strategic Limits of Drone Warfare.” An INS Special Forum:
Intelligence and Drones. Intelligence and National Security 32 (4): 422–25.
SHARKEY, NOEL. 2010. “Saying ‘No!’ to Lethal Autonomous Targeting.” Journal of Military Ethics 9 (4):
369–83.
SINGER, PETER W. 2009. Wired for War. New York: Penguin Books.
──. 2010. “The Ethics of Killer Applications: Why Is It So Hard to Talk about Morality When It Comes
to New Military Technology?” Journal of Military Ethics 9 (4): 299–312.
SUCHMAN, LUCY. 2007. Human-Machine Reconfigurations: Plans and Situated Actions, 2nd ed. Cambridge:
Cambridge University Press.
TEITEL, RUTI. 2013. Humanity’s Law. Oxford: Oxford University Press.