
GOVERNING ARTIFICIAL INTELLIGENCE

Ryan Hagemann
Senior Director for Policy
AI in the Obama Administration
Government:
- Consumer Privacy Bill of Rights
- Office of Science and Technology Policy (OSTP)
- National AI Strategy
- “Preparing for the Future of Artificial Intelligence”
- Big Data and other Emerging Technology Reports

The Academy, Industry, and Civil Society:


- “Federal Robotics Commission” (Ryan Calo)
- “National Algorithmic Technology Safety Administration” (Andrew Tutt)
- Data Protection Commissions
- “Algorithmic Transparency” (EPIC, etc.)
- Asilomar AI Principles (Future of Life Institute)
- OpenAI (Elon Musk)
AI in the Trump Administration
Government:
- “AI in Government Act” (Sens. Gardner & Schatz)
- FTC Hearings on Competition Policy in the 21st Century
- “FUTURE of AI Act” (Sens. Cantwell & Young)
- OSTP
- Ongoing developments on the National AI Strategy (recent NSF RFC)
- Engagement with industry and civil society (Michael Kratsios)
- “General Data Protection Regulation” (EU)
- “California Consumer Privacy Act” (California)

The Academy, Industry, and Civil Society:


- “Soft Law” governance (Marchant, Allenby, Thierer, Hagemann, etc.)
- “Algorithmic Accountability” (ITIF, CDI, Niskanen, CFJ)
- Industry Standards and Best Practices
- Google, IBM, Facebook, etc.
- Information Technology Industry Council, Chamber of Commerce, etc.
- The Partnership on AI
Principles of Effective Technological Governance
1. Affirm the Regulator’s Hippocratic Oath: “First, Do No Harm.”
2. Prioritize outcomes-based governance rules.
3. Where possible, promote self-regulatory frameworks.
4. Where necessary, embrace “Soft Law” governance.

“Soft Law” = Instruments or arrangements that create substantive expectations
that are not directly enforceable. (Gary Marchant and Braden Allenby)
Sources: Ryan Hagemann, Jennifer Skees, and Adam Thierer, “Soft Law for Hard
Problems: The Governance of Emerging Technologies in an Uncertain Future,” Colorado
Technology Law Journal (forthcoming), available at
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3118539; Gary E. Marchant and
Braden Allenby, “New Tools for Governing Emerging Technologies,” Bulletin of the Atomic
Scientists, Vol. 73, No. 2 (2017), pp. 108-114.
The Framework for Global Electronic Commerce
Promulgated by the Clinton Administration in 1997 to govern the
emerging Internet.
1. The private sector should lead;
2. Governments should avoid undue restrictions on the Internet;
3. Where governmental involvement is necessary, it should support and
enforce a predictable, minimalist, consistent, and simple legal
environment for commerce; and
4. Government should recognize the unique qualities associated with
the Internet.

Source: White House, A Framework for Global Electronic Commerce, 1 July 1997,
https://clintonwhitehouse4.archives.gov/WH/New/Commerce/summary.html.
“Algorithmic Transparency”
- Proposals are often amorphous, imprecise or non-specific, technically infeasible, or
otherwise ill-defined
- “As more decisions become automated and processed by algorithms, these processes become more
opaque and less accountable. The public has a right to know the data processes that impact their
lives so they can correct errors and contest decisions made by algorithms. Personal data
collected from our social connections and online activities are used by the government and companies
to make determinations about our ability to fly, obtain a job, get security clearance, and even determine
the severity of criminal sentencing. These opaque, automated decision-making processes bear risks of
secret profiling and discrimination as well as undermine our privacy and freedom of association.”
(EPIC)
- “[K]nowledge of the algorithm is a fundamental human right.” (Marc Rotenberg, EPIC)

- These proposals often invoke algorithmic transparency as a means “to establish
democratic accountability over innovation,” or argue that regulations mandating a
“right to explanation” should be a precondition for deploying AI systems.

Sources: “Algorithmic Transparency: End Secret Profiling,” Electronic Privacy Information
Center, https://www.epic.org/algorithmic-transparency/; Tae Wan Kim and Bryan
Routledge, “Algorithmic Transparency, A Right to Explanation and Trust,” Carnegie Mellon
University (June 2017).
“Algorithmic Accountability”
In contrast to algorithmic transparency, “algorithmic accountability” prioritizes outcomes-based
regulatory mechanisms that address known, observable harms utilizing existing rules and
institutions.
Proposed definitions:
- “The principle that an algorithmic system should employ a variety of controls to ensure
the operator (i.e., the party responsible for deploying the algorithm) can verify it acts in
accordance with its intentions, as well as identify and rectify harmful outcomes.” (New
and Castro)
- “[Algorithmic transparency] will not tell you much, because the machine’s ‘thought process’ is
not explicitly described in the weights, computer code, or anywhere else. Instead it is subtly
encoded in the interplay between the weights and the neural network’s architecture. … A better
solution is to make [AI] accountable,” which would include “explainability, confidence
measures, procedural regularity, and responsibility.” (Levey and Hagemann)
- “[ensuring] harms can be assessed, controlled, and addressed.” (World Wide Web
Foundation)

Sources: Joshua New and Daniel Castro, “How Policymakers Can Foster Algorithmic
Accountability,” Center for Data Innovation (Washington, D.C.: 21 May 2018),
http://www2.datainnovation.org/2018-algorithmic-accountability.pdf; Curt Levey and Ryan
Hagemann, “Algorithms with Minds of Their Own,” Wall Street Journal, 12 Nov. 2017,
https://www.wsj.com/articles/algorithms-with-minds-of-their-own-1510521093.
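To make the “variety of controls” concrete, the sketch below wraps a model prediction with two of the elements named above: a confidence measure and a logged decision trail for procedural regularity. It is a minimal illustrative sketch, not a mechanism proposed by any of the cited authors; the function name, the 0.6 review threshold, and the scikit-learn-style predict/predict_proba interface are all assumptions.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-audit")

def accountable_predict(model, features, review_threshold=0.6):
    """Wrap a prediction with basic accountability controls:
    log each decision (procedural regularity), attach a confidence
    measure, and flag low-confidence cases for human review."""
    label = model.predict([features])[0]
    confidence = max(model.predict_proba([features])[0])  # confidence measure
    log.info("input=%s label=%s confidence=%.2f", features, label, confidence)
    if confidence < review_threshold:
        return label, "flagged for human review"
    return label, "automated decision"
```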
The Regulator’s Neural Network

- Describes specific pathways by which regulators can determine (1) whether a harm
has transpired and (2) what level of punishment is appropriate
- One big outstanding question, however: how do you determine what constitutes an
“unfair consumer injury” in the first place?

Was there unfair consumer injury?
- NO: No penalty
- YES: Did the operator have sufficient controls to verify its algorithm worked as intended?
  - YES: Did the operator identify and rectify harmful outcomes?
    - YES: Low or no penalty
    - NO: Medium penalty
  - NO: Did the operator identify and rectify harmful outcomes?
    - YES: Medium penalty
    - NO: High penalty

Source: Joshua New and Daniel Castro, “How Policymakers Can Foster Algorithmic
Accountability,” p. 26.
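Read as code, the tree is a short nested conditional. The following Python sketch is one plausible encoding of the New and Castro figure; the function and argument names are ours, and the shared “medium penalty” middle branch reflects our reading of the diagram rather than language from the paper.

```python
def assess_penalty(unfair_injury: bool,
                   sufficient_controls: bool,
                   rectified_harm: bool) -> str:
    """Walk the penalty decision tree sketched above (illustrative only)."""
    if not unfair_injury:
        return "no penalty"
    if sufficient_controls:
        # The operator could verify the algorithm worked as intended.
        return "low or no penalty" if rectified_harm else "medium penalty"
    # No sufficient controls: penalties escalate.
    return "medium penalty" if rectified_harm else "high penalty"
```

For example, assess_penalty(True, False, False) returns "high penalty": an injury occurred, the operator lacked verification controls, and it never rectified the harm.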
Taxonomy of Informational Injuries I

- Observable Information: “Personal information that can be perceived first-hand by
other individuals.”
  Injury: Autonomy Violations. These “[r]esult in harm for consumers when information
they consider sensitive and would prefer to keep private becomes public through
involuntary means and tend to be harms that are ‘reputational or interpersonal’ in nature.”

- Computed Information: “Information inferred or derived from observable or observed
information” that “is produced when observable or observed information is manipulated
through computation to produce new information that describes an individual in some way.”
  Injury: Discrimination. This “[o]ccurs when personal information is used to deny a
person access to something, such as employment, housing, loans, or basic goods and
services.”

- Observed Information: “Information collected about an individual based on a third
party’s observation or provided by the individual, but does not allow someone else to
replicate the observation.”
  Injury: Autonomy Violations or Discrimination (see above).

- Associated Information: “Information that a third party associates with an individual”
that “does not provide any descriptive information about an individual.”
  Injury: Economic Harm. This “[r]esults when a consumer suffers a financial loss or
damage as a result of the misuse of PII,” such as in the case of “identity theft, fraud,
or larceny.”

Source: Daniel Castro and Alan McQuinn, “Comments submitted to the Federal Trade
Commission RE: Informational Injury Workshop,” pp. 2-11.
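Since the taxonomy is essentially a lookup from information type to the injuries it can produce, it can be encoded as a small mapping. A minimal sketch, assuming a data layout of our own choosing (the keys and category strings follow the table above):

```python
# Illustrative encoding of the Castro/McQuinn taxonomy (table above).
INJURIES_BY_INFO_TYPE = {
    "observable": ["autonomy violations"],
    "computed": ["discrimination"],
    "observed": ["autonomy violations", "discrimination"],
    "associated": ["economic harm"],
}

def potential_injuries(info_type: str) -> list[str]:
    """Return the injury categories the taxonomy associates with a type."""
    return INJURIES_BY_INFO_TYPE.get(info_type, [])
```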
Taxonomy of Informational Injuries II

Level     Description                        Potential for Informational Injury
Level 0   No collection and use              None
Level 1   Collection and no use              Low
Level 2   Collection and use (no human)      Low
Level 3   Collection and use (human)         High

Source: Daniel Castro and Alan McQuinn, “Comments submitted to the Federal Trade
Commission RE: Informational Injury Workshop,” p. 11.
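The level-based table is likewise a direct mapping, from a collection-and-use posture to an injury potential. A minimal sketch under the same caveat (the structure is ours, the contents are Castro and McQuinn’s):

```python
# Illustrative encoding of the four collection/use levels (table above).
INJURY_POTENTIAL_BY_LEVEL = {
    0: ("no collection and use", "none"),
    1: ("collection and no use", "low"),
    2: ("collection and use (no human)", "low"),
    3: ("collection and use (human)", "high"),
}

description, potential = INJURY_POTENTIAL_BY_LEVEL[3]
assert potential == "high"  # human use of collected data carries the most risk
```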
References
• Curt Levey and Ryan Hagemann, “Algorithms With Minds of Their Own,” Wall Street Journal, 12 Nov. 2017,
https://www.wsj.com/articles/algorithms-with-minds-of-their-own-1510521093.
• Joshua New and Daniel Castro, “How Policymakers Can Foster Algorithmic Accountability,” Center for Data
Innovation (Washington, D.C.: 21 May 2018), http://www2.datainnovation.org/2018-algorithmic-accountability.pdf.
• Ryan Hagemann, Comments submitted to the Office of Science and Technology Policy in the Matter of: A Request
for Information on Artificial Intelligence, Docket No. 2016-15082, submitted 22 July 2016,
https://niskanencenter.org/wp-content/uploads/2016/07/CommentsArtificialIntelligencePolicyOSTP.pdf.
• Ryan Hagemann, “2017 Policy Priorities: Embracing the Ghost in the Machine,” Niskanen Center, 17 Nov. 2016,
https://niskanencenter.org/blog/2017-policy-priorities-embracing-ghost-machine/.
• Ryan Hagemann, “The Creeping Hysteria Over Artificial Intelligence,” Niskanen Center, 17 Apr. 2017,
https://niskanencenter.org/blog/creeping-hysteria-artificial-intelligence/.
• Ryan Hagemann, “Why, Robot?,” Niskanen Center, 7 Sep. 2016, https://niskanencenter.org/blog/why-robot/.
• Daniel Castro and Alan McQuinn, Comments submitted to the Federal Trade Commission RE: Informational Injury
Workshop, Project No. 175413, Information Technology and Innovation Foundation (Washington, D.C.: 27 Oct.
2017), http://www2.itif.org/2017-informational-injury-comments.pdf.
Thank You!
rhagemann@niskanencenter.org
@RyanLeeHagemann
