
COMPSCI 108

Harvard College/GSAS: 160419


Fall 2016-2017
Meeting Time: Tuesday, Thursday 11:30am - 1:00pm
Class Location: Maxwell Dworkin 119
Section Location and Time: Tuesdays 5-6pm in Maxwell Dworkin 323,
Tuesdays 6-7pm in Pierce 320
Course Description: For centuries, people have imagined smart machines
in fictional stories. Computer systems now communicate in speech and text,
learn, negotiate, and work in teams (with people and other systems). These
intelligent-systems capabilities raise questions about the impact of such systems
on people and societies. This course introduces the basic techniques of AI in the
context of (science) fiction imaginings and ethical challenges. It examines the
roles of design and of policy in reducing potential negative consequences. The
course presumes a basic programming ability, but is accessible to concentrators
in the humanities and social sciences as well as science and engineering.
Professor: Barbara J. Grosz, grosz@eecs.harvard.edu
• Office Hours: Mondays, 4-5pm or by appointment (contact jessjackson@seas.harvard.edu), MD 249
TFs: Sebastian Gehrmann, gehrmann@seas.harvard.edu, Office Hours: Thursdays, 5-6pm or by appointment, MD 2nd floor lobby
Ronni Gura Sadovsky, ronni.sadovsky@gmail.com, Office Hours: Mondays,
3-4pm or by appointment, Emerson 103
Course Overview
CS108 provides a broad introduction to Artificial Intelligence (AI), situated in
the context of AI’s current and potential future uses and the design and ethical
challenges they raise. It aims to give students a basic understanding of how AI technologies work and of their strengths and weaknesses, and to enable them to distinguish fact from fiction in discussions of AI systems and their potential societal impacts. It examines the ethical dimensions of AI systems’ capabilities, considering both anticipated benefits and potential negative impacts, as well as the roles of system design and societal policy in addressing those negatives.
CS108 welcomes concentrators from all fields (including the social sciences and
humanities), and is accessible to all those with basic programming experience.
The course is also a route into more technical and advanced AI courses. It complements the two other introductory AI courses, CS 181 and CS 182, by examining AI capabilities in societal contexts and focusing on ethical and design issues. Its content is thus broader in scope, incorporating philosophical inquiry and social science scholarship, while less deep in AI modeling, representation, and algorithm implementation than CS 181 or CS 182.
The expected outcomes of CS108 include fluency with basic AI techniques, the ability to analyze AI technology in several key areas, the ability to identify ethical issues, and an understanding of ways to approach ethical challenges through system design and through policy.
Course Policies, Grades, and Assignments
Syllabus
Note that the syllabus is still under development; readings may be
modified as the semester progresses.
There are three categories of readings on this syllabus, indicated as follows:
• Reading: required reading for the class session listed
• ******Reading: required reading for which students are expected
to submit a question by midnight before the class meeting
• Background Reading(s): optional readings, typically providing
additional technical background or ethical perspectives
Three science fiction movie/TV episodes are also listed on the syllabus. Enrolled
students will be provided access to these videos and are required to view them
before the class session listed.
Sept 1: Overview: What is AI? What is ethics? What is design? Course goals,
structure.
Background Reading:
• J. H. Moor. “What Is Computer Ethics?” Metaphilosophy, 16:4, 1985.
• C. Bloch. Golem: Legends of the Ghetto of Prague. Whitefish: Kessinger
Publishing, 1997.
• G. Scholem. “The Golem of Prague and the Golem of Rehovoth.” Observations, 41:1, 1966.
Speech and Natural Language Processing
Sept 6: Speech and dialogue systems: Fiction and Theory
Sci Fi videos:
• Black Mirror TV season 2, episode 1: Be Right Back.
• Devin Coldewey. “Matt Lauer, Message to the Future: Today hologram memoirs.” May 12, 2014.
Reading:
• J. H. Moor. “The Nature, Importance, and Difficulty of Machine Ethics.” IEEE Intelligent Systems, 21:4, 2006.

Background Reading:
• Roger Ebert. “Finding My Own Voice.” Blog. August 12, 2009.
Sept 8: Introduction to ethics and (computational) linguistic theory
Reading:
• D. Jurafsky and J. H. Martin. “Chapter 1: Introduction.” Speech and Language Processing, Second Edition. Upper Saddle River: Prentice Hall, 2008. (Section 1.6 is optional.)
Sept 13: Named entity recognition (guest lecturer: Prof. Sasha Rush)
Reading:
• D. Jurafsky and J. H. Martin. “Chapter 20: Information Extraction.” Speech and Language Processing, Third Edition. Upper Saddle River: Prentice Hall, 2015. (Introduction, Section 20.1, and the introduction to Section 20.2 only; 9 pages.)
Assignment 1: Test the limits of Siri, Cortana or an equivalent system and
analyze results.
Sept 15: Dialogue Systems and Chatbots
Reading:
• J. Vlahos. “Barbie Wants to Get to Know Your Child.” New York Times Magazine, Sept. 16, 2015. http://www.nytimes.com/2015/09/20/magazine/barbie-wants-to-get-to-know-your-child.html?_r=0.
• A. Sosman. “I Used an ‘Anti-Turing Test’ to Prove Facebook’s A.I. Has a Human Helper.” The Huffington Post. Blog. November 17, 2015. http://www.huffingtonpost.com/arik-sosman/facebook-m-anti-turing-test_b_8559098.html.
• G. Mone. “The Edge of the Uncanny.” Communications of the ACM, 59:9, 2016. http://cacm.acm.org/magazines/2016/9/206247-the-edge-of-the-uncanny/fulltext.
Background Reading:
• D. Jurafsky and J. H. Martin. “Chapter 24: Dialogue and Conversational Agents.” Speech and Language Processing, Third Edition. Upper Saddle River: Prentice Hall, 2015.
Sept 20: Statistical NLP: Information extraction and question-answering systems
Reading:
• D. Ferrucci et al. “Building Watson: An Overview of the DeepQA Project.” AI Magazine, 31:3, 2010.
Assignment 2: Empirical investigation of natural-language processing techniques.

Sept 22: The Turing Test and AI Inducement Contests (guest lecturer: Prof. Stuart Shieber)
******Reading:
• A. M. Turing. “Computing Machinery and Intelligence.” Mind, 59:236, 1950.
Agent Decision Making
Sept 27: AI modeling of agents: types of agents; representing beliefs, actions,
and plans; search
Sci Fi Movie: R. Scott. Blade Runner. Burbank: Warner Home Video, 1982.
This can only be viewed from a campus location.
Reading:
• S. L. Darwall. “Theories of Ethics.” In Contemporary Debates in Applied
Ethics. 2nd ed. A. I. Cohen, C. H. Wellman (eds.). Chichester, UK:
Wiley-Blackwell, 2014, pp. 13-32. Note: Focus on pages 22-31; you can
skim pages 13-15 and skip pages 16-21.
Assignment 3: Op-ed: analysis of an ethical challenge for autonomous agents
and argument for a position on this challenge.
Sept 29: Basic models of intentions and decision theoretic reasoning
******Reading:
• M. Bratman. “What is Intention?” Intentions in Communication. P.
Cohen, J. Morgan, and M. Pollack (eds.). Cambridge: MIT Press, 1990.
Background Reading:
• M. Bratman. “Plans and Resource-Bounded Practical Reasoning.” Computational Intelligence, 1988.
Oct 4: Markov Decision Processes for Sequential (Single Agent) Decision-Making
Reading:
• P. Lin. “Why Ethics Matters for Autonomous Cars.” Autonomes Fahren: Technische, rechtliche und gesellschaftliche Aspekte. M. Maurer, J. C. Gerdes, B. Lenz, H. Winner (eds.). Berlin: Springer, 2015.
Background Technical Reference:
• A. Pfeffer. “Markov Decision Processes.” (CS181 Lecture 3)
Oct 6: Reinforcement learning
Reading: none
Background Technical Reference:
• R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction.
Cambridge, MA: A Bradford Book, MIT Press. 1998.
Assignment 4: Empirical investigations of MDP modeling and reinforcement
learning.

Oct 11: Decision making in multi-agent domains
Reading:
• Asimov, Isaac. “Runaround.” I, Robot. Garden City: Doubleday Company,
1950.
******Reading:
• M. Tambe et al. “Asimovian Multiagents: Applying Laws of Robotics to
Teams of Humans and Agents.” ProMAS 2006, LNAI 4411, 2007.
Oct 13: Decision-making, adjustable autonomy and computer ethics
Reading:
• J. H. Moor. “Are There Decisions Computers Should Never Make?” Ethical
Issues in the Use of Computers, D.G. Johnson and J.W. Snapper (eds).
1985.
Background Reading:
• M. Anderson and S. L. Anderson. “Machine Ethics: Creating an Ethical
Intelligent Agent.” AI Magazine, 28:4, 2007.
Computer-Human Decision-making: Recommender, Negotiation, and Search-Advisor Systems
Oct 18: The Facebook “emotion contagion” experiment
Reading:
• D. Watts. “Lessons Learned from the Facebook Study.” Chronicle of Higher
Education, July 9, 2014.
• V. Goel. “As Data Overflows Online, Researchers Grapple With Ethics.”
New York Times, August 12, 2014.
Background Reading (the study itself):
• A. D. I. Kramer, J. E. Guillory, and J. T. Hancock. “Experimental
evidence of massive-scale emotional contagion through social networks.”
In Proceedings of the National Academy of Sciences, June 17, 2014.
Assignment 5: Empirical investigation and analysis of recommendations from
Amazon and Facebook newsfeed and ethical assessment of factors influencing
choices.
Oct 20: Recommender systems
Reading:
• J.A. Konstan and J.T. Riedl. “Recommended for you.” IEEE Spectrum,
49:10, 2012.
Background Readings:

• J.A. Konstan and J.T. Riedl. “Recommender Systems: from algorithms to user experience.” User Modeling and User-Adapted Interaction, 22:101-123, 2012.
• M. D. Ekstrand, J. T. Riedl, and J. A. Konstan. “Collaborative Filtering Recommender Systems.” Foundations and Trends in Human-Computer Interaction, 4:2, 81-173, 2010. (First 3 sections)
Oct 25: Negotiation systems
Reading:
• Jonathan Gratch, David DeVault, Gale Lucas, Stacy Marsella. “Negotiation
as a Challenge Problem for Virtual Humans.” 15th International Conference
on Intelligent Virtual Agents. Delft, The Netherlands. 2015.
Background Readings:
• R. Lin and S. Kraus. “Can Automated Agents Proficiently Negotiate with
Humans?” Communications of the ACM, 53:1, 2010.
• David Traum, Jonathan Gratch, Stacy Marsella, Jina Lee, Arno Hartholt. “Multi-party, Multi-issue, Multi-strategy Negotiation for Multi-modal Virtual Agents.” 8th International Conference on Intelligent Virtual Agents. Tokyo, Japan, September 2008.
Assignment 6: Investigation of search advising under varying information
conditions and ethical assessment of different conditions.
Oct 27: Search advising algorithms
Reading:
• M.L. Weitzman. “Optimal Search for the Best Alternative.” Econometrica
47:3, pp. 641-654 (1979).
*****Project assignment distributed, with component requirements for proposal,
presentation, report*****
Nov 1: Computer ethics for recommender and persuasion systems
Reading:
• M. Guerini, F. Pianesi, and O. Stock. “Is it morally acceptable for a
system to lie to persuade me?” In Proceedings of the Twenty-Ninth AAAI
Conference on Artificial Intelligence, January 25-26, 2015.
• S. V. Shiffrin. “Chapter 1: Lies and the Murderer Next Door.” Speech
Matters: On Lying, Morality, and the Law. Princeton University Press,
Princeton, 2014. (Only read pages 5-21).
Computer-Human Teamwork
Nov 3: Robotics today (guest lecturer: Prof. Julie Shah, MIT)
******Reading, choose one of:
• Neil M. Richards and William D. Smart. “How Should the Law Think
About Robots?” (May 10, 2013). Available at SSRN: [ssrn.com] or [doi.org]

• Heather Knight. “How Humans Respond to Robots: Building Public Policy through Good Design.” Brookings Center for Technology Innovation. http://www.brookings.edu/research/reports2/2014/07/how-humans-respond-to-robots.
Nov 8: Design
Reading:
• T. Brown. “Design Thinking,” Harvard Business Review, June 2008.
• B. Tuttle. “The 5 Big Mistakes That Led to Ron Johnson’s Ouster at JC
Penney.” Time Magazine, April 9, 2013.
• Harvard Business Review Staff. “Retail Isn’t Broken. Stores Are.” Harvard
Business Review, December 2011.
Background Reading:
• D. Engelbart. “Augmenting Human Intellect: A Conceptual Framework.” SRI Summary Report AFOSR-3223, 1962.
*****Project Proposals Due*****
Nov 10: AI and economics
******Reading:
• D. Parkes and M. Wellman. “Economic reasoning and artificial intelligence.” Science, 349:6245, 2015.
Nov 15: Designing Transitions in Control
******Reading (question on one paper or combining papers):
• S. Zilberstein. “Building Strong Semi-Autonomous Systems.” In Proceedings of the 29th AAAI Conference on Artificial Intelligence, 2015.
• E. Horvitz. “Principles of Mixed-Initiative User Interfaces.” In CHI ’99: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1999.
• S. M. Casner, E. L. Hutchins, and D. Norman. “The Challenges of Partially Automated Driving.” Communications of the ACM, 59:5, 2016.
Background Reading:
• S. T. Iqbal, E. Horvitz, Y. C. Ju, and E. Mathews. “Hang on a Sec!: Effects of Proactive Mediation of Phone Conversations While Driving.” In CHI ’11: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2011.
• E. Horvitz, C. Kadie, T. Paek, D. Hovel. “Models of attention in computing and communication: from principles to applications.” Communications of the ACM, 46:3, 2003.
• S. Green, J. Heer, and C. Manning. “Natural Language Translation at the
Intersection of AI and HCI.” Communications of the ACM, 58(9), 46-53,
2015.

Nov 17: Ex Machina and the role of emotions in intelligence
Movie: Ex Machina. This can only be viewed from a campus location.
Reading:
• M. Piercy. He, She and It. New York: Fawcett, 1991. Excerpt, pp. 108-123.
Nov 22: Open project discussions
Reading: none
Nov 24: Thanksgiving
Nov 29: Interim reports on course projects
Dec 1: Interim reports on course projects
Dec 12: Final projects are due at 8pm
