
ARTIFICIAL INTELLIGENCE

Monica Isip
Lecture 1: Introduction to Artificial Intelligence

What is Artificial Intelligence?


“The science and engineering of making intelligent
machines, especially intelligent computer programs.”
-John McCarthy, father of Artificial Intelligence
Artificial Intelligence is a way of making a computer, a
computer-controlled robot, or software think intelligently,
in a manner similar to the way intelligent humans think.
What is AI?
Artificial Intelligence is concerned with the design of
intelligence in an artificial device.
Term coined by McCarthy in 1956.

What is intelligence?
- Behaving as intelligently as a human?
- Behaving in the best possible manner?
- Thinking?
- Acting?
Definitions of AI
• what to look at:
– thought processes/reasoning vs. behaviour
– human-like performance vs. ideal performance
thought / reasoning:
- systems that think like humans: Cognitive Science (human-like performance)
- systems that think rationally: Laws of Thought / Logic (ideal performance, rationality)
behaviour:
- systems that act like humans: Turing Test (human-like performance)
- systems that act rationally: Rational Agents (ideal performance, rationality)
Acting humanly: The Turing Test approach
The Turing Test
• proposed by Alan Turing (1950)
• was designed to provide a satisfactory operational
definition of intelligence.
• Turing defined intelligent behavior as the ability to achieve
human-level performance in all cognitive tasks, sufficient
to fool an interrogator.
• The test he proposed is that the computer should be
interrogated by a human via a teletype, and passes the
test if the interrogator cannot tell if there is a computer or
a human at the other end.
The Turing Test
• The computer would need to possess the following capabilities:
- natural language processing to enable it to communicate
successfully in English (or some other human language);
- knowledge representation to store information provided
before or during the interrogation;
- automated reasoning to use the stored information to
answer questions and to draw new conclusions;
- machine learning to adapt to new circumstances and to
detect and extrapolate patterns.
Total Turing Test
• includes a video signal so that the interrogator can test
the subject's perceptual abilities, as well as the
opportunity for the interrogator to pass physical objects
"through the hatch."
• To pass the total Turing Test, the computer will need:
- computer vision to perceive objects, and
- robotics to move them about.
Thinking humanly: The cognitive modelling approach
• Some way of determining how humans think.
• two ways to do this:
- through introspection—trying to catch our own
thoughts as they go by
- through psychological experiments.
• Once we have a sufficiently precise theory of the mind, it
becomes possible to express the theory as a computer
program.
• If the program's input/output and timing behavior matches
human behavior, that is evidence that some of the
program's mechanisms may also be operating in humans.
Example
• Newell and Simon, who developed GPS, the "General
Problem Solver" (Newell and Simon, 1961), were not
content to have their program correctly solve problems.
They were more concerned with comparing the trace of its
reasoning steps to traces of human subjects solving the
same problems.
• This is in contrast to other researchers of the same time
(such as Wang (1960)), who were concerned with getting
the right answers regardless of how humans might do it.
The interdisciplinary field of cognitive science brings
together computer models from AI and experimental
techniques from psychology to try to construct precise and
testable theories of the workings of the human mind.
Thinking rationally: The laws of thought approach
• The Greek philosopher Aristotle was one of the first to
attempt to codify "right thinking," that is, irrefutable
reasoning processes. His famous syllogisms provided
patterns for argument structures that always gave correct
conclusions given correct premises.
Example: "Socrates is a man; all men are mortal; therefore
Socrates is mortal."
• These laws of thought were supposed to govern the
operation of the mind, and they initiated the field of logic.
By 1965, programs existed that could, given enough time
and memory, take a description of a problem in logical
notation and find the solution to the problem, if one exists.
(If there is no solution, the program might never stop
looking for it.) The so-called logicist tradition within artificial
intelligence hopes to build on such programs to create
intelligent systems.
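The Socrates syllogism above can be sketched as one step of forward chaining over logical facts. This is a minimal illustration; the (predicate, subject) tuple encoding of facts and rules is an assumption of the sketch, not notation from the lecture:

```python
# Minimal forward-chaining sketch: facts plus a universal rule
# ("all men are mortal") derive a new fact ("Socrates is mortal").

def forward_chain(facts, rules):
    """Apply rules of the form premise(x) -> conclusion(x) until fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(derived):
                if pred == premise and (conclusion, subj) not in derived:
                    derived.add((conclusion, subj))
                    changed = True
    return derived

facts = {("man", "Socrates")}        # Socrates is a man
rules = [("man", "mortal")]          # all men are mortal
print(forward_chain(facts, rules))   # includes ("mortal", "Socrates")
```

This toy version halts quickly, but as the text notes, a general logical problem solver may search forever when no solution exists.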
Two main obstacles to this approach:
• First, it is not easy to take informal knowledge and state it in
the formal terms required by logical notation, particularly when
the knowledge is less than 100% certain.
• Second, there is a big difference between being able to solve a
problem "in principle" and doing so in practice. Even problems
with just a few dozen facts can exhaust the computational
resources of any computer unless it has some guidance as to
which reasoning steps to try first.
Although both of these obstacles apply to any attempt to build
computational reasoning systems, they appeared first in the logicist
tradition because the power of the representation and reasoning systems
is well defined and fairly well understood.
Acting rationally: The rational agent approach
• An agent is just something that perceives and acts.
• AI is viewed as the study and construction of rational agents.
Two Advantages:
• First, it is more general than the "laws of thought" approach,
because correct inference is only a useful mechanism for
achieving rationality, and not a necessary one.
• Second, it is more amenable to scientific development than
approaches based on human behavior or human thought,
because the standard of rationality is clearly defined and
completely general.
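The percept-to-action cycle of an agent can be made concrete with a toy sketch. The two-location vacuum world below is a common textbook illustration, not part of this lecture; the location names, percept format, and actions are assumptions:

```python
# Simple reflex agent for a toy two-location vacuum world ("A" and "B").
# The agent perceives (location, status) and returns an action: a rational
# agent here always cleans a dirty square, otherwise moves to the other one.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("B", "Clean")))   # Left
```

Because the standard of performance (clean squares) is external and well defined, we can judge such an agent's rationality without asking whether it "thinks" like a human.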
Practical Impact of AI
• AI components are embedded in numerous devices
Example: copy machines.
• AI systems are in everyday use:
- detecting credit card fraud
- configuring products
- aiding complex
- advising physicians.
• Intelligent tutoring systems provide students with
personalized attention
Example Systems:

Autonomous Land Vehicle in a Neural Network


1989 - Dean Pomerleau at CMU creates ALVINN. The
system drove a car coast-to-coast under computer
control for all but 50 of the 2850 miles.
Deep Blue
1997 - The Deep Blue chess program beats the reigning
world chess champion, Garry Kasparov, in a widely
followed match.
Machine Translation
 An immediate translator between people speaking different
languages would be a remarkable achievement of enormous
economic and cultural benefit.
 Universal translation is one of 10 emerging technologies that will
affect our lives and work "in revolutionary ways" within a decade,
Technology Review says.
 Meanwhile, the US military is giving a simpler one-way translation
device a rugged road test in Iraq... US forces are using the
Phraselator to communicate with injured Iraqis, prisoners of war,
travelers at checkpoints, and for other peacekeeping duties.
 Carnegie Mellon is working on its own Speechalator for use in
doctor-patient interviews.
Autonomous Agents
 In space exploration, robotic space probes autonomously monitor
their surroundings, make decisions, and act to achieve their goals.
 Mars Exploration Rover Mission
Mars Rover
 NASA's Mars rovers successfully completed their primary
three-month missions in April.
 The Spirit rover is exploring a range of Martian hills that took two
months to reach. It is finding curiously eroded rocks that may be
new pieces to the puzzle of the region's past.
 Spirit's twin, Opportunity, is also negotiating sloped ground. It is
examining exposed rock layers inside a crater informally named
"Endurance."
Internet Agents
The explosive growth of the internet has also led to
growing interest in internet agents to:
- monitor users' tasks
- seek needed information
- learn which information is most useful
Approaches to AI
• Strong AI: aims to build machines that can truly reason
and solve problems, are self-aware, and whose overall
intellectual ability is indistinguishable from that of a
human being.
– Human-like
– Non-human-like
Excessive optimism in the 1950s and 1960s concerning
strong AI has given way to an appreciation of the extreme
difficulty of the problem.
• Weak AI: deals with the creation of some form of
computer-based artificial intelligence that cannot truly
reason and solve problems, but can act as if it were
intelligent.

• Weak AI holds that suitably programmed machines can
simulate human cognition.
• Strong AI maintains that suitably programmed machines
are capable of cognitive mental states.
• Applied AI: aims to produce commercially viable "smart"
systems, such as:
Example: a security system that is able to recognise the
faces of people who are permitted to enter a particular
building. Applied AI has already enjoyed considerable
success.
• Cognitive AI: computers are used to test theories about
how the human mind works
Example: theories about how we recognise faces and
other objects, or about how we solve abstract problems
FOUNDATIONS OF AI / CONTRIBUTIONS TO AI
• AI is a science and technology based on disciplines such
as:
- Philosophy
- Mathematics
- Psychology
- Computer Engineering / Computer Science
- Linguistics
- Sociology
- Biology
- Neuroscience
• From over 2000 years of tradition in philosophy, theories of
reasoning and learning have emerged, along with the viewpoint
that the mind is constituted by the operation of a physical system.
• From over 400 years of mathematics, we have formal theories of
logic, probability, decision making, and computation.
• From psychology, we have the tools with which to investigate the
human mind, and a scientific language within which to express the
resulting theories.
• From linguistics, we have theories of the structure and meaning
of language.
• Finally, from computer science, we have the tools with which to
make AI a reality.
Programming Without and With AI

Programming Without AI:
- A computer program without AI can answer only the specific
questions it is meant to solve.
- Modification in the program leads to a change in its structure.
- Modification is not quick and easy. It may affect the program
adversely.

Programming With AI:
- A computer program with AI can answer the generic questions
it is meant to solve.
- AI programs can absorb new modifications by putting highly
independent pieces of information together. Hence you can modify
even a minute piece of information of the program without
affecting its structure.
- Quick and easy program modification.
What are AI Techniques?
• In the real world, knowledge has some unwelcome
properties:
– Its volume is huge, next to unimaginable.
– It is not well-organized or well-formatted.
– It keeps changing constantly.
• An AI technique is a manner of organizing and using
knowledge efficiently in such a way that:
– It should be perceivable by the people who provide it.
– It should be easily modifiable to correct errors.
– It should be useful in many situations even though it is incomplete
or inaccurate.
Applications of AI
What can AI systems do?
• Gaming - AI plays a crucial role in strategic games such
as chess, poker, tic-tac-toe, etc., where the machine can
think of a large number of possible positions based on
heuristic knowledge.
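The look-ahead such game programs perform can be illustrated with plain minimax over a small explicit game tree. The tree shape and its leaf values below are illustrative assumptions, not taken from any real game:

```python
# Minimax over an explicit game tree: a node is either a leaf value
# or a list of child nodes. The maximizer and minimizer alternate levels.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):
        return node                      # leaf: return its value
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # three moves, three replies each
print(minimax(tree, True))  # 3: best branch assuming an optimal opponent
```

Real chess programs add heuristic evaluation of non-terminal positions and pruning, since the full game tree is far too large to enumerate.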
• Natural Language Processing - It is possible to interact
with the computer that understands natural language
spoken by humans.
• Expert Systems - There are some applications which
integrate machine, software, and special information to
impart reasoning and advice to the users.
• Vision Systems - These systems understand, interpret,
and comprehend visual input on the computer.
Example:
1. A spying aeroplane takes photographs, which are used to figure out
spatial information or a map of the area.
2. Doctors use a clinical expert system to diagnose the patient.
3. Police use computer software that can recognize the face of a criminal
from the stored portrait made by a forensic artist.
• Speech Recognition - Some intelligent systems are
capable of hearing and comprehending language in
terms of sentences and their meanings while a human
talks to them. They can handle different accents, slang
words, noise in the background, change in a human's
voice due to a cold, etc.
• Handwriting Recognition - The handwriting recognition
software reads text written on paper with a pen or on a
screen with a stylus. It can recognize the shapes of the
letters and convert them into editable text.
• Intelligent Robots - Robots are able to perform the tasks
given by a human. They have sensors to detect physical
data from the real world such as lights, heat, temperature,
movement, sound, bump, and pressure. They have
efficient processors, multiple sensors and huge memory,
to exhibit intelligence. In addition, they are capable of
learning from their mistakes and they can adapt to the
new environment.
What can't AI systems do yet?
• Understand natural language robustly (ex: read and
understand articles in a newspaper)
• Surf the web
• Interpret an arbitrary visual scene
• Learn a natural language
• Construct plans in dynamic real-time domains
• Exhibit true autonomy and intelligence
Brief History of Artificial Intelligence
• The concept of intelligent machines is found in Greek
mythology.
– 8th century: Pygmalion
– Hephaestus created a huge robot, Talos, to guard Crete
• Philosophers have analyzed the nature of knowledge and
have explored formal frameworks for developing
conclusions.
• Mathematical formalizations in logic, computation and
probability
• Economists developed decision theory
• How does the brain process information?
• Psychologists have long studied human cognition
– knowledge about the nature of human intelligence
How do we build an efficient computer?
BACKGROUND
• 384-322 BC - Aristotle developed an informal system of syllogistic
logic, the first formal deductive reasoning system.
• 13th Century - Ramon Lull, a Spanish theologian, invented the idea
of a machine that would produce all knowledge by putting together
words at random. He even tried to build it.
• Early 17th Century - Descartes proposed that the bodies of animals are
nothing more than complex machines.
• 1642 - Pascal built the first mechanical digital calculating machine.
• 1673 - Leibniz improved Pascal's machine.
• 19th Century - George Boole developed a binary algebra
representing (some) “laws of thought.”
• Charles Babbage and Ada Byron worked on programmable
mechanical calculating machines.
Background cont....
• In the late 19th century and early 20th century, mathematical
philosophers like:
– Gottlob Frege,
– Bertrand Russell,
– Alfred North Whitehead, and
– Kurt Gödel
• built on Boole's initial logic concepts to develop mathematical
representations of logic problems.
Advent of the Computer

• The advent of electronic computers provided a
revolutionary advance in the ability to study intelligence.
History
• 1943 - Warren McCulloch and Walter Pitts
– Boolean circuit model of brain
– “A Logical Calculus of Ideas Immanent in Nervous Activity” is
published.
• Explaining for the first time how it is possible for neural networks
to compute.
• 1949 - Donald Hebb demonstrated a simple updating rule for
modifying the connection strengths between neurons, such that
learning could take place.
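Hebb's updating rule can be sketched in a few lines; the learning rate and the binary activity values below are illustrative assumptions, not values from Hebb's work:

```python
# Hebbian learning sketch: the connection strength between two units
# grows in proportion to the product of their activities
# ("neurons that fire together wire together").

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen w when the pre- and post-synaptic units co-activate."""
    return w + lr * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (1, 0)]:   # two co-activations, one miss
    w = hebbian_update(w, pre, post)
print(w)   # 0.2: only the co-activations strengthened the connection
```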
• 1950 - Claude Shannon (1950) and Alan Turing (1953) were writing
chess programs for Von Neumann-style conventional computers.
• 1950 - Alan Turing's “Computing Machinery and
Intelligence” articulated a complete vision of AI
– solving problems by searching through the space of possible
solutions, guided by heuristics.
– illustrated his ideas on machine intelligence by reference to
chess.
– propounded the possibility of letting the machine alter its own
instructions so that machines can learn from experience.
• 1951 - Marvin Minsky and Dean Edmonds built SNARC
– A neural network computer
– Used 3000 vacuum tubes
– Network with 40 neurons
• 1952 - 1956: Arthur Samuel's checkers (draughts) program
• 1956 - Dartmouth meeting: “Artificial Intelligence” adopted
– Allen Newell and Herbert Simon presented the Logic
Theorist, widely considered to be the first AI program.
– The most lasting thing to come out of the workshop was an
agreement to adopt McCarthy's new name for the field: artificial
intelligence.
– The conference brought together the founding fathers of Artificial
Intelligence, among them John McCarthy, for the first time.
– Newell and Simon's early success was followed up with the
General Problem Solver, or GPS.
• 1959 - Herbert Gelernter constructed the Geometry Theorem
Prover (Gelernter's Geometry Engine).
• 1958 - McCarthy defined the high-level language
- Lisp, which was to become the dominant AI programming
language. Lisp is the second-oldest language in current use.
• 1961 - James Slagle (PhD dissertation, MIT) wrote (in Lisp) the
first symbolic integration program, SAINT, which solved calculus
problems at the college freshman level.
• 1963 - Thomas Evans's program ANALOGY solved IQ-test-type
analogy problems.
- Edward A. Feigenbaum & Julian Feldman published
Computers and Thought, the first collection of articles about
artificial intelligence.
• 1965 - Danny Bobrow showed that computers can
understand natural language well enough to solve algebra
word problems correctly.
- J. Alan Robinson invented a mechanical proof
procedure, the Resolution Method, which allowed programs
to work efficiently with formal logic as a representation
language.
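A single propositional resolution step can be sketched as follows. Robinson's full method adds unification to handle first-order logic; the clause encoding here (frozensets of literal strings, with "~" marking negation) is an assumption of the sketch:

```python
# One propositional resolution step: from (p or q) and (~p or r),
# derive (q or r) by cancelling the complementary pair p / ~p.

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return every resolvent of two clauses (frozensets of literals)."""
    resolvents = []
    for lit in c1:
        if complement(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {complement(lit)}))
    return resolvents

# (p or q) resolved with (~p or r) yields (q or r)
print(resolve(frozenset({"p", "q"}), frozenset({"~p", "r"})))
```

A refutation prover repeats this step on a clause set until it derives the empty clause (a contradiction) or runs out of new resolvents.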
• 1966 - 74 - AI discovers computational complexity.
• 1967 - The Dendral program (Feigenbaum, Lederberg,
Buchanan, Sutherland at Stanford) was demonstrated to
interpret mass spectra of organic chemical compounds.
First successful knowledge-based program for scientific
reasoning.
• 1968 - Bertram Raphael's SIR (Semantic Information Retrieval)
was able to accept input statements in a very restricted subset of
English and answer questions thereon.
– Marvin Minsky and Seymour Papert published Perceptrons,
demonstrating the limits of simple neural nets.
• 1969 - Cordell Green's question answering and planning
systems and the Shakey robotics project at the new
Stanford Research Institute (SRI) demonstrated combining
locomotion, perception and problem solving.
• 1969-79 - Early development of knowledge-based systems.
• 1974 - MYCIN (Stanford) demonstrated the power of rule-based
systems for knowledge representation and inference in
medical diagnosis and therapy.
• 1975 - Sacerdoti developed a planning program,
ABSTRIPS.
• Minsky - Frames as a representation of knowledge
• Meta-Dendral learning program produced new results in
chemistry (rules of mass spectrometry)
• 1976 - Doug Lenat's AM program demonstrated the
discovery model (search for interesting conjectures).
• 1978 - Herb Simon wins the Nobel Prize in Economics for
his theory of bounded rationality.
• 1980's - Lisp Machines developed and marketed. First
expert system shells and commercial applications.
• 1985 - 95 - Neural networks return to popularity
• 1988 - Resurgence of probabilistic and decision-theoretic
methods. Rapid increase in technical depth of mainstream
AI. “Nouvelle AI”: ALife, GAs, soft computing.
• Early AI systems used general methods and little knowledge.
• Specialized knowledge is required for rich tasks to focus
reasoning.
• 1990's - Major advances in all areas of AI
– machine learning, data mining
– intelligent tutoring
– case-based reasoning
– multi-agent planning, scheduling
– uncertain reasoning
– natural language understanding and translation
– vision, virtual reality, games, and other topics
• Rod Brooks' COG Project at MIT, with numerous
collaborators, makes significant progress in building a
humanoid robot.
• 1997 - The Deep Blue chess program beats the current
world chess champion, Garry Kasparov, in a widely
followed match
• First official Robo-Cup soccer match featuring table-top
matches with 40 teams of interacting robots.
• Late 90's: Web crawlers and other AI-based information
extraction programs become essential in widespread use
of the world-wide-web
• Demonstration of an Intelligent Room and Emotional
Agents at MIT's AI Lab. Initiation of work on the Oxygen
Architecture, which connects mobile and stationary
computers in an adaptive network
• Interactive robot pets (“smart toys”) become commercially
available, realizing the vision of the 18th-century novelty toy
makers.
• 2000 - The Nomad robot explores remote regions of
Antarctica looking for meteorite samples.
Assignment:

1-5. What is intelligence?
6-10. Differences between Artificial Intelligence and Human
Intelligence
Thank you for listening...
