
Artificial Intelligence
Dr. Qaiser Abbas
Department of Computer Science & IT,
University of Sargodha, Sargodha, 40100, Pakistan
qaiser.abbas@uos.edu.pk

1. What is AI?
Thinking Humanly:
"The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)

Thinking Rationally:
"The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Acting Humanly:
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Acting Rationally:
"Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
"AI ... is concerned with intelligent behavior in artifacts." (Nilsson, 1998)

The two "thinking" definitions concern thought processes and reasoning, while the two "acting" definitions concern behavior. The "humanly" definitions on the left measure success in terms of fidelity to human performance, whereas the "rationally" definitions on the right measure success against an ideal performance measure, called rationality.
1.1. Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence.
A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer. (See Ch. 26 for details.)
Programming a computer to pass a rigorously applied test provides plenty to work on. The computer would need to possess the following capabilities:
natural language processing to enable it to communicate successfully in English;
knowledge representation to store what it knows or hears;
automated reasoning to use the stored information to answer questions and to draw new conclusions;
machine learning to adapt to new circumstances and to detect and extrapolate patterns.

1.1. Acting humanly: The Turing Test approach
To pass the total Turing Test, the computer will need:
computer vision to perceive objects, and
robotics to manipulate objects and move about.

These six disciplines compose most of AI, and Turing deserves credit for designing a test that remains relevant 60 years later.
1.2. Thinking humanly: The cognitive modeling approach
If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds.
There are three ways to do this:
through introspection, trying to catch our own thoughts as they go by;
through psychological experiments, observing a person in action;
and through brain imaging, observing the brain in action.
Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program's input-output behavior matches corresponding human behavior, that is evidence that some of the program's mechanisms could also be operating in humans.

1.2. Thinking humanly: The cognitive modeling approach
For example, Allen Newell and Herbert Simon, who developed GPS, the General Problem Solver (Newell and Simon, 1961), were not content merely to have their program solve problems correctly. They were more concerned with comparing the trace of its reasoning steps to traces of human subjects solving the same problems.
The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
Real cognitive science, however, is necessarily based on experimental investigation of actual humans or animals.

1.3. Thinking rationally: The "laws of thought" approach
The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable (impossible to deny or disprove) reasoning processes.
His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises:
for example, "Socrates is a man; all men are mortal; therefore, Socrates is mortal." These "laws of thought" were supposed to govern the operation of the mind; their study initiated the field called logic.
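To see how such a syllogism can be run mechanically, here is a minimal forward-chaining sketch in Python; the (predicate, subject) representation and all names are our own illustrative assumptions, not something from the lecture:

```python
# A minimal sketch of mechanical syllogistic reasoning in the spirit of
# Aristotle's patterns: derive "Socrates is mortal" from the premises.

facts = {("man", "Socrates")}       # premise: Socrates is a man
rules = [(("man",), "mortal")]      # premise: all men are mortal

def forward_chain(facts, rules):
    """Apply every rule to every known fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for _, subject in list(derived):
                # If all premise predicates hold for this subject,
                # the conclusion predicate holds for it as well.
                if all((p, subject) in derived for p in premises) \
                        and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('man', 'Socrates'), ('mortal', 'Socrates')}
```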
Logicians in the 19th century developed a precise notation for statements about all kinds of objects in the world and the relations among them. (Contrast this with ordinary arithmetic notation, which provides only for statements about numbers.)
By 1965, programs existed that could, in principle, solve any solvable problem described in logical notation. (Although if no solution exists, the program might loop forever.)
The so-called logicist tradition within artificial intelligence hopes to build on such programs to create intelligent systems.

1.3. Thinking rationally: The "laws of thought" approach
There are two main obstacles to this approach.
First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain.
Second, there is a big difference between solving a problem in principle and solving it in practice. Even problems with just a few hundred facts can exhaust the computational resources of any computer unless it has some guidance as to which reasoning steps to try first.

1.4. Acting rationally: The rational agent approach
An agent is just something that acts (agent comes from the Latin agere, to do).
Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
In the "laws of thought" approach to AI, the emphasis was on correct inferences. Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to the conclusion that a given action will achieve one's goals and then to act on that conclusion.
On the other hand, correct inference is not all of rationality; in some situations, there is no provably correct thing to do, but something must still be done.
There are also ways of acting rationally that cannot be said to involve inference. For example, recoiling (springing back in fear) from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.
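The perceive-decide-act cycle of such an agent is easy to sketch. The following toy loop, with an invented percept, action set, and utility function (none of them from the lecture), shows the skeleton:

```python
# A minimal sketch of a rational agent: given a percept, choose the
# action with the highest expected utility. All specifics are invented.

def rational_action(percept, actions, expected_utility):
    """Return the action that maximizes expected utility for this percept."""
    return max(actions, key=lambda a: expected_utility(a, percept))

# Hypothetical example: a thermostat-like agent keeping a room at 21 degrees.
def eu(action, temperature):
    target = 21.0  # assumed setpoint
    effect = {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
    return -abs(target - (temperature + effect))

print(rational_action(18.0, ["heat", "cool", "idle"], eu))  # heat
```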

1.4. Acting rationally: The rational agent approach
Knowledge representation and reasoning enable agents to reach good decisions.
We need to be able to generate comprehensible sentences in natural language to get by in a complex society.
We need learning not only for erudition (the quality of having or showing great knowledge or learning), but also because it improves our ability to generate effective behavior.
The rational-agent approach has two advantages over the other approaches.
First, it is more general than the "laws of thought" approach because correct inference is just one of several possible mechanisms for achieving rationality.
Second, it is more amenable (capable of being acted upon in a particular way) to scientific development than are approaches based on human behavior or human thought.
The standard of rationality is mathematically well defined and completely general, and can be unpacked to generate agent designs that provably achieve it.

2. THE FOUNDATIONS OF ARTIFICIAL
INTELLIGENCE
We will study a brief history of the disciplines
that contributed ideas, viewpoints, and
techniques to AI.

2.1. Philosophy
Can formal rules be used to draw valid conclusions?

Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system of syllogisms for proper reasoning, which in principle allowed one to generate conclusions mechanically, given initial premises.
Later, Ramon Lull (d. 1315) proposed that useful reasoning could be carried out mechanically, and Thomas Hobbes (1588-1679) likened reasoning to numerical computation. Around 1500, Leonardo da Vinci (1452-1519) designed but did not build a mechanical calculator; recent reconstructions have shown the design to be functional.
The first known calculating machine was constructed around 1623 by the German scientist Wilhelm Schickard (1592-1635), although the Pascaline, built in 1642 by Blaise Pascal (1623-1662), is more famous.

2.1. Philosophy
Gottfried Wilhelm Leibniz (1646-1716) built a mechanical device intended to carry out operations on concepts rather than numbers, but its scope was rather limited. Leibniz did surpass Pascal by building a calculator that could add, subtract, multiply, and take roots, whereas the Pascaline could only add and subtract.
Some speculated that machines might not just do calculations but actually be able to think and act on their own.
This led to another discussion: are machines simply matter, or can they have minds?

How does the mind arise from a physical brain?

It's one thing to say that the mind operates, at least in part, according to logical rules, and to build physical systems that emulate some of those rules; it's another to say that the mind itself is such a physical system.
Rene Descartes (1596-1650) was a strong advocate of the power of reasoning in understanding the world, a philosophy now called rationalism, and one that counts Aristotle and Leibniz as members.
Descartes was also a proponent of dualism. He held that there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws. Animals, on the other hand, did not possess this dual quality; they could be treated as machines.

2.1. Philosophy
An alternative to dualism is materialism, which holds that the brain's operation according to the laws of physics constitutes the mind.
Given a physical mind that manipulates knowledge, the next problem is to establish the source of knowledge.

Where does knowledge come from?

The empiricism movement, starting with Francis Bacon (1561-1626), is characterized by a dictum (formal announcement) of John Locke (1632-1704): "Nothing is in the understanding, which was not first in the senses."
David Hume (1711-1776) proposed the principle of induction: that general rules are acquired by exposure to repeated associations between their elements.
Ludwig Wittgenstein (1889-1951), Bertrand Russell (1872-1970), and Rudolf Carnap (1891-1970) developed the doctrine of logical positivism. This doctrine holds that all knowledge can be characterized by logical theories connected to observation sentences that correspond to sensory inputs.

2.1. Philosophy
The confirmation theory of Carnap and Carl Hempel (1905-1997) attempted to analyze the acquisition of knowledge from experience. It defined an explicit computational procedure for extracting knowledge from elementary experiences and was probably the first theory of mind as a computational process.
The final element in the philosophical picture of the mind is the connection between knowledge and action.

How does knowledge lead to action?

This question is vital to AI because intelligence requires action as well as reasoning. Moreover, only by understanding how actions are justified can we understand how to build an agent whose actions are justifiable (or rational).
Aristotle argued (in De Motu Animalium) that actions are justified by a logical connection between goals and knowledge of the action's outcome.
Aristotle's algorithm was implemented 2300 years later by Newell and Simon in their GPS program. We would now call it a regression planning system (see Chapter 10).

2.2 Mathematics
What are the formal rules to draw valid conclusions? (Seen in 2.1.)
Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal science required a level of mathematical formalization in three fundamental areas: logic, computation, and probability.
The idea of formal logic can be traced back to the philosophers of ancient Greece, but its mathematical development really began with the work of George Boole (1815-1864), who worked out the details of propositional, or Boolean, logic.
In 1879, Gottlob Frege (1848-1925) extended Boole's logic to include objects and relations, creating the first-order logic that is used today.
Alfred Tarski (1902-1983) introduced a theory of reference that shows how to relate the objects in a logic to objects in the real world.

2.2 Mathematics
What can be computed?
The next step was to determine the limits of what could be done with logic and computation.
The first nontrivial (significant) algorithm is thought to be Euclid's algorithm for computing greatest common divisors.
The word algorithm (and the idea of studying them) comes from al-Khowarazmi, a Persian mathematician of the 9th century, whose writings also introduced Arabic numerals and algebra to Europe.
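Euclid's algorithm itself is short enough to state in full; a sketch in Python (the function name and test values are ours):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: gcd(a, b) is unchanged when the larger
    argument is replaced by its remainder modulo the smaller."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```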
Boole and others discussed algorithms for logical deduction.
In 1930, Kurt Gödel (1906-1978) showed that first-order logic could not capture the principle of mathematical induction needed to characterize the natural numbers.
In 1931, Gödel showed that limits on deduction do exist: he presented the incompleteness theorem, according to which there are true statements that are undecidable in the sense that they have no proof within the theory.
Alan Turing (1912-1954) tried to characterize exactly which functions are computable and which are not.
The Church-Turing thesis states that the Turing machine (Turing, 1936) is capable of computing any computable function; moreover, there are some functions that no Turing machine can compute.

2.2 Mathematics
A problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances.
The distinction between polynomial (n^k) and exponential (2^n) growth in complexity was first emphasized in the mid-1960s (Cobham, 1964; Edmonds, 1965).
Therefore, one should strive to divide the overall problem of generating intelligent behavior into tractable subproblems rather than intractable ones.
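The gap between the two growth rates shows up quickly in plain numbers; a quick illustration (the degree k and the input sizes are arbitrary choices):

```python
# Compare polynomial (n^k) and exponential (2^n) growth.
k = 3  # an arbitrary polynomial degree
for n in (10, 20, 40, 80):
    print(f"n={n:3d}  n^{k}={n**k:>9,d}  2^n={2**n:,d}")
# By n=80, n^3 is 512,000 while 2^n already has 25 digits:
# exponential-time algorithms become hopeless long before inputs get big.
```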
How can one recognize an intractable problem? The theory of NP-completeness, pioneered by Steven Cook (1971) and Richard Karp (1972), provides a method.
Although it has not been proved that NP-complete problems are necessarily intractable, most theoreticians believe it.
Work in AI has helped explain why some instances of NP-complete problems are hard, yet others are easy (Cheeseman et al., 1991).

2.2 Mathematics
How do we reason with uncertain information?
Besides logic and computation, the third great contribution of mathematics to AI is the theory of probability. The Italian Gerolamo Cardano (1501-1576) first framed the idea of probability, describing it in terms of the possible outcomes of gambling events.
Probability quickly became an invaluable part of all the quantitative sciences, helping to deal with uncertain measurements and incomplete theories.
James Bernoulli (1654-1705), Pierre Laplace (1749-1827), and others advanced the theory and introduced new statistical methods.
Thomas Bayes (1702-1761), who appears on the front cover of the textbook, proposed a rule for updating probabilities in the light of new evidence.
Bayes' rule underlies most modern approaches to uncertain reasoning in AI systems.
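Bayes' rule says P(H|E) = P(E|H) P(H) / P(E). A small worked sketch with made-up numbers (the diagnostic scenario and every probability below are assumptions for illustration only):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
# with P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).

p_h = 0.01              # prior: 1% of patients have the condition (assumed)
p_e_given_h = 0.90      # test sensitivity (assumed)
p_e_given_not_h = 0.05  # false-positive rate (assumed)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(condition | positive test) = {p_h_given_e:.3f}")  # ~0.154
```

Even a fairly accurate test yields a posterior of only about 15% here, because the prior is so low; this is exactly the kind of updating on evidence that the rule formalizes.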

2.3 Economics
How should we make decisions so as to maximize payoff? (Read it yourself.)
While the ancient Greeks and others had made contributions to economic thought, Adam Smith was the first to treat it as a science, using the idea that economies can be thought of as consisting of individual agents maximizing their own economic well-being.
Decision theory, which combines probability theory with utility theory (preferred outcomes like a dollar or a hamburger), provides a formal and complete framework for decisions (economic or otherwise) made under uncertainty, that is, in cases where probabilistic descriptions appropriately capture the decision maker's environment.
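Decision theory's central prescription is to choose the action that maximizes expected utility, EU(a) = sum over outcomes of P(outcome | a) * U(outcome). A minimal sketch (the actions, probabilities, and utilities are invented):

```python
# Maximum expected utility: pick the action a maximizing
#   EU(a) = sum_s P(s | a) * U(s).

actions = {
    "safe_bet":  [(1.0, 10.0)],                # certain modest payoff
    "risky_bet": [(0.2, 100.0), (0.8, -5.0)],  # small chance of a big win
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # risky_bet 16.0
```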
How should we do this when others may not go along?
Decision theory is suitable for large economies where each agent need pay no attention to the actions of other agents as individuals.
How should we do this when the payoff may be far in the future?
This topic was pursued in the field of operations research. The work of Richard Bellman (1957) formalized a class of sequential decision problems called Markov decision processes, which we will study in Chapters 17 and 21.

2.4 Neuroscience
How do brains process information?
Neuroscience is the study of the nervous system, particularly the brain. The exact way in which the brain enables thought is one of the great mysteries of science.
Paul Broca's (1824-1880) study of aphasia (speech deficit) showed that speech production was localized to the portion of the left hemisphere now called Broca's area.
By that time, it was known that the brain consisted of nerve cells, or neurons, but it was not until 1873 that Camillo Golgi (1843-1926) developed a staining (marking or discolouring) technique allowing the observation of individual neurons in the brain (see Figure 1.2 in the textbook).
Nicolas Rashevsky (1936, 1938) was the first to apply mathematical models to the study of the nervous system.

2.4 Neuroscience
We now have some data on the mapping between areas of the brain and the parts of the body that they control or from which they receive sensory input.
However, we do not fully understand how other areas can take over functions when one area is damaged. There is almost no theory on how an individual memory is stored.
The measurement of intact (undamaged) brain activity began in 1929 with the invention by Hans Berger of the electroencephalograph (EEG). The recent development of functional magnetic resonance imaging (fMRI) (Ogawa et al., 1990; Cabeza and Nyberg, 2001) is giving neuroscientists unprecedentedly detailed images of brain activity.
Individual neurons can be stimulated electrically, chemically, or even optically (Han and Boyden, 2007), allowing neuronal input-output relationships to be mapped. Despite these advances, we are still a long way from understanding how cognitive processes (mental actions or processes of acquiring knowledge and understanding) actually work.
Brains and digital computers have somewhat different properties. Figure 1.3 in the textbook shows that computers have a cycle time that is a million times faster than a brain.

2.5 Psychology
How do humans and animals think and act?
Psychology is about the behavior and cognitive processes of humans and animals. (Read it yourself.)

2.6 Computer engineering
How can we build an efficient computer?
For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice.
Evolution of the computer and its software tools: read it yourself.

2.7 Control theory and cybernetics
How can artifacts operate under their own control?
Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock with a regulator that maintained a constant flow rate. This invention changed the definition of what an artifact could do.
Previously, only living things could modify their behavior in response to changes in the environment.
Other examples of self-regulating feedback control systems include the steam engine governor, created by James Watt (1736-1819), and the thermostat, invented by Cornelis Drebbel (1572-1633), who also invented the submarine. The mathematical theory of stable feedback systems was developed in the 19th century.
The central figure in the creation of what is now called control theory was Norbert Wiener (1894-1964).
Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs, and how their behavior is modified by feedback.
Cybernetics is the science of communications and automatic control systems in both machines and living things.

2.7 Control theory and cybernetics
Ashby's Design for a Brain (1948, 1952) elaborated the idea that intelligence could be created by the use of homeostatic devices containing appropriate feedback loops to achieve stable adaptive behavior.
Homeostatic: tending towards a relatively stable equilibrium between interdependent elements.
Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time. This roughly matches our view of AI: designing systems that behave optimally.
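To make the feedback idea concrete, here is a toy proportional controller nudging a system toward a setpoint; the plant model, gain, and setpoint are invented for illustration, and real control design involves far more:

```python
# Proportional feedback control: the control signal is proportional to
# the error between the desired setpoint and the measured output.

setpoint = 21.0    # desired value (assumed)
gain = 0.5         # proportional gain (assumed)
temperature = 15.0

for step in range(8):
    error = setpoint - temperature
    temperature += gain * error   # toy plant: responds directly to control
    print(f"step {step}: temperature = {temperature:.2f}")
# The error halves each step, so the system settles near the setpoint:
# this is the stable feedback behavior the theory characterizes.
```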

2.8 Linguistics
How does language relate to thought?
Behaviorist theory did not address the notion of creativity in language: it did not explain how a child could understand and make up sentences that he or she had never heard before.
Chomsky's theory, based on syntactic models going back to the Indian linguist Panini (c. 350 B.C.), could explain this, and unlike previous theories, it was formal enough that it could in principle be programmed.
Modern linguistics and AI, then, were born at about the same time, and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing.
Understanding language requires an understanding of the subject matter and context, not just an understanding of the structure of sentences.
Much of the early work in knowledge representation (the study of how to put knowledge into a form that a computer can reason with) was tied to language and informed by research in linguistics.

3. The History of Artificial Intelligence
3.1. The gestation of artificial intelligence (1943-1955)
The first work that is now generally recognized as AI was done by Warren McCulloch and Walter Pitts (1943). They drew on three sources: knowledge of the basic physiology and function of neurons in the brain; a formal analysis of propositional logic due to Russell and Whitehead; and Turing's theory of computation.
They showed, for example, that any computable function could be computed by some network of connected neurons, and that all the logical connectives (and, or, not, etc.) could be implemented by simple net structures.
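The construction is easy to sketch: a McCulloch-Pitts-style unit outputs 1 when the weighted sum of its binary inputs reaches a threshold, and suitable weights and thresholds give the connectives. The particular weights below are conventional textbook choices, not taken from the 1943 paper:

```python
# A threshold unit in the McCulloch-Pitts style:
# output 1 iff the weighted sum of binary inputs meets the threshold.

def unit(weights, threshold, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

AND = lambda a, b: unit([1, 1], 2, [a, b])
OR  = lambda a, b: unit([1, 1], 1, [a, b])
NOT = lambda a:    unit([-1], 0, [a])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
# AND fires only on (1,1); OR fires on any 1; NOT inverts its input.
```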
After that, Donald Hebb (1949) demonstrated a simple updating rule for modifying the connection strengths between neurons. His rule, now called Hebbian learning, remains an influential model to this day.
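Hebb's rule is often paraphrased as "neurons that fire together wire together": a connection strengthens in proportion to the product of the activities at its two ends, delta_w = eta * x * y. A minimal sketch (the learning rate and activity stream are arbitrary illustrations):

```python
# Hebbian learning: strengthen a connection when both of its ends
# are active together (delta_w = eta * x * y).

eta = 0.1  # learning rate (assumed)
w = 0.0    # initial connection strength

# Made-up stream of (pre-synaptic, post-synaptic) binary activities.
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for x, y in activity:
    w += eta * x * y   # the weight grows only when both ends fire
print(f"final weight: {w:.1f}")  # 0.3
```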
Two undergraduate students at Harvard, Marvin Minsky and Dean Edmonds, built the first neural network computer in 1950. The SNARC, as it was called, used 3000 vacuum tubes and a surplus automatic pilot mechanism from a B-24 bomber to simulate a network of 40 neurons.
There were a number of early examples of work that can be characterized as AI, but Alan Turing's vision was perhaps the most influential. He proposed the Child Programme idea: "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?"
3.2. The birth of artificial intelligence (1956)
Princeton was home to another influential figure in AI, John McCarthy. After receiving his PhD there in 1951 and working for two years as an instructor, McCarthy moved to Stanford and then to Dartmouth College, which was to become the official birthplace of the field.
For the next 20 years, the field would be dominated by a small group of people and their students and colleagues at MIT, CMU, Stanford, and IBM.
AI from the start embraced the idea of duplicating human faculties such as creativity, self-improvement, and language use. None of the other fields were addressing these issues.
AI is the only one of these fields that is clearly a branch of computer science, and AI is the only field to attempt to build machines that will function autonomously in complex, changing environments.

3.3. Early enthusiasm, great expectations (1952-1969)
Newell and Simon's early success was followed up with the General Problem Solver, or GPS.
The success of GPS and subsequent programs as models of cognition led Newell and Simon (1976) to formulate the famous physical symbol system hypothesis, which states that "a physical symbol system has the necessary and sufficient means for general intelligent action."
At IBM, Nathaniel Rochester and his colleagues produced some of the first AI programs. Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which was able to prove theorems that many students of mathematics would find quite tricky.
Starting in 1952, Arthur Samuel wrote a series of programs for checkers (draughts) that eventually learned to play at a strong amateur level. Along the way, he disproved the idea that computers can do only what they are told to: his program quickly learned to play a better game than its creator.
McCarthy defined the high-level language Lisp, which was to become the dominant AI programming language for the next 30 years.

3.3. Early enthusiasm, great expectations (1952-1969)
Also in 1958, McCarthy published a paper entitled "Programs with Common Sense," in which he described the Advice Taker, a hypothetical program that can be seen as the first complete AI system.
His plan to use logic to build the ultimate Advice Taker was advanced by J. A. Robinson's discovery in 1965 of the resolution method (a complete theorem-proving algorithm for first-order logic; see Chapter 9).
Minsky supervised a series of students who chose limited problems called microworlds. James Slagle's SAINT program (1963) was able to solve closed-form calculus integration problems typical of first-year college courses. Tom Evans's ANALOGY program (1968) solved geometric analogy problems that appear in IQ tests. Daniel Bobrow's STUDENT program (1967) solved algebra story problems.
The most famous microworld was the blocks world (SHRDLU), which consists of a set of solid blocks placed on a tabletop (or more often, a simulation of a tabletop), as discussed on the next slide.

3.3. Early enthusiasm, great expectations (1952-1969)
The dialog that was used as a SHRDLU demo:
Person: PICK UP A BIG RED BLOCK.
Computer: OK. (does it)
Person: GRASP THE PYRAMID.
Computer: I DON'T UNDERSTAND WHICH PYRAMID YOU MEAN.
Person: FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU ARE HOLDING AND PUT IT INTO THE BOX.
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
Computer: OK. (does it)
Person: WHAT DOES THE BOX CONTAIN?
Computer: THE BLUE PYRAMID AND THE BLUE BLOCK.
And so on

3.4. A dose of reality (1966-1973)
From the beginning, AI researchers were not shy about making predictions of their coming successes. The following statement by Herbert Simon in 1957 is often quoted:
"It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied."
For the rest of the literature, read it yourself.

Remaining
3.5 Knowledge-based systems: The key to power? (1969-1979)
3.6 AI becomes an industry (1980-present)
3.7 The return of neural networks (1986-present)
3.8 AI adopts the scientific method (1987-present)
3.9 The emergence of intelligent agents (1995-present)
3.10 The availability of very large data sets (2001-present)

THE STATE OF THE ART
What can AI do today? A concise answer is difficult because there are so many activities in so many subfields. Here we sample a few applications:
Understanding and processing natural languages:
Robotic vehicles:
Speech recognition:
Autonomous planning and scheduling:
Game playing:
Spam fighting:
Logistics planning:
Robotics:
Machine translation:

Assignment No.1

