
Artificial Intelligence

Problem Solving and Searching


Institute of Information and Communication Technology
University of Sindh, Jamshoro

Dr. Zeeshan Bhatti


BSSW-PIV
Chapter 3
By: Dr. Zeeshan Bhatti

Last time: Summary


Definition of AI?
Turing Test?
Intelligent Agents:
Anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through its effectors to
maximize progress towards its goals.
PAGE (Percepts, Actions, Goals, Environment)
Described as a Perception (sequence) to Action mapping: f : P* → A
Using look-up-table, closed form, etc.

Agent Types: Reflex, state-based, goal-based, utility-based


Rational Action: The action that maximizes the expected value of
the performance measure given the percept sequence to date

Outline: Problem solving and search


Introduction to Problem Solving
Complexity
Uninformed search
Problem formulation
Search strategies: depth-first, breadth-first

Informed search
Search strategies: best-first, A*
Heuristic functions

Example: Measuring problem!

[Figure: three buckets with capacities 3l, 5l, and 9l]

Problem: Using these three buckets, measure 7 liters of water.

Example: Measuring problem!

(one possible) Solution, with a = 3l bucket, b = 5l bucket, c = 9l bucket:

         a   b   c
start    0   0   0
         3   0   0   (fill a)
         0   0   3   (pour a into c)
         3   0   3   (fill a)
         0   0   6   (pour a into c)
         3   0   6   (fill a)
         0   3   6   (pour a into b)
         3   3   6   (fill a)
         1   5   6   (pour a into b until b is full)
goal     0   5   7   (pour a into c)
Example: Measuring problem!

Another solution, with the same buckets (a = 3l, b = 5l, c = 9l):

         a   b   c
start    0   0   0
         0   5   0   (fill b)
         3   2   0   (pour b into a)
         3   0   2   (pour b into c)
         3   5   2   (fill b)
goal     3   0   7   (pour b into c)

Which solution do we prefer?

Solution 1 (9 steps):

         a   b   c
start    0   0   0
         3   0   0
         0   0   3
         3   0   3
         0   0   6
         3   0   6
         0   3   6
         3   3   6
         1   5   6
goal     0   5   7

Solution 2 (5 steps):

         a   b   c
start    0   0   0
         0   5   0
         3   2   0
         3   0   2
         3   5   2
goal     3   0   7

Solution 2 reaches the goal in fewer steps, i.e., with a lower path cost.


Problem Solving Agents


Intelligent agents are supposed to maximize their performance measure.
Achieving this is sometimes simplified if the agent can adopt a goal and aim at satisfying it.
Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.


Example: Romania

[Map of Romania with cities and road distances]

Problem-Solving Agent
Restricted form of general agent:

function SIMPLE-PROBLEM-SOLVING-AGENT(p) returns an action
    state ← UPDATE-STATE(state, p)               // What is the current state?
    if seq is empty then
        goal ← FORMULATE-GOAL(state)             // e.g., from LA to San Diego (given curr. state)
        problem ← FORMULATE-PROBLEM(state, goal) // e.g., gas usage as path cost
        seq ← SEARCH(problem)
    action ← RECOMMENDATION(seq, state)
    seq ← REMAINDER(seq, state)                  // if it fails to reach the goal, update
    return action

Note: This is offline problem-solving. Online problem-solving involves acting without complete knowledge of the problem and environment.

Example: Buckets
Measure 7 liters of water using a 3-liter, a 5-liter, and a 9-liter bucket.
Formulate goal: have 7 liters of water in the 9-liter bucket
Formulate problem:
  States: amount of water in each bucket
  Operators: fill a bucket from the source, empty a bucket, pour one bucket into another
Find solution: a sequence of operators that brings you from the current state to the goal state
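As a concrete illustration, here is a minimal Python sketch of this formulation, solved by breadth-first search over (a, b, c) water levels. The constant and function names are illustrative, not from the slides:

from collections import deque

CAPACITY = (3, 5, 9)   # bucket capacities: a, b, c
START = (0, 0, 0)

def successors(state):
    """All states reachable by one fill, empty, or pour operation."""
    n = len(state)
    for i in range(n):
        filled = list(state); filled[i] = CAPACITY[i]
        yield tuple(filled)                        # fill bucket i from the source
        emptied = list(state); emptied[i] = 0
        yield tuple(emptied)                       # empty bucket i
        for j in range(n):                         # pour bucket i into bucket j
            if i != j:
                poured = list(state)
                amount = min(poured[i], CAPACITY[j] - poured[j])
                poured[i] -= amount; poured[j] += amount
                yield tuple(poured)

def solve(goal_test):
    """Breadth-first search: returns a shortest sequence of states to a goal."""
    fringe, visited = deque([[START]]), {START}
    while fringe:
        path = fringe.popleft()
        if goal_test(path[-1]):
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                fringe.append(path + [nxt])
    return None

print(solve(lambda s: s[2] == 7))   # 7 liters in the 9-liter bucket

Because the fringe is first-in-first-out, the first solution found uses the fewest operators, which is exactly the sense in which Solution 2 above is preferable to Solution 1.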

Problem types

Single-state problem: deterministic, accessible
  Agent knows everything about the world, thus can calculate the optimal action sequence to reach the goal state.

Multiple-state problem: deterministic, inaccessible
  Agent must reason about sequences of actions and states assumed while working towards the goal state.

Contingency problem: nondeterministic, inaccessible
  Must use sensors during execution.
  Solution is a tree or policy.
  Often interleave search and execution.

Exploration problem: unknown state space
  Discover and learn about the environment while taking actions.

Problem types

Single-state problem: deterministic, accessible
  Agent knows everything about the world (the exact state),
  and can calculate the optimal action sequence to reach the goal state.
  E.g., playing chess: any action will result in an exact state.

Problem types

Multiple-state problem: deterministic, inaccessible
  Agent does not know the exact state (could be in any of the possible states).
  May not have sensors at all.
  Assumes states while working towards the goal state.
  E.g., walking in a dark room:
    If you are at the door, going straight will lead you to the kitchen.
    If you are at the kitchen, turning left leads you to the bedroom.

Problem types

Contingency problem: nondeterministic, inaccessible
  Must use sensors during execution.
  Solution is a tree or policy.
  Often interleave search and execution.
  E.g., a new skater in an arena:
    a sliding problem, with many skaters around.

Problem types

Exploration problem: unknown state space
  Discover and learn about the environment while taking actions.
  E.g., a maze.

Example: Vacuum world

Simplified world: 2 locations, each of which may or may not contain dirt,
and each of which may or may not contain the vacuuming agent.
Goal of agent: clean up the dirt.

Example: vacuum world

[Figure: the eight vacuum-world states, numbered 1-8]

Single-state, start in #5.
Solution?


Example: vacuum world


Single-state, start in #5.
Solution? [Right, Suck]

Multiple-state or sensorless, start in {1,2,3,4,5,6,7,8};
e.g., Right goes to {2,4,6,8}.
Solution?


Example: vacuum world

Sensorless, start in {1,2,3,4,5,6,7,8};
e.g., Right goes to {2,4,6,8}.
Solution? [Right, Suck, Left, Suck]

Contingency:
Nondeterministic: Suck may dirty a clean carpet.
Partially observable: location, dirt at current location.
Percept: [L, Clean], i.e., start in #5 or #7.
Solution? [Right, if dirt then Suck]

Single-state problem formulation

A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. actions or successor function S(x) = set of action-state pairs
   e.g., S(Arad) = {<Arad → Zerind, Zerind>, ...}
3. goal test, which can be
   explicit, e.g., x = "at Bucharest", or
   implicit, e.g., Checkmate(x)
4. path cost (additive)
   e.g., sum of distances, number of actions executed, etc.
   c(x, a, y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions leading from the initial state to a goal state.
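These four items map naturally onto a small interface. A minimal Python sketch (class and method names are illustrative, not from the slides), which the search sketches later in this chapter assume:

class Problem:
    """A problem: initial state, successor function, goal test, step cost."""
    def __init__(self, initial):
        self.initial = initial                 # 1. initial state, e.g., "at Arad"

    def successors(self, state):
        """2. S(x): return a list of (action, next_state) pairs."""
        raise NotImplementedError

    def goal_test(self, state):
        """3. Explicit (state == goal) or implicit (a predicate) test."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """4. c(x, a, y) >= 0; the path cost is the sum of step costs."""
        return 1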

Selecting a state space

The real world is absurdly complex, so the state space must be abstracted for problem solving.

(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
  e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
  For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind".
(Abstract) solution = set of real paths that are solutions in the real world

Each abstract action should be "easier" than the original problem.

Vacuum world state space graph

states?
actions?
goal test?
path cost?


Vacuum world state space graph

states? integer dirt and robot location
actions? Left, Right, Suck
goal test? no dirt at all locations
path cost? 1 per action
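A sketch of this formulation in Python, encoding a state as (robot location, dirt at A, dirt at B) with locations A = 0 and B = 1; the encoding is an assumption for illustration:

def vacuum_successors(state):
    """Actions Left, Right, Suck in the two-location vacuum world."""
    loc, dirt_a, dirt_b = state
    cleaned = [dirt_a, dirt_b]
    cleaned[loc] = 0                     # Suck removes dirt at the current square
    return [('Left',  (0, dirt_a, dirt_b)),
            ('Right', (1, dirt_a, dirt_b)),
            ('Suck',  (loc, cleaned[0], cleaned[1]))]

def vacuum_goal_test(state):
    return state[1] == 0 and state[2] == 0   # no dirt at all locations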

Example: The 8-puzzle

states?
actions?
goal test?
path cost?

Example: The 8-puzzle

states? locations of tiles
actions? move blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move

[Note: optimal solution of the n-puzzle family is NP-hard]
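The same formulation for the 8-puzzle, sketched in Python with the board stored as a 9-tuple and 0 standing for the blank (an illustrative encoding, not from the slides):

def puzzle_successors(state):
    """Actions move the blank left, right, up, or down on a 3x3 board."""
    i = state.index(0)                    # position of the blank
    row, col = divmod(i, 3)
    result = []
    for action, dr, dc in (('left', 0, -1), ('right', 0, 1),
                           ('up', -1, 0), ('down', 1, 0)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:     # stay on the board
            j = 3 * r + c
            board = list(state)
            board[i], board[j] = board[j], board[i]   # slide the tile into the blank
            result.append((action, tuple(board)))
    return result

def puzzle_goal_test(state, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    return state == goal                  # goal test: equal to the given goal state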

Example: robotic assembly

states? real-valued coordinates of robot joint angles and of the parts of the object to be assembled
actions? continuous motions of robot joints
goal test? complete assembly
path cost? time to execute

Tree search algorithms

Basic idea:
offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)

Example: Romania
On vacation in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest.
Formulate goal:
  be in Bucharest
Formulate problem:
  states: various cities
  operators: drive between cities
Find solution:
  a sequence of cities such that the total driving distance is minimized.
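A fragment of this formulation in Python; the distances are the standard values from the AIMA Romania map, and only a few roads are shown:

ROADS = {                                  # road distances in km (partial map)
    'Arad':           {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu':          {'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras':        {'Bucharest': 211},
    'Rimnicu Vilcea': {'Pitesti': 97},
    'Pitesti':        {'Bucharest': 101},
}

def drive_successors(city):
    """Operator: drive from the current city to any neighboring city."""
    return [('go ' + nxt, nxt) for nxt in ROADS.get(city, {})]

def in_bucharest(city):
    return city == 'Bucharest'             # goal test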

Example: Traveling from Arad to Bucharest

Tree search example

[Figure: successive expansions of the search tree for the Arad-to-Bucharest problem]

Implementation: general tree search

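A Python sketch of the generic tree-search loop this slide refers to; the pop argument decides which fringe node is expanded next, and that single choice defines the search strategy (all names are illustrative):

from collections import deque

def tree_search(problem, pop):
    """Offline, simulated exploration: repeatedly expand fringe nodes."""
    fringe = deque([[problem.initial]])        # the fringe holds paths from the root
    while fringe:
        path = pop(fringe)                     # strategy = which node comes out
        if problem.goal_test(path[-1]):
            return path                        # solution found
        for _action, nxt in problem.successors(path[-1]):
            fringe.append(path + [nxt])        # expand: add successor paths
    return None                                # fringe exhausted: failure

With pop = deque.popleft the fringe behaves as a FIFO queue (breadth-first); with pop = deque.pop it behaves as a stack (depth-first).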

Implementation: states vs. nodes

A state is a (representation of) a physical configuration.
A node is a data structure constituting part of a search tree;
it includes state, parent node, action, path cost g(x), and depth.

The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
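A minimal Node sketch carrying exactly the fields the slide lists, with expand mirroring the description above (illustrative code, not from the slides):

class Node:
    """Search-tree node: a state plus bookkeeping for the tree around it."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state                      # the physical configuration
        self.parent = parent                    # node this one was expanded from
        self.action = action                    # action that led here
        self.path_cost = path_cost              # g(x): cost from the root
        self.depth = 0 if parent is None else parent.depth + 1

    def expand(self, problem):
        """Create child nodes via the problem's successor function."""
        return [Node(s, self, a,
                     self.path_cost + problem.step_cost(self.state, a, s))
                for a, s in problem.successors(self.state)]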

Search strategies

A search strategy is defined by picking the order of node expansion.
Strategies are evaluated along the following dimensions:

  completeness: does it always find a solution if one exists?
  time complexity: number of nodes generated
  space complexity: maximum number of nodes in memory
  optimality: does it always find a least-cost solution?

Time and space complexity are measured in terms of
  b: maximum branching factor of the search tree
  d: depth of the least-cost solution
  m: maximum depth of the state space (may be ∞)

Binary Tree Example

[Figure: binary tree with the root at depth 0 and nodes N1-N6 at depths 1 and 2]

Number of nodes: n = 2^(max depth)
Number of levels (max depth) = log(n) (could be n)

Uninformed search strategies

Uninformed search strategies use only the information available in the problem definition:

  Breadth-first search
  Uniform-cost search
  Depth-first search
  Depth-limited search
  Iterative deepening search

Breadth-first search
Expand shallowest unexpanded node

Implementation:
fringe is a FIFO queue, i.e., new successors go at end

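A standalone Python sketch of this strategy, assuming the illustrative problem interface from earlier (not code from the slides):

from collections import deque

def breadth_first_search(problem):
    """Expand the shallowest unexpanded node: the fringe is a FIFO queue."""
    fringe = deque([[problem.initial]])        # fringe of paths
    while fringe:
        path = fringe.popleft()                # shallowest path comes off the front
        if problem.goal_test(path[-1]):
            return path
        for _action, nxt in problem.successors(path[-1]):
            fringe.append(path + [nxt])        # new successors go at the end
    return None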


Properties of breadth-first search

Complete? Yes (if b is finite)

Time? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1))

Space? O(b^(d+1)) (keeps every node in memory)

Optimal? Yes (if cost = 1 per step)

Space is the bigger problem (more than time).

Uniform-cost search
Expand least-cost unexpanded node

Implementation:
fringe = queue ordered by path cost

Equivalent to breadth-first if step costs are all equal

Complete? Yes, if step cost ≥ ε

Time? # of nodes with g ≤ cost of optimal solution,
  O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution

Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)

Optimal? Yes: nodes are expanded in increasing order of g(n)
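A sketch with the fringe as a priority queue keyed on path cost g; the tie-breaking counter only keeps the heap from ever comparing two paths (illustrative code, not from the slides):

import heapq
from itertools import count

def uniform_cost_search(problem):
    """Expand the least-cost unexpanded node: fringe ordered by path cost."""
    tie = count()
    fringe = [(0, next(tie), [problem.initial])]     # (g, tie-breaker, path)
    while fringe:
        g, _, path = heapq.heappop(fringe)           # cheapest path so far
        if problem.goal_test(path[-1]):
            return g, path
        for action, nxt in problem.successors(path[-1]):
            g2 = g + problem.step_cost(path[-1], action, nxt)
            heapq.heappush(fringe, (g2, next(tie), path + [nxt]))
    return None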

Depth-first search
Expand deepest unexpanded node

Implementation:
fringe = LIFO queue, i.e., put successors at front

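The same skeleton as breadth-first search with a LIFO fringe; only the end the fringe pops from changes (illustrative sketch):

def depth_first_search(problem):
    """Expand the deepest unexpanded node: the fringe is a stack."""
    fringe = [[problem.initial]]               # list used as a LIFO stack
    while fringe:
        path = fringe.pop()                    # deepest (most recent) path first
        if problem.goal_test(path[-1]):
            return path
        for _action, nxt in problem.successors(path[-1]):
            fringe.append(path + [nxt])        # successors go on top
    return None

On infinite or loopy state spaces this loop may never return, matching the properties on the next slide.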


Properties of depth-first search

Complete? No: fails in infinite-depth spaces and in spaces with loops.
  Modify to avoid repeated states along the path → complete in finite spaces.

Time? O(b^m): terrible if m is much larger than d,
  but if solutions are dense, may be much faster than breadth-first.

Space? O(bm), i.e., linear space!

Optimal? No

Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors

Recursive implementation:

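A recursive Python sketch of this idea; returning a distinct 'cutoff' marker separates "ran into the depth limit" from genuine failure (an illustrative design choice, not code from the slides):

def depth_limited_search(problem, limit):
    """Depth-first search where nodes at depth `limit` get no successors."""
    def recurse(state, depth):
        if problem.goal_test(state):
            return [state]
        if depth == limit:
            return 'cutoff'                        # limit reached; a goal may lie deeper
        cutoff_occurred = False
        for _action, nxt in problem.successors(state):
            result = recurse(nxt, depth + 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return [state] + result            # prepend on the way back up
        return 'cutoff' if cutoff_occurred else None   # None = genuine failure
    return recurse(problem.initial, 0)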

Iterative deepening search

[Figures: depth-limited search trees for limits l = 0, 1, 2, 3]
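Iterative deepening just reruns the depth-limited sketch above with growing limits; max_depth is an arbitrary illustrative bound:

def iterative_deepening_search(problem, max_depth=50):
    """Run depth-limited search with limits l = 0, 1, 2, ... until it succeeds."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, limit)
        if result not in (None, 'cutoff'):
            return result                     # found at the shallowest possible depth
    return None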

Iterative deepening search

Number of nodes generated in a depth-limited search to depth d with branching factor b:
  N_DLS = b^0 + b^1 + b^2 + ... + b^(d-2) + b^(d-1) + b^d

Number of nodes generated in an iterative deepening search to depth d with branching factor b:
  N_IDS = (d+1)b^0 + d*b^1 + (d-1)*b^2 + ... + 3b^(d-2) + 2b^(d-1) + 1*b^d

For b = 10, d = 5:
  N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456

Overhead = (123,456 - 111,111) / 111,111 ≈ 11%

Properties of iterative deepening search

Complete? Yes

Time? (d+1)b^0 + d*b^1 + (d-1)*b^2 + ... + b^d = O(b^d)

Space? O(bd)

Optimal? Yes, if step cost = 1

Summary of algorithms

Criterion    Breadth-first   Uniform-cost    Depth-first   Depth-limited   Iterative deepening
Complete?    Yes             Yes             No            No              Yes
Time         O(b^(d+1))      O(b^⌈C*/ε⌉)     O(b^m)        O(b^l)          O(b^d)
Space        O(b^(d+1))      O(b^⌈C*/ε⌉)     O(bm)         O(bl)           O(bd)
Optimal?     Yes             Yes             No            No              Yes

(Completeness and optimality of breadth-first and iterative deepening assume b finite and unit step costs; uniform-cost assumes step cost ≥ ε.)

Repeated states

Failure to detect repeated states can turn a linear problem into an exponential one!

Graph search

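The idea behind graph search, sketched in Python: tree search plus a closed set, so no state is ever expanded twice (illustrative code; pop chooses the strategy as before):

from collections import deque

def graph_search(problem, pop):
    """Tree search with a closed set of already-expanded states."""
    fringe, closed = deque([[problem.initial]]), set()
    while fringe:
        path = pop(fringe)
        state = path[-1]
        if problem.goal_test(state):
            return path
        if state not in closed:                # detect repeated states
            closed.add(state)
            for _action, nxt in problem.successors(state):
                fringe.append(path + [nxt])
    return None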

Summary

Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
There is a variety of uninformed search strategies.
Iterative deepening search uses only linear space and not much more time than other uninformed algorithms.


Complexity

Why worry about the complexity of algorithms?
  Because a problem may be solvable in principle but may take too long to solve in practice.

How can we evaluate the complexity of algorithms?
  Through asymptotic analysis, i.e., estimate the time (or number of operations) necessary to solve an instance of size n of a problem as n tends towards infinity.

Complexity example: Traveling Salesman Problem

There are n cities, with a road of length L_ij joining city i to city j.
The salesman wishes to find a way to visit all cities that is optimal in two ways:
  each city is visited only once, and
  the total route is as short as possible.
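A brute-force sketch that makes the blow-up concrete: it examines all (n-1)! tours, so it is already hopeless for a few dozen cities. Here L is assumed to be an n x n distance matrix with L[i][j] = L_ij (an illustrative representation):

from itertools import permutations

def tsp_brute_force(L):
    """Try every tour starting at city 0 and return the shortest one."""
    n = len(L)
    best_tour, best_length = None, float('inf')
    for rest in permutations(range(1, n)):         # (n-1)! candidate tours
        tour = (0,) + rest
        length = sum(L[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_length:
            best_tour, best_length = tour, length
    return best_tour, best_length

For n = 20 this already means 19! ≈ 1.2 × 10^17 candidate tours, far beyond the 10^12 operations/second quoted below.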

Complexity example: Traveling Salesman Problem

This is a hard problem: the only known algorithms (so far) to solve it have exponential complexity, that is, the number of operations required to solve it grows as exp(n) for n cities.

Why is exponential complexity hard?

It means that the number of operations necessary to compute the exact solution of the problem grows exponentially with the size of the problem (here, the number of cities):

  exp(1)       = 2.72
  exp(10)      = 2.20 × 10^4      (daily salesman trip)
  exp(100)     = 2.69 × 10^43     (monthly salesman planning)
  exp(500)     = 1.40 × 10^217    (music band worldwide tour)
  exp(250,000) = 10^108,573       (FedEx, postal services)

Fastest computer = 10^12 operations/second

So...

In general, exponential-complexity problems cannot be solved for any but the smallest instances!
