
The Monkey and Bananas Problem
- A monkey is in a cage, and bananas are suspended from the ceiling; the monkey wants to eat a banana but cannot reach them.
  - In the room are a chair and a stick.
  - If the monkey stands on the chair and waves the stick, he can knock a banana down to eat it.
  - What are the actions the monkey should take?

Initial state:
  monkey on ground
  with empty hand
  bananas suspended

Goal state:
  monkey eating

Actions:
  climb chair / get off
  grab X
  wave X
  eat X
 
- Given a problem expressed as a state space (whether explicitly or implicitly), with operators/actions, an initial state, and a goal state, how do we find the sequence of operators needed to solve the problem?
  - This requires search.
- Formally, we define a search space as [N, A, S, GD]:
  - N = the set of nodes or states of a graph
  - A = the set of arcs (edges) between nodes that correspond to the steps in the problem (the legal actions or operators)
  - S = a nonempty subset of N that represents start states
  - GD = a nonempty subset of N that represents goal states
- Our problem becomes one of traversing the graph from a node in S to a node in GD.
  - We can use any of the numerous graph traversal techniques for this, but in general they divide into two categories:
    - brute force: unguided search
    - heuristic: guided search
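The [N, A, S, GD] formulation maps directly onto a small graph-search routine. A minimal breadth-first sketch (the graph, start set, and goal set below are invented for illustration, not taken from the slides):

```python
from collections import deque

def search(N, A, S, GD):
    """Breadth-first traversal from any start state in S to any goal in GD.

    N: set of nodes; A: dict mapping a node to its successor nodes (the arcs);
    S: set of start states; GD: set of goal states.
    Returns a path (list of nodes) or None if no goal is reachable.
    """
    frontier = deque([s] for s in S)   # each queue entry is a path
    visited = set(S)
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node in GD:
            return path
        for succ in A.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

# Hypothetical example graph
A_arcs = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["g"]}
print(search({"a", "b", "c", "d", "g"}, A_arcs, {"a"}, {"g"}))
```

Because the traversal is breadth-first, the path returned is one with the fewest arcs; a depth-first variant would simply pop from the right end of the queue.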

   
  
Brute Force vs. Heuristic Search

- As shown a few slides back, the 8-puzzle has over 40,000 different states.
  - What about the 15-puzzle?
- A brute force search means trying all possible states blindly until you find the solution.
  - For a problem requiring n moves where each move consists of m choices, there are on the order of m^n possible states.
  - Two forms of brute force search are depth-first search and breadth-first search.
- A guided search examines a state and uses some heuristic (usually a function) to determine how good that state is (how close you might be to a solution), to help determine which state to move to next:
  - hill climbing
  - best-first search
  - A/A* algorithm
  - Minimax
- While a good heuristic can reduce the complexity to something tractable, there is no guarantee, so any form of search remains exponential, O(2^n), in the worst case.

Forward vs. Backward Reasoning

- The common form of reasoning starts with data and leads to conclusions.
  - For instance, diagnosis is data-driven: given the patient's symptoms, we work toward disease hypotheses.
  - We often think of this form of reasoning as "forward chaining" through rules.
- Backward search reasons from goals to actions.
  - Planning and design are often goal-driven: "backward chaining".
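Forward chaining can be sketched as repeatedly firing any rule whose premises are already among the known facts, until nothing new can be derived. The rules and facts below are invented placeholders in the diagnosis spirit of the slide:

```python
def forward_chain(facts, rules):
    """Data-driven reasoning: fire every rule (premises, conclusion)
    whose premises are all known, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical diagnosis-style rules: symptoms lead to hypotheses.
rules = [
    (("fever", "cough"), "flu-suspected"),
    (("flu-suspected", "fatigue"), "see-doctor"),
]
print(forward_chain({"fever", "cough", "fatigue"}, rules))
```

Backward chaining would run the same rules in the other direction: start from a goal such as "see-doctor" and recursively check whether its premises can be established.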
    

Depth-First Search

Starting at node A, our depth-first search gives us:

A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, J, R
      
    
Breadth-First Search

Starting at node A, our breadth-first search would generate the nodes in alphabetical order from A to U.
The Monkey and the Banana

- The purpose of this example is to show the use of variables.

Description
A monkey enters a room via the door. In the room, near the window, is a box. In the middle of the room hangs a banana from the ceiling. The monkey wants to grasp the banana, and can do so after climbing on the box in the middle of the room.

States
For each state, we need to record:
- the position of the monkey (door, window, middle, ...)
- the position of the box
- whether the monkey is on the box
- whether the monkey has the banana
The initial state is (door, window, no, no).
The set of goal states is (*, *, *, yes).

Moves
walk(P): from (M, B, no, H) to (P, B, no, H).
push(P): from (M, M, no, H) to (P, P, no, H).
climb: from (M, M, no, H) to (M, M, yes, H).
grasp: from (middle, B, yes, no) to (middle, B, yes, yes).

State space
Without variables, the state space and search space can be very large (how many positions are there?). With variables, we can represent the reachable part compactly, as follows.
Monkey and Banana Example

There is a monkey at the door of a room. In the middle of the room a banana hangs from the ceiling. The monkey wants it, but cannot jump high enough from the floor. At the window of the room there is a box that the monkey can use.
- The monkey can perform the following actions:
  - Walk on the floor
  - Climb the box
  - Push the box around (if it is beside the box)
  - Grasp the banana (if it is standing on the box directly under the banana)
- We define the state as a 4-tuple:
  - (monkey at, on floor/on box, box at, has banana)
move( state( middle, onbox, middle, hasnot ),
      grasp,
      state( middle, onbox, middle, has ) ).

move( state( P, onfloor, P, H ),
      climb,
      state( P, onbox, P, H ) ).

move( state( P1, onfloor, P1, H ),
      push( P1, P2 ),
      state( P2, onfloor, P2, H ) ).

move( state( P1, onfloor, B, H ),
      walk( P1, P2 ),
      state( P2, onfloor, B, H ) ).

canget( state( _, _, _, has ) ).

canget( State1 ) :-
    move( State1, Move, State2 ),
    canget( State2 ).

?- canget( state( atdoor, onfloor, atwindow, hasnot ) ).
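The same move relation can be searched procedurally. A breadth-first Python sketch, mirroring the Prolog clauses above (the position names `atdoor`, `atwindow`, `middle` are carried over; the BFS additionally returns the plan, which the Prolog query does not):

```python
from collections import deque

POSITIONS = ["atdoor", "atwindow", "middle"]

def moves(state):
    """Yield (action, next_state) pairs, mirroring the Prolog move/3 clauses.
    A state is (monkey_at, on, box_at, has)."""
    monkey, on, box, has = state
    if state == ("middle", "onbox", "middle", "hasnot"):
        yield "grasp", ("middle", "onbox", "middle", "has")
    if on == "onfloor" and monkey == box:
        yield "climb", (monkey, "onbox", box, has)
    for p in POSITIONS:
        if on == "onfloor" and p != monkey:
            if monkey == box:
                yield f"push({monkey},{p})", (p, "onfloor", p, has)
            yield f"walk({monkey},{p})", (p, "onfloor", box, has)

def canget(start):
    """Breadth-first search for a plan that reaches a 'has banana' state."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if state[3] == "has":
            return plan
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

print(canget(("atdoor", "onfloor", "atwindow", "hasnot")))
```

From the initial state this finds the expected four-step plan: walk to the window, push the box to the middle, climb, grasp.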



Tic-Tac-Toe

[figure: a tic-tac-toe board]

18



Tic-Tac-Toe

Program 1:

Data Structures:
- Board: a 9-element vector representing the board, with positions 1-9 for the squares. An element contains the value 0 if the square is blank, 1 if it is filled by an X, or 2 if it is filled by an O.
- Movetable: a large vector of 19,683 elements (3^9), each element of which is a 9-element board vector.

Algorithm:
1. View the board vector as a ternary number. Convert it to a decimal number.
2. Use the computed number as an index into Movetable and access the vector stored there.
3. Set the new board to that vector.
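The board-to-index conversion in step 1 is just base-3 positional notation. A sketch (the move table itself is stubbed with a single invented entry, since specifying all 19,683 entries is exactly the drawback noted on the next slide):

```python
def board_to_index(board):
    """Interpret a 9-element board (0 blank, 1 X, 2 O) as a ternary number."""
    index = 0
    for square in board:
        index = index * 3 + square
    return index

# A real Movetable would have 3**9 = 19683 entries; here we stub in just
# one: from the empty board (index 0), respond by taking the centre square.
movetable = {0: [0, 0, 0, 0, 1, 0, 0, 0, 0]}

empty = [0] * 9
new_board = movetable[board_to_index(empty)]
print(new_board)
```

The algorithm is then a single table lookup per move, which is why the program is so fast and so memory-hungry.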

19



Tic-Tac-Toe

Comments:
This program is very efficient in time, but:
1. It takes a lot of space to store the Movetable.
2. It takes a lot of work to specify all the entries in the Movetable.
3. It is difficult to extend.

20






Tic-Tac-Toe

Program 2:

Data Structure: a nine-element vector representing the board, but instead of using 0, 1, and 2 in each element, we store 2 for blank, 3 for X, and 5 for O.

Functions:
- Make2: returns 5 if the centre square is blank; otherwise returns any other blank square.
- Posswin(p): returns 0 if player p cannot win on his next move; otherwise it returns the number of the square that constitutes a winning move. It works by computing the product of the three values in each row, column, and diagonal: if the product is 18 (3x3x2), then X can win; if the product is 50 (5x5x2), then O can win.
- Go(n): makes a move in square n.

Strategy:
Turn = 1: Go(1)
Turn = 2: If Board[5] is blank, Go(5), else Go(1)
Turn = 3: If Board[9] is blank, Go(9), else Go(3)
Turn = 4: If Posswin(X) ≠ 0, then Go(Posswin(X))
...
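The 2/3/5 encoding makes the win test a single multiplication per line. A sketch of Posswin under that encoding (the function and variable names here are mine, not from the original program):

```python
BLANK, X, O = 2, 3, 5

# The eight winning lines, as 0-based square indices.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def posswin(board, p):
    """Return the 1-based square where player p (X or O) can win on the
    next move, or 0 if there is none. p*p*BLANK is 18 for X, 50 for O."""
    target = p * p * BLANK
    for line in LINES:
        product = board[line[0]] * board[line[1]] * board[line[2]]
        if product == target:               # two of p's marks plus one blank
            for i in line:
                if board[i] == BLANK:
                    return i + 1
    return 0

# X occupies squares 1 and 2 of the top row; square 3 completes the win.
board = [X, X, BLANK] + [BLANK] * 6
print(posswin(board, X))
```

Using the primes 3 and 5 means no other combination of three squares can produce the products 18 or 50, so one multiplication replaces three equality tests per line.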

22



Tic-Tac-Toe

Comments:
1. Not efficient in time, as it has to check several conditions before making each move.
2. Easier to understand the program's strategy.
3. Hard to generalize.

23



Tic-Tac-Toe

Program 2':
The same as Program 2, except that the board is represented as a magic square, so that checking for a possible win reduces to counting with numbers rather than scanning rows, columns, and diagonals.

24



Tic-Tac-Toe

Comments:
1. Checking for a possible win is quicker.
2. Humans find the row-scan approach easier, while computers find the number-counting approach more efficient.

25



Tic-Tac-Toe

Program 3:
1. If the position is a win, give it the highest rating.
2. Otherwise, consider all the moves the opponent could make next. Assume the opponent will make the move that is worst for us. Assign the rating of that move to the current node.
3. The best node is then the one with the highest rating.
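Steps 1-3 are a one-ply-at-a-time statement of minimax. A generic sketch over an abstract game tree (`successors` and `rate` are placeholder callbacks, and the tiny tree below is invented, not part of the original program):

```python
def minimax(node, successors, rate, maximizing=True):
    """Rate a node by assuming the opponent always picks the move that is
    worst for us (step 2); terminal positions are rated directly (step 1)."""
    children = successors(node)
    if not children:                      # terminal position: rate it directly
        return rate(node)
    ratings = [minimax(c, successors, rate, not maximizing) for c in children]
    return max(ratings) if maximizing else min(ratings)

def best_move(node, successors, rate):
    """Step 3: the best successor is the one with the highest rating."""
    return max(successors(node),
               key=lambda c: minimax(c, successors, rate, maximizing=False))

# Tiny hypothetical game tree as a dict; leaves rate to themselves.
tree = {"root": ["a", "b"], "a": [2, 7], "b": [1, 8]}
succ = lambda n: tree.get(n, [])
print(best_move("root", succ, rate=lambda leaf: leaf))
```

Here move "a" is preferred: the opponent holds it to 2, while move "b" can be held to 1.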

26



Tic-Tac-Toe

Comments:
1. Requires much more time to consider all possible moves.
2. Could be extended to handle more complicated games.

27
      

State Space Search: Playing Chess

- Each position can be described by an 8-by-8 array.
- The initial position is the game opening position.
- A goal position is any position in which the opponent does not have a legal move and his or her king is under attack.
- Legal moves can be described by a set of rules:
  - Left sides are matched against the current state.
  - Right sides describe the new resulting state.

28
      

State Space Search: Playing Chess

- The state space is the set of legal positions.
- Starting at the initial state,
- using the set of rules to move from one state to another,
- we attempt to end up in a goal state.

29
      

State Space Search: The Water Jug Problem

"You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 litres of water into the 4-litre jug?"

30
      

State Space Search: The Water Jug Problem

- State: (x, y)
  x = 0, 1, 2, 3, or 4; y = 0, 1, 2, or 3
- Start state: (0, 0).
- Goal state: (2, n) for any n.

31
      

State Space Search: The Water Jug Problem

Production rules:
1. (x, y) → (4, y)       if x < 4    fill the 4-litre jug
2. (x, y) → (x, 3)       if y < 3    fill the 3-litre jug
3. (x, y) → (x − d, y)   if x > 0    pour some water out of the 4-litre jug
4. (x, y) → (x, y − d)   if y > 0    pour some water out of the 3-litre jug

32
      

State Space Search: The Water Jug Problem

5. (x, y) → (0, y)             if x > 0             empty the 4-litre jug
6. (x, y) → (x, 0)             if y > 0             empty the 3-litre jug
7. (x, y) → (4, y − (4 − x))   if x + y ≥ 4, y > 0  pour from the 3-litre jug into the 4-litre jug until it is full
8. (x, y) → (x − (3 − y), 3)   if x + y ≥ 3, x > 0  pour from the 4-litre jug into the 3-litre jug until it is full

33
      

State Space Search: The Water Jug Problem

9. (x, y) → (x + y, 0)    if x + y ≤ 4, y > 0  pour all the water from the 3-litre jug into the 4-litre jug
10. (x, y) → (0, x + y)   if x + y ≤ 3, x > 0  pour all the water from the 4-litre jug into the 3-litre jug
11. (0, 2) → (2, 0)       pour the 2 litres from the 3-litre jug into the 4-litre jug
12. (2, y) → (0, y)       empty the 2 litres in the 4-litre jug on the ground
34
      

State Space Search: The Water Jug Problem

1. current state = (0, 0)
2. Loop until reaching the goal state (2, 0):
   - apply a rule whose left side matches the current state
   - set the new current state to be the resulting state

One solution path:
(0, 0)
(0, 3)    fill the 3-litre jug (rule 2)
(3, 0)    pour it into the 4-litre jug (rule 9)
(3, 3)    fill the 3-litre jug again (rule 2)
(4, 2)    pour into the 4-litre jug until full (rule 7)
(0, 2)    empty the 4-litre jug (rule 5)
(2, 0)    pour the 2 litres into the 4-litre jug (rule 11)
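These rules can be run mechanically. A breadth-first sketch over the (x, y) state space using the fill/empty/pour rules (rules 3, 4, 11, and 12 are omitted here as special-purpose or non-productive; the rule numbers in the comments refer to the list above):

```python
from collections import deque

def successors(state):
    """Apply the fill, empty, and pour rules to a state (x, y)."""
    x, y = state
    result = []
    if x < 4: result.append((4, y))                            # rule 1
    if y < 3: result.append((x, 3))                            # rule 2
    if x > 0: result.append((0, y))                            # rule 5
    if y > 0: result.append((x, 0))                            # rule 6
    if x + y >= 4 and y > 0: result.append((4, y - (4 - x)))   # rule 7
    if x + y >= 3 and x > 0: result.append((x - (3 - y), 3))   # rule 8
    if x + y <= 4 and y > 0: result.append((x + y, 0))         # rule 9
    if x + y <= 3 and x > 0: result.append((0, x + y))         # rule 10
    return result

def solve(start=(0, 0)):
    """Breadth-first search for any state with 2 litres in the 4-litre jug."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == 2:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(solve())
```

Breadth-first search finds a shortest solution of six moves; note there are two symmetric six-move solutions, and which one is returned depends only on the rule ordering above.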
35
      

State Space Search: The Water Jug Problem

The role of the condition on the left side of a rule:
- it restricts the application of the rule
- this makes the search more efficient

1. (x, y) → (4, y)   if x < 4
2. (x, y) → (x, 3)   if y < 3

36
      

State Space Search: The Water Jug Problem

Special-purpose rules capture special-case knowledge that can be used at some stage in solving a problem:

11. (0, 2) → (2, 0)

12. (2, y) → (0, y)

37
    
Search Strategies

Requirements of a good search strategy:
1. It causes motion.
   Otherwise, it will never lead to a solution.
2. It is systematic.
   Otherwise, it may use more steps than necessary.
3. It is efficient.
   It should find a good, but not necessarily the best, answer.

38
    
Search Strategies

1. Uninformed search (blind search)
   Has no information about the number of steps from the current state to the goal.
2. Informed search (heuristic search)
   More efficient than uninformed search.

39
    

Search Strategies

The first levels of the water-jug search tree:

(0, 0)
├── (4, 0) → (4, 3), (0, 0), (1, 3)
└── (0, 3) → (4, 3), (0, 0), (3, 0)

40
Blind Search

- Breadth-first search: expand all the nodes of one level first.
- Depth-first search: expand one of the nodes at the deepest level.
41
Blind Search

Breadth-first search:
- Advantages: it will not get trapped exploring a blind alley; if a solution exists, it is guaranteed to find it, and to find a minimal-length path first.
- Disadvantage: it requires a lot of memory, since every node of a level must be stored to generate the next level.
42
Blind Search

Depth-first search:
- Advantages: it requires less memory, since only the nodes on the current path are stored; by chance, it may find a solution without examining much of the search space.
- Disadvantage: it may follow a single unfruitful path very deeply, or fail to terminate.
43
        
 
Heuristic Search

- Heuristic: "involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods." (Merriam-Webster's dictionary)
- A heuristic technique improves the efficiency of a search process, possibly by sacrificing claims of completeness or optimality.

44
        
 
Heuristic Search

- Heuristics are a response to combinatorial explosion.
- Optimal solutions are rarely needed.

45
        
 
Heuristic Search

The Travelling Salesman Problem:
"A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities."

[figure: a weighted graph of cities]
46
        
 
Heuristic Search

Nearest neighbour heuristic:
1. Select a starting city.
2. Of the cities not yet visited, select the one closest to the current city, and go to it.
3. Repeat step 2 until all cities have been visited.

47
        
 
Heuristic Search

Nearest neighbour heuristic:
1. Select a starting city.
2. Of the cities not yet visited, select the one closest to the current city, and go to it.
3. Repeat step 2 until all cities have been visited.

This heuristic takes O(n²) steps, versus O(n!) for examining every possible tour.
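The three steps can be sketched directly; the distance matrix below is an invented example, and the tour found is good but not guaranteed to be shortest:

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: from the current city, always go to the
    nearest unvisited city, then return to the start. O(n^2) overall."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[current][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]   # round trip back to the starting city

# Hypothetical symmetric distance matrix for 4 cities.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbour_tour(dist))
```

Each of the n steps scans at most n candidate cities, which is where the O(n²) bound on the slide comes from.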

48
Hill Climbing

- Searching for a goal state = climbing to the top of a hill.
49
Hill Climbing

- Generate-and-test plus a direction in which to move.
- A heuristic function estimates how close a given state is to a goal state.
50
Simple Hill Climbing

Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
   - select and apply a new operator
   - evaluate the new state:
     goal → quit
     better than current state → new current state
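The loop above can be sketched as follows; `operators` and `evaluate` are placeholder callbacks, and the example maximizes an invented one-dimensional function over integer states:

```python
def simple_hill_climbing(state, operators, evaluate):
    """Apply the first operator that yields a better state; stop when
    no operator improves on the current state (a local maximum)."""
    while True:
        for op in operators:
            new_state = op(state)
            if evaluate(new_state) > evaluate(state):
                state = new_state
                break            # take the first better move, then re-loop
        else:
            return state         # no operator improved: stop here

# Hypothetical 1-D landscape: f peaks at x = 3.
f = lambda x: -(x - 3) ** 2
ops = [lambda x: x + 1, lambda x: x - 1]
print(simple_hill_climbing(0, ops, f))
```

Note that the stopping state is only a local maximum of `evaluate`; on this single-peaked landscape it coincides with the global one, which is not true in general (see the local-maximum and plateau slides below).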

51
Simple Hill Climbing

- The evaluation function is a way to inject task-specific knowledge into the control process.
52
Steepest-Ascent Hill Climbing

- Considers all the moves from the current state.
- Selects the best one as the next state.

53
Steepest-Ascent Hill Climbing

Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or a complete iteration produces no change to the current state:
   - SUCC = a state such that any possible successor of the current state will be better than SUCC (the worst imaginable state).
   - For each operator that applies to the current state, evaluate the new state:
     goal → quit
     better than SUCC → set SUCC to this state
   - If SUCC is better than the current state → set the current state to SUCC.
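The same placeholder landscape as before, but now examining every operator before moving; taking the maximum over all successors plays the role of SUCC in the algorithm above:

```python
def steepest_ascent(state, operators, evaluate):
    """Move to the best successor on each iteration; stop when no
    successor beats the current state."""
    while True:
        successors = [op(state) for op in operators]
        best = max(successors, key=evaluate)      # plays the role of SUCC
        if evaluate(best) <= evaluate(state):
            return state                          # no improvement: stop
        state = best

# Same hypothetical landscape as the simple hill climbing sketch.
f = lambda x: -(x - 3) ** 2
print(steepest_ascent(0, [lambda x: x + 1, lambda x: x - 1], f))
```

The difference from simple hill climbing is only which improving move is taken: the first found versus the best of all, at the cost of one full pass over the operators per step.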
54
Hill Climbing: Disadvantages

Local maximum:
A state that is better than all of its neighbours, but not better than some other states farther away.

55
Hill Climbing: Disadvantages

Plateau:
A flat area of the search space in which all neighbouring states have the same value.

56
Hill Climbing: Ways Out

- Backtrack to some earlier node and try going in a different direction.
- Make a big jump to try to get into a new section of the search space.
- Move in several directions at once.

57
Hill Climbing

- Hill climbing is a local method: it decides what to do next by looking only at the "immediate" consequences of its choices.
- Global information might be encoded in heuristic functions.

58
Best-First Search

- Depth-first search: good because not all competing branches have to be expanded.
- Breadth-first search: good because it does not get trapped on dead-end paths.
- Combining the two: follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one.

59
Best-First Search

[figure: a search tree in which, at each step, the most promising open node is expanded, switching branches as the estimates change]
60
Best-First Search

- OPEN: nodes that have been generated but have not yet been examined.
  OPEN is organized as a priority queue.
- CLOSED: nodes that have already been examined.
  Whenever a new node is generated, check whether it has been generated before.

61
Best-First Search

Algorithm:
1. OPEN = {initial state}.
2. Loop until a goal is found or there are no nodes left in OPEN:
   - pick the best node in OPEN
   - generate its successors
   - for each successor:
     new → evaluate it, add it to OPEN, record its parent
     generated before → change the parent, update the successors
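A sketch of this loop using a priority queue for OPEN. Here `h` is a placeholder heuristic and the graph is an invented example; the parent re-linking for nodes generated before is omitted for brevity, so this is the simple tree-search form of the algorithm:

```python
import heapq

def best_first(start, goal, successors, h):
    """Best-first search: always expand the OPEN node with the
    lowest heuristic estimate h(n)."""
    open_heap = [(h(start), start)]       # OPEN as a priority queue
    parent = {start: None}                # doubles as the CLOSED check
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:       # walk the parents back to start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for succ in successors(node):
            if succ not in parent:        # skip nodes generated before
                parent[succ] = node
                heapq.heappush(open_heap, (h(succ), succ))
    return None

# Invented graph and heuristic: states are letters, h estimates distance to "g".
graph = {"s": ["a", "b"], "a": ["g"], "b": ["g"]}
h = {"s": 3, "a": 1, "b": 2, "g": 0}
print(best_first("s", "g", lambda n: graph.get(n, []), h.get))
```

With h as the priority this behaves as greedy search; replacing it with the path cost g(n) gives uniform-cost search, the two variants contrasted on the next slide.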

62
Best-First Search

- Greedy search:
  h(n) = estimated cost of the cheapest path from node n to a goal state.
  Neither optimal nor complete.
- Uniform-cost search:
  g(n) = cost of the cheapest path from the initial state to node n.
  Optimal and complete, but can be very inefficient.

63
