
Artificial Intelligence

Lesson 1

1
About
• Lecturer: Prof. Sarit Kraus
• TA: Ariel Rosenfeld
• (almost) All you need can be found on the
course website:
– http://u.cs.biu.ac.il/~rosenfa5/

2
Course Requirements 1
• The grade comprises 70% exam and 30% exercises.
• 3 programming exercises will be given. Work individually.
• All the exercises count toward the final grade, but you
can pass the course without submitting them if your final
grade (composed of the exam and exercise grades) is
above the required threshold. The exercises are weighted
equally.
• Exercises must be written in C++ or Java only. They
should compile and run on the planet machine, and will be
submitted via “submit”. Be precise!

3
Course Requirements 2
• Exercises are not hard, but work is required. Plan your
time ahead!
• When sending me mail please include the course number
(89-570) in the header, to pass the automatic spam filter.
• You (probably) will be required to participate in AI
experiments.
• See other general rules in:
http://u.cs.biu.ac.il/~haimga/Teaching/AI/assignments/general-rules.pdf

4
Course Schedule
• Lesson 1:
– Introduction
– Formulating a general problem as a graph
search problem.
• Lesson 2
– Uninformed Search (BFS, DFS etc.).
• Lesson 3
– Informed Search (A*,Best-First-Search etc.).

5
Course Schedule – Cont.
• Lesson 4
– Local Search (Hill Climbing, Genetic
algorithms etc.).
• Lesson 5
– “Search algorithms” chapter summary.
• Lesson 6-7
– Game-Trees: Min-Max & Alpha-Beta
algorithms.

6
Course Schedule – Cont.
• Lesson 8-9
– Planning: STRIPS algorithm
• Lesson 10-11-12
– Learning: Decision-Trees, Neural Network,
Naïve Bayes, Bayesian Networks and more.
• Lesson 13
– Questions and exercise.

7
AI – Alternative Definitions
• Elaine Rich and Kevin Knight: AI is the study of how to
make computers do things at which, at the moment, people
are better.
• Stuart Russell and Peter Norvig: [AI] has to do with
smart programs, so let's get on and write some.
• Claudson Bornstein: AI is the science of common sense.
• Douglas Baker: AI is the attempt to make computers do
what people think computers cannot do.
• Astro Teller: AI is the attempt to make computers do what
they do in the movies.

8
AI Domains
• Games – chess, checkers, tile puzzle.
• Expert systems
• Speech recognition and Natural language
processing, Computer vision, Robotics.

9
AI & Search
• "The two most fundamental concerns of AI
researchers are knowledge representation and
search”
• “knowledge representation … addresses the
problem of capturing in a language…suitable for
computer manipulation”
• “Search is a problem-solving technique that
systematically explores a space of problem
states.”
(Luger, G.F., Artificial Intelligence: Structures and Strategies for
Complex Problem Solving)

10
Solving Problems with Search
Algorithms
• Input: a problem P.
• Preprocessing:
– Define states and a state space
– Define Operators
– Define a start state and goal set of states.
• Processing:
– Activate a search algorithm to find a path from
the start to one of the goal states.

11
Example - Missionaries & Cannibals

• State space – vectors [M,C,B]: the number of
missionaries, cannibals and boats on the start bank

• Initial State – [3,3,1]
• Goal State – [0,0,0]
• Operators – adding or subtracting one of the
vectors [1,0,1], [2,0,1], [0,1,1], [0,2,1] or
[1,1,1]
• Path – the sequence of moves from [3,3,1] to [0,0,0]
• Path Cost – the number of river trips
12
Breadth-First-Search Pseudo code
• Intuition: Treating the graph as a tree and scanning top-
down.
• Algorithm:
BFS(Graph graph, Node start, Vector Goals)
1. L make_queue(start)
2. While L not empty loop
1. n  L.remove_front()
2. If goal (n) return true
3. S  successors (n)
4. L.insert(S)
3. Return false
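A minimal runnable sketch of this pseudocode in Java (one of the course's two exercise languages); the adjacency map and node names are illustrative assumptions. Like the pseudocode, it keeps no closed list, so it can loop forever on cyclic graphs:

import java.util.*;

public class BFS {
    // Breadth-first search over an adjacency-list graph; returns true iff a goal is reachable.
    static boolean bfs(Map<String, List<String>> graph, String start, Set<String> goals) {
        Queue<String> open = new ArrayDeque<>();          // L = make_queue(start)
        open.add(start);
        while (!open.isEmpty()) {
            String n = open.remove();                     // n <- L.remove_front()
            if (goals.contains(n)) return true;           // goal test
            for (String s : graph.getOrDefault(n, List.of()))
                open.add(s);                              // L.insert(successors(n))
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = Map.of(
            "a", List.of("b", "c"), "b", List.of("d"), "c", List.of(), "d", List.of());
        System.out.println(bfs(g, "a", Set.of("d")));     // true
    }
}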
13
Breadth-First-Search Attributes
• Completeness – yes (b < ∞, d < ∞)
• Optimality – yes, if the graph is un-weighted.
• Time Complexity: O(1 + b + b² + … + b^d + (b^(d+1) − b)) = O(b^(d+1))
• Memory Complexity: O(b^(d+1))
  – Where b is the branching factor and d is the solution depth
• See water tanks example.

14
Artificial Intelligence

Lesson 2

15
Uninformed Search
• Uninformed search methods use only
information available in the problem
definition.
– Breadth First Search (BFS)
– Depth First Search (DFS)
– Iterative Deepening Search (IDS)
– Bi-directional search
– Uniform Cost Search (a.k.a. Dijkstra alg.)

16
Depth-First-Search Pseudo code
DFS(Graph graph, Node start, Vector goals)
1. L ← make_stack(start)
2. While L not empty loop
   2.1 n ← L.remove_front()
   2.2 If goal(n) return true
   2.3 S ← successors(n)
   2.4 L.insert(S)
3. Return false
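The same sketch in Java, assuming the same illustrative graph encoding as the BFS example above; only the frontier changes, from a FIFO queue to a LIFO stack:

import java.util.*;

public class DFS {
    // Identical to the BFS sketch except the frontier L is a stack,
    // so the most recently generated node is expanded first.
    static boolean dfs(Map<String, List<String>> graph, String start, Set<String> goals) {
        Deque<String> open = new ArrayDeque<>();   // L = make_stack(start)
        open.push(start);
        while (!open.isEmpty()) {
            String n = open.pop();                 // pop the top of the stack
            if (goals.contains(n)) return true;
            for (String s : graph.getOrDefault(n, List.of()))
                open.push(s);                      // children go on top
        }
        return false;                              // may not terminate on infinite/cyclic graphs
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = Map.of(
            "a", List.of("b", "c"), "b", List.of("d"), "c", List.of(), "d", List.of());
        System.out.println(dfs(g, "a", Set.of("d")));  // true
    }
}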

17
Depth-First-Search Attributes
• Completeness – No. Infinite loops or
infinite depth can occur.
• Optimality – No.
• Time Complexity: O(b^m)
• Memory Complexity: O(b·m)
  – Where b is the branching factor and m is the
    maximum depth of the search tree
• See water tanks example
(figure: a small tree whose nodes are numbered in DFS expansion order; not reproduced)
18
Limited DFS Attributes
• Completeness – Yes, if d ≤ l
• Optimality – No.
• Time Complexity: O(b^l)
  – If d < l, it is larger than in BFS
• Memory Complexity: O(b·l)
  – Where b is the branching factor and l is the
    depth limit.

19
Depth-First Iterative-Deepening
(figure: a tree whose nodes are labeled with the order in which DFID generates them; not reproduced)

The numbers represent the order generated by DFID


20
Iterative-Deepening Attributes
• Completeness – Yes
• Optimality – yes, if the graph is un-weighted.
• Time Complexity:
  O(d·b + (d−1)·b² + … + 1·b^d) = O(b^d)
• Memory Complexity: O(d·b)
  – Where b is the branching factor and d is the maximum
    depth of the search tree
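A hedged Java sketch of depth-limited DFS wrapped in an iterative-deepening loop; the implicit binary tree and the goal node are illustrative assumptions:

public class IDS {
    // Depth-limited DFS over an implicit tree; returns true iff a goal occurs
    // within depth `limit`.
    interface Tree { java.util.List<Integer> successors(int n); boolean isGoal(int n); }

    static boolean dls(Tree t, int n, int limit) {
        if (t.isGoal(n)) return true;
        if (limit == 0) return false;
        for (int s : t.successors(n))
            if (dls(t, s, limit - 1)) return true;
        return false;
    }

    // Iterative deepening: repeat the depth-limited search with a growing limit.
    // Shallow nodes are re-expanded, but memory stays O(b*d).
    static boolean ids(Tree t, int root, int maxDepth) {
        for (int limit = 0; limit <= maxDepth; limit++)
            if (dls(t, root, limit)) return true;
        return false;
    }

    public static void main(String[] args) {
        Tree bin = new Tree() {  // infinite binary tree over ints; goal node = 12
            public java.util.List<Integer> successors(int n) { return java.util.List.of(2 * n + 1, 2 * n + 2); }
            public boolean isGoal(int n) { return n == 12; }
        };
        System.out.println(ids(bin, 0, 5));  // true: node 12 is at depth 3
    }
}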
21
State Redundancies
• Closed list – a hash table which holds the
visited nodes.
• For example, BFS:
(figure: the BFS open list (frontier) and the closed list; not reproduced)

22
Bi-directional Search
• Search forward from the initial state and backward
from the goal state simultaneously.
• Operators must be symmetric.
(figure: two search frontiers growing from S and G and meeting in the middle; not reproduced)

23
Bi-directional Search Attributes
• Completeness – Yes, if both directions use BFS
• Optimality – yes, if graph is un-weighted and both
directions use BFS.
• Time and memory Complexity: O(b^(d/2))
• Pros.
– Cuts the search tree by half (at least theoretically).
• Cons.
– Frontiers must be constantly compared.

24
Minimum cost path
• General minimum cost path-search
problem:
– Find shortest path from start state to one of the
goal states in a weighted graph.
– The path cost function g(n) is the sum of edge
weights from the start state to node n.

25
Uniform Cost Search
• Also known as Dijkstra’s algorithm.
• Expand the node with the minimum path
cost first.
• Implementation: priority queue.
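A minimal Java sketch using java.util.PriorityQueue as the open list; the graph, edge costs and goal below reproduce the tree of the example slides that follow:

import java.util.*;

public class UCS {
    record Edge(String to, double w) {}
    record Path(String node, double g) {}

    // Uniform-cost search (Dijkstra): the open list is a priority queue ordered
    // by path cost g(n); a closed set keeps already-expanded nodes out.
    static double ucs(Map<String, List<Edge>> graph, String start, Set<String> goals) {
        PriorityQueue<Path> open = new PriorityQueue<>(Comparator.comparingDouble(Path::g));
        open.add(new Path(start, 0));
        Set<String> closed = new HashSet<>();
        while (!open.isEmpty()) {
            Path p = open.remove();                          // cheapest frontier node
            if (goals.contains(p.node())) return p.g();      // goal popped: cost is optimal
            if (!closed.add(p.node())) continue;             // skip re-expansions
            for (Edge e : graph.getOrDefault(p.node(), List.of()))
                open.add(new Path(e.to(), p.g() + e.w()));
        }
        return -1;                                           // goal unreachable
    }

    public static void main(String[] args) {
        // The tree from the example slides: a-b (2), a-c (1), b-f (1), b-g (2), c-d (1), c-e (2)
        Map<String, List<Edge>> g = Map.of(
            "a", List.of(new Edge("b", 2), new Edge("c", 1)),
            "b", List.of(new Edge("f", 1), new Edge("g", 2)),
            "c", List.of(new Edge("d", 1), new Edge("e", 2)));
        System.out.println(ucs(g, "a", Set.of("g")));        // 4.0
    }
}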

26
Uniform Cost Search Attributes
• Completeness: yes, for positive weights
• Optimality: yes
• Time & Memory complexity: O(b^⌈c/e⌉)
  – Where b is the branching factor, c is the optimal solution cost
    and e is the minimum edge cost

27
Example of Uniform Cost Search
• Assume an example tree with different edge costs, written next to
the edges: the root a has children b (cost 2) and c (cost 1); b has
children f (cost 1) and g (cost 2); c has children d (cost 1) and
e (cost 2).
• The original slides step through the run, marking generated vs.
expanded nodes; the trace of the closed list and of the open list
(nodes with their path costs g) is:

Step | Closed list         | Open list (g-values)
1    | –                   | a(0)
2    | a                   | c(1), b(2)
3    | a, c                | b(2), d(2), e(3)
4    | a, c, b             | d(2), e(3), f(3), g(4)
5    | a, c, b, d          | e(3), f(3), g(4)
6    | a, c, b, d, e       | f(3), g(4)
7    | a, c, b, d, e, f    | g(4)
8    | a, c, b, d, e, f, g | –
Informed Search
• Incorporates an additional measure of the
potential of a specific state to reach the goal.
• The potential of a state to reach a goal is
measured by a heuristic function h(n).
• An evaluation function is denoted f(n).

37
Best First Search Algorithms
• Principle: expand the node n with the best
evaluation function value f(n).
• Implement via a priority queue
• Algorithms differ in the definition of f:
  – Greedy Search: f(n) = h(n)
  – A*: f(n) = g(n) + h(n)
  – IDA*: iterative deepening version of A*
  – Etc.
38
Exercise
• Q: Can Uniform-Cost search be considered a
Best-First algorithm?
• A: Yes. It can be considered a Best-First
algorithm with the evaluation function f(n) = g(n).
• Q: In what scenarios does IDS outperform DFS?
BFS?
• A:
  – IDS outperforms DFS when the search tree is much
    deeper than the solution depth.
  – IDS outperforms BFS when BFS runs out of memory.

39
Exercise – Cont.
• Q: Why do we need a closed list?
• A: Generally a closed list has two main functions:
  – Preventing re-exploration of nodes.
  – Holding the solution path from start to goal (DFS-based algorithms have
    it anyway).
• Q: Does Breadth-FS find an optimal path in general?
• A: No, unless the search graph is un-weighted.
• Q: Will IDS always find the same solution as BFS, given
that the node expansion order is deterministic?
• A: Yes. Each iteration of IDS explores new nodes in the same
order as BFS does.
40
Artificial Intelligence

Lesson 3
41
Informed Search
• Incorporates an additional measure of the
potential of a specific state to reach the goal.
• The potential of a state to reach a goal is
measured by a heuristic function h(n); thus
always h(goal) = 0.
• An evaluation function is denoted f(n).

42
Best First Search Algorithms
• Principle: expand the node n with the best
evaluation function value f(n).
• Implement via a priority queue
• Algorithms differ in the definition of f:
  – Greedy Search: f(n) = h(n)
  – A*: f(n) = g(n) + h(n)
  – IDA*: iterative deepening version of A*
  – Etc.
43
Properties of Heuristic functions
• The 2 most important properties:
  – relatively cheap to compute
  – a relatively accurate estimator of the cost to reach a goal.
    Usually a heuristic is considered “good” if ½·opt(n) < h(n) ≤ opt(n)
• Examples:
  – Navigating in a network of roads from one location to
    another. Heuristic function: airline (straight-line) distance.
  – Sliding-tile puzzles. Heuristic function: Manhattan
    distance – the number of horizontal and vertical grid units each
    tile is displaced from its goal position

44
Heuristic Function h(n)
• Admissible/Underestimating: h(n) never
overestimates the actual cost from n to the goal

• Consistent/monotonic (desirable):
h(m) − h(n) ≤ w(n,m), where m is the parent of n. This
ensures f(n) ≥ f(m).

45
Best-FS Algorithm Pseudo code
1. Start with open = [initial-state].
2. While open is not empty do
1. Pick the best node on open.
2. If it is the goal node then return with success.
Otherwise find its successors.
3. Assign the successor nodes a score using the
evaluation function and add the scored nodes
to open
46
General Framework using Closed-
list (Graph-Search)
GraphSearch(Graph graph, Node start, Vector goals)
1. O ← make_data_structure(start) // open list
2. C ← make_hash_table // closed list
3. While O not empty loop
   1. n ← O.remove_front()
   2. If goal(n) return n
   3. If n is found on C, continue
   4. // otherwise
   5. O ← successors(n)
   6. C ← n
4. Return null // no goal found

47
Greedy Search Attributes
• Completeness: No. Inaccurate heuristics can
cause loops (unless a closed list is used), or
lead into an infinite path
• Optimality: No. Inaccurate heuristics can
lead to a non-optimal solution.
• Time & Memory complexity: O(b^m)
(figure: a small weighted graph from s to g in which the heuristic misleads greedy search onto the costlier path; not reproduced)
48
A* Algorithm
• Combines greedy h(n) and uniform cost
g(n) approaches.

• Evaluation function: f(n)=g(n)+h(n)

49
A* Pseudo code
A-Star(Graph graph, Node start, Node goal, HeuristicFunction h)
1. O ← make_priority_queue(start) // open list
2. C ← make_hash_table // closed list
3. While O not empty loop
   1. n ← O.remove_front() // O is sorted by f(n) = g(n) + h(n) values
   2. If goal(n) return n
   3. If n is found on C, continue
   4. // otherwise
   5. S ← successors(n)
   6. For each node s in S
      1. Set s.g = n.g + w(n,s)
      2. Set s.parent = n // for path extraction
      3. Set s.h = h(s) // to calculate f
      4. O ← s
   7. C ← n
4. Return null // no goal found
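A compact Java sketch of this pseudocode (it tracks g-values rather than parent pointers, so it returns the optimal cost instead of the path); the small graph and heuristic table are illustrative assumptions:

import java.util.*;

public class AStar {
    record Edge(String to, double w) {}
    record Path(String node, double g) {}

    // A* graph search: the open list is ordered by f(n) = g(n) + h(n).
    // With a consistent h, the first time the goal is popped its g is optimal.
    static double aStar(Map<String, List<Edge>> graph, String start, String goal,
                        Map<String, Double> h) {
        PriorityQueue<Path> open = new PriorityQueue<>(
            (x, y) -> Double.compare(x.g() + h.get(x.node()), y.g() + h.get(y.node())));
        open.add(new Path(start, 0));
        Set<String> closed = new HashSet<>();             // duplicate pruning
        while (!open.isEmpty()) {
            Path p = open.remove();                       // smallest f = g + h
            if (p.node().equals(goal)) return p.g();
            if (!closed.add(p.node())) continue;
            for (Edge e : graph.getOrDefault(p.node(), List.of()))
                open.add(new Path(e.to(), p.g() + e.w()));
        }
        return -1;                                        // no goal found
    }

    public static void main(String[] args) {
        Map<String, List<Edge>> g = Map.of(
            "s", List.of(new Edge("a", 1), new Edge("b", 4)),
            "a", List.of(new Edge("goal", 5)),
            "b", List.of(new Edge("goal", 1)));
        Map<String, Double> h = Map.of("s", 3.0, "a", 4.0, "b", 1.0, "goal", 0.0);
        System.out.println(aStar(g, "s", "goal", h));     // 5.0, via b
    }
}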
50
A* Algorithm (1)
• Completeness:
– In a finite graph: Yes
– In an infinite graph: if all edge costs are finite and have
a minimum positive value, and all heuristic values are
finite and non-negative.

• Optimality:
– In tree-search: if h(n) is admissible
– In graph-search: if it is also consistent

51
A* Algorithm (2)
• Optimally efficient: A* expands the
minimal number of nodes possible with any
given (consistent) heuristic.
• Time and space complexity:
  – Worst case (cost function f(n) = g(n)): O(b^(c/e))
  – Best case (cost function f(n) = g(n) + h*(n)): O(b·d)
52
A* Application Example
• Game: Tales of Trolls
and Treasures
• Yellow dots are nodes
in the search graph.

53
IDA* Algorithm

• Each iteration is a depth-first search that
keeps track of the cost evaluation f = g + h
of each node generated.
• The cost threshold is initialized to the
heuristic of the initial state.
• If a node is generated whose cost exceeds
the threshold for that iteration, its path is cut
off.

54
IDA* Pseudo code
• IDAStar-Main(Node root)
1. Set bound = f(root)
2. While (bound < infinity)
   1. Set bound = IDAStar(root, bound)

• IDAStar(Node n, Double bound)
1. if n is a goal, exit the algorithm and return goal
2. if n has no children, return infinity
3. fn = infinity
4. for each child c of n, set f = f(c)
   1. if (f <= bound) fn = min(fn, IDAStar(c, bound))
   2. else fn = min(fn, f)
5. Return fn
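A minimal Java sketch of the two procedures, reusing the illustrative graph and heuristic of the A* example above; the value −1 plays the role of the "goal found" exit:

import java.util.*;

public class IDAStar {
    record Edge(String to, double w) {}

    static Map<String, List<Edge>> graph;      // successors (illustrative)
    static Map<String, Double> h;              // heuristic (illustrative)
    static final double INF = Double.POSITIVE_INFINITY;

    // One DFS iteration: explores only nodes with f = g + h <= bound and
    // returns the smallest f that exceeded the bound (the next threshold).
    static double search(String n, double g, double bound) {
        double f = g + h.get(n);
        if (f > bound) return f;
        if (n.equals("goal")) return -1;       // sentinel: goal found
        double min = INF;
        for (Edge e : graph.getOrDefault(n, List.of())) {
            double t = search(e.to(), g + e.w(), bound);
            if (t == -1) return -1;
            min = Math.min(min, t);
        }
        return min;
    }

    public static void main(String[] args) {
        graph = Map.of("s", List.of(new Edge("a", 1), new Edge("b", 4)),
                       "a", List.of(new Edge("goal", 5)),
                       "b", List.of(new Edge("goal", 1)));
        h = Map.of("s", 3.0, "a", 4.0, "b", 1.0, "goal", 0.0);
        double bound = h.get("s");             // initial threshold = h(start)
        while (bound < INF) {
            double t = search("s", 0, bound);
            if (t == -1) { System.out.println("goal within bound " + bound); return; }
            bound = t;                         // raise threshold to the minimum pruned f
        }
    }
}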

55
IDA* Attributes
• The cost threshold increases in each iteration to
the total cost of the lowest-cost node that was
pruned during the previous iteration.
• The algorithm terminates when a goal state is
reached whose total cost does not exceed the
current threshold.
• Completeness and Optimality: like A*
• Space complexity: O(c)
• Time complexity*: O(b^(c/e))

56
Duplicate Pruning
• Do not enter the father of the current state
  – With or without using a closed list

• Using a closed list, check the closed list before
entering new nodes into the open list
  – Note: in A*, h has to be consistent!
  – Do not remove the original check

• Using a stack, check the current branch and
stack status before entering new nodes
57
Exercise
• Q: What are the advantages of IDA* over:
  – A*?
  – DFS (no closed list)?
  – Uniform-Cost (closed list)?

Alg. | Endless branch pruning | Informed | Space Adv. | Optimality
A*   |                        |          | V          |
DFS  | V                      | V        |            | V
UC   |                        | V        | V          |
58
Exercise – Cont.
• Q: When is IDA* not preferable?
• A:
  – In a state graph with dense node duplications.
  – When all the node costs are different: if the asymptotic complexity
    of A* is O(N), IDA*'s complexity can reach O(N²) in the worst case.
• Q: What algorithm will we get if we implement Greedy
search on a uniform cost graph using
  – h(n) = g(n)?
  – h(n) = −g(n)?
• A:
  – h(n) = g(n) → BFS
  – h(n) = −g(n) → DFS
59
Exercise – True/False.
Sentence | True/False
DFS is not optimal | True; see the DFS slides for an example.
Forward Search is always preferable to Backwards Search | False. For example, there may be more start nodes than goal nodes, or it may be more natural to go backwards (expert systems).
The ID alg. is always equal to or slower than BFS (assuming node expansion order is deterministic) | True. The last iteration expands nodes as BFS does.
The IDS alg. is an exact implementation of BFS | False. Its space complexity is b·d instead of b^d.

60
Artificial Intelligence

Lesson 4
61
Algorithms that Perform Iterative Improvement
• For problems in which the goal state is not known in advance.
• Examples: – earn as much money as possible.
  – pack into as little volume as possible.
  – schedule with as few conflicts as possible.
• We only know how to compare two states and say which of
them is better.

• We draw a random solution, and try to make local changes
in order to improve it.
62
Local Search
• Local improvement, no paths
• Look around at states in the local neighborhood and
choose the one with the best value
• Pros:
  – Quick (usually linear)
  – Sometimes enough
  – Linear space complexity
  – Can often find reasonable solutions in large or infinite (continuous)
    state spaces for which systematic algorithms are unsuitable.
  – Suitable for optimization problems: mathematical problems of finding
    the optimal value of a function under specific constraints.
• Cons:
  – Not optimal. E.g., the Travelling Salesperson Problem: find the shortest
    tour such that every city is visited exactly once.
  – Can get stuck on a local maximum or a plateau.
63
Local Search – Cont.
• In order to avoid local maxima and plateaus, we
permit moves to states with lower values, with
probability p.
• The different algorithms differ in p:

Algorithm | p
Hill Climbing, GSAT | p = 0
Random Walk | p = 1
Mixed Walk, Mixed GSAT | p = c (domain specific)
Simulated Annealing | p = acceptor(dh, T)
64
Hill Climbing
while f-value(state) <= f-value(next-best(state))
    state := next-best(state)

f-value = evaluation(state)
(figure: the f-value curve over the state space, showing the climb toward a peak; not reproduced)
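A runnable Java sketch of the loop above on an illustrative one-dimensional problem (maximize f(x) = −(x−7)² with neighbors x±1); the objective and neighborhood are assumptions for demonstration only:

import java.util.function.IntUnaryOperator;

public class HillClimbing {
    // Generic hill climbing: move to the best neighbor while it improves f;
    // stop at the first state none of whose neighbors is better (a local max).
    static int climb(int state, IntUnaryOperator f) {
        while (true) {
            int best = state, bestVal = f.applyAsInt(state);
            for (int nb : new int[]{state - 1, state + 1}) {   // local neighborhood
                int v = f.applyAsInt(nb);
                if (v > bestVal) { best = nb; bestVal = v; }
            }
            if (best == state) return state;                    // no improvement
            state = best;
        }
    }

    public static void main(String[] args) {
        System.out.println(climb(0, x -> -(x - 7) * (x - 7)));  // 7
    }
}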

65
Hill Climbing
• Always choose the next best successor
• Stop when no improvement is possible
• The problems:
  – Stops at a local maximum
  – If the best neighbor is equal to the node, it chooses
    the neighbor
  – If there are several equal neighbors, one is chosen
    randomly
  – Can get stuck with no progress because of all of the above
66
In order to avoid plateaus and
local maxima:

– Sideways move: move to successors whose value
equals the current one
– Stochastic hill climbing: choose among the successors
randomly, weighted by their grade (how good their
solution is)
– Random-restart algorithm

67
Random Restart Hill Climbing
1. Pick a random point and run hill climbing.
2. If the solution you found is better than the best
solution found so far – keep it.
3. Go back to step 1.
When do we stop? – After a fixed number of iterations.
– After a fixed number of iterations in which
no improvement was found over the best solution found
so far.
68
Random Restart Hill Climbing

(figure: repeated hill-climbing runs on the f-value curve, each from a new random starting state; not reproduced)

69


Simulated Annealing
• Instead of starting from scratch each time, we allow descending from
the peak we have reached.
• The process is similar to hill climbing, but at each step a random move
is chosen.
• If the move improves the value of f – we take it.

• Otherwise, we take it with some probability.

• The probability function decreases exponentially as long as no
solution is found.
70
Simulated Annealing
• Permits moves to states with lower values
• Gradually decreases the frequency of such moves
and their size.
• Analogous to the physical process of cooling a liquid.
• Schedule()
– Returns the current temperature
– Depends on start temperature and round number
• Acceptor()
– Returns the probability of choosing “bad” node.
– Depends on h(n)-h(n_son) and current temperature.

71
Simulated Annealing – Pseudo code
• SimulatedAnnealing(start node s, Temperature t)
1. Set startTemp = t // for the schedule function
2. Set h = h(s)
3. Set round = 0
4. while terminal condition not true
   1. Set s_new = choose a random son of s
   2. Set h_new = h(s_new)
   3. if (h_new < h) or (random() < acceptor(h_new − h, t))
      1. Set s = s_new
      2. Set h = h_new
   4. Set t = schedule(startTemp, round)
   5. Set round = round + 1
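A hedged Java sketch of this pseudocode, instantiated with the example acceptor and schedule functions of the next slide; the minimization problem, constants and termination condition are illustrative assumptions:

import java.util.Random;

public class SimulatedAnnealing {
    static double h(int x) { return (x - 7) * (x - 7); }   // illustrative: minimize (x-7)^2

    public static void main(String[] args) {
        Random rnd = new Random(0);
        double c = 0.95, startTemp = 100, t = startTemp;
        int s = 0;
        double hVal = h(s);
        for (int round = 0; round < 2000; round++) {        // fixed round count as the stop condition
            int sNew = s + (rnd.nextBoolean() ? 1 : -1);    // random neighbor
            double hNew = h(sNew);
            double dh = hNew - hVal;
            // accept improving moves; accept bad moves with probability e^(-dh/(c*t))
            if (dh < 0 || rnd.nextDouble() < Math.exp(-dh / (c * t))) {
                s = sNew;
                hVal = hNew;
            }
            t = Math.pow(c, round) * startTemp;             // schedule(startTemp, round)
        }
        System.out.println(s + " h=" + hVal);               // ends near x = 7
    }
}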

72
Simulated Annealing – Pseudo code
Cont.
• Acceptor func: decides whether to move to a bad
node or not. Example:
  acceptor(dh, t) = e^(−dh / (c·t)), with 0 < c ≤ 1

• Schedule func: decreases the temperature over the
rounds. Example:
  schedule(startTemp, round) = c^round · startTemp, with 0 < c < 1

73
GSAT
• Greedy local search procedure for satisfying
logic formulas in a conjunctive normal form
(CNF).
• An implementation of Hill Climbing for the
CNF domain.
• Note: SAT is NP-Complete problem.

74
GSAT
• Searcher:
  – states: variable assignments
  – actions: flip a variable's assignment
  – score: the number of unsatisfied clauses

• Start with a random assignment.
• While not sat...
  – Flip the value assigned to the variable that yields the greatest
    number of satisfied clauses.
  – Repeat #flips times.
• Repeat with a new random assignment #tries times.
75
GSAT – Pseudo code
• GSAT(clauses C, Integer tries, Integer flips)
1. for i = 1 to tries
   1. Set T = a randomly generated truth assignment
   2. for j = 1 to flips
      1. if T satisfies C then return T
      2. Flip the variable in T whose flip results in the greatest decrease
         in the number of unsatisfied clauses
   3. Save the currently best T
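A small runnable Java sketch of GSAT on an illustrative 3-variable CNF formula; the clause encoding (+v for a positive literal, −v for a negated one) is an assumption of this sketch:

import java.util.*;

public class GSAT {
    // Illustrative formula: (x1 or x2) and (~x1 or x3) and (~x2 or ~x3)
    static int[][] clauses = {{1, 2}, {-1, 3}, {-2, -3}};
    static int nVars = 3;

    static int unsat(boolean[] a) {                 // number of unsatisfied clauses
        int u = 0;
        for (int[] cl : clauses) {
            boolean sat = false;
            for (int lit : cl) sat |= (lit > 0) == a[Math.abs(lit)];
            if (!sat) u++;
        }
        return u;
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        for (int tries = 0; tries < 10; tries++) {
            boolean[] t = new boolean[nVars + 1];   // index 0 unused
            for (int v = 1; v <= nVars; v++) t[v] = rnd.nextBoolean();
            for (int flips = 0; flips < 50; flips++) {
                if (unsat(t) == 0) { System.out.println(Arrays.toString(t)); return; }
                int bestVar = 1, bestU = Integer.MAX_VALUE;
                for (int v = 1; v <= nVars; v++) {  // greedy: try each single flip
                    t[v] = !t[v];
                    if (unsat(t) < bestU) { bestU = unsat(t); bestVar = v; }
                    t[v] = !t[v];
                }
                t[bestVar] = !t[bestVar];           // commit the best flip
            }
        }
    }
}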

76
Genetic Algorithm
• Inspired by Darwin's theory of evolution:
survival of the fittest.
• Begins with a set of candidate solutions
(“chromosomes”) called the population.
• The best solutions from generation n are selected
and used to form generation n+1 by applying the
crossover and mutation operators.

77
Genetic Algorithm Pseudo code
• choose an initial population
• evaluate each individual's fitness
• repeat until the terminating condition:
  – select individuals to reproduce // better fitness → better
    // chance to be selected
  – mate pairs at random
  – with probability crossover_prob, apply the crossover operator
  – with probability mutation_prob, apply the mutation operator
  – evaluate each individual's fitness
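A toy but runnable Java sketch of this loop on the classic OneMax problem (maximize the number of 1-bits in a chromosome); the population size, rates, tournament selection and one-point crossover are illustrative choices, not the course's prescription:

import java.util.Random;

public class GeneticAlgorithm {
    static final int LEN = 16, POP = 20;
    static Random rnd = new Random(0);

    static int fitness(boolean[] c) { int f = 0; for (boolean b : c) if (b) f++; return f; }

    // Tournament selection: fitter individuals are more likely to reproduce.
    static boolean[] select(boolean[][] pop) {
        boolean[] a = pop[rnd.nextInt(POP)], b = pop[rnd.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][LEN];            // random initial population
        for (boolean[] c : pop) for (int i = 0; i < LEN; i++) c[i] = rnd.nextBoolean();
        for (int gen = 0; gen < 100; gen++) {
            boolean[][] next = new boolean[POP][];
            for (int k = 0; k < POP; k++) {
                boolean[] p1 = select(pop), p2 = select(pop);
                boolean[] child = new boolean[LEN];
                int cut = rnd.nextInt(LEN);                 // one-point crossover
                for (int i = 0; i < LEN; i++) child[i] = i < cut ? p1[i] : p2[i];
                if (rnd.nextDouble() < 0.05)                // mutation: flip one random bit
                    child[rnd.nextInt(LEN)] ^= true;
                next[k] = child;
            }
            pop = next;
        }
        int best = 0;
        for (boolean[] c : pop) best = Math.max(best, fitness(c));
        System.out.println("best fitness: " + best);        // typically 16
    }
}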
78
Exercise
• Q: Is there a danger of a local maximum in GA? How does
the algorithm try to avoid it?
• A: Via the mutation operator, which inserts randomization
into the algorithm.
• Q: If the start temperature in SA is very close to 0:
  – How will the algorithm behave?
  – What problem will it cause?
  – How can we partially solve it?
• A:
  – Like Greedy Search with no closed list.
  – It will get stuck on the first local maximum.
  – Random restarts.

79
Exercise – Cont.
• Q: Solve the Traveling Salesman Problem using:
– Simulated annealing (SA)
– Genetic Algorithm (GA).
• A:
– For both algorithms a state is a vector which represents
the order in which the salesman visits the cities.
– The state value/fitness is the total distance traveled.
– State expansion/mutation swaps the order of two cities in
the path.

80
Exercise – Cont.
• GA:
  – crossover: “greedy crossover” [Grefenstette, 1985]:
  – GreedyCrossover(vector v1, vector v2)
    1. Set vector res = v1[0] // v1 and v2 are chosen
       randomly
    2. Repeat until |res| = number of cities
       1. Select the closest city to res[i] from v1[i+1], v2[i+1]
          which is not already in res.
       2. If not possible, randomly select a city which is not in res.

81
Artificial Intelligence

Lesson 5
82
Search Algorithms Hierarchy
Global
• Informed: Greedy, A*, IDA*
• Uninformed: DFS, BFS, IDS, Uniform Cost

Local
• Hill Climbing, Random Walk, Mixed Walk
• GSAT, Mixed GSAT
• Simulated Annealing
83
Exercise
• The data structures used to implement the
open list in BFS, DFS and Best-FS:

Algorithm | Open list
BFS | Queue
DFS | Stack
Best-FS (Greedy, A*, Uniform-Cost) | Priority Queue
84
Exercise – Cont.
• If there is no solution, A* will explore the
whole graph [yes]
• An admissible heuristic function h(n) will
always return smaller values than the real
distance to the goal [no: h(n) ≤ h*(n)]
• h, h' admissible ⇒ A* will expand the same
number of nodes with both functions [no]

85
Artificial Intelligence

Lesson 6
(From Russell & Norvig)
86
Games- Outline
• Optimal decisions
• α-β pruning
• Imperfect, real-time decisions

87
Games vs. search problems
• "Unpredictable" opponent  specifying a
move for every possible opponent reply

• Time limits  unlikely to find goal, must


approximate

88
Game tree (2-player,
deterministic, turns)

89
Minimax
• Perfect play for deterministic games
• Idea: choose move to position with highest minimax value
= best achievable payoff against best play
• E.g., 2-ply game:

90
Minimax algorithm
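The slide's figure (the minimax pseudocode) did not survive extraction; in its place, a minimal Java sketch over an explicit game tree, evaluated on the classic 2-ply example from the previous slide:

import java.util.List;

public class Minimax {
    // Game tree: internal nodes list their children; leaves carry a utility.
    record Node(double utility, List<Node> children) {
        static Node leaf(double u) { return new Node(u, List.of()); }
        static Node inner(Node... cs) { return new Node(0, List.of(cs)); }
    }

    // Returns the minimax value of n; max is true when it is MAX's turn.
    static double minimax(Node n, boolean max) {
        if (n.children().isEmpty()) return n.utility();       // terminal: utility
        double best = max ? -1e18 : 1e18;
        for (Node c : n.children()) {
            double v = minimax(c, !max);
            best = max ? Math.max(best, v) : Math.min(best, v);
        }
        return best;
    }

    public static void main(String[] args) {
        // The classic 2-ply example: a MAX root over three MIN nodes.
        Node root = Node.inner(
            Node.inner(Node.leaf(3), Node.leaf(12), Node.leaf(8)),
            Node.inner(Node.leaf(2), Node.leaf(4), Node.leaf(6)),
            Node.inner(Node.leaf(14), Node.leaf(5), Node.leaf(2)));
        System.out.println(minimax(root, true));              // 3.0
    }
}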

91
Properties of minimax
• Complete? (= will not run forever) Yes (if the tree is finite)

• Optimal? (= will find the optimal response) Yes (against an
optimal opponent)

• Time complexity? O(b^m)

• Space complexity? O(b·m) (depth-first exploration), plus O(b·m)
for saving the optimal response

• For chess, b ≈ 35, m ≈ 100 for "reasonable" games
→ an exact solution is completely infeasible
92
α-β pruning example
(figure: a step-by-step α-β pruning animation, shown over five slides; not reproduced)
Properties of α-β
• Pruning does not affect the final result

• Good move ordering improves the effectiveness of pruning

• With "perfect ordering" on a binary tree, time complexity =
O(b^(m/2))
→ doubles the depth of search

• A simple example of the value of reasoning about which
computations are relevant (a form of metareasoning)

98
Why is it called α-β?
• α is the value of the best
(i.e., highest-value)
choice found so far at
any choice point along
the path for MAX

• If v is worse than α, MAX
will avoid it
→ prune that branch

• Define β similarly for
MIN
99
The α-β algorithm
(figure: the α-β pseudocode from Russell & Norvig, shown over two slides; not reproduced)
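In place of the missing pseudocode figures, a minimal Java sketch of minimax with α-β pruning, on the same illustrative 2-ply tree as the minimax sketch above:

import java.util.List;

public class AlphaBeta {
    record Node(double utility, List<Node> children) {
        static Node leaf(double u) { return new Node(u, List.of()); }
        static Node inner(Node... cs) { return new Node(0, List.of(cs)); }
    }

    // Minimax with α-β pruning. alpha: best value MAX can guarantee so far;
    // beta: best for MIN. A branch is cut off once alpha >= beta.
    static double ab(Node n, double alpha, double beta, boolean max) {
        if (n.children().isEmpty()) return n.utility();
        double v = max ? -1e18 : 1e18;
        for (Node c : n.children()) {
            double w = ab(c, alpha, beta, !max);
            if (max) { v = Math.max(v, w); alpha = Math.max(alpha, v); }
            else     { v = Math.min(v, w); beta  = Math.min(beta, v); }
            if (alpha >= beta) break;            // prune the remaining siblings
        }
        return v;
    }

    public static void main(String[] args) {
        Node root = Node.inner(
            Node.inner(Node.leaf(3), Node.leaf(12), Node.leaf(8)),
            Node.inner(Node.leaf(2), Node.leaf(4), Node.leaf(6)),   // pruned after the 2
            Node.inner(Node.leaf(14), Node.leaf(5), Node.leaf(2)));
        System.out.println(ab(root, -1e18, 1e18, true));            // 3.0
    }
}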

101
Resource limits
Suppose we have 100 secs and explore 10^4 nodes/sec
→ 10^6 nodes per move
Standard approach:
• cutoff test: e.g., a depth limit
(perhaps add quiescence search: an additional "grade" for each
node)
A non-quiescent position, in the context of games, is one in the middle of an
exchange of pieces. If the search algorithm stops searching at such a point, there is
a high chance it will return a wrong value, since the exchange of pieces may well
continue. The solution is to keep deepening that branch of the tree until a quiet
position, with no pieces being exchanged, is reached.

• evaluation function: estimated desirability of a position
102
Evaluation functions
• For chess, typically linear weighted sum of features
Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)

• e.g., w1 = 9 with
f1(s) = (number of white queens) – (number of black
queens), etc.

103
Cutting off search
MinimaxCutoff is identical to MinimaxValue except:
1. "Terminal?" is replaced by "Cutoff?"
2. Utility is replaced by Eval

Does it work in practice?
b^m = 10^6, b = 35 → m ≈ 4

4-ply lookahead is a hopeless chess player!
– 4-ply ≈ human novice
– 8-ply ≈ typical PC, human master
– 12-ply ≈ Deep Blue, Kasparov

104
Deterministic games in practice
• Checkers: Chinook ended the 40-year reign of human world champion
Marion Tinsley in 1994. It used a precomputed endgame database
defining perfect play for all positions involving 8 or fewer pieces on
the board, a total of 444 billion positions.

• Chess: Deep Blue defeated human world champion Garry Kasparov in
a six-game match in 1997. Deep Blue searches 200 million positions
per second, uses a very sophisticated evaluation, and undisclosed
methods for extending some lines of search up to 40 ply.

• Othello: human champions refuse to compete against computers, which
are too good.

• Go: human champions refuse to compete against computers, which are
too bad. In Go, b > 300, so most programs use pattern knowledge bases
to suggest plausible moves.

105
Summary
• Games are fun to work on!

• They illustrate several important points about AI

• perfection is unattainable  must approximate

• good idea to think about what to think about

106
Artificial Intelligence

Lesson 7
107
Planning
• Traditional search methods do not fit large,
real-world problems: states must be defined
specifically, not in general.
• We want to use general knowledge
• We need general heuristics
• Problem decomposition

108
STRIPS Algorithm
• STRIPS – stands for STanford Research
Institute Problem Solver (1971).
• The STRIPS idea: work backward from the goal
toward the start state.
• See example (pdf).

109
STRIPS – Representation
• States and goal – sentences in FOL.
• Operators – are composed of 3 parts:
  – Operator name
  – Preconditions – a sentence describing the conditions
    that must hold so that the operator can be executed.
  – Effect – a sentence describing how the world changes
    as a result of executing the operator. It has 2 parts:
    • Add-list
    • Delete-list
  – Optionally, a set of (simple) variable constraints

110
Example – Blocks world
Basic operations
– stack(X,Y): put block X on block Y
– unstack(X,Y): remove block X from block Y
– pickup(X): pickup block X
– putdown(X): put block X on the table

B
A C
TABLE
111
Example – Blocks world (Cont.)
operator(stack(X,Y),
  Precond [holding(X), clear(Y)],
  Add [handempty, on(X,Y), clear(X)],
  Delete [holding(X), clear(Y)],
  Constr [X\==Y, Y\==table, X\==table]).

operator(unstack(X,Y),
  Precond [on(X,Y), clear(X), handempty],
  Add [holding(X), clear(Y)],
  Delete [handempty, clear(X), on(X,Y)],
  Constr [X\==Y, Y\==table, X\==table]).

operator(pickup(X),
  Precond [ontable(X), clear(X), handempty],
  Add [holding(X)],
  Delete [ontable(X), clear(X), handempty],
  Constr [X\==table]).

operator(putdown(X),
  Precond [holding(X)],
  Add [ontable(X), handempty, clear(X)],
  Delete [holding(X)],
  Constr [X\==table]).
112
STRIPS Pseudo code
STRIPS(stateList start, stateList goals)
1. Set state = start
2. Set plan = []
3. Set stack = goals
4. while stack is not empty do
1. STRIPS-Step()
5. Return plan
113
STRIPS Pseudo code – Cont.
STRIPS-Step()
switch top of stack t :
1. case t is a goal that matches state:
1. pop stack
2. case t is an unsatisfied conjunctive-goal:
1. select an ordering for the sub-goals
2. push the sub-goals into stack

114
STRIPS Pseudo code – Cont.
3. case t is a simple unsatisfied goal
1. choose an operator op whose add-list matches t
2. replace t with op
3. push preconditions of op to stack
4. case t is an operator
1. pop stack
2. state = state + t.add-list - t.delete-list
3. plan = [plan | t]

115
Versions and Decision points
• 3 decision points:
  – How to order the sub-goals?
  – Which operator to choose?
  – Which object to bind to a variable?

• Different versions:
  – Backtracking? (at each decision point)
  – Lifted: a variable may remain on the stack with no value, vs.
  – Grounded: a value is assigned to each variable

• The original STRIPS:
  – Backtracks only on the order of sub-goals
  – Lifted

116
Artificial Intelligence

Lesson 8
117
Outline
• Inductive learning
• Decision tree learning

118
Learning
• Learning is essential for unknown environments,
  – i.e., when the designer lacks omniscience

• Learning is useful as a system construction method,
  – i.e., expose the agent to reality rather than trying to
    write it down

• Learning modifies the agent's decision
mechanisms to improve performance
119
Learning Paradigms
• Supervised Learning: with a “supervisor”.
Inputs, together with their outputs, are supplied
by the “supervisor”.
• Reinforcement Learning: with a “reward” for a
good action or a “penalty” for a bad action.
Self learning.
• Unsupervised Learning: tries to learn, but
whether what was learned is correct is
unknown.
120
Inductive learning
• Simplest form: learn a function from examples

• f is the target function,


An example is a pair (x, f(x))

• Problem: find a hypothesis h


such that h ≈ f
given a training set of examples

• This is a highly simplified model of real learning:


– Ignores prior knowledge
– Assumes examples are given

121
Inductive learning method
• Construct/adjust h to agree with f on training set
• (h is consistent if it agrees with f on all examples)
• E.g., curve fitting:

122
Inductive learning method

• Ockham’s razor: prefer the simplest hypothesis consistent with


data
• There is a tradeoff between the expressiveness of a hypothesis space
and the complexity of finding a simple, consistent hypothesis within it
124
Learning decision trees
Problem: decide whether to wait for a table at a restaurant,
based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)
125
Attribute-based representations
• Examples described by attribute values (Boolean, discrete, continuous)
• E.g., situations where I will/won't wait for a table:
(table: the 12 restaurant examples with their attribute values; not reproduced)
• Classification of examples is positive (T) or negative (F)


126
Decision trees
• One possible representation for hypotheses
• E.g., here is the “true” tree for deciding whether to wait:

127
Expressiveness
• Decision trees can express any function of the input attributes.
• E.g., for Boolean functions, truth table row → path to leaf:

• Trivially, there is a consistent decision tree for any training set with one path
to leaf for each example (unless f nondeterministic in x) but it probably won't
generalize to new examples

• Prefer to find more compact decision trees


128
Decision tree learning
• Aim: find a small tree consistent with the training examples
• Idea: (recursively) choose "most significant" attribute as root of
(sub)tree

129
Choosing an attribute
• Idea: a good attribute splits the examples into subsets that
are (ideally) "all positive" or "all negative"

• Patrons? is a better choice

130
Using information theory
• To implement Choose-Attribute in the DTL
algorithm
• Information Content of an answer (Entropy):
  I(P(v1), …, P(vn)) = Σ_{i=1..n} −P(vi)·log2 P(vi)
• For a training set containing p positive examples
and n negative examples:
  I(p/(p+n), n/(p+n)) = −(p/(p+n))·log2(p/(p+n)) − (n/(p+n))·log2(n/(p+n))

131
Information gain
• A chosen attribute A divides the training set E into subsets
E1, …, Ev according to their values for A, where A has v
distinct values.
  remainder(A) = Σ_{i=1..v} ((pi + ni)/(p + n)) · I(pi/(pi+ni), ni/(pi+ni))
• Information Gain (IG), or the reduction in entropy from the
attribute test:
  IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
• Choose the attribute with the largest IG
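A small Java sketch of these formulas, checked on the restaurant example of the next slide (the Patrons split counts — None (0+, 2−), Some (4+, 0−), Full (2+, 4−) — are taken from that slide):

public class InfoGain {
    // Entropy of a Boolean classification with p positives and n negatives.
    static double I(double p, double n) {
        double total = p + n;
        return term(p / total) + term(n / total);
    }

    static double term(double q) {                 // -q log2 q, with 0 log 0 = 0
        return q == 0 ? 0 : -q * (Math.log(q) / Math.log(2));
    }

    public static void main(String[] args) {
        // 12 examples, 6 positive and 6 negative => I(6/12, 6/12) = 1 bit.
        double remainder = 2.0 / 12 * I(0, 2) + 4.0 / 12 * I(4, 0) + 6.0 / 12 * I(2, 4);
        System.out.println("IG(Patrons) = " + (I(6, 6) - remainder));   // ~0.541
    }
}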
132
Information gain
For the training set, p = n = 6, I(6/12, 6/12) = 1 bit

Consider the attributes Patrons and Type (and others too):

IG(Patrons) = 1 − [2/12·I(0,1) + 4/12·I(1,0) + 6/12·I(2/6, 4/6)] ≈ 0.541 bits
IG(Type) = 1 − [2/12·I(1/2,1/2) + 2/12·I(1/2,1/2) + 4/12·I(2/4,2/4) + 4/12·I(2/4,2/4)] = 0 bits

Patrons has the highest IG of all attributes and so is chosen by the DTL
algorithm as the root
133
Example contd.
• Decision tree learned from the 12 examples:

• Substantially simpler than the “true” tree: a more complex
hypothesis isn't justified by such a small amount of data
134
Performance measurement
• How do we know that h ≈ f ?
1. Use theorems of computational/statistical learning theory
2. Try h on a new test set of examples
(use same distribution over example space as training set)
Learning curve = % correct on test set as a function of training set size.

A learning curve for the


decision tree algorithm on
100 randomly generated
examples in the restaurant
domain. The graph
summarizes 20 trials.

135
Summary
• Learning needed for unknown environments, lazy
designers
• Learning agent = performance element + learning
element
• For supervised learning, the aim is to find a simple
hypothesis approximately consistent with training
examples
• Decision tree learning using information gain
• Learning performance = prediction accuracy
measured on test set

136
Refreshing our memory:
PROBABILITY

Lesson 9
137
Unconditional Probability
• Unconditional or prior probability that a
proposition A is true: P(A)
– In the absence of any other information, the probability
to event A is P(A).
– Probability of application accepted:
P(application-accept) = 0.2
• Propositions include random variables X
– Each random variable X has domain of values:
{red, blue, …green}
– P(X=Red) means the probability of X to be Red

138
Unconditional Probability
• If application-accept is binary random variable ->
values = {true,false}
– P(application-accept) same as P(app-accept = True)
– P(~app-accept) same as P(app-accept = False)

• If Status-of-application domain:
{reject, accept, wait-list}
– We are allowed to make statements such as:
P(status-of-application = reject) = 0.2
P(status-of-application = accept) = 0.3
P(status-of-application = wait-list) = 0.5
139
Conditional Probability
• What if agent has some evidence?
– E.g. agent has a friend who has applied with a much weaker
qualification, and that application was accepted?

• Posterior or conditional probability


P(A|B) probability of A given all we know is B
– P(X=accept|Weaker application was accepted)
– If we know B and also know C, then P(A| B  C)

140
Product rule

– P(A  B) = P(A|B)*P(B)
– P(A  B) = P(B|A)*P(A)

– P(A|B) = P(A  B) / P(B)


– P(B|A) = P(A  B) / P(A) A B

141
Probability Distribution
• The probabilities of all the possible values of X are denoted by
P(X)
  – Note that P is in bold
  – In our example:
    X = Status-of-application
    Xi ∈ {reject, accept, wait-list}
    P(X) = <0.2, 0.3, 0.5>

• Σ_i P(X = xi) = 1

142
Joint Probability Distribution

• A joint probability distribution is a table
  – It assigns probabilities to all possible assignments of values for
    combinations of variables

• P(X1, X2, …, Xn) assigns probabilities to all possible
assignments of values to the variables X1, X2, …, Xn

143
Joint Probability Distribution
• X1 = Status of your application
• X2 = Status of your friend's application
• Then P(X1, X2):

             X1=Reject  X1=Accept  X1=Wait-list
X2=Reject      0.15       0.3        0.02
X2=Accept      0.3        0.02       0.09
X2=Wait-list   0.02       0.09       0.01

144
Bayes’ Rule
• Given that
  – P(A ∧ B) = P(A|B)·P(B)
  – P(A ∧ B) = P(B|A)·P(A)
  ⇒ P(B|A) = P(A|B)·P(B) / P(A)
• Determine P(B|A) given P(A|B), P(B) and P(A)
• Generalizing to some background evidence e:
  P(Y | X, e) = P(X | Y, e) · P(Y | e) / P(X | e)

145
Bayes’ Rule Example
• S: Proposition that patient has stiff neck
• M: Proposition that patient has meningitis
• Meningitis causes stiff-neck, 50% of the time

• Given:
  – P(S | M) = 0.5
  – P(M) = 1/50,000
  – P(S) = 1/20
  – P(M|S) = P(S|M) · P(M) / P(S) = 0.0002

• If a patient complains about a stiff neck,
P(meningitis) is only 0.0002
146
Bayes’ Rule
• How can it help us?
– P(A|B) may be causal knowledge, P(B|A) diagnostic knowledge
– E.g., A is symptom, B is disease

• Diagnostic knowledge may vary:


– Robustness by allowing P(B | A) to be computed from others

147
Bayes’ Rule Use
• P(S | M) is causal knowledge, does not change
– It is “model based”
– It reflects the way meningitis works

• P(M | S) is diagnostic; tells us likelihood of M given


symptom S
– Diagnostic knowledge may change with circumstance, thus helpful
to derive it
– If there is an epidemic, probability of Meningitis goes up; rather
than again observing P(M | S), we can compute it

148
Computing the denominator: P(S)

We wish to avoid computing the denominator in
Bayes' rule
– It may be hard to obtain
– We introduce 2 different techniques to compute P(S)
  (or to avoid computing it)

149
Computing the denominator:
#1 approach - compute relative likelihoods:
• If M (meningitis) and W(whiplash) are two possible
explanations:
– P(M|S) = P(S| M) * P(M) / P(S)
– P(W|S) = P(S| W) * P(W)/ P(S)
• P(M|S)/P(W|S) = P(S|M) * P(M) / P(S| W) * P(W)
• Disadvantages:
– Not always enough
– Possibility of many explanations

150
Computing the denominator:
#2 approach - Using M & ~M:
• Checking the probability of M, ~M when S
– P(M|S) = P(S| M) * P(M) / P(S)
– P(~M|S) = P(S| ~M) * P(~M)/ P(S)
• P(M|S) + P(~M | S) = 1 (must sum to 1)
– [P(S|M)*P(M)/ P(S) ] +
[P(S|~M) * P(~M)/P(S)] = 1
– P(S|M) * P(M) + P(S|~M) * P(~M) = P(S)
• Calculate P(S) in this way…

151
Computing the denominator
The #2 approach is actually - normalization:
• 1/P(S) is a normalization constant
– Must ensure that the computed probability values sum to 1
– For instance: P(M|S)+P(~M|S) must sum to 1
• Compute:
– (a) P(S|~M) * P(~M)
– (b) P(S | M) * P (M)
– (a) and (b) are numerators, and give us “un-normalized
values”
– We could compute those values and then scale them so that
they sum to 1
152
Simple Example
• Suppose two identical boxes
• Box1:
– colored red from inside
– has 1/3 black balls, 2/3 red balls
• Box2:
– colored black from inside
– has 1/3 red balls, 2/3 black balls
• We select one box at random; we can't tell how it is colored
inside.
• What is the probability that Box is red inside?

153
Applying Bayes’ Rule
What if we were to select a ball at random from the box, and it is red.
Does that change the probability?
P(Red-box | Red-ball) = P(Red-ball | Red-box) · P(Red-box) / P(Red-ball)
                      = 2/3 · 0.5 / P(Red-ball)
How do we calculate P(Red-ball)?

P(Black-box | Red-ball) = P(Red-ball | Black-box) · P(Black-box) / P(Red-ball)
                        = 1/3 · 0.5 / P(Red-ball)

Thus, by our approach #2:
  2/3 · 0.5 / P(Red-ball) + 1/3 · 0.5 / P(Red-ball) = 1
Thus P(Red-ball) = 0.5, and P(Red-box | Red-ball) = 2/3
154
Absolute and Conditional Independence
• Absolute: P(X|Y) = P(X) or P(X  Y) = P(X)P(Y)
• Conditional: P(A  B | C) = P(A | C) P(B | C)
• P(A| B  C)
– If A and B are conditionally independent given C Then,
probability of A is not dependent on B
– P(A| B  C) = P(A| C)
• E.g. Two independent sensors S1 and S2 and a jammer J1
– P(Si) = Probability Si can read without jamming
– P(S1 | J1  S2) = P(S1 | J1)

155
Combining Evidence
• Example:
– S: Proposition that patient has stiff neck
– H: Proposition that patient has severe headache
– M: Proposition that patient has meningitis
– Meningitis causes stiff-neck, 50% of the time
– Meningitis causes head-ache, 70% of the time

• probability of Meningitis should go up, if both symptoms


reported
• How to combine such symptoms?

156
Combining Evidence
• P(C| A  B) = P(C  A  B) / P ( A  B)

• Numerator:
– P(C  A  B) = P(B | A  C) * P(A  C)
= P(B | C) * P(A  C)
= P(B | C) * P(A | C) * P (C)

• Going back to our example:


P(M | S  H) = P(S| M) * P(H| M) * P(M)
P( S  H)

157
Artificial Intelligence

Lesson 10
(From Russell & Norvig)
158
Introduction
• Why ANN
Try to imitate the computational abilities of the human brain.
Some tasks can be done easily (effortlessly) by humans but are hard for
conventional paradigms on a Von Neumann machine with an algorithmic
approach:
• Pattern recognition (e.g., recognizing old friends, or simply a hand-
written character)
• Content-addressable recall (ASSOCIATIVE MEMORIES)
• Approximate, common-sense reasoning (e.g., driving in busy streets,
deciding what to do when we miss the bus)
These tasks are often ill-defined and experience-based; logic is hard to apply to them.

159
Introduction
Von Neumann machine | Human Brain
One or a few high-speed (ns) processors with considerable computing power | A large number (10^11) of low-speed (ms) processors with limited computing power
One or a few shared high-speed buses for communication | A large number (10^15) of low-speed connections
Sequential memory access by address | Content-addressable recall (CAM)
Problem-solving knowledge is separated from the computing component | Problem-solving knowledge resides in the connectivity of neurons
Hard to be adaptive | Adaptation by changing the connectivity
160
Biological neural activity
– Each neuron has a body, an axon, and many dendrites
  • It can be in one of two states: firing and rest.
  • A neuron fires if the total incoming stimulus exceeds its threshold
– Synapse: the thin gap between the axon of one neuron and a dendrite
  of another.
  • Signal exchange
  • Synaptic strength/efficiency
161
Introduction
• What is an (artificial) neural network
– A set of nodes (units, neurons, processing elements)
• Each node has input and output
• Each node performs a simple computation by its node
function
– Weighted connections between nodes
• Connectivity gives the structure/architecture of the net
• What can be computed by a NN is primarily determined
by the connections and their weights
– A very much simplified version of networks of
neurons in animal nerve systems
162
ANN Neuron Models
• Each node has one or more
inputs from other nodes, and
one output to other nodes
• Input/output values can be
  – Binary {0, 1}
  – Bipolar {−1, 1}
  – Continuous
• All inputs to a node come in
at the same time and remain
activated until the output is
produced
• Weights are associated with links
• f(net) is the node function;
net = Σ_{i=1..n} wi·xi (the weighted input summation) is the most popular
(figure: a general neuron model; not reproduced)
163
Node Function
• Identity function : f (net )  net.
• Constant function : f (net )  c.
• Step (threshold) function

Step function
where c is called the threshold
• Ramp function

Ramp function
164
Node Function
• Sigmoid function
  – S-shaped
  – Continuous and everywhere
    differentiable
  – Rotationally symmetric about
    some point (net = c)
  – Asymptotically approaches the
    saturation points
  – Examples: when y = 0 and z = 0:
    a = 0, b = 1, c = 0.
    A larger x gives a steeper curve
(figure: plot of a sigmoid function; not reproduced)
165
Perceptron
• The purpose: classifying examples.
• A perceptron with N input lines gets an
example (x1, …, xn) as input, where each xi is
an attribute value.
• Result = f(x1, …, xn)
• If the result > threshold, return 1; otherwise return 0.
• Note: a perceptron works only for functions that
are linearly separable…
166
Perceptrons
• A simple perceptron
– Structure:
• Single output node with threshold function
• n input nodes with weights wi, {i = 1 to n}
– To classify input patterns into one of the two classes
(depending on whether output = 0 or 1)
– Example: input patterns: (x1, x2)
• Two groups of input patterns
(0, 0) (0, 1) (1, 0) (-1, -1);
(2.1, 0) (0, -2.5) (1.6, -1.6)
• Can be separated by a line on the (x1, x2) plane x1 - x2 = 2
• Classification by a perceptron with
w1 = 1, w2 = -1, threshold = 2
167
Perceptrons

(-1, -1)
(1.6, -1.6)

• The step function is:


• 1, if x>2

• F(x)= {
• 0, if x<2

• Implement threshold by a node x0


– Constant output 1
– Weight w0 = - threshold
– A common practice in NN design

168
Perceptrons
• Linear separability
– A set of (2D) patterns (x1, x2) of two classes is linearly
separable if there exists a line on the (x1, x2) plane
• w0 + w1 x1 + w2 x2 = 0
• Separates all patterns of one class from the other class
– A perceptron can be built with
• 3 input x0 = 1, x1, x2 with weights w0, w1, w2
– n dimensional patterns (x1,…, xn)
• Hyperplane w0 + w1 x1 + w2 x2 +…+ wn xn = 0 dividing the
space into two regions
– Can we get the weights from a set of sample patterns?
• If the problem is linearly separable, then YES (by
perceptron learning)
169
• Examples of linearly separable classes
– Logical AND function, bipolar patterns
  (x: class I, output = 1; o: class II, output = −1):
  x1  x2  output      decision boundary: −1 + x1 + x2 = 0
  −1  −1  −1          weights: w1 = 1, w2 = 1, w0 = −1
  −1   1  −1
   1  −1  −1
   1   1   1
– Logical OR function, bipolar patterns:
  x1  x2  output      decision boundary: 1 + x1 + x2 = 0
  −1  −1  −1          weights: w1 = 1, w2 = 1, w0 = 1
  −1   1   1
   1  −1   1
   1   1   1
170
Perceptron Learning Algorithm
1. Initialize weights and threshold:
Set wi(t), (0 <= i <= n), to be the weight i at time t, and ø to be the threshold
value in the output node.
Set w0 to be -ø, the bias, and x0 to be always 1.
Set wi(0) to small random values, thus initializing the weights and threshold.
2. Present input and desired output
Present the input x0, x1, x2, ..., xn and the desired output d(t). (x0 is always 1.)
3. Calculate the actual output:
y(t) = fh[w0(t)x0(t) + w1(t)x1(t) + .... + wn(t)xn(t)]
4. Adapts weights
wi(t+1) = wi(t) + η[d(t) - y(t)]xi(t) , where 0 <= η <= 1 is a positive gain
function that controls the adaption rate.
• Steps 3 and 4. are repeated until the iteration error is less than a user-specified
error threshold or a predetermined number of iterations have been completed.
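A runnable Java sketch of steps 1-4 on the linearly separable AND function; the initial weights, learning rate and epoch count are illustrative assumptions:

public class Perceptron {
    // Step function at threshold 0 (the bias weight w[0] plays the role of -threshold).
    static int output(double[] w, double[] x) {
        double net = 0;
        for (int i = 0; i < w.length; i++) net += w[i] * x[i];
        return net > 0 ? 1 : 0;
    }

    public static void main(String[] args) {
        double[][] xs = {{1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}};  // x[0] is the bias input 1
        int[] d = {0, 0, 0, 1};                   // desired outputs (AND)
        double[] w = {0.1, -0.2, 0.05};           // small initial weights
        double eta = 0.1;                         // learning rate
        for (int epoch = 0; epoch < 50; epoch++)
            for (int k = 0; k < xs.length; k++) {
                int y = output(w, xs[k]);
                for (int i = 0; i < w.length; i++)
                    w[i] += eta * (d[k] - y) * xs[k][i];   // wi(t+1) = wi(t) + eta*(d - y)*xi
            }
        for (double[] x : xs) System.out.println(output(w, x));  // 0 0 0 1
    }
}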
171
Perceptron Learning
• Note:
  – It is supervised learning
  – Learning occurs only when a sample input is misclassified
    (error driven)

• Termination criteria: learning stops when all samples are
correctly classified
  – Assuming the problem is linearly separable
  – Assuming the learning rate (η) is sufficiently small

172
Perceptron Learning
Choice of learning rate:
– If η is too large: the existing weights are overtaken by η[d(t) − y(t)]
– If η is too small (≈ 0): very slow to converge
– Common choice: η = 0.1
• Non-numeric input:
  – Different encoding schemas, e.g., Color = (red, blue, green, yellow):
    (0, 0, 1, 0) encodes “green”

173
Network Architecture
• MLP: Feedforward Networks
– A connection is allowed from a node in layer i only to
nodes in layer i + 1.
– Most widely used architecture.

Conceptually, nodes
at higher levels
successively
abstract features
from preceding
layers

174
Perceptron Learning Quality
– Generalization: can a trained perceptron correctly classify
patterns not included in the training samples?
• Common problem for many NN learning models
– Depends on the quality of training samples selected.
– Also to some extent depends on the learning rate and
initial weights
– How can we know the learning is ok?
• Reserve a few samples for testing

175
Linear Separability Again
• Examples of linearly inseparable classes
– Logical XOR (exclusive OR) function, bipolar patterns
  (x: class I, output = 1; o: class II, output = −1):
  x1  x2  output
  −1  −1  −1
  −1   1   1
   1  −1   1
   1   1  −1
No line can separate these two classes, as can be seen from the
fact that the following linear inequality system has no solution:
  (1) w0 − w1 − w2 < 0
  (2) w0 − w1 + w2 ≥ 0
  (3) w0 + w1 − w2 ≥ 0
  (4) w0 + w1 + w2 < 0
because we get w0 < 0 from (1) + (4), and w0 ≥ 0 from
(2) + (3), which is a contradiction.
176
– XOR can be solved by a more
complex network with hidden
units
(figure: a 2-2-1 network; x1 and x2 feed hidden units z1 and z2 with weights ±2 and threshold 1; z1 and z2 feed output Y with weights 2 and threshold 0; not reproduced)

  input (x1, x2) → hidden (z1, z2) → output y
  (−1, −1) → (−1, −1) → −1
  (−1, 1) → (−1, 1) → 1
  (1, −1) → (1, −1) → 1
  (1, 1) → (−1, −1) → −1
177
MultiLayer NN
– Perceptron extensions:
1. A hidden layer in addition to the input and output layers
2. In the output layer it is possible to have more than
   one node, e.g., for character classification
3. Activation function: sigmoid functions rather than a
   regular step function
4. The functions can be different in each node but, in
   general, the same function is used for all the nodes
5. In the input layer it is possible to use a step function
178
MultiLayer NN-Purpose
• Example classification: it is possible to classify
more than 2 groups
• Function approximation: f: R^n → R^m.
  Input layer with n nodes; output layer with m nodes
• An MLP has much higher computational power than
a simple perceptron
• It can also handle functions that are not
linearly separable.
179
Multilayer Network Learning Algorithm

180
Backpropagation example

(figure: a 2-2-1 network; inputs x1, x2 connect to hidden nodes x3, x4 via w13, w14, w23, w24; x3, x4 connect to output x5 via w35, w45; not reproduced)

Sigmoid as the activation function with x = 3:

• g(in) = 1/(1 + e^(−3·in))
• g'(in) = 3·g(in)·(1 − g(in))
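A runnable Java sketch of one forward and one backward pass of this network (with the bias nodes introduced on the next slide); it reproduces, up to rounding, the "First Example" numbers worked out on the slides below:

public class BackpropStep {
    // g(x) = 1/(1+e^(-3x)); g'(in) = 3 g(in) (1 - g(in)), as on the slide.
    static double g(double in) { return 1 / (1 + Math.exp(-3 * in)); }
    static double gp(double in) { double y = g(in); return 3 * y * (1 - y); }

    public static void main(String[] args) {
        // Initial weights from the training-set slide; x0 and x6 are bias inputs fixed at 1.
        double w03 = 0.03, w04 = 0.04, w13 = 0.13, w14 = 0.14,
               w23 = -0.23, w24 = -0.24, w65 = 0.65, w35 = 0.35, w45 = 0.45;
        double a1 = 0, a2 = 0, target = 0;               // first training example: (0,0) -> 0

        // Forward pass
        double in3 = w03 + w13 * a1 + w23 * a2, a3 = g(in3);
        double in4 = w04 + w14 * a1 + w24 * a2, a4 = g(in4);
        double in5 = w65 + w35 * a3 + w45 * a4, a5 = g(in5);

        // Backward pass: delta of the output node, then of the hidden nodes
        double d5 = gp(in5) * (target - a5);
        double d3 = gp(in3) * w35 * d5;
        double d4 = gp(in4) * w45 * d5;

        // Weight updates: learning rate 0.3 for the output layer, 0.1 for the hidden layer
        w65 += 0.3 * 1 * d5;  w35 += 0.3 * a3 * d5;  w45 += 0.3 * a4 * d5;
        w03 += 0.1 * 1 * d3;  w04 += 0.1 * 1 * d4;
        w13 += 0.1 * a1 * d3; w14 += 0.1 * a1 * d4;
        w23 += 0.1 * a2 * d3; w24 += 0.1 * a2 * d4;

        System.out.printf("a5=%.3f d5=%.3f w65=%.3f w35=%.3f w45=%.3f%n",
                          a5, d5, w65, w35, w45);        // a5≈0.961, d5≈-0.108, w65≈0.618, ...
    }
}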

181
Adding the threshold

(figure: the same network with bias nodes added: x0, fixed at 1, feeds x3 and x4 via w03 and w04; x6, fixed at 1, feeds x5 via w65; not reproduced)
182
Training Set
• Logical XOR (exclusive OR) function
x1 x2 output
0 0 0
0 1 1
1 0 1
1 1 0

• Choose random weights


• <w03,w04,w13,w14,w23,w24,w65,w35,w45> =
<0.03,0.04,0.13,0.14,-0.23,-0.24,0.65,0.35,0.45>

• Learning rate: 0.1 for the hidden layers, 0.3 for the output layer

183
First Example
• Compute the outputs
• a0 = 1 , a1= 0 , a2 = 0
• a3 = g(1*0.03 + 0*0.13 + 0*-0.23) = 0.522
• a4 = g(1*0.04 + 0*0.14 + 0*-0.24) = 0.530
• a6 = 1, a5 = g(0.65*1 + 0.35*0.522 + 0.45*0.530) = 0.961
• Calculate ∆5 = 3*g(1.0712)*(1-g(1.0712))*(0-0.961) = -0.108
• Calculate ∆6, ∆3, ∆4
• ∆6 = 3*g(1)*(1-g(1))*(0.65*-0.108) = -0.010
• ∆3 = 3*g(0.03)*(1-g(0.03))*(0.35*-0.108) = -0.028
• ∆4 = 3*g(0.04)*(1-g(0.04))*(0.45*-0.108) = -0.036
• Update weights for the output layer
• w65 = 0.65 + 0.3*1*-0.108 = 0.618
• w35 = 0.35 + 0.3*0.522*-0.108 = 0.333
• w45 = 0.45 + 0.3*0.530*-0.108 = 0.433
184
First Example (cont)
• Calculate ∆0, ∆1, ∆2
• ∆0 = 3*g(1)*(1-g(1))*(0.03*-0.028 + 0.04*-0.036) = -0.001
• ∆1 = 3*g(0)*(1-g(0))*(0.13*-0.028 + 0.14*-0.036) = -0.006
• ∆2 = 3*g(0)*(1-g(0))*(-0.23*-0.028 + -0.24*-0.036) = 0.011
• Update weights for the hidden layer
• w03 = 0.03 + 0.1*1*-0.028 = 0.027
• w04 = 0.04 + 0.1*1*-0.036 = 0.036
• w13 = 0.13 + 0.1*0*-0.028 = 0.13
• w14 = 0.14 + 0.1*0*-0.036 = 0.14
• w23 = -0.23 + 0.1*0*-0.028 = -0.23
• w24 = -0.24 + 0.1*0*-0.036 = -0.24

185
Second Example
• Compute the outputs
• a0 = 1, a1= 0 , a2 = 1
• a3 = g(1*0.027 + 0*0.13 + 1*-0.23) = 0.352
• a4 = g(1*0.036 + 0*0.14 + 1*-0.24) = 0.352
• a6 = 1, a5 = g(0.618*1 + 0.333*0.352 + 0.433*0.352) = 0.935
• Calculate ∆5 = 3*g(0.888)*(1-g(0.888))*(1-0.935) = 0.012
• Calculate ∆6, ∆3, ∆4
• ∆6 = 3*g(1)*(1-g(1))*(0.618*0.012) = 0.001
• ∆3 = 3*g(-0.203)*(1-g(-0.203))*(0.333*0.012) = 0.003
• ∆4 = 3*g(-0.204)*(1-g(-0.204))*(0.433*0.012) = 0.004
• Update weights for the output layer
• w65 = 0.618 + 0.3*1*0.012 = 0.623
• w35 = 0.333 + 0.3*0.352*0.012 = 0.334
• w45 = 0.433 + 0.3*0.352*0.012 = 0.434
186
Second Example (cont)
• Calculate ∆0, ∆1, ∆2
• Skipped, we do not use them
• Update weights for the hidden layer
• w03 = 0.027 + 0.1*1*0.003 = 0.027
• w04 = 0.036 + 0.1*1*0.004 = 0.036
• w13 = 0.13 + 0.1*0*0.003 = 0.13
• w14 = 0.14 + 0.1*0*0.004 = 0.14
• w23 = -0.23 + 0.1*1*0.003 = -0.23
• w24 = -0.24 + 0.1*1*0.004 = -0.24

187
Summary
• Single layer nets have limited representation power
(linear separability problem)

• Error driven seems a good way to train a net

• Multi-layer nets (or nets with non-linear hidden


units) may overcome linear inseparability problem

188
Artificial Intelligence

Lesson 11
(From Russell & Norvig)
189
Conditional probability
• Conditional or posterior probabilities
e.g., P(cavity | toothache) = 0.8
i.e., given that toothache is all I know

• Notation for conditional distributions:
  P(Cavity | Toothache) = a 2-element vector of 2-element vectors

• If we know more, e.g., cavity is also given, then we have
  P(cavity | toothache, cavity) = 1

• New evidence may be irrelevant, allowing simplification, e.g.,


P(cavity | toothache, sunny) = P(cavity | toothache) = 0.8

• This kind of inference, sanctioned by domain knowledge, is crucial

190
Inference by enumeration
• Start with the joint probability distribution:
(table: the Toothache/Catch/Cavity joint distribution; not reproduced)

• We can also compute conditional probabilities:
  P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache)
                         = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.4

191
Independence
• A and B are independent iff
  P(A|B) = P(A), or P(B|A) = P(B), or P(A, B) = P(A)·P(B)

  P(Toothache, Catch, Cavity, Weather)
  = P(Toothache, Catch, Cavity)·P(Weather)

• Absolute independence is powerful but rare

• Dentistry is a large field with hundreds of variables, none of which are
independent. What to do?

192
Conditional independence
• P(Toothache, Cavity, Catch) has 2³ − 1 = 7 independent entries

• If I have a cavity, the probability that the probe catches in it doesn't
depend on whether I have a toothache:
  (1) P(catch | toothache, cavity) = P(catch | cavity)

• The same independence holds if I haven't got a cavity:
  (2) P(catch | toothache, ¬cavity) = P(catch | ¬cavity)

• Catch is conditionally independent of Toothache given Cavity:
  P(Catch | Toothache, Cavity) = P(Catch | Cavity)

• Equivalent statements:
  P(Toothache | Catch, Cavity) = P(Toothache | Cavity)
  P(Toothache, Catch | Cavity) = P(Toothache | Cavity)·P(Catch | Cavity)

193
Bayesian networks
• A simple, graphical notation for conditional independence
assertions and hence for compact specification of full joint
distributions
• It describes how variables interact locally
• Local interactions chain together to give global, indirect
interactions

• Syntax:
– a set of nodes, one per variable
– a directed, acyclic graph (link ≈ "directly influences")
– a conditional distribution for each node given its parents:
P (Xi | Parents (Xi))- conditional probability table (CPT)

194
Example 1
• The topology of the network encodes conditional independence
assertions:
  P(W=true) = 0.4, P(Cavity=true) = 0.8

  Cavity | P(Catch=true | Cavity)
  T      | .9
  F      | .05

  Cavity | P(Toothache=true | Cavity)
  T      | .8
  F      | .4

• Weather is independent of the other variables
• Toothache and Catch are conditionally independent given
Cavity
• It is usually easy for a domain expert to decide what direct
influences exist
195
Example 2
• N independent coin flips:
  P(X1=true) = 0.5, P(X2=true) = 0.5, …, P(Xn=true) = 0.5

• No interactions between the variables: absolute independence

• Can every full joint distribution be represented by a Bayes net
with no arcs?
• No. Only distributions whose variables are absolutely
independent can be represented by a Bayes net with no arcs.

196
Calculation of Joint Probability
• How do we build the Bayes net?
• Given its parents, each node is conditionally
independent of everything except its descendants

• Thus,
  P(x1, x2, …, xn) = Π_{i=1..n} P(xi | parents(Xi))
  ⇒ the full joint distribution table
• Every BN over a domain implicitly represents some joint
distribution over that domain

197
Example 3
• I'm at work, neighbor John calls to say my alarm is
ringing, but neighbor Mary doesn't call. Sometimes it's set
off by minor earthquakes. Is there a burglar?

• Variables: Burglary, Earthquake, Alarm, JohnCalls,


MaryCalls

• Network topology reflects "causal" knowledge:


– A burglar can set the alarm off
– An earthquake can set the alarm off
– The alarm can cause Mary to call
– The alarm can cause John to call

198
Example contd.

(figure: the burglary network with its CPTs; not reproduced)
For example, what is the probability that there is a burglary and an
earthquake, the alarm goes off, John calls, and Mary doesn't?
P(b,e,a,j,¬m) = P(b)·P(e)·P(a|b,e)·P(j|a)·P(¬m|a)
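The CPT figure did not survive extraction; a minimal Java sketch of the product above, using the standard CPT numbers of the Russell & Norvig burglary network as an assumption:

public class BurglaryJoint {
    public static void main(String[] args) {
        // Standard Russell & Norvig CPT values (assumed; the slide's figure is missing):
        double pB = 0.001;              // P(burglary)
        double pE = 0.002;              // P(earthquake)
        double pA_be = 0.95;            // P(alarm | burglary, earthquake)
        double pJ_a = 0.90;             // P(JohnCalls | alarm)
        double pM_a = 0.70;             // P(MaryCalls | alarm)

        // P(b, e, a, j, ~m) = P(b) P(e) P(a|b,e) P(j|a) P(~m|a)
        double joint = pB * pE * pA_be * pJ_a * (1 - pM_a);
        System.out.println(joint);      // 5.13e-07
    }
}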

199
Answering queries

• I'm at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes it's set off by minor earthquakes. Is there a burglar?

– P(b | j, ¬m) = P(b, j, ¬m) / P(j, ¬m)   (based on P(A|B) = P(A,B)/P(B))

– P(b, j, ¬m) = P(b,e,a,j,¬m) + P(b,¬e,a,j,¬m) + P(b,e,¬a,j,¬m) + P(b,¬e,¬a,j,¬m)
  = P(b)P(e)P(a|b,e)P(j|a)P(¬m|a)
  + P(b)P(e)P(¬a|b,e)P(j|¬a)P(¬m|¬a)
  + P(b)P(¬e)P(a|b,¬e)P(j|a)P(¬m|a)
  + P(b)P(¬e)P(¬a|b,¬e)P(j|¬a)P(¬m|¬a)

– Do the same to calculate P(¬b, j, ¬m) and normalize:
  P(b | j,¬m) + P(¬b | j,¬m) = 1
  P(b, j,¬m) + P(¬b, j,¬m) = P(j, ¬m)
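A minimal Java sketch of this enumeration. The CPT values are the standard textbook numbers for the burglary example, assumed here because the slide's tables are not reproduced; everything else follows the sums above:

// Inference by enumeration for P(b | j, ¬m) in the burglary network.
public class Enumeration {
    static double pB = 0.001, pE = 0.002;    // assumed textbook priors

    static double pA(boolean b, boolean e) { // P(Alarm=true | B, E), assumed values
        if (b && e) return 0.95;
        if (b)      return 0.94;
        if (e)      return 0.29;
        return 0.001;
    }
    static double pJ(boolean a) { return a ? 0.90 : 0.05; } // P(JohnCalls=true | A)
    static double pM(boolean a) { return a ? 0.70 : 0.01; } // P(MaryCalls=true | A)

    // P(B=b, j, ¬m): sum the full-joint products over the hidden variables E and A.
    static double pBjNotM(boolean b) {
        double sum = 0;
        for (boolean e : new boolean[]{true, false})
            for (boolean a : new boolean[]{true, false})
                sum += (b ? pB : 1 - pB) * (e ? pE : 1 - pE)
                     * (a ? pA(b, e) : 1 - pA(b, e))
                     * pJ(a) * (1 - pM(a));   // J=true, M=false
        return sum;
    }

    public static void main(String[] args) {
        double withB = pBjNotM(true), withoutB = pBjNotM(false);
        // Normalize: P(b | j, ¬m) = P(b, j, ¬m) / (P(b, j, ¬m) + P(¬b, j, ¬m))
        System.out.println(withB / (withB + withoutB));  // roughly 0.005
    }
}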

200
Laziness and Ignorance
• The probabilities actually summarize a potentially infinite
set of circumstances in which the alarm might fail to go off
– high humidity
– power failure
– dead battery
– cut wires
– a dead mouse stuck inside the bell
• John or Mary might fail to call and report it
– out to lunch
– on vacation
– temporarily deaf
– passing helicopter

201
Compactness
• A CPT for Boolean Xi with k Boolean parents has 2k rows for the
combinations of parent values

• Each row requires one number p for Xi = true


(the number for Xi = false is just 1-p)

• If each variable has no more than k parents, the complete network


requires O(n · 2k) numbers
• I.e., grows linearly with n, vs. O(2n) for the full joint distribution
• For burglary net, 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 25-1 = 31)

• We utilize the property of locally structured system:


local connections between the variables, so each variable won’t have too
many parents.
The number of parents depends of how the net is built.
Worst case: n · 2k is equal to 2n
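A small sketch of this count for the burglary net (the node order in the array is an assumption; the parent counts follow the topology above):

public class ParamCount {
    public static void main(String[] args) {
        int[] numParents = {0, 0, 2, 1, 1};        // B, E, A, J, M
        int total = 0;
        for (int k : numParents) total += 1 << k;  // 2^k numbers per Boolean node
        System.out.println(total + " vs. " + ((1 << 5) - 1));  // 10 vs. 31
    }
}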
202
Causality?
• Rain (a) causes Traffic (b)
• Let’s build the joint: p(a,b)=p(a|b)*p(b)=p(b|a)*p(a)

203
Reverse Causality?
• Both nets are legal, but the previous one is preferred: rain causes traffic in general, though there is a (symmetric) correlation between traffic and rain.

204
Causality?
• What do the arrows really mean?
• Topology may happen to encode causal structure
• Topology really encodes conditional independencies

• When Bayes’ nets reflect the true causal patterns:


– Often simpler (nodes have fewer parents)
– Often easier to think about
– Often easier to elicit from experts
• BNs need not actually be causal
– Sometimes no causal net exists over the domain
– E.g. consider the variables Traffic and RoofDrips
– End up with arrows that reflect correlation, not causation

205
Example 2, Again
What if the net is built in a non-causal order? The net looks much more complicated.
Consider the following 2 orders for insertion:
• (a) MaryCalls, JohnCalls, Alarm, Burglary, Earthquake
  – since P(Burglary | Alarm, JohnCalls, MaryCalls) = P(Burglary | Alarm)
• (b) MaryCalls, JohnCalls, Earthquake, Burglary, Alarm

206
Connection Types
Name            Diagram                   X ind. Z?          X ind. Z, given Y?
Causal chain    X → Y → Z  (B → A → M)    Not necessarily    Yes
Common cause    X ← Y → Z  (J ← A → M)    No                 Yes
Common effect   X → Y ← Z  (B → A ← E)    Yes                No
207
Test Question
H: Hardworking, G: Good Grades, R: Excellent Recommendation, J: Landed a good Job
Structure: H → G; H, G → R; R → J

P(H=true) = 0.1

H | P(G=true | H)
T |  .4
F |  .8

H     G     | P(R=true | H, G)
false false |  0.2
false true  |  0.9
true  false |  0.3
true  true  |  0.8

R     | P(J=true | R)
false |  0.2
true  |  0.7
208
What can be inferred?
i:   P(H, G) ≠ P(H) · P(G)      (G depends on H)
ii:  P(J | R, H) = P(J | R)     (J is conditionally independent of H given R)
iii: P(J) ≠ P(J | H)            (H influences J through G and R)

Q: What is the value of P(H, G, ¬R, ¬J)?
A: P(H, G, ¬R, ¬J) = P(H) · P(G|H) · P(¬R|H,G) · P(¬J|H,G,¬R)
   = P(H) · P(G|H) · P(¬R|H,G) · P(¬J|¬R)
   = 0.1 · 0.4 · 0.2 · 0.8 = 0.0064
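A quick check of this arithmetic (a sketch; the values are the CPT entries from the tables above):

public class TestQuestion {
    public static void main(String[] args) {
        double pH = 0.1;           // P(H=true)
        double pGgivenH = 0.4;     // P(G=true | H=true)
        double pRgivenHG = 0.8;    // P(R=true | H=true, G=true)
        double pJgivenNotR = 0.2;  // P(J=true | R=false)
        // P(H, G, ¬R, ¬J) = P(H) P(G|H) P(¬R|H,G) P(¬J|¬R)
        System.out.println(pH * pGgivenH * (1 - pRgivenHG) * (1 - pJgivenNotR));
        // prints ~0.0064
    }
}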

Q: What if we want to add another parameter, C = Has The Right Connections?
209
Answer
C (Has the Right Connections) is added as another parent of R:

P(H=true) = 0.1     P(C=true) = ???

H | P(G=true | H)
T |  .4
F |  .8

C     H     G     | P(R=true | H, G, C)
false false false |  ??
false false true  |  ??
false true  false |  ??
false true  true  |  ??
true  false false |  ??
true  false true  |  ??
true  true  false |  ??
true  true  true  |  ??

R     | P(J=true | R)
false |  0.2
true  |  0.7

Adding C as a third parent of R doubles the CPT for R: 2^3 = 8 rows instead of 4.

210
Reachability (the Bayes Ball)
Given a Bayes net, a source node and a target node, are these two nodes independent?
• Shade the evidence nodes (things that are observed)
• Start at the source node
• Try to reach the target by search
• States: a node, along with the previous arc traversed
• Successor function:
  – Unobserved nodes:
    • To any child of X
    • To any parent of X, if the ball is coming from a child
  – Observed nodes:
    • From a parent of X to another parent of X
• If you can't reach a node, it's conditionally independent of the start node. If there is a path, they are probably dependent.
211
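A compact Java sketch of this search (illustrative; the direction-tagged states and the data layout are assumptions of this sketch, but the successor rules are exactly the bullets above):

import java.util.*;

// Bayes-ball reachability: can the "ball" travel from source to target?
// If it cannot, the two nodes are conditionally independent given the evidence.
public class BayesBall {
    static boolean reachable(Map<String, List<String>> children,
                             Map<String, List<String>> parents,
                             Set<String> observed,
                             String source, String target) {
        // A state is (node, cameFromChild?) since the rules depend on arc direction.
        Deque<String[]> frontier = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        // Treat the source as if entered from a child, so it may go both up and down.
        frontier.push(new String[]{source, "fromChild"});
        while (!frontier.isEmpty()) {
            String[] s = frontier.pop();
            String x = s[0];
            boolean fromChild = s[1].equals("fromChild");
            if (x.equals(target)) return true;
            if (!visited.add(x + "/" + s[1])) continue;
            if (!observed.contains(x)) {
                // Unobserved: to any child; to any parent if we came from a child.
                for (String c : children.getOrDefault(x, List.of()))
                    frontier.push(new String[]{c, "fromParent"});
                if (fromChild)
                    for (String p : parents.getOrDefault(x, List.of()))
                        frontier.push(new String[]{p, "fromChild"});
            } else if (!fromChild) {
                // Observed: a ball entering from a parent passes to the other parents.
                for (String p : parents.getOrDefault(x, List.of()))
                    frontier.push(new String[]{p, "fromChild"});
            }
        }
        return false;  // no path: conditionally independent given the evidence
    }
}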
Example
(The network figure over the variables L, T, T', B, R is not reproduced here.)
• L ind. T', given T?
Yes
• L ind. B?
Yes
• L ind. B, given T?
No
• L ind. B, given T’?
No
• L ind. B, given T and R?
Yes

212
Naïve Bayes

        C
     /  |  \
   X1  X2 … Xn

• Conditional Independence Assumption: features are independent of each other given the class:
  P(X1, …, Xn | C) = P(X1 | C) · P(X2 | C) ⋯ P(Xn | C)
• What can we model with naïve Bayes? Any process where:
  – each cause has lots of "independent" effects
  – it is easy to estimate the CPT for each effect
  – we want to reason about the probability of different causes given observed effects
213
Naive Bayes Classifiers
Task: Classify a new instance D based on a tuple of attribute values into
one of the classes cj  C

D  x1 , x2 ,, xn

cMAP  argmax P(c | x1 , x2 ,, xn )


cC

According to Rule Bayes P( x1 , x2 ,, xn | c) P(c)


 argmax
c C P( x1 , x2 ,, xn )

Since the denominator is fix  argmax P( x1 , x2 ,, xn | c) P(c)


c C

214
Summary
• Bayesian networks provide a natural
representation for (causally induced)
conditional independence
• Topology + CPTs = compact representation
of joint distribution
• Generally easy for domain experts to
construct

215
Artificial Intelligence

Lesson 12
216
Robotics, a Case Study - Coverage
• Many applications:
– Floor cleaning, mowing, de-mining, ….

• Many approaches:
– Off-line (getting a map in advance) or On-line
– Heuristic or Complete (guaranteeing complete coverage)
• Multi-robot, motivated by robustness and efficiency

217
A Robot….
• Senses the environment using sensors.
• Has computational abilities.
• Can execute specific operations.
• Combines all of the above: it senses the environment, processes the input, decides which operation is needed, and then performs it.
Robot Environment Parameters
• Influence:
  – Dynamic: the environment changes even if the robot takes no action.
  – Static: if the robot takes no action, the environment does not change.
• Sensing:
  – Accessible: the robot can sense everything in the environment.
  – Inaccessible: the robot can sense only some factors in the environment; the other factors remain hidden.
• Expected result:
  – Non-deterministic: a robot action's result is only one of several possible outcomes.
  – Deterministic: the result of a robot's action is exactly the expected change.
• Possible values of actions and percepts:
  – Discrete: the robot's actions and percepts are clearly separated from one another, and their number is limited.
  – Continuous: the actions and percepts are continuous, i.e., there are unlimited possible values.
Environment Assumptions
• Static: if the robot takes no action, the environment does not change.
  – needed to guarantee completeness
• Inaccessible: the robot can sense only some factors in the environment.
  – greater impact on the on-line version
• Non-deterministic (e.g., commanded to move 5m, the robot may actually move 5.1m)
• Continuous: actions and percepts have continuous values
  – Exact cellular decomposition: exact shapes, not necessarily of the same size
  – Approximate cellular decomposition: squares of the same size
220
MSTC - Multi-Robot Spanning Tree Coverage
• The purpose: cover a given area using several robots that build a spanning tree of the area.
• Complete, with approximate cellular decomposition
• Robust
  – Coverage is completed as long as at least one robot is alive
  – The robustness mechanism is simple
• Off-line and on-line algorithms
  – Off-line: the map is known in advance; it is possible to plan and improve.
    o Analysis according to initial positions
    o Efficiency improvements
  – On-line: the map is not known; the robots discover it while running.
    o Implemented in a simulation of real robots
221
Off-line Coverage, Basic Assumptions
• Area division into n cells
• k homogeneous robots
• Robots' movements

222
STC: Spanning Tree Coverage
(Gabrieli and Rimon 2001)
• In graph theory, a spanning tree of a connected graph G is a connected subgraph of G that contains all the vertices of G and has no cycles. Such a subgraph is a tree.

• Area division
• Graph definition
• Building the spanning tree
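A sketch of the tree-building step as a DFS over the coarse grid (the grid representation, neighbor order, and edge encoding are assumptions of this sketch, not part of the original algorithm's specification):

import java.util.*;

// Build a spanning tree over the free cells of a coarse grid by DFS.
public class STC {
    // free[r][c] == true if coarse cell (r,c) is obstacle-free.
    // Returns tree edges as "r1,c1->r2,c2" strings (sketch).
    static List<String> spanningTree(boolean[][] free, int r0, int c0) {
        int rows = free.length, cols = free[0].length;
        boolean[][] visited = new boolean[rows][cols];
        List<String> edges = new ArrayList<>();
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{r0, c0});
        visited[r0][c0] = true;
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};  // neighbor order: arbitrary
        while (!stack.isEmpty()) {
            int[] cell = stack.peek();
            boolean advanced = false;
            for (int[] d : dirs) {
                int r = cell[0] + d[0], c = cell[1] + d[1];
                if (r >= 0 && r < rows && c >= 0 && c < cols
                        && free[r][c] && !visited[r][c]) {
                    visited[r][c] = true;
                    edges.add(cell[0] + "," + cell[1] + "->" + r + "," + c);
                    stack.push(new int[]{r, c});
                    advanced = true;
                    break;                 // go deeper (DFS)
                }
            }
            if (!advanced) stack.pop();    // dead end: backtrack
        }
        return edges;  // the robots then cover the area by circumnavigating this tree
    }
}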

223
Non-backtracking MSTC
• Initialization phase: Build STC, distribute to robots
• Distributed execution: Each robot follows its section
– Low risk of collisions
(figure: robots A, B, C each cover their own section of the tree; callouts: "Robot A is done!", "Robot B is done!", "Robot C is done!")

224
Guaranteed Robustness
• Coverage completed as long as one robot is alive
• Low communication is needed; no re-allocation of robots is required

225
Analysis: Non-backtracking MSTC
• Running time = max_{i ≤ k} step(i)
• Best case: ⌈n/k⌉ - 1
• Worst case: n - k
  – Unfortunately, the common case
(figures: best-case and worst-case initial placements of robots A, B, C, D along the spanning tree)
226
Backtracking MSTC
• Similar initialization phase
• But here:
– robots backtrack to assist others
– No point is covered more than twice
D
C

B
A

227
Backtracking MSTC (cont.)
• Same robustness mechanism: coverage is guaranteed as long as one robot is alive.
• Same low communication requirements, no robot re-allocations.
(figure: callouts "Robot A is done!", "Robot B is done!", "Robot C is done!")

228
Backtracking MSTC Analysis

• Best case: the same, ⌈n/k⌉ - 1
• Worst case:
  – k = 2: roughly 2n/3
  – k > 2: roughly n/2
(figures: worst-case placements of robots A, B, C, D along the spanning tree)
229
Efficiency in Off-line Coverage

• Off-line: getting a map in advance

• Optimal MSTC- improves the average case

• Heterogeneous robots- flexibility

• Optimal spanning tree- improves the worst case

230
Optimal MSTC
• Similar initialization phase
• Robots backtrack to assist others:
  – all of the robots can backtrack
  – backtracking over any number of steps
• No point is covered more than twice
• Same robustness mechanism
• Same communication requirements
(figure: robots A–E on the spanning tree)
231
Optimal MSTC (cont.)
• Choose a robot
• Search for the minimal valid solution
  – left search
  – right search
• Complexity:
  – checking all the robots: k
  – each search: O(n log n)
  – validity check: O(k)
  – total: O(k^2 · n log n)
(figure: robots A–E on the spanning tree)
232
Heterogeneous Robots

• Different speeds
  – Non-backtracking MSTC: not handled
  – Backtracking MSTC: not handled
  – Optimal MSTC: handled
• Different fuel/battery time
  – Non-backtracking MSTC: not handled
  – Backtracking MSTC: not handled
  – Optimal MSTC: handled

233
Optimal Spanning tree

• Improves the worst case for all 3 algorithms
• The construction is believed to be NP-hard
(figure: two spanning trees (a) and (b) over the same area with robots R1, R2, R3)

234
Generating a Good Spanning Tree
(Believed to be NP-Hard)
(figure: two alternative spanning trees over the same grid, with robots A, B, C)
Tree (a): A→B = 28 cells, B→C = 4 cells, C→A = 4 cells
Tree (b): A→B = 12 cells, B→C = 12 cells, C→A = 12 cells
235
A Heuristic Solution
• Build k subtrees on a coarse grid
  – Start building the subtrees from the robots' initial locations
  – Add cells to each subtree gradually
  – Spread away from the other robots (based on Manhattan distance; a sketch of this scoring appears below)
• Connect the subtrees
  – Randomly pick connections between the subtrees
  – Calculate x (the coverage-time measure) for the resulting tree
  – Repeat k^a times (a is a parameter)
  – Report the tree yielding the minimal x
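A runnable sketch of the distance scoring mentioned above (the coordinates and data layout are illustrative assumptions):

// A candidate cell is ranked by its minimum Manhattan distance to the other
// robots' start cells; growth prefers cells with a LARGE score (spread away).
public class SpreadScore {
    static int score(int[] candidate, int[][] otherRobots) {
        int min = Integer.MAX_VALUE;
        for (int[] r : otherRobots) {
            int d = Math.abs(candidate[0] - r[0]) + Math.abs(candidate[1] - r[1]);
            min = Math.min(min, d);  // min over the other robots
        }
        return min;
    }

    public static void main(String[] args) {
        // e.g. min{3, 4} = 3, as in the Stage-1 illustration that follows
        System.out.println(score(new int[]{0, 0}, new int[][]{{1, 2}, {4, 0}}));
    }
}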

236
Illustration – Stage 1

(figure: while growing the subtrees, each candidate cell is scored by its minimum Manhattan distance to the other robots, e.g. min{3,4} = 3, min{1,2} = 1, min{2,3} = 2)
237
Example

(figure: candidate trees with x values 13, 16, 17, 16; the tree with the minimal x = 13 is reported)
238
On-line MSTC

• Same basic assumptions:
  – area decomposition into n cells
  – k homogeneous robots
  – equal tool size and robot movements
• All the robots know their absolute initial position
• Initialization phase:
  1. Agreed-upon grid construction
  2. Self-localization
  3. Locations update

239
On-line MSTC (Cont.)

240
Guaranteed Robustness

• Coverage completed as long as one robot is alive
• No need for re-allocation

241
From Theory to Practice
• Player/Stage with modeled RV-400 robots
• Localization solutions
  – GPS
  – Odometry with bounded errors
• Agreed-upon grid options
  – Big enough work area
  – Dynamic work area
• Collision avoidance upon bumping
  – Random wait
  – Communication based
• Limited-sensors solution
242
Off-line Algorithms Experiments (1)
• Work area: 30x20 cells, 2400 sub-cells
• Each point represents 100 trials
(chart: coverage time vs. number of robots, 1–31; series: non-backtracking-random, backtracking-random, optimal-random, best case)

243
Off-line Algorithms Experiments (2)
• Work area: 30x20 cells with 80 holes, 2080 sub-cells
• Each point represents 100 trials
(chart: coverage time vs. number of robots, 1–31; series: non-backtracking-random, backtracking-random, optimal-random, best case)

244
Experimental Results
(chart: coverage time vs. number of robots, 3–23; series: non-backtracking-random, backtracking-random, optimal-random, non-backtracking-Best STC, optimal-Best STC, best case)
245
Experimental Results - 27% Obstacles

246
On-line Algorithm Run-time Example

247
On-line Algorithm Experiments
• Random placements
• Each point represents 10 trials
(chart: coverage time, hh:mm:ss, vs. number of robots, 2–10; series: outdoor environment, indoor environment)

248
Conclusion

• Complete and robust multi-robot coverage algorithms
• Redundancy vs. efficiency with the off-line algorithms
• Optimal MSTC, which handles heterogeneous robots
• Implemented on-line MSTC with approximation techniques

249
