
Artificial Intelligence

Unit – I
1. Problem Formulation
2. State Space Formulation
3. Uninformed Search Strategies
4. Heuristics
5. Informed Search Strategies
6. Constraint Satisfaction Problems

State Space Search :

• Formulate a problem as a state space search by specifying:


(i) The legal problem states
(ii) The legal operators
(iii) The initial and goal states

• A State is defined by the specification of the values of all attributes of interest in the
world.

• An Operator changes one state into another. It has a precondition, which is the
value of certain attributes prior to the application of the operator, and a set of effects,
which are the attributes altered by the operator.

• The Initial state is where you start

• The Goal state is the partial description of the solution

Goal Directed Agent :


• A Goal Directed Agent needs to achieve certain goals.

• Such an agent selects its actions based on the goal it has.

• Many problems can be represented as a set of states and a set of rules of how one
state is transformed to another.

• Each State is an abstract representation of the agent's environment that denotes a
configuration of the agent.

• Initial state : The description of the starting configuration of the agent

• An Action/ Operator takes the agent from one state to another state.

• A Plan is a sequence of actions.

• A Goal is a description of a set of desirable states of the world.

• Goal states are often specified by a goal test which any goal state must satisfy.

State Space Search Notations :

• An Initial State is the description of the starting configuration of the agent.

• An Action or an Operator takes the agent from one state to another state which is
called a Successor State. (A state can have a number of successor states.)

• A Plan is a sequence of actions.

• The cost of a plan is referred to as the Path Cost. The path cost is a positive number,
and a common path cost may be the sum of the costs of the steps in the path.

• Problem Formulation means choosing a relevant set of states to consider, and a
feasible set of operators for moving from one state to another.

• Search is the process of considering various possible sequences of operators applied
to the initial state, and finding a sequence which culminates in a goal state.

Search Problem :

A search problem consists of the following:


• S: the full set of states
• s0 : the initial state
• A : S → S is a set of operators
• G is the set of final states. Note that G ⊆ S.
• The Search Problem is to find a sequence of actions which transforms the agent
from the initial state to a goal state g ∈ G.

• A search problem is represented by a 4-tuple {S, s0, A, G}.


S : set of states
s0 ∈ S : initial state
A : S → S, operators/actions that transform one state to another state
G : goal, a set of states. G ⊆ S

• This sequence of actions is called a Solution Plan. It is a path from the initial state to
a goal state.

• A Plan P is a sequence of actions, P = {a0, a1, ... , aN}, which leads to traversing
a number of states {s0, s1, ... , sN+1}, where sN+1 ∈ G.

• A sequence of states is called a Path.

• The cost of a path is a positive number. In many cases the path cost is computed by
taking the sum of the costs of each action.

Representation of search problems :

A search problem is represented using a directed graph.

• The states are represented as nodes.

• The allowed actions are represented as arcs.


Searching Process :

The generic searching process can be very simply described in terms of the following
steps:

Do until a solution is found or the state space is exhausted.


1. Check the current state
2. Execute allowable actions to find the successor states.
3. Pick one of the new states.
4. Check if the new state is a solution state.
If it is not, the new state becomes the current state and the process is repeated

Example Problems :

1. Pegs and Disks Problem (Tower of Hanoi):

Consider the following problem.


We have 3 pegs and 3 disks. The disks start stacked on one peg, and the goal is to
move the whole stack to another peg; a disk may never be placed on top of a smaller disk.
Operators: one may move the topmost disk on any peg to the topmost position on any
other peg.
2. Water Jug Problem :

Problem : You have two jugs of capacity Ci = 4L and Cj = 3L.


: You have to measure 1L of water.
: Faucet is provided to fill the jug with water.

Initial State : Both jugs are empty

Goal State : Any of them contains exactly 1L.

Actions/Operators : a1: (x , y) -> (4 , y)          fill the 4L jug


a2: (x , y) -> (x , 3)              fill the 3L jug
a3: (x , y) -> (x+y , 0)            pour the 3L jug into the 4L jug, if x+y <= (Ci = 4L) and y > 0
a4: (x , y) -> (0 , x+y)            pour the 4L jug into the 3L jug, if x+y <= (Cj = 3L) and x > 0
a5: (x , y) -> (Ci , y-(Ci-x))      pour from the 3L jug until the 4L jug is full, if x+y >= Ci and y > 0
a6: (x , y) -> (x-(Cj-y) , Cj)      pour from the 4L jug until the 3L jug is full, if x+y >= Cj and x > 0
a7: (x , y) -> (0 , y)              empty the 4L jug, if x > 0
a8: (x , y) -> (x , 0)              empty the 3L jug, if y > 0

Cost : Charge one unit for transfer of 1L of water.


: Charge one unit to fill 1L of water.

X Y Action
Start State 0 0 -
4 0 a1
1 3 a6
Goal State 1 0 a8

Goal state has been reached as 4L jug contains exactly 1L of water.

Plan P : {a1, a6, a8}

Total Cost C : (1*4) + (1*3) = 4+3 = 7
(a1 fills 4L and a6 transfers 3L; emptying the jug in a8 costs nothing under this cost model)

: C = 7 units
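As a hedged illustration, the operators above can be encoded as a Python successor function over states (x, y); this is one possible encoding of the formulation, not the only one:

# Water jug state: (x, y) = litres in the 4L and 3L jugs.
CI, CJ = 4, 3  # jug capacities from the problem statement

def successors(state):
    """Yield (action_name, next_state) pairs for every applicable operator."""
    x, y = state
    if x < CI:                 yield ("a1", (CI, y))               # fill 4L jug
    if y < CJ:                 yield ("a2", (x, CJ))               # fill 3L jug
    if 0 < y and x + y <= CI:  yield ("a3", (x + y, 0))            # pour 3L into 4L completely
    if 0 < x and x + y <= CJ:  yield ("a4", (0, x + y))            # pour 4L into 3L completely
    if 0 < y and x + y >= CI:  yield ("a5", (CI, y - (CI - x)))    # pour 3L into 4L until full
    if 0 < x and x + y >= CJ:  yield ("a6", (x - (CJ - y), CJ))    # pour 4L into 3L until full
    if x > 0:                  yield ("a7", (0, y))                # empty 4L jug
    if y > 0:                  yield ("a8", (x, 0))                # empty 3L jug

# Applying a1, then a6, then a8 from (0, 0) reproduces the plan in the table above.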

3. 8-Puzzle Problem :

• In the 8-puzzle problem we have a 3×3 square board and 8 numbered tiles.

• The board has one blank position.

• Blocks can be slid to adjacent blank positions.

• We can alternatively and equivalently look upon this as the movement of the blank
position up, down, left or right.
• The objective of this puzzle is to move the tiles starting from an initial position and
arrive at a given goal configuration.

• The 15-puzzle problem is similar to the 8-puzzle. It has a 4×4 square board and 15
numbered tiles.

Search :

Searching through a state space involves the following:


• A set of states
• Operators and their costs
• Start state
• A test to check for goal state

Basic Search Algorithm :
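The accompanying algorithm figure is not reproduced in these notes. As a stand-in, here is a minimal, hedged Python sketch of the generic scheme using an OPEN list (fringe) and a CLOSED set, as described under "Node data structure" below; the `is_goal` and `successors` callbacks are illustrative assumptions, not a fixed API:

def basic_search(start, is_goal, successors):
    """Generic search: how nodes are taken from OPEN decides the strategy
    (FIFO -> breadth-first, LIFO -> depth-first, by cost -> uniform cost)."""
    open_list = [(start, [start])]       # fringe of (state, path-so-far)
    closed = set()                       # states already expanded
    while open_list:
        state, path = open_list.pop(0)   # FIFO here; swap for other strategies
        if is_goal(state):
            return path                  # complete path from start to goal
        if state in closed:
            continue
        closed.add(state)
        for _action, nxt in successors(state):
            if nxt not in closed:
                open_list.append((nxt, path + [nxt]))
    return None                          # state space exhausted, no solution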


Evaluating Search Strategies :

1. Completeness: Is the strategy guaranteed to find a solution if one exists?

2. Optimality: Does the solution have low cost or the minimal cost?

3. What is the search cost associated with the time and memory required to find
a solution?

a. Time complexity: Time taken (number of nodes expanded), in the worst or
average case, to find a solution.

b. Space complexity: Space used by the algorithm, measured in terms of the
maximum size of the fringe.

Search Tree – Terminology :

• Root Node: The node from which the search starts.

• Leaf Node: A node in the search tree having no children.

• Ancestor/Descendant: X is an ancestor of Y if either X is Y’s parent or X is an
ancestor of the parent of Y. If X is an ancestor of Y, Y is said to be a descendant
of X.

• Branching factor: the maximum number of children of a non-leaf node in the
search tree.

• Path: A path in the search tree is a complete path if it begins with the start node
and ends with a goal node. Otherwise it is a partial path.

Node data structure :

• A node used in the search algorithm is a data structure which contains the following:
1. A state description
2. A pointer to the parent of the node
3. Depth of the node
4. The operator that generated this node
5. Cost of this path (sum of operator costs) from the start state

• The nodes that the algorithm has generated are kept in a data structure called OPEN
or fringe.

• Initially only the start node is in OPEN.

• The search starts with the root node.


• The algorithm picks a node from OPEN for expanding and generates all the children
of the node.

• Expanding a node from OPEN results in a closed node.

• Some search algorithms keep track of the closed nodes in a data structure called
CLOSED.

• A solution to the search problem is a sequence of operators that is associated with a
path from a start node to a goal node.

• The cost of a solution is the sum of the arc costs on the solution path.

• The search process constructs a search tree, where


• root is the initial state and
• leaf nodes are nodes
• not yet expanded (i.e., in fringe) or
• having no successors (i.e., “dead-ends”)

• Search tree may be infinite because of loops even if state space is small.

Uninformed Search Strategies or Blind Search Strategies :

Blind search or uninformed search does not use any extra information about the
problem domain. The two common methods of blind search are:
• BFS or Breadth First Search
• DFS or Depth First Search

1. Breadth First Search (BFS) :

Algorithm
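The algorithm figure from the original notes is not reproduced; as a hedged sketch, BFS is the generic search with OPEN as a FIFO queue (the `successors` convention follows the earlier illustrative sketches):

from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Expand nodes level by level: OPEN is a FIFO queue."""
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()          # shallowest node first
        if is_goal(state):
            return path
        for _action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None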
Properties

• Complete.

• The algorithm is optimal (i.e., admissible) if all operators have the same cost.
Otherwise, breadth first search finds a solution with the shortest path length.

• A complete search tree of depth d where each non-leaf node has b children, has a
total of 1 + b + b^2 + ... + b^d = (b^(d+1) - 1)/(b - 1) nodes

• Time Complexity = O(b^d)

• Space Complexity = O(b^d)

• b = branching factor (i.e. the number of children) at each node.


d = depth

Advantage

Finds the path of minimal length to the goal.

Disadvantage

Requires the generation and storage of a tree whose size is exponential in the depth of
the shallowest goal node.
2. Uniform-Cost Search :

• The algorithm expands nodes in the order of their cost from the source.

• The path cost is usually taken to be the sum of the step costs.

• In uniform cost search the newly generated nodes are put in OPEN according to their
path costs.

• This ensures that when a node is selected for expansion it is a node with the
cheapest cost among the nodes in OPEN.

• Let g(n) = cost of the path from the start node to the current node n. Sort nodes by
increasing value of g.

• Some properties of this search algorithm are:


• Complete
• Optimal/Admissible
• Exponential time and space complexity, O(b^d)
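A hedged sketch using a priority queue keyed by g(n); here `successors(state)` is assumed to yield (step_cost, next_state) pairs, and a counter breaks ties so states are never compared directly:

import heapq, itertools

def uniform_cost_search(start, is_goal, successors):
    """OPEN is a priority queue ordered by the path cost g(n)."""
    tie = itertools.count()                       # tie-breaker for equal costs
    frontier = [(0, next(tie), start, [start])]   # (g, tie, state, path)
    best_g = {start: 0}
    while frontier:
        g, _, state, path = heapq.heappop(frontier)   # cheapest node first
        if is_goal(state):
            return g, path                            # cheapest-cost solution
        for step_cost, nxt in successors(state):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, next(tie), nxt, path + [nxt]))
    return None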
3. Depth First Search (DFS) :

Algorithm
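Again the algorithm figure is not reproduced; a hedged sketch with OPEN as a LIFO stack, including a cycle check along the current path (which addresses the cycle issue discussed below):

def depth_first_search(start, is_goal, successors):
    """OPEN is a LIFO stack, so the most recently generated node is expanded first."""
    stack = [(start, [start])]
    while stack:
        state, path = stack.pop()                # LIFO: deepest node first
        if is_goal(state):
            return path
        for _action, nxt in successors(state):
            if nxt not in path:                  # avoid cycles on the current path
                stack.append((nxt, path + [nxt]))
    return None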

Properties

• The nodes in OPEN follow a LIFO order (Last In First Out). OPEN is thus
implemented using a stack data structure.

• The algorithm takes exponential time.

• If N is the maximum depth of a node in the search space, in the worst case the
algorithm will take time O(b^N).
• However the space taken is linear in the depth of the search tree, O(bN).
• Note that the time taken by the algorithm is related to the maximum depth of the
search tree.

• If the search tree has infinite depth, the algorithm may not terminate.

• This can happen if the search space is infinite. It can also happen if the search space
contains cycles.

• The latter case can be handled by checking for cycles in the algorithm. In general,
however, Depth First Search is not complete.

4. Depth Limited Search :

Algorithm

Properties

• This variation of DFS solves the problem of search tree being infinite by keeping the
depth bound.

• Nodes are only expanded if they have depth less than the bound.

• Time complexity = O(b^l)

• Space complexity = O(bl), i.e., linear in the depth limit

• l is the depth limit.


5. Depth First Iterative Deepening (DFID) or Iterative Deepening DFS (IDDFS) :

Algorithm

First do DFS to depth 0 (i.e., treat start node as having no successors), then, if no solution
found, do DFS to depth 1, etc.

Procedure

Successive depth-first searches are conducted – each with depth bounds increasing by 1

Advantage

• Linear memory requirements of depth-first search

• Guarantee for goal node of minimal depth

Properties

• For large d the ratio of the number of nodes expanded by DFID compared to that of
DFS is given by b/(b-1).

• The algorithm is Complete .

• Optimal/Admissible if all operators have the same cost. Otherwise, not optimal
but guarantees finding solution of shortest length (like BFS).
• Time complexity is a little worse than BFS or DFS because nodes near the top of
the search tree are generated multiple times, but because almost all of the nodes
are near the bottom of a tree, the worst case time complexity is still exponential,
O(b^d)

• If branching factor is b and solution is at depth d, then nodes at depth d are
generated once, nodes at depth d-1 are generated twice, etc.
Hence b^d + 2b^(d-1) + ... + db <= b^d / (1 - 1/b)^2 = O(b^d)

• Linear space complexity, O(bd), like DFS

• Depth First Iterative Deepening combines the advantage of BFS (i.e., completeness)
with the advantages of DFS (i.e., limited space and finds longer paths more quickly).

• This algorithm is generally preferred for large state spaces where the solution depth
is unknown.
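A hedged sketch combining depth limited search (Section 4) with the iterative deepening loop described above; `successors` follows the illustrative convention of the earlier sketches, and `max_depth` is an arbitrary safety cap:

def depth_limited_search(state, is_goal, successors, limit, path=None):
    """Recursive DFS that refuses to expand nodes deeper than `limit`."""
    path = path or [state]
    if is_goal(state):
        return path
    if limit == 0:
        return None                              # depth bound reached
    for _action, nxt in successors(state):
        if nxt not in path:                      # avoid cycles on the current path
            result = depth_limited_search(nxt, is_goal, successors,
                                          limit - 1, path + [nxt])
            if result is not None:
                return result
    return None

def iterative_deepening_search(start, is_goal, successors, max_depth=50):
    """Run depth-limited searches with bounds 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, is_goal, successors, limit)
        if result is not None:
            return result
    return None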

6. Bi-Directional Search :

• Suppose that the search problem is such that the arcs are bidirectional.

• That is, if there is an operator that maps from state A to state B, there is another
operator that maps from state B to state A.

• Many search problems have reversible arcs: the 8-puzzle, 15-puzzle, path planning, etc.
are examples.

• However there are other state space search formulations which do not have this
property. The water jug problem is a problem that does not have this property.

• But if the arcs are reversible, you can see that instead of starting from the start state
and searching for the goal, one may start from a goal state and try reaching the start
state.

• If there is a single state that satisfies the goal property, the search problems are
identical.

• How do we search backwards from goal? One should be able to generate predecessor
states. Predecessors of node n are all the nodes that have n as successor.

Algorithm

• Bidirectional search involves alternate searching from the start state toward
the goal and from the goal state toward the start.

• The algorithm stops when the frontiers intersect.


Properties

• Works on search problems which are bi-directional

• Time complexity = O(b^(d/2))

• Space complexity = O(b^(d/2))
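A minimal sketch of the alternating breadth-first frontiers; for brevity it returns only the meeting state rather than splicing the two parent chains into a full path, and here `successors`/`predecessors` are assumed to yield plain states:

from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    """Alternate BFS from start and from goal; stop when the frontiers meet.
    `predecessors(state)` must generate states that have `state` as a successor."""
    if start == goal:
        return start
    fwd_parents, bwd_parents = {start: None}, {goal: None}
    fwd, bwd = deque([start]), deque([goal])
    while fwd and bwd:
        for frontier, parents, others, gen in (
                (fwd, fwd_parents, bwd_parents, successors),
                (bwd, bwd_parents, fwd_parents, predecessors)):
            state = frontier.popleft()
            for nxt in gen(state):
                if nxt not in parents:
                    parents[nxt] = state
                    if nxt in others:
                        return nxt           # frontiers intersect at this state
                    frontier.append(nxt)
    return None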

Comparing Search Strategies :

Criteria             BFS        DFS       UCS        DLS       IDDFS      BDS
Time complexity      O(b^d)     O(b^N)    O(b^d)     O(b^l)    O(b^d)     O(b^(d/2))
Space complexity     O(b^d)     O(bN)     O(b^d)     O(bl)     O(bd)      O(b^(d/2))
Optimal ?            YES        NO        YES        NO        YES        YES
Complete ?           YES        NO        YES        NO        YES        YES

(BFS = Breadth First Search, DFS = Depth First Search, UCS = Uniform Cost Search,
DLS = Depth Limited Search, IDDFS = Iterative Deepening Depth First Search,
BDS = Bi-directional Search)


Heuristics :

• Heuristic means “rule of thumb”.

• Heuristics are criteria, methods or principles for deciding which among several
alternative courses of action promises to be the most effective in order to achieve
some goal.

• In heuristic search or informed search, heuristics are used to identify the most
promising search path.

Heuristic Function :

• A heuristic function at a node n is an estimate of the optimum cost from the current
node to a goal. Heuristic function estimates how close a state is to the goal.

• It is denoted by h(n). h(n) = estimated cost of the cheapest path from node n to a goal
node

• It takes the current state of the agent as its input and produces an estimate of how
close the agent is to the goal.

• The heuristic method might not always give the best solution, but it usually finds a
good solution in reasonable time.

• The value of the heuristic function is always positive.

Admissibility of Heuristic Function is given as:

h(n) <= h*(n)


Here h(n) is the heuristic (estimated) cost and h*(n) is the true optimal cost from n to the goal.
Hence the heuristic cost should be less than or equal to the true cost, i.e., h should never
overestimate.
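As a concrete illustration using the 8-puzzle described earlier, two standard admissible heuristics are sketched below; the 9-tuple state encoding with 0 for the blank is an assumption of this sketch:

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, 0 = blank

def h_misplaced(state, goal=GOAL):
    """Admissible heuristic: number of tiles out of place (blank excluded).
    Each misplaced tile needs at least one move, so h(n) <= h*(n)."""
    return sum(1 for tile, target in zip(state, goal)
               if tile != 0 and tile != target)

def h_manhattan(state, goal=GOAL):
    """Stronger admissible heuristic: total city-block distance of each tile
    from its goal square on the 3x3 board."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        g = goal.index(tile)
        dist += abs(idx // 3 - g // 3) + abs(idx % 3 - g % 3)
    return dist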
Informed Search Strategies :
• An informed search algorithm uses problem-specific knowledge such as how far we are
from the goal, path cost, how to reach the goal node, etc.
• This knowledge helps agents explore less of the search space and find the goal node
more efficiently.
• Informed search is most useful for large search spaces. Since it uses the idea of a
heuristic, it is also called heuristic search.

1. Best First Search or Greedy Search :

Algorithm

Greedy Search

• In greedy search, the idea is to expand the node with the smallest estimated cost to
reach the goal.

• We use a heuristic function


f(n) = h(n)
h(n) estimates the distance remaining to a goal.

• Greedy algorithms often perform very well. They tend to find good solutions quickly,
although not always optimal ones.

• The resulting algorithm is not optimal.

• The algorithm is also incomplete, and it may fail to find a solution even if one exists.

• A good heuristic for the route-finding problem would be straight-line distance to the
goal.
Example:
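The worked example from the original notes (typically a route-finding map) is not reproduced here. As a stand-in, a minimal, hedged Python sketch of greedy best-first search; the `successors` and `h` callbacks are illustrative assumptions:

import heapq, itertools

def greedy_best_first_search(start, is_goal, successors, h):
    """Expand the OPEN node with the smallest heuristic value, f(n) = h(n)."""
    tie = itertools.count()                       # tie-breaker for equal h values
    frontier = [(h(start), next(tie), start, [start])]
    visited = {start}
    while frontier:
        _hval, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for _action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), next(tie), nxt, path + [nxt]))
    return None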
Advantages
• Best first search can switch between BFS and DFS, thereby gaining the advantages of
both algorithms.

• This algorithm can be more efficient than the BFS and DFS algorithms.

Disadvantages
• It can behave as an unguided depth-first search in the worst case scenario.

• It can get stuck in a loop as DFS.

• This algorithm is not optimal.

Properties

• Time Complexity: The worst case time complexity of greedy best-first search is
O(b^m).
• Space Complexity: The worst case space complexity of greedy best-first search is
O(b^m), where m is the maximum depth of the search space.
• Complete: Greedy best-first search is incomplete, even if the given state space is
finite.
• Optimal: Greedy best-first search is not optimal.

2. A* Search or Optimum Best First Search :

• We will next consider the famous A* algorithm, given by Hart, Nilsson & Raphael
in 1968.

• A* is a best first search algorithm with


f(n) = g(n) + h(n)
where
g(n) = sum of edge costs from start to n
h(n) = estimate of lowest cost path from n to goal
f(n) = actual distance so far + estimated distance remaining

• h(n) is said to be admissible if it underestimates the cost of any solution that can be
reached from n. If C*(n) is the cost of the cheapest solution path from n to a goal
node, and if h is admissible,
h(n) <= C*(n).
Algorithm
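The original algorithm figure is not reproduced; a minimal, hedged sketch of A* with a priority queue keyed by f(n) = g(n) + h(n), under the same illustrative `successors` convention as the uniform-cost sketch:

import heapq, itertools

def a_star_search(start, is_goal, successors, h):
    """Best-first search ordered by f(n) = g(n) + h(n).
    `successors(state)` is assumed to yield (step_cost, next_state) pairs."""
    tie = itertools.count()
    frontier = [(h(start), next(tie), 0, start, [start])]   # (f, tie, g, state, path)
    best_g = {start: 0}
    while frontier:
        _f, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path        # optimal when h is admissible and consistent
        for step_cost, nxt in successors(state):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + h(nxt), next(tie), new_g, nxt, path + [nxt]))
    return None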

Example
Advantages

• It is the best search algorithm

• It is optimal and complete

• It can solve very complex problems

Disadvantages

• It does not always produce the shortest path, as it is mostly based on heuristics and
approximation.

• A* search algorithm has some complexity issues.

• The main drawback of A* is its memory requirement: it keeps all generated nodes in
memory, so it is not practical for many large-scale problems.

Properties

• The algorithm A* is admissible. This means that provided a solution exists, the first
solution found by A* is an optimal solution. A* is admissible under the following
conditions:
• In the state space graph
o Every node has a finite number of successors
o Every arc in the graph has a cost greater than some ε > 0
• Heuristic function: for every node n, h(n) ≤ h*(n)

• A* is optimally efficient for a given heuristic: it can be shown that no other optimal
algorithm will expand fewer nodes and find a solution.

• However, the number of nodes searched is still exponential in the worst case.

• A heuristic is consistent if:


h(n) <= cost(n, n') + h(n')

• If a heuristic h is consistent, the f values along any path will be nondecreasing:


f(n') = estimated distance from start to goal through n'
= actual distance from start to n + step cost from n to n' + estimated distance
from n' to goal
= g(n) + cost(n, n') + h(n')
>= g(n) + h(n), because cost(n, n') + h(n') >= h(n) by consistency
= f(n)
Therefore f(n') >= f(n), so f never decreases along a path.

• If a heuristic h is inconsistent, we can tweak the f values so that they behave as if h
were consistent, using the pathmax equation:
f(n') = max(f(n), g(n') + h(n'))
• Complete: A* algorithm is complete as long as:
▪ the branching factor is finite, and
▪ every action has a fixed, positive cost.
• Optimal: A* search algorithm is optimal if it satisfies the following two conditions:
▪ Admissibility: the first condition required for optimality is that h(n) should be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic in
nature.
▪ Consistency: the second required condition; consistency is needed for A* graph
search only.
• If the heuristic function is admissible, then A* tree search will always find the least cost
path.
• Time Complexity: The time complexity of A* search depends on the heuristic function;
the number of nodes expanded is exponential in the depth of the solution d, so the
time complexity is O(b^d), where b is the branching factor.
• Space Complexity: The space complexity of A* search algorithm is O(b^d).

Proof of Admissibility of A*

• We will show that A* is admissible if it uses a monotone heuristic.

• A monotone heuristic is such that along any path the f-cost never decreases.

• But if this property does not hold for a given heuristic function, we can make the f
value monotone by making use of the following trick (m is a child of n)
f(m) = max (f(n), g(m) + h(m))
o Let G be an optimal goal state
o C* is the optimal path cost.
o G2 is a suboptimal goal state: g(G2) > C*

• Suppose A* has selected G2 from OPEN for expansion

• Consider a node n on OPEN on an optimal path to G.

• Thus C* ≥ f(n)

• Since n is not chosen for expansion over G2, f(n) ≥ f(G2)

• G2 is a goal state. f(G2) = g(G2)

• Hence C* ≥ g(G2). This contradicts the assumption that G2 is suboptimal,
g(G2) > C*. Thus A* could not have selected G2 for expansion before
reaching the goal by an optimal path.
Hill Climbing Algorithm :

• Hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or best
solution to the problem.
• It terminates when it reaches a peak value where no neighbor has a higher value.
• Hill climbing algorithm is a technique which is used for optimizing the mathematical
problems.
• It is also called greedy local search as it only looks to its good immediate neighbor state and
not beyond that.
• A node of hill climbing algorithm has two components which are state and value.
• Hill Climbing is mostly used when a good heuristic is available.
• In this algorithm, we don't need to maintain and handle the search tree or graph as it only
keeps a single current state.

Features

Generate and Test variant: Hill climbing is a variant of the Generate and Test method.
The Generate and Test method produces feedback which helps to decide which direction
to move in the search space.
Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes
the cost.
No backtracking: It does not backtrack the search space, as it does not remember the
previous states.

State Space Diagram


The state-space landscape is a graphical representation of the hill-climbing algorithm,
showing the objective function/cost against the various states of the algorithm.
Types of Hill Climbing

• Simple Hill Climbing


• Steepest Ascent Hill Climbing
• Stochastic Hill Climbing

1. Simple Hill Climbing :

• It evaluates one neighbor node state at a time and selects the first one which improves
the current cost, setting it as the current state.
• It checks only one successor state; if that state is better than the current state it moves
there, else it stays in the same state.
• This algorithm has the following features:
▪ Less time consuming
▪ Less optimal solution and the solution is not guaranteed

Algorithm

Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
Step 2: Loop Until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
1. If it is goal state, then return success and quit.
2. Else if it is better than the current state then assign new state as a current state.
3. Else if it is not better than the current state, then return to Step 2.
Step 5: Exit.
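A hedged Python sketch of these steps for a maximization problem; the `value` and `neighbors` callbacks are illustrative assumptions:

def simple_hill_climbing(state, value, neighbors, max_steps=10_000):
    """Move to the FIRST neighbor that improves the objective `value`;
    stop when no neighbor is better (a peak, possibly only a local maximum)."""
    for _ in range(max_steps):
        improved = False
        for nxt in neighbors(state):
            if value(nxt) > value(state):    # first improving successor wins
                state, improved = nxt, True
                break
        if not improved:
            return state                     # no better neighbor: current peak
    return state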

2. Steepest Ascent Hill Climbing :

• The steepest-ascent algorithm is a variation of the simple hill climbing algorithm.

• This algorithm examines all the neighboring nodes of the current state and selects
one neighbor node which is closest to the goal state.

• This algorithm consumes more time as it searches for multiple neighbors

Algorithm
• Step 1: Evaluate the initial state, if it is goal state then return success and stop, else make
current state as initial state.

• Step 2: Loop until a solution is found or the current state does not change.

1. Let SUCC be a state such that any successor of the current state will be better than it
(i.e., initialize SUCC below every possible value).
2. For each operator that applies to the current state:
I. Apply the new operator and generate a new state.
II. Evaluate the new state.
III.If it is goal state, then return it and quit, else compare it to the SUCC.
IV.If it is better than SUCC, then set new state as SUCC.
V. If the SUCC is better than the current state, then set current state to SUCC.

• Step 3: Exit.
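For comparison, a hedged sketch of the steepest-ascent variant, where SUCC is the best of all neighbors (same illustrative callbacks as the simple hill climbing sketch):

def steepest_ascent_hill_climbing(state, value, neighbors):
    """Examine ALL neighbors each round and move to the best one (SUCC),
    stopping when even the best neighbor is no better than the current state."""
    while True:
        succ = max(neighbors(state), key=value, default=None)
        if succ is None or value(succ) <= value(state):
            return state                     # current state is a (local) maximum
        state = succ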

3. Stochastic Hill Climbing :

• Stochastic hill climbing does not examine all its neighbors before moving.

• Rather, this search algorithm selects one neighbor node at random and decides
whether to choose it as a current state or examine another state.

Problems in Hill Climbing Algorithm

1. Local Maximum: A local maximum is a peak state in the landscape which is better than
each of its neighboring states, but there is another state also present which is higher than the
local maximum.
Solution: The backtracking technique can be a solution to the local maximum problem in the
state space landscape. Create a list of promising paths so that the algorithm can backtrack the
search space and explore other paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the
current state contain the same value; because of this the algorithm cannot find the best direction
in which to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution for the plateau is to take big steps or very little steps while searching.
Randomly select a state which is far away from the current state, so it is possible that the
algorithm could find a non-plateau region.

3. Ridges: A ridge is a special form of local maximum. It is an area which is higher than its
surrounding areas but which itself has a slope, and it cannot be climbed in a single move.
Solution: With the use of bidirectional search, or by moving in different directions, we can
overcome this problem.

Simulated Annealing :
• A hill-climbing algorithm which never makes a move towards a lower value is
guaranteed to be incomplete, because it can get stuck on a local maximum.
• If the algorithm instead applies a random walk, moving to a randomly chosen
successor, it may be complete but is not efficient. Simulated annealing is an
algorithm which yields both efficiency and completeness.
• In simulated annealing, the algorithm picks a random move instead of picking the
best move.
• If the random move improves the state, then it follows the same path.
• Otherwise, the algorithm accepts the move with some probability less than 1; that is,
it may move downhill and choose another path.
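A minimal Python sketch of this acceptance rule, assuming a maximization problem and a geometric cooling schedule (the schedule and parameter values are illustrative assumptions):

import math, random

def simulated_annealing(state, value, random_neighbor,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    """Pick a RANDOM move; always accept improvements, and accept downhill
    moves with probability exp(delta / T), which shrinks as T cools."""
    t = t0
    while t > t_min:
        nxt = random_neighbor(state)
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt                      # move accepted (possibly downhill)
        t *= cooling                         # assumed geometric cooling schedule
    return state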
Means End Analysis :

• Means-Ends Analysis (MEA) is a problem-solving technique used in Artificial
Intelligence for limiting search in AI programs.
• It is a mixture of backward and forward search techniques.
• The MEA technique was first introduced in 1961 by Allen Newell and Herbert A.
Simon in their problem-solving computer program, named the General
Problem Solver (GPS).
• The MEA process is centered on the evaluation of the difference between the
current state and the goal state.
• The means-ends analysis process can be applied recursively to a problem. It is a
strategy to control search in problem-solving. The following are the main steps which
describe the working of the MEA technique for solving a problem:
1. First, evaluate the difference between the Initial State and the Final State.
2. Select the various operators which can be applied for each difference.
3. Apply the operator at each difference, which reduces the difference between the
current state and the goal state.

Algorithm
Let's we take Current state as CURRENT and Goal State as GOAL, then following are the
steps for the MEA algorithm.

• Step 1: Compare CURRENT to GOAL, if there are no differences between both then return
Success and Exit.

• Step 2: Else, select the most significant difference and reduce it by doing the following steps
until the success or failure occurs.
1. Select a new operator O which is applicable for the current difference, and if there is
no such operator, then signal failure.
2. Attempt to apply operator O to CURRENT. Make a description of two states.
i) O-Start, a state in which O?s preconditions are satisfied.
ii) O-Result, the state that would result if O were applied In O-start.
3. If
(First-Part <------ MEA (CURRENT, O-START)
And
(LAST-Part <----- MEA (O-Result, GOAL), are successful, then signal Success
and return the result of combining FIRST-PART, O, and LAST-PART.
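The following recursive skeleton mirrors Steps 1 and 2 above. It is only a hedged outline: `difference`, `choose_operator`, `apply_op`, and the operator's `o_start` state are hypothetical, domain-dependent hooks, not a fixed API:

def mea(current, goal, difference, choose_operator, apply_op):
    """Recursive Means-Ends Analysis skeleton (placeholder domain functions)."""
    if not difference(current, goal):
        return []                            # CURRENT == GOAL: success, empty plan
    op = choose_operator(current, goal)      # operator for most significant difference
    if op is None:
        return None                          # no applicable operator: signal failure
    o_start = op.o_start                     # hypothetical: state where O's preconditions hold
    o_result = apply_op(op, o_start)         # the state that results from applying O
    first = mea(current, o_start, difference, choose_operator, apply_op)
    last = mea(o_result, goal, difference, choose_operator, apply_op)
    if first is None or last is None:
        return None                          # either subproblem failed
    return first + [op] + last               # combine FIRST-PART, O, LAST-PART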
Constraint Satisfaction Problems :
• Constraint satisfaction problems or CSPs are mathematical problems where one must
find states or objects that satisfy a number of constraints or criteria.
• A constraint is a restriction of the feasible solutions in an optimization problem.

Examples
1. n-Queen Problem
2. A crossword Problem
3. A map coloring Problem
4. Boolean Satisfiability Problem (SAT)
5. A cryptarithmetic Problem

A Constraint Satisfaction Problem (CSP) is characterized by:


• a set of variables {x1, x2, .., xn},
• for each variable xi a domain Di with the possible values for that variable, and
• a set of constraints, i.e. relations, that are assumed to hold between the values of
the variables. We will only consider constraints involving one or two
variables.
• The constraint satisfaction problem is to find, for each i from 1 to n, a value in Di for
xi so that all constraints are satisfied.
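As a hedged illustration (the region names and constraints are assumptions, not from the notes), a small map-coloring CSP can be written down directly as Python dictionaries:

# A tiny map-coloring CSP: three regions, each a variable with domain
# {red, green, blue}; adjacent regions must receive different colors.
variables = ["WA", "NT", "SA"]                       # illustrative region names
domains = {v: {"red", "green", "blue"} for v in variables}

# Binary constraints as predicate functions over pairs of variables.
constraints = {
    ("WA", "NT"): lambda a, b: a != b,
    ("WA", "SA"): lambda a, b: a != b,
    ("NT", "SA"): lambda a, b: a != b,
}

def consistent(assignment):
    """True if the (possibly partial) assignment violates no constraint."""
    return all(check(assignment[x], assignment[y])
               for (x, y), check in constraints.items()
               if x in assignment and y in assignment)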

Representation of CSP

• A CSP is usually represented as an undirected graph, called Constraint Graph


where the nodes are the variables and the edges are the binary constraints.
• Unary constraints can be disposed of by just redefining the domains to contain only
the values that satisfy all the unary constraints.
• Higher order constraints are represented by hyperarcs.
Solving a CSP :

There are four popular solution methods for CSPs, namely Generate-and-Test,
Backtracking, Consistency Driven, and Forward Checking.

1. Generate and Test


We generate one by one all possible complete variable assignments and for each we test
if it satisfies all constraints. The corresponding program structure is very simple, just
nested loops, one per variable. In the innermost loop we test each constraint. In most
situations this method is intolerably slow.

2. Backtracking
We order the variables in some fashion, trying to place first the variables that are more
highly constrained or with smaller ranges. This order has a great impact on the efficiency
of solution algorithms and is examined elsewhere. We start assigning values to variables.
We check constraint satisfaction at the earliest possible time and extend an assignment if
the constraints involving the currently bound variables are satisfied.
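A minimal sketch of this backtracking scheme, reusing the dictionary representation from the map-coloring example above; the simple "first unassigned variable" ordering is an assumption (the text suggests ordering by constraint tightness or domain size instead):

def backtracking_search(assignment, variables, domains, consistent):
    """Assign variables one at a time, checking constraints as early as possible."""
    if len(assignment) == len(variables):
        return assignment                         # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):                # check at the earliest possible time
            result = backtracking_search(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]                       # undo and try the next value
    return None

# e.g. backtracking_search({}, variables, domains, consistent) with the CSP above.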

3. Consistency Driven
Consistency techniques effectively rule out many inconsistent labeling at a very early stage,
and thus cut short the search for consistent labeling. The consistency techniques are
deterministic, as opposed to the search which is non-deterministic. Thus the deterministic
computation is performed as soon as possible and non-deterministic computation during
search is used only when there is no more propagation to done.
3.1 Node Consistency
The node representing a variable V in constraint graph is node consistent if for every
value x in the current domain of V, each unary constraint on V is satisfied. If the
domain D of a variable V contains a value "a" that does not satisfy the unary
constraint on V, then the instantiation of V to "a" will always result in immediate
failure. Thus, the node inconsistency can be eliminated by simply removing those
values from the domain D of each variable V that do not satisfy unary constraint on
V.
3.2 Arc Consistency
In the constraint graph, binary constraint corresponds to arc, therefore this type of
consistency is called arc consistency. Arc (V i ,V j ) is arc consistent if for every
value x the current domain of V i there is some value y in the domain of V j such that
V i =x and V j =y is permitted by the binary constraint between V i and V j . Note,
that the concept of arc-consistency is directional, i.e., if an arc (V i ,V j ) is
consistent, than it does not automatically mean that (V j ,V i ) is also consistent.
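A hedged sketch of arc-consistency enforcement in the AC-3 style; the set-valued domains match the map-coloring example, and the single symmetric `check` predicate is a simplifying assumption:

from collections import deque

def revise(domains, xi, xj, check):
    """Make arc (xi, xj) consistent: remove every value x of xi that has
    no supporting value y in xj's domain. Returns True if xi's domain shrank."""
    removed = {x for x in domains[xi]
               if not any(check(x, y) for y in domains[xj])}
    domains[xi] -= removed
    return bool(removed)

def ac3(domains, arcs, check):
    """Repeat revision until every directed arc in `arcs` is consistent."""
    queue = deque(arcs)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, check):
            if not domains[xi]:
                return False                 # a domain emptied: inconsistent CSP
            # re-examine arcs pointing into xi, since its domain changed
            queue.extend((xk, xl) for (xk, xl) in arcs if xl == xi and xk != xj)
    return True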
3.3 Path Consistency (K-consistency)
A graph is K-consistent if the following is true: choose values of any K-1 variables
that satisfy all the constraints among these variables and choose any Kth variable.
Then there exists a value for this Kth variable that satisfies all the constraints among
these K variables. A graph is strongly K-consistent if it is J-consistent for all J <= K.
A node representing variable Vi is restricted path consistent if it is arc-consistent,
i.e., all arcs from this node are arc-consistent, and the following is true: for every
value a in the domain Di of the variable Vi that has just one supporting value b from
the domain of incidental variable Vj, there exists a value c in the domain of the other
incidental variable Vk such that (a, c) is permitted by the binary constraint between
Vi and Vk, and (c, b) is permitted by the binary constraint between Vk and Vj.

4. Forward Checking
Forward checking is the easiest way to prevent future conflicts. Instead of performing arc
consistency on the instantiated variables, it performs a restricted form of arc consistency on
the not yet instantiated variables. We speak of restricted arc consistency because
forward checking checks only the constraints between the current variable and the future
forward checking checks only the constraints between the current variable and the future
variables. When a value is assigned to the current variable, any value in the domain of a
"future" variable which conflicts with this assignment is (temporarily) removed from the
domain. The advantage of this is that if the domain of a future variable becomes empty, it
is known immediately that the current partial solution is inconsistent. Forward checking
therefore allows branches of the search tree that will lead to failure to be pruned earlier
than with simple backtracking.
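A minimal sketch of the pruning step described above; `neighbors` and `check` are illustrative placeholders for the future variables constrained with `var` and for the binary constraint test:

import copy

def forward_check(domains, var, value, neighbors, check):
    """After assigning `value` to `var`, prune conflicting values from the
    domains of unassigned neighboring variables. Returns the pruned copy of
    the domains, or None if some domain becomes empty (dead end detected)."""
    pruned = copy.deepcopy(domains)
    pruned[var] = {value}
    for other in neighbors(var):
        pruned[other] = {y for y in pruned[other] if check(value, y)}
        if not pruned[other]:
            return None        # a future variable has no value left: prune branch
    return pruned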
