Get rid of existential variables (by Existential Instantiation). Drop quantifiers altogether (because all variables are now universally quantified). Ensure that variables in a formula don't occur anywhere else (by substituting brand-new ones). Get rid of top-level ∧ (by just asserting both conjuncts).
Suppose the KB contains Loves(x,x) and Loves(Pat,Jan). If the goal is Loves(Jan,Jan), our retriever will conclude that it is true. If the goal is Loves(y,Jan), our retriever will return two bindings.
The retriever is implicitly doing Universal Instantiation and Existential Introduction: variables in KB facts are (more or less) universally interpreted, while variables in goals have a (more or less) existential interpretation.
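As an illustration, a minimal retriever along these lines can be sketched in Python. The tuple representation of atoms and the '?' prefix for variables are conventions assumed for this sketch, not part of the notes:

```python
def is_var(t):
    """Variables are strings starting with '?' (a convention assumed here)."""
    return isinstance(t, str) and t.startswith('?')

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Extend substitution s so that a and b match, or return None."""
    if s is None:
        return None
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
        return s
    return None

def retrieve(goal, kb):
    """Return one substitution per KB fact that unifies with the goal."""
    return [s for s in (unify(goal, fact, {}) for fact in kb) if s is not None]

kb = [('Loves', '?x', '?x'),      # everyone loves themselves
      ('Loves', 'Pat', 'Jan')]    # Pat loves Jan
print(retrieve(('Loves', 'Jan', 'Jan'), kb))  # non-empty, so the goal is true
print(retrieve(('Loves', '?y', 'Jan'), kb))   # two bindings for ?y
```

Note that the KB fact's variable ?x is instantiated to match the ground goal (Universal Instantiation), while the goal variable ?y is answered by whatever bindings succeed (the existential reading).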
Note that, unlike with our previous assumptions about format, not everything can be expressed this way. But a lot can. Now, we use the Horn clauses to make inferences.
More Precisely
An Inference Procedure:
First, look in the KB for something that unifies with the goal. Next, look for Horn clauses whose RHS (i.e., the consequent) unifies with the goal.
For each such clause, try to prove the LHS (i.e., the antecedent), after substitution with the unifier; each success yields one answer. For each conjunct of the antecedent, apply the inference procedure recursively, carrying along any unifiers found.
Example
Suppose in our KB are:
1. Dog(x) → Mammal(x)
2. RR(y) → Dog(y)
3. RR(Fido)
and suppose our goal is Mammal(Fido). First, we look in the KB for (something that unifies with) Mammal(Fido); this fails. Next, we look for Horn clauses whose RHS unifies with Mammal(Fido), and find rule 1 with {x/Fido} as the unifier. We try to prove Dog(x){x/Fido}, i.e., Dog(Fido). We fail to find it in the KB, so we look for another rule; Dog(Fido) unifies with the RHS of rule 2, so we try to prove RR(Fido). Found in the KB!
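The backward-chaining procedure traced above can be sketched as follows. The tuple representation with '?'-prefixed variables is an assumption of this sketch, and a real prover would also rename rule variables apart before each use, which this sketch omits since the goals here are ground:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(term, s):
    """Apply substitution s throughout a term."""
    if is_var(term):
        return subst(s[term], s) if term in s else term
    if isinstance(term, tuple):
        return tuple(subst(t, s) for t in term)
    return term

def unify(a, b, s):
    if s is None:
        return None
    a, b = subst(a, s), subst(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
        return s
    return None

def prove(goal, facts, rules, s=None):
    """Yield substitutions under which `goal` follows from the KB."""
    s = {} if s is None else s
    # First, look in the KB for a fact that unifies with the goal.
    for fact in facts:
        s2 = unify(goal, fact, s)
        if s2 is not None:
            yield s2
    # Next, look for rules whose RHS (consequent) unifies with the goal,
    # and try to prove each LHS conjunct, carrying the unifier along.
    for antecedents, consequent in rules:
        s2 = unify(goal, consequent, s)
        if s2 is not None:
            yield from prove_all(antecedents, facts, rules, s2)

def prove_all(goals, facts, rules, s):
    if not goals:
        yield s
        return
    for s2 in prove(goals[0], facts, rules, s):
        yield from prove_all(goals[1:], facts, rules, s2)

facts = [('RR', 'Fido')]
rules = [([('Dog', '?x')], ('Mammal', '?x')),   # Dog(x) -> Mammal(x)
         ([('RR', '?y')], ('Dog', '?y'))]       # RR(y) -> Dog(y)
print(any(True for _ in prove(('Mammal', 'Fido'), facts, rules)))  # True
```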
Practical Considerations
With backward chaining, we can have an infinite number of answers.
Suppose we have a recursive rule, e.g., Person(x) → Person(Father(x)), and the query is Person(y). Now, if there are any people, there are an infinite number of them. Care must be exercised here.
For example, the same rule can be used more than once in a derivation.
When we try to deduce Mammal(Fido), we might just as easily unify with the Cat rule, etc., and spend a lot of effort trying to prove Fido is a cat, a marsupial, etc., each of which fails.
An Alternative
Rather than doing a costly computation whenever we try to retrieve Mammal(Fido), our retriever could, upon learning that RR(Fido), use rules to automagically insert into the KB that Dog(Fido) and Mammal(Fido). I.e., when we add a fact to the KB, we check whether it unifies with the LHS of an implication and, if so, assert the RHS, after substitution, into the KB.
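This if-added behavior can be sketched as follows, with single-antecedent rules only and the same assumed tuple-and-'?' representation:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(term, s):
    if is_var(term):
        return subst(s[term], s) if term in s else term
    if isinstance(term, tuple):
        return tuple(subst(t, s) for t in term)
    return term

def unify(a, b, s):
    if s is None:
        return None
    a, b = subst(a, s), subst(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
        return s
    return None

def tell(fact, kb, rules):
    """Add a fact; fire every if-added rule whose LHS unifies with it."""
    if fact in kb:
        return                               # already known; avoid loops
    kb.add(fact)
    for lhs, rhs in rules:
        s = unify(lhs, fact, {})
        if s is not None:
            tell(subst(rhs, s), kb, rules)   # asserting the RHS may cascade

kb = set()
rules = [(('RR', '?y'), ('Dog', '?y')),       # if-added: RR(y) => Dog(y)
         (('Dog', '?x'), ('Mammal', '?x'))]   # if-added: Dog(x) => Mammal(x)
tell(('RR', 'Fido'), kb, rules)
print(sorted(kb))  # [('Dog', 'Fido'), ('Mammal', 'Fido'), ('RR', 'Fido')]
```

Adding RR(Fido) cascades: Dog(Fido) and then Mammal(Fido) are asserted automatically, so later retrieval is a cheap lookup.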
More on Chaining
Combining forward chaining with backward chaining is fine.
We sometimes call implications used for BC "if-neededs" and those used for FC "if-addeds". We need some way of indicating how we are going to use a particular implication.
Dog(x) → Mammal(x)   ; if-added: if we learn something is a dog, add that it is a mammal.
RR(y) → Dog(y)   ; if-needed: if we want to know whether something is a dog, try proving it is a Rhodesian Ridgeback.
E.g., from the above KB, we can't deduce Mammal(Fido) from RR(Fido): the Dog rule is marked if-needed and the Mammal rule if-added, so chaining never connects them. Also, from P → Q and Q → R, we can't conclude P → R just by chaining, although it clearly follows. But, perhaps surprisingly, no variant of chaining will establish this fact! I.e., our inference methods are not complete. With only Horn clauses (which don't allow negations), we can prove every atomic sentence in prop. calc., but we still can't deduce P → R from P → Q and Q → R.
We Can Do Better
We know that there is some complete inference method for FOPC. What might one be? In fact, there is a deductive method that is complete and as efficient as any complete method of inference. It's due to J. Robinson.
P → Q is equivalent to ¬P ∨ Q; resolution lets us conclude Q from P and ¬P ∨ Q, which is just what we want. Intuitively, resolution allows us to join constraints together.
More Generally
Suppose our KB consists of nothing but sentences of the form P ∨ Q or ¬P ∨ Q. We can negate the goal, add it to our KB, and keep applying resolution until we deduce a contradiction.
Example
Suppose our KB is P ∨ Q, ¬P ∨ R, ¬Q ∨ R, and we want to prove R ∨ S. Negating R ∨ S, we get ¬R ∧ ¬S, so we add ¬R and ¬S to our KB. Now apply the resolution rule to try to find a contradiction.
KB: P ∨ Q, ¬P ∨ R, ¬Q ∨ R, ¬R, ¬S
Resolving P ∨ Q with ¬P ∨ R gives Q ∨ R; resolving Q ∨ R with ¬Q ∨ R gives R; resolving R with ¬R gives the empty clause, a contradiction, so R ∨ S follows.
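The whole refutation loop for the propositional case can be sketched like this. Representing clauses as frozensets of string literals, with '~' marking negation, is an assumption of the sketch:

```python
from itertools import combinations

def negate(lit):
    """'P' <-> '~P'."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refute(clauses):
    """Return True iff the clause set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # derived the empty clause
                    return True
                new.add(r)
        if new <= clauses:             # nothing new: no contradiction found
            return False
        clauses |= new

# KB: P∨Q, ¬P∨R, ¬Q∨R; goal R∨S, negated and added as ¬R and ¬S
kb = [frozenset({'P', 'Q'}), frozenset({'~P', 'R'}), frozenset({'~Q', 'R'}),
      frozenset({'~R'}), frozenset({'~S'})]
print(refute(kb))  # True: R∨S follows from the KB
```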
In the propositional case, resolution means looking for disjunctions with two complementary disjuncts (one the negation of the other) and ORing together the remaining disjuncts. In the first-order case, we look for disjunctions with disjuncts one of which can be unified with the negation of the other, and OR together the remaining disjuncts, substituting with the unifier.
More Formally
Resolution for FOPC: from p1 ∨ ... ∨ pn and q1 ∨ ... ∨ qm, where pi and ¬qj are unified by θ, infer
(p1 ∨ ... ∨ pi-1 ∨ pi+1 ∨ ... ∨ pn ∨ q1 ∨ ... ∨ qj-1 ∨ qj+1 ∨ ... ∨ qm)θ
where the pi and qj are atomic formulas or negations of atomic formulas.
Clausal Form
Since our sentence format is so restricted, we can write clauses in an abbreviated way. E.g., instead of ∀x,y (P(x) ∨ Q(x,A)) we will write {P(x), Q(x,A)}. I.e., a clause is just a set of literals, with the implicit interpretation that they are ORed together.
Resolution (cont)
We negate what we are trying to prove and add it to the KB,
being sure it too is in clausal form.
Form a new clause by combining the remaining disjuncts of the two clauses being resolved, substituting as per the unifier. Keep going until we deduce the empty clause.
Example
We can resolve {P(x), Q(x,A)} with {R(w,z), ¬Q(z,w)}: the negation of ¬Q(z,w), namely Q(z,w), unifies with Q(x,A), with unifier {x/z, w/A}. Applying the unifier to the remaining disjuncts gives us {P(z), R(A,z)}.
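A single first-order resolution step like this one can be sketched as follows. Literals as tuples, negation as a leading 'not' tag, and '?'-prefixed variables are conventions assumed for this illustration, and the two clauses are taken to be already standardized apart:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(term, s):
    if is_var(term):
        return subst(s[term], s) if term in s else term
    if isinstance(term, tuple):
        return tuple(subst(t, s) for t in term)
    return term

def unify(a, b, s):
    if s is None:
        return None
    a, b = subst(a, s), subst(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
        return s
    return None

def negative(lit):
    return lit[0] == 'not'

def atom(lit):
    return lit[1] if negative(lit) else lit

def resolve(c1, c2):
    """Yield (resolvent, unifier) for each complementary, unifiable pair."""
    for l1 in c1:
        for l2 in c2:
            if negative(l1) != negative(l2):          # opposite signs
                s = unify(atom(l1), atom(l2), {})
                if s is not None:
                    rest = [l for l in c1 if l != l1] + [l for l in c2 if l != l2]
                    yield [subst(l, s) for l in rest], s

c1 = [('P', '?x'), ('Q', '?x', 'A')]                  # {P(x), Q(x,A)}
c2 = [('R', '?w', '?z'), ('not', ('Q', '?z', '?w'))]  # {R(w,z), ¬Q(z,w)}
resolvent, s = next(resolve(c1, c2))
print(resolvent)  # [('P', '?z'), ('R', 'A', '?z')], i.e. {P(z), R(A,z)}
print(s)          # {'?x': '?z', '?w': 'A'}
```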
Resolution Example
Lets do an example that we used BC for.
The KB was
∀x Dog(x) → Mammal(x)
∀y RR(y) → Dog(y)
RR(Fido)
and the goal was Mammal(Fido).
Resolution Examples
Clauses:
{¬Dog(x), Mammal(x)}
{¬RR(y), Dog(y)}
{RR(Fido)}
{¬Mammal(Fido)}   (the negated goal)

Resolving {¬Dog(x), Mammal(x)} with {¬RR(y), Dog(y)} gives {¬RR(x), Mammal(x)}.
Resolving {¬RR(x), Mammal(x)} with {RR(Fido)} gives {Mammal(Fido)}.
Resolving {Mammal(Fido)} with {¬Mammal(Fido)} gives the empty clause.
Note
In resolution, the goal and the elements of the KB can be any sentences at all
as long as we express them in clausal form. Unlike backward chaining, etc., the goal in resolution is interpreted just like any item in the KB.
In particular, unscoped variables would be treated universally rather than existentially.
E.g., in BC, Loves(x,Pat) means "Who loves Pat?"; in resolution, it would mean "Does everyone love Pat?".
However, we don't (yet) have a way to retrieve answers; we can only test for truth.
Is Resolution Complete?
Yes! In the sense that, if a goal follows from a KB, resolution is guaranteed to find a proof. This is called refutation completeness.
No! In the sense that not every sentence logically implied by a KB can be produced by resolution. For example, the tautology {P, ¬P} cannot be constructed by resolution from the empty KB. But this is not really relevant to our concerns.
Bad News!
Resolution doesn't work for sentences involving equality. For example, if we have
P(A) A=B
then
P(B)
clearly follows.
This can be fixed by another inference rule called paramodulation, or by a lot of less general (perhaps more efficient) hacks.
Good News!
Resolution is as fast as any complete algorithm.
Bad News!
Resolution is hopelessly slow (hyperexponential in the number of literals). There are various strategies to help make things somewhat more efficient, but they don't fundamentally solve the problem.
Worse News!
Whether or not something follows from a set of premises is not decidable. I.e., if neither α nor ¬α is implied by the KB, there is no way to determine this. The question is only partially decidable for FOPC. It is not even recursively enumerable for second-order logic.
Clausal Form
Resolution requires our KB to be in clausal form. Turns out, we can algorithmically convert any sentence of FOL into clausal form. Thus, resolution is a completely general procedure.
1. Eliminate implications
2. Distribute negations
3. Rename variables for quantifiers
4. Eliminate existential quantifiers
5. Drop universal quantifiers
6. Distribute disjunctions over conjunctions
7. Turn conjuncts into clauses
8. Standardize variables apart
1. Eliminate implications
Eliminate implications by substituting logical equivalences. I.e., P → Q is equivalent to ¬P ∨ Q.
2. Distribute negations
Distribute negations over other logical operations:
¬(P ∧ Q) is replaced by (¬P ∨ ¬Q)
¬(P ∨ Q) is replaced by (¬P ∧ ¬Q)
¬¬P is replaced by P
¬∀x P is replaced by ∃x ¬P
¬∃x P is replaced by ∀x ¬P
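These rewrite rules amount to pushing negations inward until they sit only on atoms (negation normal form). A sketch, assuming formulas are represented as nested tuples tagged 'not', 'and', 'or', 'forall', or 'exists', with anything else treated as an atom:

```python
def nnf(f):
    """Push negations inward, returning an equivalent formula in
    negation normal form."""
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'not':                    # double negation: drop both
            return nnf(g[1])
        if g[0] == 'and':                    # De Morgan over conjunction
            return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'or':                     # De Morgan over disjunction
            return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'forall':                 # negation through quantifier
            return ('exists', g[1], nnf(('not', g[2])))
        if g[0] == 'exists':
            return ('forall', g[1], nnf(('not', g[2])))
        return f                             # negated atom: already NNF
    if f[0] in ('and', 'or'):
        return (f[0], nnf(f[1]), nnf(f[2]))
    if f[0] in ('forall', 'exists'):
        return (f[0], f[1], nnf(f[2]))
    return f                                 # atom

# ¬∀x (P(x) ∨ ¬Q(x))  becomes  ∃x (¬P(x) ∧ Q(x))
f = ('not', ('forall', 'x', ('or', ('P', 'x'), ('not', ('Q', 'x')))))
print(nnf(f))
```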
3. Rename variables
Rename variables so that each quantifier has a unique variable.
4. Eliminate existential quantifiers
Existential variables not in the scope of any universal quantifier are replaced by new constants (Skolem constants). Existential variables in the scope of some universal quantifiers are replaced by new functions (Skolem functions) of all the universally quantified variables.
Example
Consider: ∀x ((¬ ∃y Born-in(x,y)) → (∃y (Moving-to(x,y) ∧ Citizen(x,y))))
1. Eliminate implications:
∀x (¬ ¬(∃y Born-in(x,y)) ∨ (∃y (Moving-to(x,y) ∧ Citizen(x,y))))
2. Distribute negations:
∀x ((∃y Born-in(x,y)) ∨ (∃y (Moving-to(x,y) ∧ Citizen(x,y))))
3. Rename variables:
∀x ((∃y Born-in(x,y)) ∨ (∃z (Moving-to(x,z) ∧ Citizen(x,z))))
4. Eliminate existential quantifiers:
∀x (Born-in(x,Country1(x)) ∨ (Moving-to(x,Country2(x)) ∧ Citizen(x,Country2(x))))
5. Drop universal quantifiers:
Born-in(x,Country1(x)) ∨ (Moving-to(x,Country2(x)) ∧ Citizen(x,Country2(x)))
6. Distribute disjunctions:
(Born-in(x,Country1(x)) ∨ Moving-to(x,Country2(x))) ∧ (Born-in(x,Country1(x)) ∨ Citizen(x,Country2(x)))
7. Turn conjuncts into clauses:
{Born-in(x,Country1(x)), Moving-to(x,Country2(x))}
{Born-in(x,Country1(x)), Citizen(x,Country2(x))}
8. Standardize variables apart:
{Born-in(x1,Country1(x1)), Moving-to(x1,Country2(x1))}
{Born-in(x2,Country1(x2)), Citizen(x2,Country2(x2))}
Get Ready
Convert the KB to Clausal Form:
∀x,y R(x,y) → R(y,x)
∀x,y,z R(x,y) ∧ R(y,z) → R(x,z)
∀x ∃y R(x,y)
becomes
{¬R(x,y), R(y,x)}
{¬R(x,y), ¬R(y,z), R(x,z)}
{R(x,F(x))}
and the goal, ∀x R(x,x), negated becomes ∃x ¬R(x,x), which Skolemizes to the clause {¬R(A,A)}. We didn't standardize the variables apart, so let's be careful.
Clauses:
{¬R(x,y), R(y,x)}
{¬R(x,y), ¬R(y,z), R(x,z)}
{R(x,F(x))}
{¬R(A,A)}

Resolving {¬R(A,A)} with {¬R(x,y), ¬R(y,z), R(x,z)} (unifier {x/A, z/A}) gives {¬R(A,y), ¬R(y,A)}.
Resolving that with {R(x,F(x))} (unifier {x/A, y/F(A)}) gives {¬R(F(A),A)}.
Resolving that with {¬R(x,y), R(y,x)} (unifier {x/A, y/F(A)}) gives {¬R(A,F(A))}.
Resolving that with {R(x,F(x))} (unifier {x/A}) gives the empty clause.
Summary
Backward and forward chaining are useful strategies for definite-clause databases. Not everything we want to say can be expressed this way, however. Resolution is a completely general FOPC inference technique.
It requires putting our knowledge in clausal form. It is intrinsically slow (but as fast as any complete method).