5. Explain the process of Skolemization. How is this accomplished? Give suitable examples in
support of your answer.
Ans. Skolemization is the process of removing existential quantifiers by elimination. In the simple
case, a sentence such as ∃x P(x) translates into P(A), where A is a constant that does not appear
elsewhere in the KB. But there is the added complication that some of the existential quantifiers,
even though moved left, may still be nested inside a universal quantifier.
Example of Skolemization
Consider “Everyone has a heart”:
∀x Person(x) ⇒ ∃y Heart(y) ∧ Has(x, y)
If we simply replaced y with a constant, H, we would get
∀x Person(x) ⇒ Heart(H) ∧ Has(x, H)
which says that everyone has the same heart H. We need to say that the heart they have is not
necessarily shared, that is, it can be found by applying to each person a function that maps from person
to heart:
∀x Person(x) ⇒ Heart(F(x)) ∧ Has(x, F(x))
where F is a function name that does not appear elsewhere in the KB. F is called a Skolem
Function. In general, the existentially quantified variable is replaced by a term that consists of a Skolem
Function applied to all the variables universally quantified outside the existential quantifier in question.
Skolemization eliminates all existentially quantified variables, so we are now free to drop the universal
quantifiers, because any variable must be universally quantified.
6. Give the classification of different types of tasks of AI.
Ans. One possible classification of AI tasks is into 3 classes: Mundane tasks, Formal tasks and Expert
tasks.
Mundane Tasks
o Perception
o Vision
o Speech
o Natural Language understanding, generation and translation
o Common-sense Reasoning
o Simple reasoning and logical symbol manipulation
o Robot Control
Formal Tasks
o Games
Chess
Backgammon
Draughts
GO
o Mathematics
Geometry and Logic
Logic Theorist: it proved mathematical theorems, including several
theorems from classical mathematics textbooks.
Integral Calculus
Programs such as Mathematica and Mathcad can perform complicated
symbolic integration and differentiation.
o Proving properties of Programs e.g. correctness
Expert Tasks
o Engineering
Design
Fault Finding
Manufacturing
Planning
Scientific Analysis
Medical Diagnosis
7. Explain briefly the process of matching production rules against working memory.
Ans. Production systems may vary on the expressive power of conditions in production rules.
Accordingly, the pattern matching algorithm which collects production rules with matched conditions
may range from the naïve (trying all rules in sequence, stopping at the first match) to the optimized,
in which rules are “compiled” into a network of inter-related conditions.
The latter is illustrated by the RETE algorithm, designed by Charles L. Forgy in 1982, which is used
in a series of production systems, called OPS and originally developed at Carnegie Mellon University
culminating in OPS5 in the early eighties. OPS5 may be viewed as a full-fledged programming
language for production system programming.
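The naïve end of the spectrum (try each rule in sequence against working memory, stop at the first rule whose conditions all match under one consistent binding) can be sketched as follows. This is deliberately not the RETE network; the names (match_pattern, first_match) and the '?'-prefixed variable convention are illustrative assumptions.

```python
# Naive production-rule matching: conditions are tuples whose elements are
# constants or variables (strings starting with '?'); working memory is a
# list of ground fact tuples.

def match_pattern(pattern, fact, bindings):
    """Match one condition against one fact; return extended bindings or None."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith('?'):
            if p in b and b[p] != f:
                return None            # variable already bound differently
            b[p] = f
        elif p != f:
            return None                # constant mismatch
    return b

def match_conditions(conditions, memory, bindings=None):
    """Depth-first search for bindings satisfying every condition."""
    bindings = bindings or {}
    if not conditions:
        return bindings
    first, rest = conditions[0], conditions[1:]
    for fact in memory:
        b = match_pattern(first, fact, bindings)
        if b is not None:
            result = match_conditions(rest, memory, b)
            if result is not None:
                return result
    return None

def first_match(rules, memory):
    """Naive strategy: try rules in sequence, stop at the first match."""
    for name, conditions in rules:
        b = match_conditions(conditions, memory)
        if b is not None:
            return name, b
    return None

memory = [('parent', 'tom', 'bob'), ('parent', 'bob', 'ann')]
rules = [('grandparent', [('parent', '?x', '?y'), ('parent', '?y', '?z')])]
print(first_match(rules, memory))
```

Every cycle re-matches every condition of every rule against all of working memory; RETE's contribution is precisely to cache these partial matches in a network so unchanged facts are never re-tested.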
Ans. Depth-first search is good because a solution can be found without computing all nodes, and
breadth-first search is good because it does not get trapped in dead ends. Best-first search allows us
to switch between paths, thus gaining the benefit of both approaches. At each step the most promising
node is chosen. If one of the nodes chosen generates nodes that are less promising, it is possible to
choose another at the same level, and in effect the search changes from depth to breadth. If, on
analysis, these are no better, the previously unexpanded node and branch are not forgotten, and the
search method reverts to the descendants of the first choice and proceeds, backtracking as it were.
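The switching behaviour described above falls out naturally if every unexpanded node sits in one priority queue, so the search always resumes from the most promising node anywhere on the frontier. This is a sketch under assumptions: the grid example, the heuristic h, and the function names are mine, not from the text.

```python
# Best-first search: expand the most promising open node at each step.
import heapq

def best_first_search(start, goal, neighbors, h):
    """neighbors(n) yields successors; h(n) estimates promise (lower is better)."""
    frontier = [(h(start), start, [start])]    # (estimate, node, path so far)
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)  # most promising open node
        if node == goal:
            return path
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, [*path, nxt]))
    return None

# Illustrative 3x3 grid: walk from (0, 0) to (2, 2), ranked by Manhattan distance.
def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (0, 1), (-1, 0), (0, -1))
            if 0 <= x + dx <= 2 and 0 <= y + dy <= 2]

h = lambda p: abs(p[0] - 2) + abs(p[1] - 2)
print(best_first_search((0, 0), (2, 2), neighbors, h))
```

If an expansion produces only unpromising children, those children simply sink in the queue and the next pop retrieves a better node left over from an earlier level, which is the depth-to-breadth switch described above.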
Inferential Efficiency: the ability to direct the inferential mechanisms in the most productive
directions by storing appropriate guides.
Acquisitional Efficiency: the ability to acquire new knowledge using automatic methods
wherever possible rather than relying on human intervention.
10. Explain the different strategies for the selection of clauses to be resolved.
Ans. Many different strategies have been tried for selecting the clauses to be resolved. These
include:
Level saturation or two-pointer method: The outer pointer starts at the negated conclusion; the
inner pointer starts at the first clause. The two clauses denoted by the pointers are resolved if
possible, with the result added to the end of the list of clauses. The inner pointer is incremented
to the next clause until it reaches the outer pointer; then the outer pointer is incremented and
the inner pointer is reset to the front. The two-pointer method is a breadth-first method that will
generate many duplicate clauses.
Set of Support: One clause in each resolution step must be part of the negated conclusion or a
clause derived from it. This can be combined with the two-pointer method by putting the clauses
from the negated conclusion at the end of the list. Set-of-support keeps the proof process focused
on the theorem to be proved rather than trying to prove everything.
Unit Preference: Clauses are prioritized, with unit clauses preferred, or more generally, shorter
clauses preferred. Resolution with a unit clause makes the result smaller.
Linear Resolution: One clause in each step must be the result of the previous step. This is a
depth-first strategy. It may be necessary to back up to a previous clause if no resolution with the
current clause is possible.
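The two-pointer method can be sketched over propositional clauses, with one simplification: the outer pointer here sweeps from the front of the list rather than starting at the negated conclusion, which gives the same breadth-first saturation. The clause encoding (frozensets of string literals, '-' for negation) and all names are assumptions for illustration.

```python
# Level-saturation (two-pointer) resolution for propositional clauses.
# A clause is a frozenset of literals; deriving the empty clause proves
# the (negated) conclusion inconsistent with the KB.

def resolvents(c1, c2):
    """All clauses obtained by resolving c1 with c2 on a complementary pair."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('-') else '-' + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

def two_pointer_prove(clauses):
    """Outer pointer sweeps the list; inner pointer runs up to the outer one.
    New resolvents are appended to the end. True iff the empty clause appears."""
    clauses = list(clauses)
    outer = 0
    while outer < len(clauses):
        for inner in range(outer):
            for r in resolvents(clauses[outer], clauses[inner]):
                if not r:
                    return True          # empty clause: contradiction found
                if r not in clauses:     # skip duplicate clauses
                    clauses.append(r)
        outer += 1
    return False

# KB: P and P => Q (i.e. -P | Q); negated conclusion: -Q.
kb = [frozenset({'P'}), frozenset({'-P', 'Q'}), frozenset({'-Q'})]
print(two_pointer_prove(kb))
```

Placing the negated-conclusion clauses at the end of the initial list, as the set-of-support discussion suggests, biases this same loop toward resolutions involving the theorem being proved.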