
The Use of Logic Programming in Scenario-Writing: From AI to PROLOG

The Demography Model: Design, Interface, and Software



Demographic, Economic, and Environmental Interactions: A
Proposal for Design and Software Needs (Using Visual Basic &
Access)
Simulation as Implemented in "Egypt 2020" Model

The Use of Logic Programming in Scenario-Writing: From AI to
PROLOG
Investigating Quantitative Relations and Ceilings from Huge
Data-bases: Data Mining
Contact e-mail: cmu@egypt2020.org
Introduction to Artificial Intelligence
Artificial Intelligence, or AI for short, is a combination of computer science, physiology, and
philosophy. AI is a broad topic, consisting of different fields, from machine vision to expert
systems. The element that the fields of AI have in common is the creation of machines that can
"think".

In order to classify machines as "thinking", it is necessary to define intelligence, which is


described in more detail below. Does intelligence consist, for example, of solving complex
problems, or of making generalizations and seeing relationships? And what about perception and
comprehension? Research into the areas of learning, of language, and of sensory perception has
aided scientists in building intelligent machines. One of the most challenging approaches facing
experts is building systems that mimic the behavior of the human brain, made up of billions of
neurons, and arguably the most complex matter in the universe. Perhaps the best way to gauge
the intelligence of a machine is British computer scientist Alan Turing's test. He stated that a
computer would deserve to be called intelligent if it could deceive a human into believing that it
was human.

Artificial Intelligence has come a long way from its early roots, driven by dedicated researchers.
The beginnings of AI reach back before electronics, to philosophers and mathematicians such as
Boole and others theorizing on principles that were used as the foundation of AI Logic. AI really
began to intrigue researchers with the invention of the computer in 1943. The technology was
finally available, or so it seemed, to simulate intelligent behavior. Over the next four decades,
despite many stumbling blocks, AI has grown from a dozen researchers, to thousands of
engineers and specialists; and from programs capable of playing checkers, to systems designed
to diagnose disease.

AI has always been on the pioneering end of computer science. Advanced-level computer
languages, as well as computer interfaces and word-processors owe their existence to the
research into artificial intelligence. The theory and insights brought about by AI research will set
the trend in the future of computing. The products available today are only bits and pieces of
what are soon to follow, but they are a movement towards the future of artificial intelligence.
The advancements in the quest for artificial intelligence have, and will continue to affect our
jobs, our education, and our lives.

As stated before, in order to classify machines as "thinking", we must first define intelligence.
Intelligence requires both knowledge and reasoning skills. The reasoning portion deduces facts
that are not known to the knowledge portion, and this process produces some sensible course of
action while building experiences. This requirement for AI is an extremely difficult task to
implement.

Why? Computers work on the premise of binary logic. When a computer only knows "yes" and
"no," it is difficult to achieve results that are not rigidly defined. If I wanted to develop an AI
thermostat to heat a house, for example, the program needs to have knowledge of the seasons,
weather conditions like El Niño, and the passage of time, plus it must be able to understand
concepts like "warm," "cool," or "too hot."

Because even the simplest human functions translate to many lines of computer code, current
artificially intelligent systems are designed for one or two specific applications. One of the more
recent examples of an AI application was a chess program running on Deep Blue, IBM's
massively parallel computing system. It was able to successfully beat chess champion Garry
Kasparov because it could search 50 to 100 billion positions in the three minutes each player had
to make a move.

To begin to understand AI, we must first understand how the AI is programmed. Programming a
system to beat a chess champion or drive an autonomous rover on the surface of Mars isn't easy.
There are many ways to code for an AI system. Only the most popular are listed below:

Logic Programming: Simply put, Logic Programming is the use of symbolic logic as a
programming language.

Neural Networks: Neural networks operate by learning from historical data and are ideal when
it is difficult or impossible to formulate hard and fast rules. Their most popular application at
present - but by no means their only one - is in data mining. This involves extracting information
and knowledge from historical data and is often used for customer segmentation
applications.

Fuzzy Logic: Unlike classical logic, Fuzzy logic recognizes that real-world propositions are
often not strictly true or false. A liquid does not suddenly change from being "warm" at 39°C to
being "hot" at 40°C. Fuzzy-logic-based industrial controllers are now becoming commonplace,
but the use of the technology in business applications is just starting and has great potential. It is
possible for fuzzy logic systems to learn from past data. Often this is achieved by combining
fuzzy logic and neural nets into a "neuro-fuzzy" system.
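The idea of gradual membership can be sketched in a few lines of Python (an illustration added here, with made-up temperature thresholds, not part of the original text): a temperature belongs to "warm" and "hot" to degrees between 0 and 1, rather than switching abruptly at a boundary.

```python
def membership(x, low, high):
    """Degree (0..1) to which x has climbed the ramp from low to high."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def warm(temp_c):
    # "warm" ramps up between 25 and 35 degrees, then fades out over 40-45
    return min(membership(temp_c, 25, 35), 1 - membership(temp_c, 40, 45))

def hot(temp_c):
    # "hot" ramps up between 38 and 42 degrees
    return membership(temp_c, 38, 42)

# 39 degrees is fully warm and already partly hot -- no sudden jump at 40
print(round(warm(39), 2), round(hot(39), 2))   # 1.0 0.25
print(round(warm(41), 2), round(hot(41), 2))   # 0.8 0.75
```

A real controller would then combine such membership degrees through fuzzy rules; the ramps here are arbitrary choices for demonstration.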

Knowledge-Based Systems: Knowledge-based systems - also known as expert systems - are the
most mature, and still the most widely used, of the AI technologies. In a KBS, the knowledge is
made explicit, rather than being implicitly mixed in with the algorithm.
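A minimal sketch of that separation in Python (the rules and symptoms are invented for illustration): the knowledge lives in a plain data structure, while a generic inference loop applies it without knowing anything about the domain.

```python
# Knowledge: explicit if-then rules, kept separate from the inference algorithm.
RULES = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions hold until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Derives both 'flu_suspected' and, via chaining, 'see_doctor'
print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
```

Editing the system's behavior means editing `RULES`, not the loop; that is the sense in which the knowledge is explicit.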

What we all really need to know is what can we do to get our hands on some AI today. How can
we as individuals use our own technology? We hope to discuss this in depth (but as briefly as
possible) so that we can use AI as it is intended.

First, we should be prepared for a change. Our conservative ways stand in the way of progress.
AI is a new step that is very helpful to the society. Machines can do jobs that require detailed
instructions followed and mental alertness. AI with its learning capabilities can accomplish those
tasks but only if the world's conservatives are ready to change and allow this to be a possibility.
It makes us think about how early man finally accepted the wheel as a good invention, not
something taking away from his heritage or tradition.

Secondly, we must be prepared to learn about the capabilities of AI. The more use we get out of
the machines, the less work is required of us, and in turn there are fewer injuries and less stress
for human beings. Human beings are a species that learns by trying, and we must be prepared to
give AI a chance, seeing it as a blessing, not an inhibition.
Finally, we need to be prepared for the worst of AI. Something as revolutionary as AI is sure to
have many kinks to work out. There is always that fear that if AI is learning based, will machines
learn that being rich and successful is a good thing, then wage war against economic powers and
famous people? There are so many things that can go wrong with a new system so we must be as
prepared as we can be for this new technology.

However, even though the fear of the machines is there, their capabilities are infinite. Whatever
we teach AI, it will build on in the future if a positive outcome arises from it. AI is like a
child that needs to be taught to be kind, well mannered, and intelligent. If such systems are to make
important decisions, they should be wise. We as citizens need to make sure AI programmers are
keeping things on the level. We should be sure they are doing the job correctly, so that no future
accidents occur.

A LOT OF CONCEPTS
The uses of logic in AI and other parts of computer science that have been undertaken so far do
not involve such an extensive collection of concepts. However, it seems that reaching human
level AI will involve all of the following and probably more.

Logical AI
The idea is that an agent can represent knowledge of its world, its goals and the current
situation by sentences in logic and decide what to do by inferring that a certain action or
course of action is appropriate to achieve its goals.
Logic is also used in weaker ways in AI, databases, logic programming, hardware design and
other parts of computer science. Many AI systems represent facts by a limited subset of logic
and use non-logical programs as well as logical inference to make inferences. Databases
often use only ground formulas. Logic programming restricts its representation to Horn
clauses. Hardware design usually involves only propositional logic. These restrictions are
almost always justified by considerations of computational efficiency.

Epistemology and Heuristics


In philosophy, epistemology is the study of knowledge, its form and limitations. This will do
pretty well for AI also, provided we include in the study common sense knowledge of the
world and scientific knowledge. Both of these offer difficulties philosophers haven't studied,
e.g. they haven't studied in detail what people or machines can know about the shape of an
object in the field of view, remembered from previously being in the field of view, remembered
from a description, or remembered from having been felt with the hands.
Most AI work has concerned heuristics, i.e. the algorithms that solve problems, usually
taking for granted a particular epistemology of a particular domain, e.g. the representation of
chess positions.

Bounded Informatic Situation


Formal theories in the physical sciences deal with a bounded informatic situation. Scientists
decide informally in advance what phenomena to take into account. For example, much
celestial mechanics is done within the Newtonian gravitational theory and does not take into
account possible additional effects such as outgassing from a comet or electromagnetic
forces exerted by the solar wind. If more phenomena are to be considered, scientists must
make new theories--and of course they do.
Most AI formalisms also work only in a bounded informatic situation. What phenomena to
take into account is decided by a person before the formal theory is constructed. With such
restrictions, much of the reasoning can be monotonic, but such systems cannot reach human

level ability. For that, the machine will have to decide for itself what information is relevant,
and that reasoning will inevitably be partly nonmonotonic.

Common Sense Knowledge of the World


Humans have a lot of knowledge of the world which cannot be put in the form of precise
theories. Though the information is imprecise, we believe it can still be put in logical form.
The Cyc project aims at making a large base of common sense knowledge. Cyc is useful, but
further progress in logical AI is needed for Cyc to reach its full potential.

Common Sense Informatic Situation


In general a thinking human is in what we call the common sense informatic situation, as
distinct from the bounded informatic situation. The known facts are necessarily incomplete.
We live in a world of middle-sized objects which can only be partly observed. We only partly
know how the objects that can be observed are built from elementary particles in general, and
our information is even more incomplete about the structure of particular objects. These
limitations apply to any buildable machines, so the problem is not just one of human
limitations.
In many actual situations, there is no a priori limitation on what facts are relevant. It may not
even be clear in advance what phenomena should be taken into account. The consequences of
actions cannot be fully determined. The common sense informatic situation necessitates the
use of approximate concepts that cannot be fully defined and the use of approximate theories
involving them. It also requires nonmonotonic reasoning in reaching conclusions. Many AI
texts assume that the information situation is bounded--without even mentioning the
assumption explicitly.
The common sense informatic situation often includes some knowledge about the system's
mental state.
One key problem in formalizing the common sense informatic situation is to make the axiom
sets elaboration tolerant.

Qualitative Reasoning
This concerns reasoning about physical processes when the numerical relations required for
applying the formulas of physics are not known. Most of the work in the area assumes that
information about what processes to take into account is provided by the user. Systems that
must be given this information often won't do human level qualitative reasoning.

Expert Systems
These are designed by people, i.e. not by computer programs, to take a limited set of
phenomena into account. Many of them do their reasoning using logic, and others use
formalisms amounting to subsets of first order logic. Many require very little common sense
knowledge and reasoning ability. Restricting expressiveness of the representation of facts is
often done to increase computational efficiency.

Elaboration Tolerance
A set of facts described as a logical theory needs to be modifiable by adding sentences rather
than only by going back to natural language and starting over. For example, we can modify
the missionaries and cannibals problem by saying that there is an oar on each bank of the
river and that the boat can be propelled with one oar carrying one person but needs two oars
to carry two people. Some formalizations require complete rewriting to accommodate this
elaboration. Others share with natural language the ability to allow the elaboration by an
addition to what was previously said.

Approximate Concepts
Common sense thinking cannot avoid concepts without clear definitions. Consider the
welfare of an animal. Over a period of minutes, the welfare is fairly well defined, but asking
what will benefit a newly hatched chick over the next year is ill defined. The exact snow, ice
and rock that constitutes Mount Everest is ill defined. The key fact about approximate
concepts is that while they are not well defined, sentences involving them may be quite well
defined. For example, the proposition that Mount Everest was first climbed in 1953 is
definite, and its definiteness is not compromised by the ill-definedness of the exact
boundaries of the mountain.
There are two ways of regarding approximate concepts. The first is to suppose that there is a
precise concept, but it is incompletely known. Thus we may suppose that there is a truth of
the matter as to which rocks and ice constitute Mount Everest. If this approach is taken, we
simply need weak axioms telling what we do know but not defining the concept completely.
The second approach is to regard the concept as intrinsically approximate. There is no truth
of the matter. One practical difference is that we would not expect two geographers
independently researching Mount Everest to define the same boundary. They would have to
interact, because the boundaries of Mount Everest are yet to be defined.

Approximate Theories
Any theory involving approximate concepts is an approximate theory. We can have a theory
of the welfare of chickens. However, its notions don't make sense if pushed too far. For
example, animal rights people assign some rights to chickens but cannot define them
precisely. It is not presently apparent whether the expression of approximate theories in
mathematical logical languages will require any innovations in mathematical logic.

Ambiguity Tolerance
Assertions often turn out to be ambiguous with the ambiguity only being discovered many
years after the assertion was enunciated. For example, it is a priori ambiguous whether the
phrase ``conspiring to assault a Federal official'' covers the case when the criminals
mistakenly believe their intended victim is a Federal official. An ambiguity in a law does not
invalidate it in the cases where it can be considered unambiguous. Even where it is formally
ambiguous, it is subject to judicial interpretation. AI systems will also require means of
isolating ambiguities and also contradictions. The default rule is that the concept is not
ambiguous in the particular case. The ambiguous theories are a kind of approximate theory.

Causal Reasoning
A major concern of logical AI has been treating the consequences of actions and other
events. The epistemological problem concerns what can be known about the laws that
determine the results of events. A theory of causality is pretty sure to be approximate.

Situation Calculus
Situation calculus is the most studied formalism for doing causal reasoning. A situation is in
principle a snapshot of the world at an instant. One never knows a situation--one only knows
facts about a situation. Events occur in situations and give rise to new situations. There are
many variants of situation calculus, and none of them has come to dominate.

Frame Problem
This is the problem of how to express the facts about the effects of actions and other events
in such a way that it is not necessary to explicitly state for every event, the fluents it does not
affect.

Qualification Problem
This concerns how to express the preconditions for actions and other events. That it is
necessary to have a ticket to fly on a commercial airplane is rather unproblematical to
express. That it is necessary to be wearing clothes needs to be kept inexplicit unless it
somehow comes up.

Projection
Given information about a situation, and axioms about the effects of actions and other events,
the projection problem is to determine facts about future situations. It is assumed that no
facts are available about future situations other than what can be inferred from the ``known
laws of motion'' and what is known about the initial situation. Query: how does one tell a
reasoning system that the facts are such that it should rely on projection for information
about the future.

Planning
The largest single domain for logical AI has been planning, usually the restricted problem of
finding a finite sequence of actions that will achieve a goal. Planning is somewhat the inverse
problem to projection.

Understanding
A rather demanding notion is most useful. In particular, fish do not understand swimming,
because they can't use knowledge to improve their swimming, to wish for better fins, or to
teach other fish. Maybe fish do learn to improve their swimming, but this presumably
consists primarily of the adjustment of parameters and isn't usefully called understanding. I
would apply understanding only to some systems that can do hypothetical reasoning--if p
were true, then q would be true. Thus Fortran compilers don't understand Fortran.

Discrete processes
Causal reasoning is simplest when applied to processes in which discrete events occur and
have definite results. In situation calculus, the formula s' = result(e,s) gives the new
situation s' that results when the event e occurs in situation s. Many continuous processes that
occur in human or robot activity can have approximate theories that are discrete.
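The result function can be made concrete in Python (a toy domain invented for illustration): a situation is an immutable snapshot of fluents, and each event maps one situation to the next.

```python
# A situation is an immutable snapshot of the world: here, a frozenset of fluents.
s0 = frozenset({"door_closed", "light_off"})

def result(event, s):
    """Return the new situation s' produced by event e occurring in situation s."""
    if event == "open_door":
        return (s - {"door_closed"}) | {"door_open"}
    if event == "switch_light":
        if "light_off" in s:
            return (s - {"light_off"}) | {"light_on"}
        return (s - {"light_on"}) | {"light_off"}
    return s  # unknown events change nothing

s1 = result("open_door", s0)
s2 = result("switch_light", s1)
print(sorted(s2))   # ['door_open', 'light_on']
```

Notice that each clause must say which fluents carry over unchanged; the frame problem discussed earlier is precisely the question of avoiding that bookkeeping for every event-fluent pair.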

Continuous Processes
Humans approximate continuous processes with representations that are as discrete as
possible. For example, ``Junior read a book while on the airplane from Glasgow to London.''
Continuous processes can be treated in the situation calculus, but the theory is so far less
successful than in discrete cases. We also sometimes approximate discrete processes by
continuous ones.

Non-deterministic events
Situation calculus and other causal formalisms are harder to use when the effects of an action
are indefinite. Often result(e,s) is not usefully axiomatizable and something like occurs(e,s)
must be used.

Concurrent Events
Formalisms treating actions and other events must allow for any level of dependence
between events.

Conjunctivity
It often happens that two phenomena are independent. In that case, we may form a
description of their combination by taking the conjunction of the descriptions of the separate
phenomena. The description language satisfies conjunctivity if the conclusions we can draw
about one of the phenomena from the combined description are the same as the conjunctions
we could draw from the single description. For example, we may have separate descriptions
of the assassination of Abraham Lincoln and of Mendel's contemporaneous experiments with
peas. What we can infer about Mendel's experiments from the conjunction should ordinarily
be the same as what we can infer from just the description of Mendel's experiments. Many
formalisms for concurrent events don't have this property, but conjunctivity itself is
applicable to more than concurrent events.
To use logician's language, the conjunction of the two theories should be a conservative
extension of each of the theories. Actually, we may settle for less. We only require that the
inferrable sentences about Mendel (or about Lincoln) in the conjunction are the same. The
combined theory may admit inferring other sentences in the language of the separate theory
that weren't inferrable in the separate theories.

Learning
Making computers learn presents two problems--epistemological and heuristic. The
epistemological problem is to define the space of concepts that the program can learn. The
heuristic problem is the actual learning algorithm. The heuristic problem of algorithms for
learning has been much studied and the epistemological mostly ignored. The designer of the
learning system makes the program operate with a fixed and limited set of concepts.
Learning programs will never reach human level of generality as long as this approach is
followed. Maybe someone says, ``A computer can't learn what it can't be told.'' We might
correct this, as suggested by Murray Shanahan, to say that it can only learn what can be
expressed in the language we equip it with. To learn many important concepts, it must have
more than a set of weights.

Representation of Physical Objects


We aren't close to having an epistemologically adequate language for this. What do I know
about my pocket knife that permits me to recognize it in my pocket or by sight or to open its
blades by feel or by feel and sight? What can I tell others about that knife that will let them
recognize it by feel, and what information must a robot have in order to pick my pocket of it?

Representation of Space and Shape
We again have the problem of an epistemologically adequate representation. Trying to match
what a human can remember and reason about when out of sight of the scene is more what
we need than some pixel by pixel representation.

Discrimination, Recognition and Description


Discrimination is deciding which category a stimulus belongs to among a fixed set of
categories, e.g. deciding which letter of the alphabet is depicted in an image.
Recognition involves deciding whether a stimulus belongs to the same set, i.e. represents the
same object, e.g. a person, as a previously seen stimulus.
Description involves describing an object in detail appropriate to performing some action
with it, e.g. picking it up by the handle or some other designated part. Description is the most
ambitious of these operations and has been the forte of logic-based approaches.

Declarative Expression of Heuristics


Expressing heuristics declaratively means that a sentence about a heuristic can be the result
of reasoning and not merely something put in from the outside by a person.

Logic programming
Logic programming isolates a subdomain of first order logic that has nice computational
properties. When the facts are described as a logic program, problems can often be solved by
a standard program, e.g. a Prolog interpreter, using these facts as a program. Unfortunately,
in general the facts about a domain and the problems we would like computers to solve have
that form only in special cases.

Formalized Contexts
Any particular bit of thinking occurs in some context. Humans often specialize the context to
particular situations or theories, and this makes the reasoning more definite, sometimes
completely definite. Going the other way, we sometimes have to generalize the context of
our thoughts to take some phenomena into account.

Rich and Poor Entities


A rich entity is one about which a person or machine can never learn all the facts. The state
of the reader's body is a rich entity. The actual history of my going home this evening is a
rich entity, e.g. it includes the exact position of my body on foot and in the car at each
moment. While a system can never fully describe a rich entity, it can learn facts about it and
represent them by logical sentences.
Poor entities occur in plans and formal theories and in accounts of situations and events and
can be fully prescribed. For example, my plan for going home this evening is a poor entity,
since it does not contain more than a small, fixed amount of detail. Rich entities are often
approximated by poor entities. Indeed some rich entities may be regarded as inverse limits of
trees of poor entities. (The mathematical notion of inverse limit may or may not turn out to
be useful, although I wouldn't advise anyone to study the subject quite yet just for its possible
AI applications.)

Nonmonotonic Reasoning
Both humans and machines must draw conclusions that are true in the ``best'' models of the
facts being taken into account. Several concepts of best are used in different systems. Many
are based on minimizing something. When new facts are added, some of the previous
conclusions may no longer hold. This is why the reasoning that reached these conclusions is
called nonmonotonic.
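A classic toy illustration (my example, not the author's) is the "birds fly" default in Python: adding one new fact withdraws a conclusion that was previously drawn, which is exactly what monotonic logic cannot do.

```python
def flies(animal, facts):
    """Default rule: birds fly unless known to be abnormal (e.g. a penguin)."""
    return ("bird", animal) in facts and ("penguin", animal) not in facts

facts = {("bird", "tweety")}
print(flies("tweety", facts))        # True under the default

facts.add(("penguin", "tweety"))     # new information arrives...
print(flies("tweety", facts))        # ...and the old conclusion is withdrawn
```

The "not known to be a penguin" test is a crude stand-in for minimizing abnormality, one of the minimization-based notions of "best model" mentioned above.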

Probabilistic Reasoning
Probabilistic reasoning is a kind of nonmonotonic reasoning. If the probability of one
sentence is changed, say given the value 1, other sentences that previously had high
probability may now have low or even 0 probability. Setting up the probabilistic models, i.e
defining the sample space of ``events'' to which probabilities are to be given often involves
more general nonmonotonic reasoning, but this is conventionally done by a person
informally rather than by a computer.
In the open common sense informatic situation, there isn't any apparent overall sample space.
Probabilistic theories may be formed by limiting the space of events considered and then
establishing a distribution. Limiting the events considered should be done by whatever
nonmonotonic reasoning techniques are developed for limiting the phenomena
taken into account.

Intentional Stance
Dennett proposes that sometimes we consider the behavior of a person, animal or machine by
ascribing to it belief, desires and intentions.

Creativity
Humans are sometimes creative--perhaps rarely in the life of an individual and among
people. What is creativity? We consider creativity as an aspect of the solution to a problem
rather than as an attribute of a person (or computer program).

A creative solution to a problem contains a concept not present in the functions and
predicates in terms of which the problem is posed. Consider the problem of determining whether a
checkerboard with two diagonally opposite squares removed can be covered with
dominoes, each of which covers two rectilinearly adjacent squares. The standard proof that
this can't be done is creative relative to the statement of the problem. It notes that a domino
covers two squares of opposite color, but there are 32 squares of one color and 30 of the
other color to be covered.
Colors are not mentioned in the statement of the problem, and their introduction is a creative
step relative to this statement. For a mathematician of moderate experience (and for many
other people), this bit of creativity is not difficult. We must, therefore, separate the concept of
creativity from the concept of difficulty.
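The counting argument can be checked mechanically in Python (an illustration added here, not part of the original text): color each remaining square by coordinate parity and count.

```python
# Mutilated 8x8 checkerboard: remove two diagonally opposite corners.
removed = {(0, 0), (7, 7)}          # both corners have the same color
squares = [(r, c) for r in range(8) for c in range(8) if (r, c) not in removed]

# Color a square by the parity of its coordinates, as in the creative proof.
light = sum(1 for r, c in squares if (r + c) % 2 == 0)
dark = len(squares) - light

# A domino always covers one light and one dark square, so a full cover
# would need light == dark; the imbalance proves no cover exists.
print(light, dark)   # 30 32
```

The creative step is precisely the introduction of the color predicate `(r + c) % 2`, which appears nowhere in the board-and-dominoes statement of the problem.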

Before we can have creativity we must have some elaboration tolerance. Namely, in the
simple language of ``A Tough Nut'', the colors of the squares cannot even be expressed. A
program confined to this language could not even be told the solution. Zermelo-Fraenkel set
theory is an adequate language. Set theory, in a form allowing definitions, may have
enough elaboration tolerance in general. Regard this as a conjecture that requires more
study.

Introduction to Logic Programming

Introduction
Logic Programming (LP) is a programming methodology based on mathematical
logic. A logic program is a set of definitions which describe a specific problem domain by
means of logic formulas. LP is a paradigm of declarative programming, as opposed to
procedural programming.

Mathematical Logic
Mathematical Logic is classified into two main categories:
• Propositional Logic, and
• Predicate Logic.
Mathematical logic deals with the derivation of sound conclusions from statements. Formal
proofs can be used to draw conclusions about the world, having as preconditions (premises)
assumptions, axioms or other partially drawn conclusions.

Propositional Logic
Propositional logic is concerned with the truth or falsity of propositions. A Proposition is an
expression, which is either true (T) or false (F). In logic, there are logical connectives or
operators, which are used to construct new, more complex propositions. The main logical
Connectives are:

Connective Expresses Notation Meaning


AND Conjunction ∧ and
OR Disjunction ∨ or
NOT Negation ¬ not
EQUIVALENCE Bi-conditional ⇔ if and only if
IMPLICATION Conditional ⇒ if - then

All complex expressions created by using operators are either T or F depending on the operators
used. For example, the conjunction: p ∧ q is T only if both propositions p and q are also T.
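The connective table above maps directly onto Python's boolean operators; the short check below (added here as an illustration) enumerates the truth table for p ∧ q and p ⇒ q.

```python
import itertools

# Enumerate all truth assignments for p and q and evaluate two connectives.
for p, q in itertools.product([True, False], repeat=2):
    conj = p and q          # conjunction: T only when both p and q are T
    implies = (not p) or q  # conditional: F only when p is T and q is F
    print(p, q, conj, implies)
```

Running it prints the four rows of the truth table, confirming that the conjunction is true in exactly one of the four cases.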

Predicate Logic
All "things" in the world are either:
• objects, or
• concepts.

and have:
• properties, attributes, or/and
• relationships among them.

For example:
"ahmed is a teacher" can be written in predicate logic syntax as:
teacher(ahmed) ⇐

-17-
"ahmed is a friend of mohamed" can be written in predicate logic syntax as:
friend(ahmed, mohamed) ⇐

Thus we annotate the name of the property or relationship together with its parameters. An expression
may contain variables. For example:
" ahmed is a friend of somebody" can be written in predicate logic syntax as:
friend (ahmed,X) ⇐

We cannot know whether such expressions are true or false, unless variables are assigned with
values. Such variables need quantification, i.e. a notation that is used to indicate whether a
variable refers to all objects or at least one object. The two main quantifiers are:

Quantifier Notation Meaning


UNIVERSAL ∀ for all
EXISTENTIAL ∃ there exists

For example:
" ahmed is a friend of everybody" can be written in predicate logic syntax as:
∀ X: friend (ahmed,X)⇐, whereas

"ahmed is a friend of somebody" can be written in predicate logic syntax as:


∃ X: friend (ahmed,X)⇐

Expressions of the form mentioned above are called predicates. In predicates, the objects
enclosed in brackets are called arguments or parameters and the relation or attributes of objects
is called the predicate name. The number of arguments in a predicate is the arity of the
predicate and is denoted by /n, e.g. the predicates teacher/1 and friend/2.

Logic Programming
By using predicates, someone could describe a specific problem. The logic program is a set of
predicate definitions with all the characteristics mentioned above, i.e. constants, variables,
quantifiers, operators etc.

Let us consider a family tree

Ahmed and Nadia are a couple who have a son, Mohamed.

Someone could write in predicate logic the following program:


male(ahmed) ⇐
male(mohamed) ⇐
female(nadia) ⇐
parent(ahmed, mohamed) ⇐
parent(nadia, mohamed) ⇐
which can be thought as facts, i.e. predicates that are true without any conditions. Someone could
keep on expressing in predicate logic more relations like:
father(ahmed, mohamed) ⇐
mother(nadia, mohamed) ⇐
-18-
etc. However, one could use the already existing predicates in order to define new predicates.
For example, since we know that any parent who is male is also a father, and this stands for all
people, we could write:
∀ X,Y father(X,Y) ⇐ parent(X,Y) ∧ male(X)
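The effect of this rule can be checked mechanically: collect every parent pair whose first argument is also male. The following small Python sketch transcribes the fact sets from the example above (it is an illustration of the rule's meaning, not of how Prolog evaluates it):

```python
# Fact sets transcribed from the example program.
male = {"ahmed", "mohamed"}
female = {"nadia"}
parent = {("ahmed", "mohamed"), ("nadia", "mohamed")}

# father(X,Y) <= parent(X,Y) AND male(X): keep the parent pairs whose
# first argument also satisfies male/1.
father = {(x, y) for (x, y) in parent if x in male}

# The analogous rule for mother/2 uses female/1.
mother = {(x, y) for (x, y) in parent if x in female}
```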

Prolog
Prolog is a programming language which implements LP to an extent. Prolog (PROgramming
in LOGic) was designed in the early 70s. Today, many commercial Prolog systems are available,
together with compilers and user-friendly programming environments.

The Prolog Syntax


In Prolog the above logic program becomes:
male(ahmed).
male(mohamed).
female(nadia).
parent(ahmed, mohamed).
parent(nadia, mohamed).
father(X,Y) :-
parent(X,Y), male(X).

Some syntactic differences are the following:


• All constants start with a lower case letter.
• All predicate names start with a lower case letter.
• All variables start with an upper case letter.
• A clause always ends with a period (.).
• The operator ∧ is replaced with a comma (,).
• The operator ⇐ is replaced with a colon and a dash (:-).
• There are no quantifiers. All variables appearing in the defined predicate (the left part of
the clause) are considered universally quantified.

Therefore, generally speaking, a Prolog program is a set of Horn clauses of the form:

H :- B1 , B2 , ... , Bn

H, B1 , B2 , ... , Bn are predicates. H is the head of the clause. B1 ∧ B2 ∧ ... ∧ Bn is the body of
the clause. Any clause can be read as (Declarative Semantics):

H is true if B1 is true and B2 is true and ... and Bn is true.

There are three forms of clauses:


• Fact or Unit Clause : H.
• Rule : H :- B1 , B2 , ... , Bn
• Goal Clause: :- B1 , B2 , ... , Bn

-19-
Executing a Prolog program
Once a Prolog program is developed, the user can simply ask questions (queries) which can
verify the truth or falsity of certain predicates. The answer given by the system is either
yes or no. For example:

?- male(ahmed).
yes
?- father(ahmed, mohamed).
yes
?- female(salma).
no

Note that the answer no does not express real logical negation; it means "not provable from
the program" (i.e. "not as far as I know"), and this behaviour is referred to with
the general term Negation as Failure. Variables can also be used in queries. In such a case, if the
answer is positive then the values for which the predicate is true are also displayed. For example:

?- father(ahmed, X).
X = mohamed

If there is more than one answer, then all can be gradually produced by typing the semicolon (;).
For example:

?- parent(X,Y).
X = ahmed
Y = mohamed ;
X = nadia
Y = mohamed ;
no

The final no means that there are no more answers.
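This one-answer-per-semicolon behaviour is much like a generator that yields one solution at a time. A hypothetical Python analogue of the parent/2 query:

```python
# The stored parent/2 facts, in program order.
parent_facts = [("ahmed", "mohamed"), ("nadia", "mohamed")]

def parent():
    """Yield one (X, Y) answer at a time, as Prolog does on each ';'."""
    for fact in parent_facts:
        yield fact

answers = parent()
first = next(answers)      # the first solution displayed
rest = list(answers)       # typing ';' until the final 'no'
```

When the generator is exhausted, there is nothing left to yield, which corresponds to the final no.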

Declarative vs. Procedural Programming


LP, together with functional programming, is a paradigm of declarative programming. In declarative
programming, the programmer specifies what the problem is, not how the problem should be
solved. The latter is the technique used in procedural programming, e.g. in Pascal, C, Fortran etc.

Consider for example the problem of sorting a list of integers. In a declarative language, one
could specify what sorting means, i.e. "given a list, the corresponding sorted list is any
permutation of the given list as long as the elements in this permutation are in ascending (or
descending) order", and leave it to the machine to solve the problem. In a pure logic language,
the clauses can be written in any order. Also, the order of the goals in the body of a clause is
not important, since all are connected with the logical AND and all must be true for the clause
to be true.

In a procedural language, the programmer should write a strict sequence of commands that solves
the problem: given a list, in order to produce the corresponding sorted list, the computer
should compare one by one the neighboring elements of the given list and swap them if the first
is greater than the second. Such a procedure should be carried out continuously until no more

-20-
swaps are possible. In other words, the programmer describes the solution in full
detail to the machine. In a procedural language the order of the commands is very important
since they express the sequence of actions to be taken.
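The neighbour-swapping procedure just described is bubble sort; a direct Python transcription of that description (a sketch illustrating the procedural style, not an efficient algorithm):

```python
def bubble_sort(items):
    """Sort by repeatedly swapping out-of-order neighbours until a full
    pass produces no swaps: the exact procedure described in the text."""
    items = list(items)                      # work on a copy
    swapped = True
    while swapped:                           # repeat until no more swaps occur
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:      # first greater than the second?
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items
```

Note how every step (comparison order, swap, termination condition) had to be spelled out, in contrast with the declarative specification of sorting above.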

Facts & Rules


As seen, Prolog programs are sets of facts and rules.
• Facts are predicate definitions that are unconditionally true.
• Rules are predicate definitions that have preconditions.

Facts for Simple Relations


In Prolog program development, we use facts to represent simple relations between objects or
denote simple attributes of objects. For example, assume that objects a, b and c are related with
relation simplerel, as illustrated in the following figure:

A simple fact is then constructed:


simplerel(a,b,c).

If some other objects are associated with the same relation e.g. a, d, and e, then a second fact is
added.
simplerel(a,d,e).

and so on.

In general, there are as many facts for a relation in a Prolog program as there are instances of
that relation. Therefore, the different clauses of the same definition denote a logical OR.

As with any formal language, the order of the arguments of a predicate is up to the person who
translates the facts into the language, as long as it is used consistently. For example, "Ahmed is
Kamal's teacher" was translated as teacher(ahmed,kamal). If the programmer also has to
translate the sentence "Khalil's teacher is Fathi", the order of the terms must be consistent with

-21-
the roles, and not with the superficial order of the words in the sentence. Hence, it would be
expressed as
teacher(fathi,khalil).

In order to remember arbitrary decisions such as the order of terms, programmers add
COMMENTS to programs (all programming language interpreters and compilers provide some
means to mark parts of a file as not belonging to the actual program). For instance, in Prolog, a
percentage sign (%) means that the remaining characters on the line should be ignored. In order to
remember what each of the terms in the teacher/2 relation stands for, the program could
contain the following comment:
% teacher(Teacher,Pupil): Teacher is the teacher of Pupil.
teacher(ahmed,kamal).
teacher(fathi,khalil).

In typed languages, one could instead avoid the problem in a declaration part that defines what
the data is, for example

char *teacher, *student;

and then use the declared names when writing the predicate as

teacher(teacher, student)

Prolog, however, has no such declarations, so comments remain the standard way to record these decisions.

It is important to notice that any argument may serve as either input or output, since a
query may contain any combination of variables and constants.

Rules for Complex Relations


In Prolog program development, we use facts as the basic building blocks in order to build more
complicated relations. For example consider the figure, where complexrel is using rel1, rel2 and
rel3 to be defined. These three predicates may have been defined as facts previously or going to
be defined later on.

The predicate complexrel relates any 4 objects. This means that X,Y,Z and W should be
variables. The predicate depends on rel1, rel2 and rel3 and therefore is defined as follows:

-22-
complexrel(X,Y,Z,W):-
rel1(....),
rel2(....),
rel3(....).

The predicates rel1, rel2 and rel3 have their own arguments as shown in the figure above.
Someone could expect that the definition is as follows:
complexrel(X,Y,Z,W):-
rel1(B, C, D),
rel2(A,D,E,G),
rel3(F,G,H).
In the above definition, some objects appear to have different names although they are identical,
for example X, A and B. In Prolog, we always use the same variable name to refer to one object
within a clause. This is because, when a variable becomes bound to a value, the binding is kept
for all appearances of that variable in the clause. This differs from the notion of variables in
imperative languages.
The above definition finally becomes:

complexrel(X,Y,Z,W):-
rel1(X, Y, D),
rel2(X, D, Z, G),
rel3(Z, G, W).

As a conclusion, we could say that the names of the variables are important within the same
clause. However, one could use the same name in two different clauses of the same definition.
Such variables are totally independent. Therefore , the scope of a variable is the clause in which
it appears.

Top-Down vs. Bottom-Up Program Construction


There are basically two methodologies for developing Prolog programs.

• Top-Down: Start from the main predicate and decompose it by defining simpler
relations until facts are reached.
• Bottom-Up: Start from the facts and build more complex relations by
defining the rules for them.

-23-
Either way may be used, according to the personal preference of the programmer and the
problem domain. However, in practice a mixture of both is used.

Terms and Predicates


Every object which is not a variable is considered to be a Term in Prolog. Terms may be either:
• simple terms (constants, integers etc), or
• complex terms.

Complex terms consist of a term name (functor) and a number of parameters enumerated within
brackets. For example: a(1,2) is a complex term. The number of parameters in a term is called
the arity of the term. For example the above term has arity 2 and it is referred to as a/2.

Complex terms look like predicates. The difference is that the predicates are either true or false
after their evaluation, whereas terms are simple data structures and they cannot be evaluated.
Terms are characterised as such if they occupy the place of an argument in a predicate. For
example, consider the fact that describes a book in a library:
book( author(bratko), title('Prolog Programming'), date(1992) ).

The predicate of the above fact is book/3, whereas author/1, title/1 and date/1 are terms.
Prolog cannot evaluate the arguments of a predicate since these are terms. For example, the goal:
..., p(5+3), ...

cannot result in an arithmetic evaluation of 5+3, since +/2 is an infix term. This may look
strange to someone who is used to functional programming, but Prolog does not have any
functions. Predicates cannot return any value in their place since they are either true or false. If a
predicate is supposed to return a value, this can be achieved only through an argument that will
take this value at the end of evaluation of the predicate.

Unification and Matching


Unification in logic is the process of trying to make two terms identical. In Prolog, this process
is called Matching. In matching the two terms are checked against each other. Matching may
fail or succeed. If matching succeeds, the variables contained in those terms become bound
to their matching counterparts.
Note: Unification and Matching are not exactly the same process.

Unification
A substitution θ is a finite set of the form {v1=t1,...,vn=tn}, where vi is a variable and ti is a term
different from the variable vi. Each element vi=ti is called a binding for vi.

If T is a term and θ is a substitution, Tθ is called an instance of T; it is the term obtained from
T by replacing every variable vi in T with its binding ti.

-24-
Two terms are unified by making them identical, i.e. replacing all free variables in the most
general way. The substitution θ resulting from the unification is called the most general unifier
(mgu).

The unification of two terms T1 and T2 is performed as follows:

• If one of T1 or T2 is a variable the unification succeeds (a variable unifies with anything)


• If one of T1 or T2 is a constant the unification succeeds if and only if the other term is
the same constant (a constant unifies only with the same constant)
• If T1 and T2 are compound terms, then the unification succeeds if and only if:
T1 and T2 have the same functor,
T1 and T2 have the same arity,
each argument of T1 unifies with the corresponding argument of T2.
In all other cases the unification of T1 and T2 fails.
  
If terms T1 and T2 are unified and θ is the mgu, then T1θ is identical to T2θ.
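These rules translate directly into code. The following Python sketch uses a representation of our own choosing: variables are capitalised strings, constants are other values, and compound terms are (functor, argument, ...) tuples; unify returns the mgu as a dictionary, or None on failure:

```python
def is_var(t):
    # Variables are capitalised strings, e.g. "X"; everything else is a constant.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # Follow the bindings in substitution s while t is a bound variable.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s=None):
    """Return the mgu of t1 and t2 as a dict, or None if unification fails."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s                      # identical terms unify trivially
    if is_var(t1):
        s[t1] = t2                    # a variable unifies with anything
        return s
    if is_var(t2):
        s[t2] = t1
        return s
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        # Same functor and same arity: unify argument by argument.
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None                       # e.g. two different constants
```

For instance, unifying p(1,2) with p(X,Y) under this encoding yields {X=1, Y=2}.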

Matching
The process of matching obeys the following rules:
• A variable can match with any simple or complex term,
• A constant can match only with the same constant
• Two complex terms T1 and T2 match iff:
they consist of the same functor,
they have the same arity,
every argument in T1 matches with the corresponding argument in T2.

Matching is a vital process in a Prolog program evaluation. However, matching can be explicitly
achieved by using the operator =/2. For example:

?- p(1,2) = p(X,Y). meaning that:


X=1 T1 is p(1,2) and T2 is p(X,Y).
Y=2
yes θ = {X=1, Y=2}
T1θ = p(1,2) and T2θ = p(1,2)
Therefore, T1θ ≡ T2θ

?- p(2, X, t(a, Z)) = p(X, Y, t(a, W)). meaning that:


X=2 T1 is p(2,X,t(a,Z)) and T2 is p(X,Y,t(a,W)).
Y=2
Z=W θ = {X=2, Y=2, Z=W}
yes T1θ = p(2,2,t(a,W)) and T2θ = p(2,2,t(a,W))
Therefore, T1θ ≡ T2θ

?- p(3, c) = p(X, X). meaning that:


no T1 is p(3,c) and T2 is p(X,X).
There is no most general unifier θ.

-25-
The last matching fails since X cannot match with the integer 3 and the constant c at the same
time.

Resolution and Execution


Resolution of Logic Programs (SLD-Resolution)
Given a goal clause ← B1 ∧ B2 ∧ ... ∧ Bn and a logic program, resolution is the process of
automatically resolving a logic program, i.e. obtaining a truth value and a substitution for the
goal clause. The following describes one form of resolution of logic programs which is called
SLD-Resolution (Linear Resolution with Selection function for Definite Clauses).

At any stage of the resolution, the conjunction of subgoals ← G1 ∧ G2 ∧ ... ∧ Gn which remain in
order to solve the initial goal is called resolvent.

Given a resolvent ← A1 ∧ A2 ∧ .. ∧ Ai ∧ ... ∧ An a resolution step is defined as follows:


The Computation Rule selects any Ai (1 ≤ i ≤ n) out of the n candidate subgoals to resolve.

The Search Rule:


1) finds a set of k clauses of the form:
H1 ← B11 ∧ B12 ∧ ... ∧ B1n
...
Hk ← Bk1 ∧ Bk2 ∧ ... ∧ Bkm

such that Hiθi = Aiθi where θi is an mgu for each Hi and Ai


2) replaces Ai in the resolvent with the body of each clause and generates k new resolvents:
← (A1 ∧ A2 ∧ ... ∧ B11 ∧ B12 ∧ ... ∧ B1n ... ∧ An) θ1
...
← (A1 ∧ A2 ∧ ... ∧ Bk1 ∧ Bk2 ∧ ... ∧ Bkm ... ∧ An) θk

3) selects any resolvent ← (A1 ∧ A2 ∧ ... ∧ Bj1 ∧ Bj2 ∧ ... ∧ Bjm ... ∧ An) θj
where 1 ≤ j ≤ k out of the k candidate resolvents.
The resolution is a sequence of the above described resolution steps. The initial goal Q is called
query . The first resolvent is ←Q. A query is solved when the resolvent becomes empty (←ε).

Example of Resolution
Consider the following Prolog program:
likes(demos,X):- language(X), good_programmer(demos,X).
likes(petros,X):- likes(demos,X).
language(c++).
language(prolog).
good_programmer(demos,prolog).

and the query:


?- likes(petros,X).
-26-
The resolution of the program follows the steps below, assuming that the Computation rule always
selects A1 from the current resolvent, and the Search rule always selects the first clause H1
while the rest are pushed onto a stack.

Step #1
The initial resolvent stack is:
←likes(petros,X).
The computation rule selects A1=likes(petros,X).
The search rule finds the clause:
likes(petros,X):- likes(demos,X)
replaces A1 with the body of the clause to generate a new resolvent:
←likes(demos,X)

Step #2
The resolvent stack is:
←likes(demos,X)
The computation rule selects A1=likes(demos,X).
The search rule finds the clause:
likes(demos,X):- language(X), good_programmer(demos,X).
replaces A1 with the body of the clause to generate a new resolvent:
←language(X), good_programmer(demos,X)

Step #3
The resolvent stack is:
←language(X), good_programmer(demos,X)
The computation rule selects A1=language(X).
The search rule finds the clauses:
language(c++)with θ={X=c++}
language(prolog) with θ={X=prolog}
replaces A1 with the body of the clause and applies θ to generate a new resolvent:
←good_programmer(demos,X) θ resulting in ←good_programmer(demos,c++)
←good_programmer(demos,X) θ resulting in ←good_programmer(demos,prolog)

Step #4
The resolvent stack is:
←good_programmer(demos,c++)
←good_programmer(demos,prolog)
The computation rule selects A1=good_programmer(demos,c++).
The search rule finds no clauses that unify with A1 (unification fails).

Step #5
The resolvent stack is:
←good_programmer(demos,prolog)
The computation rule selects A1=good_programmer(demos,prolog).
The search rule finds the clause:
good_programmer(demos,prolog).
replaces A1 with the (empty) body of the clause to generate the new resolvent:
←ε

-27-
Step #6
The resolvent stack is:
←ε
So the query is solved with X=prolog
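The whole stepwise process can be mimicked by a toy interpreter. The sketch below uses an encoding of our own (terms as tuples, clauses as (head, body-list) pairs, capitalised strings as variables) and resolves the likes/language/good_programmer program with the leftmost-goal, uppermost-clause strategy; it is an illustration of the mechanism, not how a production Prolog engine is implemented:

```python
from itertools import count

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:          # dereference a bound variable
        t = s[t]
    if isinstance(t, tuple):             # walk inside compound terms
        return (t[0],) + tuple(walk(a, s) for a in t[1:])
    return t

def unify(t1, t2, s):
    """Return an extended substitution, or None if unification fails."""
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# The example program: each clause is (head, [body goals]).
program = [
    (("likes", "demos", "X"), [("language", "X"), ("good_programmer", "demos", "X")]),
    (("likes", "petros", "X"), [("likes", "demos", "X")]),
    (("language", "c++"), []),
    (("language", "prolog"), []),
    (("good_programmer", "demos", "prolog"), []),
]

fresh = count()

def rename(t, mapping):
    """Give a clause fresh variable names before each use."""
    if is_var(t):
        return mapping.setdefault(t, f"{t}#{next(fresh)}")
    if isinstance(t, tuple):
        return (t[0],) + tuple(rename(a, mapping) for a in t[1:])
    return t

def solve(goals, s):
    """Leftmost goal, uppermost clause; backtracking via the generator."""
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for head, body in program:
        m = {}
        s2 = unify(first, rename(head, m), s)
        if s2 is not None:
            yield from solve([rename(g, m) for g in body] + rest, s2)

def ask(goal, var):
    return [walk(var, s) for s in solve([goal], {})]
```

The query `ask(("likes", "petros", "X"), "X")` reproduces the resolution above, including the failed branch for c++ and the backtracking that finally binds X to prolog.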

Execution of Prolog Programs


In Prolog, execution of a program follows the LUSH Resolution. (Leftmost Uppermost Selection
Heuristic). In LUSH resolution the corresponding rules to SLD are:

• The Computation rule always selects A1 (Leftmost) from the current resolvent
• The Search rule always selects the first clause H1 (Uppermost) while the rest are pushed
into a stack.

The process of LUSH resolution can be described in terms of an AND/OR-tree as follows:
Execution proceeds by building the execution AND/OR-tree, until the call to
good_programmer(demos,c++) fails:

Then, the execution backtracks to the most recently made choice. All bindings made after the
choice point are undone:

-28-
Since all nodes of the AND/OR tree are solved, the query is solved with X=prolog.

Finally, let us rephrase the rule for variable binding; once a variable is bound to some value, it
can never be unbound, except when backtracking occurs. For example, in the previous program
X=c++ the first time round, but backtracking undoes this binding for variable X, which the second
time round becomes X=prolog.

Constraint Satisfaction Problems


Constraint satisfaction problems are typical in Artificial Intelligence. These problems are usually
NP-complete, meaning that the search space grows exponentially even though the number of
parameters grows only linearly. Therefore, such problems are difficult to solve, since the search
space grows so much that it makes the solution practically impossible. There exist numerous ways to tackle
such problems. Standard Prolog itself is not among those, since it does not provide efficient
mechanisms to solve constraints.

However, this lesson aims to revise list processing, variable binding and the naive generate-and-
test problem solving strategy through problems that require satisfaction of certain constraints.

Solving Constraints with Prolog


A problem consists of an initial description which includes attributes that take their values from a
domain of values. Some constraints between the values of attributes need to be satisfied in order
to find a solution to the problem. A general framework for tackling such problems in Prolog is:

• create a list of the attributes which may form the solution,


• some of the values are unknown, and therefore the attributes are represented as variables,
• define the domain set, i.e. the list of values which attributes can take,
• define the constraints of the problem,
• write Prolog code that uses backtracking in order to bind attributes (variables) to values
and check their validity against the constraints.

If a solution exists, the generate and test mechanism should (laboriously) produce bindings
which are consistent, and therefore output a list of values.

-29-
The Zebra Problem

The "who owns the zebra" problem is stated as follows:


"Five men with different nationalities live in the first five houses of a street. They practice five
distinct professions, and each of them has a favorite animal and a favorite drink, all of them
different. The five houses are painted in different colours.
• The Englishman lives in the red house.
• The Spaniard owns a dog.
• The Japanese is a painter.
• The Italian drinks tea.
• The Norwegian lives in the first house on the left.
• The owner of the fox drinks water.
• The owner of the green house drinks coffee.
• The green house is on the right of the white one.
• The sculptor breeds snails.
• The diplomat lives in the yellow house.
• Milk is drunk in the middle house.
• The Norwegian's house is next to the blue one.
• The violinist drinks fruit juice.
• The fox is in a house next to that of the doctor.
• The horse is in a house next to that of the violinist.
Who owns the Zebra?"

The problem is typical of the constraint satisfaction type. The following is a description of the
step by step development of a Prolog program that attempts to solve the problem.
A term should be used to represent a house with five parameters, e.g.:

house( <Colour>, <Nationality>, <Animal>, <Drink>, <Profession> )

and therefore the five houses could be in a list, i.e.:

[ house(_,_,_,_,_),
house(_,_,_,_,_),
house(_,_,_,_,_),
house(_,_,_,_,_),
house(_,_,_,_,_) ]

The actual problem is to find instantiations for all the above variables. However, initially some
instances are known, for example "The Norwegian lives in the first house on the left" and "Milk is drunk in
the middle house". Therefore a partial solution is known, asserted in the program as a template/1 fact:

template( [ house(_, norwegian, _,_,_),


house(_,_,_,_,_),
house(_,_,_, milk, _),
house(_,_,_,_,_),
house(_,_,_,_,_) ]).

One needs to express the relation "next to" since it is stated in the description of the constraints
that "a house such and such is next to another house such and such". The predicate defining the relation is
the following:
-30-
% a house H1 is next to a house H2 if they are subsequent in the list of houses.
next_to(H1, H2, [H1, H2 | _]).
% OR a house H1 is next to a house H2 if they are reversely subsequent
% in the list of houses.
next_to(H1, H2, [H1, H2 | _]).
% OR (otherwise) a house H1 is next to a house H2 if they are next to
% in the rest of the list of houses.
next_to(H1, H2, [_ | RestHouses]):-
next_to(H1,H2,RestHouses).

The rest of the restrictions refer to the coexistence of specific attributes of a specific house in the
list of houses. Therefore, one could use member/2 to express for example that "The Englishman lives
in the red house" as:

member(house(red,englishman,_,_,_), ListofHouses)

The complete program is the following:


owns_zebra(Who) :-
template(ListofHouses),
next_to(house(green, _,_,_,_),house(white, _,_,_,_), ListofHouses),
next_to(house(_,norwegian, _,_,_),house(blue, _,_,_,_), ListofHouses),
next_to(house(_,_, fox, _,_),house(_,_,_,_,doctor), ListofHouses),
next_to(house(_,_, horse, _,_),house(_,_,_,_,violinist), ListofHouses),
member(house(_, _, fox,water,_), ListofHouses),
member(house(red, englishman, _,_,_), ListofHouses),
member(house(_, spanish, dog, _,_), ListofHouses),
member(house(_, japanese ,_,_,painter), ListofHouses),
member(house(_, italian, _,tea,_), ListofHouses),
member(house(green, _,_,coffee,_), ListofHouses),
member(house(_,_, snails, _, sculptor), ListofHouses),
member(house(yellow, _,_,_, diplomat), ListofHouses),
member(house(_,_,_,fruit_juice, violinist), ListofHouses),
member(house(_,Who,zebra,_,_),ListofHouses).
next_to(H1, H2, [H1, H2 | _]).
next_to(H1, H2, [H2, H1 | _]).
next_to(H1, H2, [_ | RestHouses]):-
next_to(H1,H2,RestHouses).
member(X,[X|_]).
member(X,[_|Y]):-member(X,Y).

The query:
?- owns_zebra(X).

will solve the problem by producing instantiations for all anonymous variables and therefore for
the owner of the zebra (X=italian).
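The same search can be cross-checked by brute force. The Python sketch below (the attribute and value names are our transcription of the puzzle; houses are indexed 0..4 from the left, and "right of" is taken as immediately to the right) fixes one attribute permutation at a time, prunes with the relevant constraints before descending, and reports the zebra owner's nationality:

```python
from itertools import permutations

def next_to(i, j):
    return abs(i - j) == 1

def solve():
    nats = ["english", "spanish", "japanese", "italian", "norwegian"]
    cols = ["red", "green", "white", "yellow", "blue"]
    drinks = ["tea", "coffee", "milk", "juice", "water"]
    profs = ["painter", "diplomat", "violinist", "doctor", "sculptor"]
    animals = ["dog", "fox", "snails", "horse", "zebra"]
    for nat in permutations(nats):
        if nat[0] != "norwegian":                              # first house on the left
            continue
        for col in permutations(cols):
            if col.index("green") != col.index("white") + 1:   # green right of white
                continue
            if col[nat.index("english")] != "red":             # Englishman in red house
                continue
            if not next_to(nat.index("norwegian"), col.index("blue")):
                continue
            for drink in permutations(drinks):
                if (drink[2] != "milk"                         # milk in the middle house
                        or drink[nat.index("italian")] != "tea"
                        or drink[col.index("green")] != "coffee"):
                    continue
                for prof in permutations(profs):
                    if (prof[nat.index("japanese")] != "painter"
                            or col[prof.index("diplomat")] != "yellow"
                            or drink[prof.index("violinist")] != "juice"):
                        continue
                    for animal in permutations(animals):
                        if (animal[nat.index("spanish")] != "dog"
                                or drink[animal.index("fox")] != "water"
                                or animal[prof.index("sculptor")] != "snails"
                                or not next_to(animal.index("fox"), prof.index("doctor"))
                                or not next_to(animal.index("horse"), prof.index("violinist"))):
                            continue
                        return nat[animal.index("zebra")]      # first consistent assignment
```

The early pruning after each attribute keeps the search small; without it, enumerating all five permutations jointly would require about 120^5 combinations.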

-31-
Knowledge Representation
Knowledge Theory
To be intelligent requires knowledge and reasoning skills. Intelligent behavior implies the
linking of these two together, and hence being able to deduce facts that are not explicit in the
knowledge and to produce sensible reactions to these facts. In humans there is a consciousness that
enables us to understand concepts such as what and why, that is, intentionality. With this ability
we are able to make reasoned judgements and act accordingly. Of course the "reason" within our
decisions is often subjective (and in the same way, our definition of intelligent behavior is
largely subjective). So what forms of reasoning are there? Here are the three main types:

1. Deduction
2. Abduction
3. Induction

The second requirement for intelligent behavior is the knowledge itself. It is impossible to reason
to conclusions from knowledge if there is no knowledge. So if we put some facts into a computer
system and set a reasoning program into action, we have, in theory, an intelligent machine! The
reality is that many of these AI structures work well in simple "toy" domains, but once they
are presented with real-world domain problems and given real-world values they suddenly begin
to have problems. The problem is that they do not have enough knowledge about the domain and
so cannot respond to it. If we attempt to solve this simply by stuffing more information
into the system, we quickly come across the problem of speed: the specific piece of information
in the database of knowledge cannot be accessed fast enough for a reasonable response using
simple search techniques.

One of the major keys to AI, then, is being able to store knowledge in an efficient fashion and in
such a way that it is possible to compose programs that can access it in a reasonable time. In an
ideal world all the knowledge in the world would be incorporated into a system, but this raises
obvious problems. There are no obvious solutions, but a number of methods have been proposed
for knowledge representation, such as semantic nets, conceptual graphs, frames, first order
predicate calculus and rules.

A good system for the representation of knowledge in a particular domain should possess the
following four properties:

• Representational Adequacy: The ability to represent all kinds of knowledge that are
needed in that domain.
• Inferential Adequacy: The ability to manipulate the representational structure in such a
way as to derive new structure corresponding to new knowledge inferred from old.
• Inferential Efficiency: The ability to incorporate into the knowledge structure additional
information that can be used to focus the attention of the inference mechanisms in the most
promising direction
• Acquisitional Efficiency: The ability to acquire new information easily.

-32-
There are two types of Knowledge:

1- Static Knowledge: This refers to invariable, non-changeable data and facts, such as:
- Ahmed is a male
- Fatema is a female
- Dog is an animal
- Oxygen is essential for life

2- Episodic Knowledge: This refers to knowledge that changes with time. It is formed by
gaining knowledge from different situations and events. For example:
- Going to the doctor
- Taking the bus
- Going to watch a play or a game

To compensate for its one overpowering asset, indispensability, knowledge possesses some less
desirable properties, including:
• It is voluminous.
• It is hard to characterize accurately.
• It is constantly changing.
• It differs from data by being organized.

Knowledge Representation
To build an intelligent information system we must have what is called a "Knowledge Base". That
Knowledge Base holds all the Static Knowledge and Episodic Knowledge of the specific domain;
to build it we must arrange data, facts and experience in a specific way so that we can handle and process them.

There are many methods to represent Knowledge, but the four most popular are:

1- Rules
2- Semantic Network
3- Frames
4- Scenario (Script)

-33-
1- Representing Knowledge using Rules

Rules are considered the simplest, oldest, and most important method of representing Static
Knowledge.

In this method, facts are written in the form of IF-THEN rules.

For example:

If the traffic light is red then stop.

In Prolog we write:

stop(X) :-
traffic_light(X),
X = red.

If the traffic light is green then go.

In Prolog we write:

go(X) :-
traffic_light(X),
X = green.

And these rules can be more and more complex.

For example a trivial knowledge-based system could be a series of conditional statements:

IF
the animal is a bird
it does not fly
it swims
it is black and white
THEN it is a penguin.
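Such a conditional rule base can also be sketched directly in code. The hypothetical Python function below implements just the penguin rule above, with the animal described as a dictionary of attributes:

```python
def classify(animal):
    """A trivial knowledge base expressed as one conditional statement:
    the penguin rule from the text."""
    if (animal.get("kind") == "bird"
            and not animal.get("flies")
            and animal.get("swims")
            and animal.get("colour") == "black and white"):
        return "penguin"
    return "unknown"
```

A fuller system would chain many such rules, with the conclusion of one rule serving as a condition of another.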

-34-
2- Representing Knowledge using Semantic Network

A Semantic Network is a method of representing Static Knowledge. The main idea behind a
Semantic Net is that the meaning of a concept comes from the way in which it is connected to
other concepts. In a Semantic Net, information is represented as a set of nodes connected to each
other by a set of labeled arcs, which represent the relationships among the nodes.

For example, the original figure's nodes and labeled arcs can be written out as:

tree -is a-> plant
fruit -comes from-> tree
fruit -contains-> seed
apple -is a-> fruit
apple -taste-> sweet
apple -color-> red or yellow
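The plant/fruit/apple relations in this example can be stored as (node, relation, node) triples and queried by following the arcs; a small Python sketch (the names are our own encoding):

```python
# Each labeled arc of the net as a (node, relation, node) triple.
arcs = [
    ("tree", "is_a", "plant"),
    ("fruit", "comes_from", "tree"),
    ("fruit", "contains", "seed"),
    ("apple", "is_a", "fruit"),
    ("apple", "taste", "sweet"),
    ("apple", "color", "red or yellow"),
]

def related(node, relation):
    """Follow every arc with the given label leaving the given node."""
    return [target for (source, rel, target) in arcs
            if source == node and rel == relation]
```

A more complete system would also follow is_a arcs transitively, so that properties of fruit are inherited by apple.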

-35-
3- Representing Knowledge using Frames
Frames are a method of representing Static Knowledge. A frame is a collection of attributes
(usually called slots) and associated values (and possibly constraints on values) that describe
some entity in the world. Sometimes a frame describes an entity in some absolute sense;
sometimes it represents the entity from a particular point of view.

A single frame taken alone is rarely useful. Instead, we build frame systems out of collections of
frames that are connected to each other, such that the value of an attribute of one frame may be another
frame.

For example

Unit Apple
Color Red or Yellow
Shape Semi spherical
Taste Sweet
Season Winter
Type Fruit
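A frame system of this kind maps naturally onto nested dictionaries, where a slot's value may itself be another frame. A hypothetical sketch (the fruit frame and its comes_from slot are invented for illustration, echoing the semantic net example):

```python
# Frames as dictionaries; the type slot of the apple frame holds another
# frame, so the value of an attribute of one frame may itself be a frame.
fruit = {"unit": "fruit", "comes_from": "tree"}

apple = {
    "unit": "apple",
    "color": "red or yellow",
    "shape": "semi-spherical",
    "taste": "sweet",
    "season": "winter",
    "type": fruit,            # slot value is the fruit frame
}
```

Following `apple["type"]` leads into the connected frame, which is how a frame system links entities together.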

4- Representing Knowledge using Scenarios

Scenario (Script) is a method of representing Episodic Knowledge resulting from exposure to
situations and events.
A Scenario is a structure that describes a stereotyped sequence of events in a particular context.
A Script consists of a set of slots. Associated with each slot may be some information about what
kinds of values it may contain, as well as a default value to be used if no other information is
available.

Important components of Scenario

Entry conditions Conditions that must in general be satisfied before the events
described in the scenario can occur.

Result Conditions that will in general be true after the events described in
the scenario have occurred.

Props Slots representing objects that are involved in the events described in
the scenario. The presence of these objects can be inferred even if
they are not mentioned explicitly.

Roles Slots representing people who are involved in the events described in
the scenario. Also the presence of these people can be inferred even
if they are not mentioned explicitly.

-36-
Track The specific variation on a more general pattern that is represented
by this particular scenario. Different tracks of the same scenario will
share many but not all components.

Scenes The actual sequences of events that occur. The events are represented
in conceptual dependency formalism.
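The six components above can be gathered into one record. Here is a minimal, hypothetical sketch (the restaurant scenario and the `can_apply` helper are made up for illustration): a scenario is a dictionary with one slot per component, and it is applicable only when all of its entry conditions hold in the current situation.

```python
# Hypothetical encoding of a scenario's six components as a Python dict.
# The slot names mirror the components listed above.
restaurant_scenario = {
    "track": "Fast food",
    "entry_conditions": ["customer is hungry", "customer has money"],
    "results": ["customer is not hungry", "customer has less money"],
    "props": ["counter", "tray", "food", "money"],
    "roles": ["customer", "server"],
    "scenes": ["entering", "ordering", "eating", "exiting"],
}

def can_apply(scenario, situation):
    """A scenario applies only if every entry condition holds in the situation."""
    return all(cond in situation for cond in scenario["entry_conditions"])

print(can_apply(restaurant_scenario, {"customer is hungry", "customer has money"}))  # True
print(can_apply(restaurant_scenario, {"customer is hungry"}))                        # False
```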

Scenarios are useful because, in the real world, there are patterns to the occurrence of events.
These patterns arise because of causal relationships between events: agents will perform one
action so that they will then be able to perform another.

The events described in a Scenario form a giant Causal Chain. The beginning of the chain is the
set of entry conditions which enables the first events of the script to occur.
The end of the chain is the set of results which may enable later events or event sequences
(possibly described by other scenarios) to occur.
Within the chain, events are connected both to earlier events that make them possible and to
later events that they enable.
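This causal chain can be simulated directly. Below is a hedged sketch (the event list and `run_chain` function are invented for illustration): each event names the conditions that enable it and the results it adds, and an event fires only once the chain has established what it needs.

```python
# Hypothetical sketch of a causal chain: each event is enabled by earlier
# conditions and contributes results that may enable later events.
events = [
    {"name": "order food", "needs": {"customer seated"}, "adds": {"food ordered"}},
    {"name": "serve food", "needs": {"food ordered"},    "adds": {"food on table"}},
    {"name": "eat food",   "needs": {"food on table"},   "adds": {"customer full"}},
]

def run_chain(entry_conditions, events):
    """Fire each event in order whenever all of its enabling conditions hold."""
    state = set(entry_conditions)
    fired = []
    for event in events:
        if event["needs"] <= state:   # every enabling condition is present
            state |= event["adds"]
            fired.append(event["name"])
    return fired, state

fired, state = run_chain({"customer seated"}, events)
print(fired)  # ['order food', 'serve food', 'eat food']
```

Note how the entry condition "customer seated" ultimately enables "eat food" through the intermediate events, mirroring the chain described above.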

If a particular script is known to be appropriate in a given situation, then it can be very useful in
predicting the occurrence of events that were not explicitly mentioned. Scenarios can also be
useful by indicating how events that were mentioned relate to each other.

For example, what is the connection between someone's ordering food and someone's eating that
food?

But before a particular scenario can be applied, it must be activated, i.e. it must be selected as
appropriate to the current situation.

There are two ways in which it may be useful to activate a scenario, depending on how important
the scenario is likely to be.

• For a fleeting scenario it may be sufficient merely to store a pointer to the scenario so it can
be accessed later if necessary.

• For a non-fleeting scenario it is appropriate to activate the scenario fully and to attempt to fill
in its slots with the particular objects and people involved in the current situation. The headers of
a scenario (its preconditions, its preferred locations, its props, its roles, and its events) can all
serve as indicators that the scenario should be activated.
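Header-based activation can be sketched as a simple matching test. The following is a hypothetical illustration (the `bookshop_scenario` record, the `should_activate` function, and the threshold of two matches are all assumptions made for this example): a scenario is fully activated once enough of its headers appear in the observed situation.

```python
# Hypothetical sketch: a scenario's headers (preconditions, props, roles,
# locations) act as activation cues for the current situation.
bookshop_scenario = {
    "name": "Purchasing a Book",
    "headers": {"bookshop", "customer", "books", "money", "shelves"},
}

def should_activate(scenario, observed, threshold=2):
    """Activate fully once at least `threshold` headers are observed."""
    matches = scenario["headers"] & observed
    return len(matches) >= threshold

print(should_activate(bookshop_scenario, {"customer", "bookshop"}))  # True
print(should_activate(bookshop_scenario, {"money"}))                 # False
```

A single cue (here, "money") is treated as too weak to warrant full activation; with only one match the system would merely store a pointer to the scenario, as described above.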

Example of a Scenario

Scenario : Purchasing a Book

Track : Bookshop

Props : Shelves
        Publication list
        Books
        Money

Roles : Customer
        Worker
        Cashier
        Owner

Entry conditions : Customer needs a Book
                   Customer has Money

Results : Customer has less Money
          Owner has more Money
          Customer gets the Book
          Customer is pleased (Optional)

Scene 1 : Entering
    Customer gets into the Bookshop.
    Customer looks for the specific section for the book's subject
    (he may ask the Worker if he fails to find that section).
    (If he does not find that section, he goes to Scene 4.)
    Customer goes to the specific section for the book's subject.

Scene 2 : Searching
    Customer starts searching for the book on the Shelves.
    • If the Customer finds the book, he goes to Scene 3, or back to
      Scene 2 if he would like to search for another book.
    • If the Customer does not find the book, he asks the Worker for the
      Publication list and searches the list for the book.
        • If the Customer finds the book in the list, he asks the Worker
          to get the book, and the Worker gets the book for the Customer.
        • If the Customer does not find the book in the list (or finds it
          in the list but the Worker does not find it in the storehouse),
          he goes to Scene 4 to leave, or back to Scene 2 to search for
          another book.

Scene 3 : Paying
    Customer gets the book.
    Customer goes to the Cashier.
    Customer pays for the book.
    (Optional: if the Customer forgot to get another thing (book(s)), he
    returns to Scene 2 to get more; otherwise he goes to Scene 4.)

Scene 4 : Exiting
    Customer takes his book(s) and receipt and leaves the Bookshop.

    Or

    Customer leaves the Bookshop if he did not find what he wants.
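The scene-to-scene branches in this example form a small transition graph, which can be encoded directly. The following is a hedged, simplified sketch (the `scenes` table and `valid_path` helper are inventions for illustration; the real scenario has richer conditions on each branch): each scene lists its permitted successor scenes, and a walk through the scenario is valid only if it follows those transitions.

```python
# Hypothetical, simplified encoding of the book-purchase scenario as a
# scene graph. Each scene names its possible successors, mirroring the
# branches above ("go to Scene 4", "back to Scene 2", ...).
scenes = {
    "entering":  ["searching", "exiting"],           # section found, or give up
    "searching": ["paying", "searching", "exiting"], # found / retry / give up
    "paying":    ["searching", "exiting"],           # buy more, or leave
    "exiting":   [],
}

def valid_path(path):
    """Check that a sequence of scenes follows the allowed transitions."""
    return all(nxt in scenes[cur] for cur, nxt in zip(path, path[1:]))

print(valid_path(["entering", "searching", "paying", "exiting"]))  # True
print(valid_path(["entering", "paying"]))                          # False
```

The second check fails because a customer cannot pay before searching, exactly the kind of prediction about event ordering that scripts support.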

Why Logic Programming?
The requirements on Logic Programming are of two kinds: to enable such problems to be
modeled simply and naturally, and to enable the resulting problem model to be solved
efficiently.

Logic programming is particularly apt for modeling problems for two reasons:
• It is based on relations
• It supports logical variables
Since every combinatorial problem is naturally modeled as a set of variables and a set of
constraints (i.e. relations) on those variables, the facilities of logic programming precisely match
the requirements for modeling combinatorial problems.
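The variables-plus-constraints view can be made concrete with a tiny example. In Prolog this would be stated declaratively as relations over logical variables; here is a hedged Python sketch of the same idea (the specific problem, three variables over 1..3 with "all different" and an ordering relation, is invented for illustration), solved by generate-and-test over all assignments.

```python
# Hypothetical sketch of the variables-plus-constraints view: variables
# X, Y, Z range over a domain, and constraints are relations that every
# solution must satisfy.
from itertools import product

domain = [1, 2, 3]
constraints = [
    lambda x, y, z: len({x, y, z}) == 3,  # all-different relation
    lambda x, y, z: x < y,                # ordering relation
]

# Generate every assignment and keep those satisfying all constraints.
solutions = [
    (x, y, z)
    for x, y, z in product(domain, repeat=3)
    if all(c(x, y, z) for c in constraints)
]
print(solutions)  # [(1, 2, 3), (1, 3, 2), (2, 3, 1)]
```

A real logic programming system does not enumerate blindly like this; unification and backtracking prune the search. But the problem statement itself, variables plus relational constraints, is exactly what the paragraph above describes.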

Why is complexity well handled in Logic Programming?


Programs in Logic Programming are 'described' top-down. This enables complex concepts to be
broken down into as many detailed sub-concepts as are necessary to 'describe' the problem.

This in turn enables us to add further sub-concepts as the problem expands or becomes better
understood.

References
http://lcs.www.media.mit.edu/people/lieber/PBE/

http://logos.uwaterloo.ca/

http://www.ifcomputer.de/Products/IFProlog/home_en.html

http://www.comlab.ox.ac.uk/archive/logic-prog.html

http://www.logic-programming.org/

http://www.primenet.com/pcai/

http://www.primenet.com/~terry/New_Home_Page/ai_info/pcai_prolog.html

http://www.cs.mu.oz.au/research/mercury/

http://www.well.com/user/jax/rcfb/prolog.html

http://www.cit.gu.edu.au/~mjm/lfai/prolog.html

http://burks.bton.ac.uk/burks/language/prolog/index.htm

http://burks.bton.ac.uk/burks/language/prolog/intro/node1.htm

http://www-ksl.stanford.edu/knowledge-sharing/README.html

http://www.cs.utexas.edu/users/mfkb/index.html

http://www.cs.utexas.edu/users/mfkb/related.html

http://logic.stanford.edu/selt/selt.html
