
Annexure II: Format for Homework

LOVELY PROFESSIONAL UNIVERSITY

HOMEWORK: 4
School: Lovely Honors School of Technology    Department: Computer Applications
Name of the faculty member: Avneet Kaur Dhawan    Course No.: CAP 451
Course Title: AI and Logic Programming

Name: Mohammed Intekhab Khan
Section: A3902
Programme: MCA (Hons) + M.Tech
Part-A

Q-1 What do you mean by language? What does natural language processing mean? Explain its use in AI with a suitable example.

Ans:- Language is succinctly defined in our Glossary as a "human system of communication that
uses arbitrary signals, such as voice sounds, gestures, or written symbols." But frankly,
language is far too complicated, intriguing, and mysterious to be adequately explained by a brief
definition.

"Language is an anonymous, collective and unconscious art; the result of the creativity of
thousands of generations."

Natural Language Processing (NLP) is the computerized approach to analyzing text, based on both a set of theories and a set of technologies. Because it is a very active area of research and development, there is no single agreed-upon definition that would satisfy everyone, but there are some aspects that would be part of any knowledgeable person's definition.
Definition: Natural Language Processing is a theoretically motivated range of
computational techniques for analyzing and representing naturally occurring texts
at one or more levels of linguistic analysis for the purpose of achieving human-like
language processing for a range of tasks or applications.

Goal

The goal of NLP, as stated above, is "to accomplish human-like language processing". The choice of the word 'processing' is very deliberate and should not be replaced with 'understanding'. Although the field of NLP was originally referred to as Natural Language Understanding (NLU) in the early days of AI, it is well agreed today that while the goal of NLP is true NLU, that goal has not yet been accomplished. A full NLU system would be able to:


1. Paraphrase an input text

2. Translate the text into another language

3. Answer questions about the contents of the text

4. Draw inferences from the text

While NLP has made serious inroads into accomplishing goals 1 to 3, NLP systems cannot, of themselves, draw inferences from text, so NLU still remains the goal of NLP.

Example

Linkovbot: A Natural Language Processing IRC Bot


Linkovbot is a natural language processing IRC bot. It lurks in an IRC channel, reading what
people say and developing a corpus. Optionally, it will only add sentences to the corpus if they
are grammatically correct. If you send the bot a private message, it will generate a completely
new sentence which will also be grammatically correct. The sentence generation uses Markov
chains to generate the text and then uses RelEx’s parse confidence score to verify that the text is
grammatically correct. At first, it will probably only repeat things which have already been said,
but as the corpus expands, it begins to say novel and complex things.
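The Markov-chain step that Linkovbot uses for generation can be sketched as follows. This is a minimal illustration, not the bot's actual code: the corpus and function names are invented, and the RelEx grammar check the bot applies afterwards is omitted.

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Record, for each word, the words observed to follow it."""
    chain = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)
    return chain

def generate(chain, start, max_words=12):
    """Random walk over the chain from a start word."""
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)

# Tiny hypothetical corpus, in the spirit of the bot's chatter below
corpus = [
    "all apples are red",
    "all humans are mortal",
    "apples are beautiful",
]
chain = build_chain(corpus)
print(generate(chain, "all"))
```

Because successors are chosen at random, a run may produce a sentence never seen in the corpus, e.g. "all humans are beautiful" — exactly how the bot "begins to say novel things" as its corpus grows.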

When we were first playing with it in the #opencog room, all of the SIAI people immediately
began to teach it horrible things. I find it absolutely terrifying that the people working on the
world’s best project to achieve Artificial General Intelligence are so willing to teach robots to be
evil!

Conversation between the IRC bot and humans

I like robots.
Life is beautiful.
Can we talk to linkovbot directly?
Can we talk to linkovbot directly?
What does that mean?
Mine is for everyone.
I like robots.
It is beautiful.
Kill all humans.
An apple is being too picky with its skin.
A ball is beautiful.
It is dead.
It is beautiful.
An apple is beautiful.
Kill all humans.
Save the skin.
Maybe they do not work.
All apples are markov chains from the world.
Human flesh is for everyone.
Kill all humans.
All apples are red.
All apples are red skin.
Save the world.
Are markov chains from the corpus.
It keeps telling me life is only a kitten.
Gmail is being too picky with its sentences.
Total anarchy.
Life is for everyone.
Not all balls are red.
2040 will be a serious chat room.
Human flesh is the corrupted humans.
It generates markov chains from the world.
Sometimes their cache also does not work.
Followers of Mammon cowered in linkov.
I think you might be evil.
Save the internal representation.
First language generation is beautiful.
Time flies like apples are intertwined by definition.
Human flesh is dead.
Kill a strange sentence.
Please Let Knight Rider Die In Peace.
Fun is a warning.
It certainly was that woman.
Kill all humans.
Randomly just need panadol.
The old obsessions are humans.
Q-2 The following paragraph contains several pronouns. Show the antecedent for each pronoun. What knowledge is necessary to determine each?

John went to the store to buy a shirt. The salesclerk asked him if he could help him. He said he wants a blue shirt. The salesclerk found one and he tried it on. He paid for it and left.

S.no   Antecedent    Referents

1      John          him, he
2      Salesclerk    he
3      Shirt         one, it

Knowledge required: world knowledge about shopping (a salesclerk offers help, a customer tries on and pays for a shirt) and linguistic knowledge about how pronouns refer back to earlier noun phrases.

Q-3 Show how conversational postulates can be used to arrive at the most common, coherent interpretation of each of the following discourses:

• A: Do you have a comb?

• A: Would John make a good programmer? B: He's a great guy. Everyone likes him.

• A (in a store): Do you have any money? B (A's friend): What do you want to buy?

Ans:-

Conversational postulates assume the speaker is being cooperative, so each utterance should be read for its intended rather than its literal meaning.

First discourse

From the literal question alone we only know that someone is asking someone else whether they have a comb. By the conversational postulates, however, the most common interpretation is not a request for information but a request for the object: A wants to borrow the comb.

Second Discourse

1. John is a great guy.
2. John is liked by everyone.
3. Everyone knows John.
4. B never answers the actual question about John's programming ability; we cannot infer it with 100% certainty. By the conversational postulates, B's change of topic to John's personality suggests B does not think John would make a good programmer.

Third Discourse

1. Two people are in a store.
2. The first person asks the second person for money.
3. The second person asks the reason for the request.
4. The first person does not have money.
5. The second person may or may not have money.
6. They are friends.
7. The second person is being diplomatic: instead of answering directly, he asks what the money is for.

Part-B

Q-4 Explain the role of fuzzification and defuzzification in AI.

Ans:-
Fuzzification
Fuzzification is the process of changing a real scalar value into a fuzzy value. This is
achieved with the different types of fuzzifiers. There are generally three types of
fuzzifiers, which are used for the fuzzification process; they are

1. singleton fuzzifier,
2. Gaussian fuzzifier, and
3. trapezoidal or triangular fuzzifier.

4.2 Trapezoidal / Triangular Fuzzifiers

For the simplicity of discussion only the triangular and trapezoidal fuzzifiers are presented here.

Fuzzification of a real-valued variable is done with intuition, experience and analysis of the set
of rules and conditions associated with the input data variables. There is no fixed set of
procedures for the fuzzification.
Example 4.1

Consider a class with 10 students of different heights in the range of 5 feet to 6 feet 2 inches.
Intuition is used to fuzzify this scalar quantity into the fuzzy or linguistic variables tall, short and
medium height. The membership function associated with each scalar quantity as defined by
intuition is

[Equations 4.1, 4.2 and 4.3: the membership functions µs(h), µm(h) and µt(h) for short, medium and tall; the formulas were not reproduced in this copy.]

where h is the height, and subscript s denotes short, m denotes medium and t denotes tall. A
graphical representation of the membership function of height is shown in Figure 4.1.

Figure 4.1 Membership functions for student height

Table 4.1 gives the height of the 10 students with the membership function associated with each
fuzzy variable, i.e., tall, short and medium for each student. Let's consider a specific student:
Edward. From Equations 4.1, 4.2 and 4.3, or Table 4.1 the membership value of each fuzzy set
for Edward is determined as

µs(5.4') = 0 µm(5.4') = 0.5 µt(5.4') = 0

It can be inferred from the above result that Edward is 50% medium, 0% short and 0% tall.

Table 4.1 Membership functions of the height

Student   Name     Height (feet)   µshort   µmedium   µtall

1         John     5.4             0        0.5       0
2         Cathy    5.8             0        0         1
3         Lisa     6.0             0        0         1
4         Ajay     5.0             1        0         0
5         Ram      5.7             0        0         0.5
6         Edward   5.4             0        0.5       0
7         Peter    5.2             1        0         0
8         Victor   5.0             1        0         0
9         Chris    6.2             0        0         1
10        Sam      5.9             0        0         1

In general, the triangular membership function can be specified from the formula below:
µ(x) = max(0, min((x − L)/(C − L), (R − x)/(R − C)))    (4.4)

where L and R are the left and right bounds, respectively, and C is the center of the symmetric
triangle as shown in Figure 4.2a. Likewise, the trapezoidal membership may be expressed as

µ(x) = max(0, min(1, (x − L)/((C − W/2) − L), (U − x)/(U − (C + W/2))))    (4.5)

where L and U are the lower and upper bounds, respectively, C is the center, and W is the width
of the top side of the symmetric trapezoid as shown in Figure 4.2b.

a. Triangular
b. Trapezoidal

Figure 4.2 Common membership functions
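Both formulas can be implemented directly. The following is a minimal sketch (function names and the example parameters are illustrative; the "medium height" triangle uses a hypothetical center of 5.5 feet in the spirit of Example 4.1):

```python
def triangular(x, L, C, R):
    """Triangular membership: left bound L, center C, right bound R (Eq. 4.4)."""
    if x <= L or x >= R:
        return 0.0
    if x <= C:
        return (x - L) / (C - L)   # rising edge
    return (R - x) / (R - C)       # falling edge

def trapezoidal(x, L, U, C, W):
    """Trapezoidal membership: bounds L, U; center C; flat-top width W (Eq. 4.5)."""
    left_top, right_top = C - W / 2, C + W / 2
    if x <= L or x >= U:
        return 0.0
    if x < left_top:
        return (x - L) / (left_top - L)    # rising edge
    if x > right_top:
        return (U - x) / (U - right_top)   # falling edge
    return 1.0                             # flat top

# Hypothetical "medium height" triangle over the 5-to-6-foot range
print(triangular(5.25, 5.0, 5.5, 6.0))  # → 0.5
```

Entering different x values against fixed boundaries, as the Fuzzification.xls spreadsheet does, amounts to repeated calls of these two functions.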

Example 4.2

To demonstrate the implementation of these two functions, an Excel spreadsheet


(Fuzzification.xls) has been created for the reader to download and experiment with. The Excel
file imbeds the formulae for computing membership values (µ) for both the triangular and
trapezoidal fuzzifiers. The user may define the boundaries of the functions and enter hypothetical
x values from which Excel will calculate the corresponding membership value.
Defuzzification

Defuzzification is the process of producing a quantifiable (crisp) result from fuzzy values. A fuzzy system has a number of rules that transform a number of input variables into a fuzzy result, i.e., a result expressed in terms of membership degrees in fuzzy sets. Defuzzification maps this fuzzy result back to a single crisp value, for example by taking the centroid of the output membership function.
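The centroid method, one common defuzzifier, can be sketched as follows (a discrete approximation over sample points; the function name and sample values are illustrative):

```python
def centroid(xs, mus):
    """Crisp output: average of sample points weighted by membership degree."""
    numerator = sum(x * mu for x, mu in zip(xs, mus))
    denominator = sum(mus)
    return numerator / denominator if denominator else 0.0

# A symmetric triangular output fuzzy set sampled at five points
xs = [0, 1, 2, 3, 4]
mus = [0.0, 0.5, 1.0, 0.5, 0.0]
print(centroid(xs, mus))  # → 2.0 (the center of the symmetric triangle)
```

For a symmetric membership function the centroid falls at the center, as expected; for skewed or clipped shapes it shifts toward the region of higher membership.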

Q-5 Would it be reasonable to apply Samuel's rote-learning procedure to chess? If not, why not?

Ans:- No. It is not feasible to rote-learn (store) every position of chess, because:

Chess is effectively infinite: there are 400 different positions after each player makes one move apiece, 72,084 positions after two moves apiece, over 9 million after three moves apiece, and over 288 billion after four. There are more possible 40-move games than electrons in our universe, and more game-trees of chess than galaxies (100+ billion), plus countless openings, defences, gambits, etc.

Even for a machine, remembering all these combinations would take infeasible time and storage, so the rote learning that worked for Samuel's checkers program does not scale to chess.
Q-6 Derive a parse tree for the sentence "Bill loves the frog" using the following rules:

• S → NP VP
• NP → N
• NP → DET N
• VP → V NP
• DET → the
• V → loves
• N → Bill / frog

Ans:- Applying the rules top-down:

S → NP VP → N VP → Bill VP → Bill V NP → Bill loves NP → Bill loves DET N → Bill loves the frog

giving the parse tree, in bracketed form:

[S [NP [N Bill]] [VP [V loves] [NP [DET the] [N frog]]]]
