
Genetic Algorithms: Evolutionary Algorithms
• What are they?
– Evolutionary algorithms that make use of operations like
mutation, recombination, and selection
• Uses?
– Difficult search problems
– Optimization problems
– Machine learning
– Adaptive rule-bases

Genetic algorithm learning methods are based
on models of natural adaptation and evolution.
These learning systems improve their
performance through processes that model
population genetics and survival of the fittest.
In the field of genetics, a population is
subjected to an environment which places
demands on its members. The members that
adapt well are selected for mating and
reproduction. The offspring of these better
performers inherit genetic traits from both
their parents.
Members of this second generation of offspring
that also adapt well are then selected for mating
and reproduction, and the evolutionary cycle
continues. Poor performers die off without leaving
offspring. Good performers produce good
offspring and they, in turn, perform well. After
some number of generations, the resulting
population will have adapted optimally, or at least
very well, to the environment. Genetic algorithm
systems start with a fixed-size population of data
structures which are used to perform some given
tasks.
After requiring the structures to execute the
specified tasks some number of times, the
structures are rated on their performance, and
a new generation of data structures is then
created. The new generation is created by
mating the higher-performing structures to
produce offspring. These offspring and their
parents are then retained for the next
generation, while the poorer-performing
structures are discarded.
Genetic Algorithm
Set time t = 0
Initialize population P(t)
While termination condition not met
Evaluate fitness of each member of P(t)
Select members from P(t) based on fitness
Produce offspring from the selected pairs
Replace members of P(t) with better offspring
Set time t = t + 1
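The loop above can be sketched in Python. This is a minimal illustration, not the definitive algorithm: the one-max fitness function, the population size, and the keep-the-fitter-half selection scheme are assumptions chosen for the example.

```python
import random

random.seed(0)  # for reproducibility of the example run

def genetic_algorithm(pop_size, n_bits, fitness, n_generations, mutation_rate=0.01):
    # Initialize population P(0) with random bit strings
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    for _ in range(n_generations):
        # Evaluate fitness of each member of P(t); select the fitter half
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        # Produce offspring from selected pairs via one-point crossover
        offspring = []
        while len(parents) + len(offspring) < pop_size:
            p1, p2 = random.sample(parents, 2)
            point = random.randrange(1, n_bits)
            child = p1[:point] + p2[point:]
            # Mutation: flip each bit with a small probability
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            offspring.append(child)
        # Replace the poorer members of P(t) with the offspring
        population = parents + offspring
    return max(population, key=fitness)

# Example: evolve a 16-bit string that maximizes the number of 1-bits
best = genetic_algorithm(pop_size=30, n_bits=16, fitness=sum, n_generations=50)
```

Because the fitter half of each generation is carried over unchanged, the best fitness found never decreases from one generation to the next.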

There are three genetic operations possible:

1. Crossover: A bit position is selected
randomly, and the head of one parent is
concatenated to the tail of the second parent
to produce the offspring. Let the two parent
strings be xxxxxxxx and yyyyyyyy. After
crossover, the two offspring are xxxxyyyy and
yyyyxxxx.
2. Inversion: Inversion is a transformation applied
to a single string. A bit position is selected at
random and, when applied to a structure, the
inversion operation concatenates the tail of the
string to the head of the same string. Let the
parent string be x1x2x3x4x5x6x7x8. If the bit position
selected is 5, then the inverted string is
x6x7x8x1x2x3x4x5.
3. Mutation: The mutation operation ensures that
the selection process does not get caught in a local
minimum. It selects a bit position at random and
changes it. E.g. for 01101001, if the 3rd bit is selected,
the string becomes 01001001.
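The three operators can be sketched on strings as follows. This is a minimal illustration; the explicit `point` parameter is an addition here so the slide's examples can be reproduced deterministically (the slides select the position at random).

```python
import random

def crossover(p1, p2, point=None):
    # Select a bit position; concatenate the head of one parent
    # to the tail of the other, producing two offspring
    if point is None:
        point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def inversion(s, point):
    # Concatenate the tail of the string to the head of the same string
    return s[point:] + s[:point]

def mutation(s, point):
    # Change the bit at the selected position
    return s[:point] + ('1' if s[point] == '0' else '0') + s[point + 1:]

print(crossover("xxxxxxxx", "yyyyyyyy", 4))  # ('xxxxyyyy', 'yyyyxxxx')
print(inversion(["x1", "x2", "x3", "x4", "x5", "x6", "x7", "x8"], 5))
print(mutation("01101001", 2))               # '01001001' (3rd bit flipped)
```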

These three operators modify the parent moves to
give new offspring move sequences. Each offspring
inherits a utility value from one of its parents.
Population members having low utility values are
discarded to keep the population size fixed.
Traveling Salesman Problem
• To use a genetic algorithm to solve the traveling
salesman problem we could begin by creating a
population of candidate solutions
• We need to define mutation, crossover, and selection
methods to aid in evolving a solution from this
population
• At random, pick two solutions and combine them to
create a child solution; a fitness function is then used
to rank the solutions

Traveling Salesman Problem
• For crossover we might take two paths (P1 and P2)
break them at arbitrary points and define new
solutions Left1+Right2 and Left2+Right1
• For mutation we might randomly switch two cities in
an existing path

Solving TSP using GA
Steps:
1. Create group of random tours
• Stored as sequence of numbers (parents)
2. Choose 2 of the better solutions
• Combine and create new sequences (children)

Problems here: naively splicing two parent tours can
repeat cities in the children, e.g.
City 1 repeated in Child 1
City 5 repeated in Child 2

Modifications Needed
• The algorithm must not allow repeated cities
• Also, tour order must be considered
– 12345 is the same tour as 32154
• Based on these considerations, a computer
model for N cities can be created
• It gets quite detailed
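One common way to meet these requirements is an order-preserving crossover. The sketch below uses order crossover (OX), which is one standard choice for permutation encodings, not necessarily the scheme the slides intend; it guarantees that no city is repeated in a child.

```python
import random

def order_crossover(p1, p2):
    # Copy a random slice of cities from parent 1, then fill the remaining
    # positions with parent 2's cities in their original order, so every
    # city appears exactly once in the child
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = iter(c for c in p2 if c not in child)
    return [c if c is not None else next(fill) for c in child]

def swap_mutation(tour):
    # Randomly switch two cities in an existing path
    i, j = random.sample(range(len(tour)), 2)
    tour = list(tour)
    tour[i], tour[j] = tour[j], tour[i]
    return tour

child = order_crossover([1, 2, 3, 4, 5], [3, 1, 4, 5, 2])
print(sorted(child))  # [1, 2, 3, 4, 5] -- every city exactly once
```

Both operators return valid permutations, so repeated-city children like those shown above cannot occur.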

Natural Language Processing

Developing programs to understand natural
language is important in AI because a natural form
of communication with systems is essential for user
acceptance. A program understands a natural
language if it behaves by taking a correct or
acceptable action in response to the input; for
example, a child demonstrates understanding
if it responds with the correct answer to a question.
An understanding of linguistics is not a prerequisite
to the study of natural language understanding, but
a familiarity with the basics of grammar is certainly
important.

We must understand how words and sentences are
combined to produce meaningful word strings
before we can expect to design successful language
understanding systems.
In a natural language, the sentence is the basic
language element. A sentence is made up of words
which express a complete thought.
Levels of knowledge used in language understanding

Several components of the natural language
understanding process are:
1. Prosody: It deals with the rhythm and intonation of
language.

2. Phonology: This is the study of relating sounds to
words. A phoneme is the smallest unit of sound.
Phonemes are aggregated into word sounds.

3. Morphology: This is the study of word construction
from basic units called morphemes. A morpheme is the
smallest unit of meaning; for example, friendly is
constructed from the root word friend and the suffix -ly.
Individual words are analyzed into their components, and
non-word tokens, such as punctuation, are separated from
the words.
4. Syntactic Analysis: Linear sequences of words are
transformed into structures that show how the words
relate to each other. Here the aim is to form grammatically
correct sentences in the language.

5. Semantic Analysis: This is concerned with the meanings
of words and phrases and how they combine to form
sentence meanings.
6. Pragmatic Analysis: The structure representing what
was said is reinterpreted to determine what was actually
meant. This is the high-level knowledge which relates to
the use of sentences in different contexts and how the
context affects the meaning of the sentences.

7. Discourse Integration or world knowledge: The
meaning of an individual sentence may depend on the
sentences that precede it and may influence the meanings
of the sentences that follow it.
Syntax analysis:
Rules for specifying a grammar are given by the
Chomsky hierarchy of classes of formal languages:
1. Regular languages:
- the grammar is defined using a finite state machine
(not powerful enough to represent the syntax of most
programming languages)
2. Context-free languages:
- rules contain only one nonterminal on the left-hand side
- regular languages are a proper subset of context-free
languages
- transition network parsers can be used to parse the class
of context-free languages
3. Context-sensitive languages:
- proper superset of context-free languages
- rules may contain more than one symbol on the left-hand
side of a rule, but the right-hand side must be at least as
long as the left-hand side
4. Recursively enumerable languages:
- superset of context-sensitive languages
- defined using unconstrained rules
- not useful in defining the syntax of natural languages
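As an illustration of the first class, a regular language can be recognized directly by a finite state machine. The toy language below, built from this lecture's vocabulary, is an assumption for illustration only; its transition table is not taken from the slides.

```python
# A finite state machine recognizing the toy regular language
# (a | the) (man | dog), e.g. "the dog" or "a man".
# The language and transition table are illustrative assumptions.
TRANSITIONS = {
    ("start", "a"): "art",
    ("start", "the"): "art",
    ("art", "man"): "accept",
    ("art", "dog"): "accept",
}

def accepts(words):
    # Follow transitions word by word; reject on any missing transition
    state = "start"
    for w in words:
        state = TRANSITIONS.get((state, w))
        if state is None:
            return False
    return state == "accept"

print(accepts("the dog".split()))  # True
print(accepts("dog the".split()))  # False
```

The machine has no memory beyond its current state, which is exactly why regular grammars cannot express nested constructions such as balanced clauses.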
Top-down parsing for a context-free grammar:

In this type of parsing, we start with the start symbol S
and rewrite it until a string of terminal symbols is reached.
Scan the string from left to right until a nonterminal is
reached, replace it using a rule, and repeat until no
nonterminals are left. If more than one rule exists for
replacing a particular nonterminal, any rule can be applied.

All terminal-only strings produced by the grammar are
well-formed sentences.
Bottom-up parsing

In this type of parsing, we start with the terminals. Scan
the string from left to right, replace terminals (and the
phrases built from them) with nonterminals using the
rules, and repeat until no terminals are left. If more than
one rule exists for a particular replacement, any rule can
be applied. When S is reached, the sentence is parsed.
Example: Consider the given context-free grammar:

S → NP VP
NP → Art N
NP → N
VP → V
VP → V NP
Art → a
Art → the
N → man
N → dog
V → bites
V → likes
Parse the sentence "the dog bites the man" using top-down
parsing.

S ⇒ NP VP
  ⇒ Art N VP
  ⇒ Art N V NP
  ⇒ Art N V Art N
  ⇒ the N V Art N
  ⇒ the dog V Art N
  ⇒ the dog bites Art N
  ⇒ the dog bites the N
  ⇒ the dog bites the man
Parse the sentence "the dog bites the man" using bottom-up
parsing.

the dog bites the man
  ⇒ the dog bites the N
  ⇒ the dog bites Art N
  ⇒ the dog V Art N
  ⇒ the N V Art N
  ⇒ Art N V Art N
  ⇒ Art N V NP
  ⇒ Art N VP
  ⇒ NP VP
  ⇒ S
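The top-down derivation above can be automated with a small parser that expands the leftmost nonterminal and backtracks when a choice fails. This recursive sketch is one possible implementation, not the transition-network parser mentioned earlier; the grammar is the one from the example.

```python
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Art", "N"], ["N"]],
    "VP":  [["V"], ["V", "NP"]],
    "Art": [["a"], ["the"]],
    "N":   [["man"], ["dog"]],
    "V":   [["bites"], ["likes"]],
}
TERMINALS = {"a", "the", "man", "dog", "bites", "likes"}

def parse(symbols, words):
    # Expand the leftmost nonterminal, trying each rule in turn and
    # backtracking on failure; succeed when symbols and words run out together
    if not symbols:
        return not words
    head, rest = symbols[0], symbols[1:]
    if head in TERMINALS:
        return bool(words) and words[0] == head and parse(rest, words[1:])
    return any(parse(rule + rest, words) for rule in GRAMMAR[head])

print(parse(["S"], "the dog bites the man".split()))  # True
print(parse(["S"], "dog the bites".split()))          # False
```

Backtracking handles the point noted earlier that more than one rule may apply to a nonterminal: each alternative is tried until one yields a complete derivation.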
Natural Language Applications

1. Story Understanding and Question Answering: We can
write a program using natural language processing
techniques that can read a story or other piece of natural
language text and answer questions about it.

2. A Database Front End: We can design a database front
end that translates a question in natural language into a
well-formed query in the database language.

3. An Information Extraction and Summarization System
for the Web: We can design an information extraction
system using natural language understanding that takes
unrestricted text as input and summarizes it with respect
to a pre-specified domain or topic of interest.

4. Using Learning Algorithms to Generalize Extracted
Information: Cardie, Mooney, and Nahm have suggested
that information extracted from text may be generalized
by machine learning algorithms and the result reused in
information extraction tasks.
