
AIT Important Questions

1) Explain the concept of biological neuron model with the help of a neat
diagram?

Answer:

The biological neural network consists of nerve cells (neurons), which are interconnected as shown in the figure below. The cell body of the neuron, which includes the neuron's nucleus, is where most of the neural computation takes place.
Neural activity passes from one neuron to another in the form of electrical triggers which travel from one cell to the other down the neuron's axon, by means of an electro-chemical process of voltage-gated ion exchange along the axon and of diffusion of neurotransmitter molecules through the membrane over the synaptic gap.

The axon can be viewed as a connection wire. However, the mechanism of signal flow is not electrical conduction but charge exchange transported by diffusion of ions. This transportation process moves along the neuron's cell, down the axon, and then through synaptic junctions at the end of the axon, via a very narrow synaptic space, to the dendrites and/or soma of the next neuron, at an average rate of 3 m/s, as in the figure given below.

It is important to note that not all interconnections are equally weighted. Some have a higher priority (a higher weight) than others. Also, some are excitatory and some are inhibitory (serving to block transmission of a message). These differences are effected by differences in chemistry and by the existence of chemical transmitter and modulating substances inside and near the neurons, the axons, and in the synaptic junctions.

2) Name the different learning methods and explain any one method of
supervised learning?

Answer:

Basic Learning rules

There are five basic learning rules:

1. Error-correction learning
2. Memory-based learning
3. Hebbian learning
4. Competitive learning
5. Boltzmann learning

1. Error-Correction Learning

Consider the simple case of a neuron k constituting the only computational node in the output layer of a feedforward neural network, as shown in the figure.

Neuron k is driven by a signal vector x(n) produced by one or more layers of hidden neurons, which are themselves driven by an input vector applied to the input layer of the neural network.

The output signal of neuron k is denoted by yk(n).

This output signal, representing the only output of the neural network, is
compared to a desired response or target output, denoted by dk(n).

An error signal, denoted by ek(n), is given by ek(n) = dk(n) − yk(n).

The error signal ek(n) actuates a control mechanism.

The corrective adjustments are designed to make the output signal yk(n)
come closer to the desired response dk(n) in a step-by-step manner.

This objective is achieved by minimizing a cost function or index of performance, E(n), defined in terms of the error signal ek(n) as

E(n) = (1/2) ek²(n)

where E(n) is the instantaneous value of the error energy.

The step-by-step adjustments to the synaptic weights of neuron k are continued until the system reaches a steady state (i.e., the synaptic weights are essentially stabilized). At that point the learning process is terminated.

The learning process is referred to as error-correction learning.
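As a concrete sketch of this process, the following Python fragment applies the error-correction (delta) rule to a single linear output neuron; the learning rate, the toy input vector, and the desired response are assumed example values, not taken from the text.

```python
import numpy as np

def error_correction_step(w, x, d, eta=0.1):
    """One step of error-correction learning for a single linear neuron k.

    w   : synaptic weight vector of neuron k
    x   : input signal vector x(n)
    d   : desired response d(n)
    eta : learning-rate parameter (assumed value)
    """
    y = np.dot(w, x)        # output signal y_k(n)
    e = d - y               # error signal e_k(n) = d_k(n) - y_k(n)
    w = w + eta * e * x     # corrective adjustment toward d(n)
    return w, 0.5 * e ** 2  # updated weights and error energy E(n)

# Repeat the step-by-step adjustments until the weights essentially stabilize.
w = np.zeros(2)
for _ in range(100):
    w, energy = error_correction_step(w, np.array([1.0, 0.5]), d=2.0)
print(w, energy)
```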

2. Memory-Based Learning

In memory-based learning, all or most of the past experiences are explicitly stored in a large memory of correctly classified input-output examples {(xi, di)}, i = 1, ..., N, where xi denotes an input vector and di denotes the corresponding desired response.

All memory-based learning algorithms involve two essential ingredients:

Criterion used for defining the local neighborhood of the test vector xtest.
Learning rule applied to the training examples in the local neighborhood of
xtest.

A simple yet effective type of memory-based learning is the nearest neighbor rule, in which the test vector is assigned the class of the stored example that lies closest to it.
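A minimal sketch of the nearest neighbor rule, assuming Euclidean distance as the criterion that defines the local neighborhood of xtest (the stored examples below are made-up data):

```python
import numpy as np

def nearest_neighbor(memory, x_test):
    """Classify x_test by the nearest stored example.

    memory : list of (x_i, d_i) pairs of correctly classified examples
    x_test : the test vector
    """
    # Criterion: Euclidean distance defines the local neighborhood.
    distances = [np.linalg.norm(x_test - x) for x, _ in memory]
    # Learning rule: return the desired response of the closest example.
    return memory[int(np.argmin(distances))][1]

memory = [(np.array([0.0, 0.0]), 'class A'),
          (np.array([1.0, 1.0]), 'class B')]
print(nearest_neighbor(memory, np.array([0.9, 0.8])))  # -> 'class B'
```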

3. Hebbian Learning

Hebb's postulate of learning is the oldest and most famous of all learning
rules; it is named in honor of the neuropsychologist Hebb (1949). According to Hebb

When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic changes take place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

The above rule can be reframed as follows:

If two neurons on either side of a synapse (connection) are activated simultaneously (i.e., synchronously), then the strength of that synapse is selectively increased.
If two neurons on either side of a synapse are activated asynchronously, then
that synapse is selectively weakened or eliminated. Such a synapse is called a
Hebbian synapse.
A Hebbian synapse is a synapse that uses a time-dependent, highly local, and strongly interactive mechanism to increase synaptic efficiency as a function of the correlation between the presynaptic and postsynaptic activities.

Hebb's hypothesis

The simplest form of Hebbian learning is described by

Δwkj(n) = η yk(n) xj(n)

where η is a positive constant that determines the rate of learning.

The above equation clearly emphasizes the correlational nature of a Hebbian synapse. It is sometimes referred to as the activity product rule.
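A sketch of the activity product rule in code; the learning rate η and the signal values below are assumed for illustration:

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.01):
    """Hebb's rule: strengthen w_kj in proportion to the correlation
    between presynaptic activity x_j and postsynaptic activity y_k."""
    return w + eta * y * x          # delta_w_kj = eta * y_k * x_j

w = np.array([0.2, -0.1])
x = np.array([1.0, 0.5])            # presynaptic signals x_j(n)
y = float(np.dot(w, x))             # postsynaptic activity y_k(n)
w = hebbian_update(w, x, y)
print(w)
```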

4. Competitive Learning

The output neurons of a neural network compete among themselves to become active (fired).

There are three basic elements to a competitive learning rule.

A set of neurons that are all the same except for some randomly
distributed synaptic weights, and which therefore respond differently to a
given set of input patterns.
A limit imposed on the "strength" of each neuron.
A mechanism that permits the neurons to compete for the right to respond
to a given subset of inputs, such that only one output neuron, or only one
neuron per group, is active (i.e., "on") at a time. The neuron that wins the
competition is called a winner-takes-all neuron.

Accordingly, the individual neurons of the network learn to specialize on ensembles of similar patterns; in so doing they become feature detectors for different classes of input patterns.

In the simplest form of competitive learning, the neural network has a single layer of output neurons, each of which is fully connected to the input nodes. The network may include feedback connections among the neurons, as indicated in the figure below. In the network architecture described here, the feedback connections perform lateral inhibition, with each neuron tending to inhibit the neuron to which it is laterally connected. In contrast, the feedforward synaptic connections in the network are all excitatory.
For a neuron k to be the winning neuron, its induced local field vk for a specified input pattern x must be the largest among all the neurons in the network. The output signal yk of winning neuron k is set equal to one; the output signals of all the neurons that lose the competition are set equal to zero:

yk = 1 if vk > vj for all j, j ≠ k; otherwise yk = 0,

where the induced local field vk represents the combined action of all the forward and feedback inputs to neuron k.

Let wkj denote the synaptic weight connecting input node j to neuron k. Suppose that each neuron is allotted a fixed amount of synaptic weight (i.e., all synaptic weights are positive), which is distributed among its input nodes; that is, Σj wkj = 1 for all k.
A neuron then learns by shifting synaptic weights from its inactive to active
input nodes. If a neuron does not respond to a particular input pattern, no learning
takes place in that neuron. If a particular neuron wins the competition, each input
node of that neuron relinquishes some proportion of its synaptic weight, and the
weight relinquished is then distributed equally among the active input nodes.
According to the standard competitive learning rule, the change Δwkj applied to synaptic weight wkj is defined by

Δwkj = η (xj − wkj) if neuron k wins the competition, and Δwkj = 0 if neuron k loses,

where η is the learning-rate parameter. This rule has the overall effect of moving the synaptic weight vector wk of winning neuron k toward the input pattern x.
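A sketch of the winner-takes-all update described above; the network size, learning rate, and row-normalization step are example assumptions:

```python
import numpy as np

def competitive_step(W, x, eta=0.1):
    """One step of competitive learning.

    W : weight matrix, one row of synaptic weights per output neuron
    x : input pattern
    """
    v = W @ x                        # induced local fields v_k
    k = int(np.argmax(v))            # winner-takes-all neuron
    W[k] += eta * (x - W[k])         # delta_w_kj = eta (x_j - w_kj), winner only
    return W, k                      # losing neurons are left unchanged

W = np.random.rand(3, 2)             # 3 output neurons, 2 inputs
W /= W.sum(axis=1, keepdims=True)    # fixed amount of weight per neuron
W, winner = competitive_step(W, np.array([0.9, 0.1]))
print(winner, W)
```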

5. Boltzmann Learning

The Boltzmann learning rule, named in honor of Ludwig Boltzmann, is a stochastic learning algorithm derived from ideas rooted in statistical mechanics. A neural network designed on the basis of the Boltzmann learning rule is called a Boltzmann machine.

In a Boltzmann machine the neurons constitute a recurrent structure, and they operate in a binary manner: they are either in an "on" state denoted by +1 or in an "off" state denoted by −1. The machine is characterized by an energy function, E, the value of which is determined by the particular states occupied by the individual neurons of the machine, as shown by

E = −(1/2) Σj Σk wkj xk xj, with j ≠ k,

where xj is the state of neuron j, and wkj is the synaptic weight connecting neuron j to neuron k. The condition j ≠ k means simply that none of the neurons in the machine has self-feedback. The machine operates by choosing a neuron at random, say neuron k, at some step of the learning process, and then flipping the state of neuron k from xk to −xk at some temperature T with probability

P(xk → −xk) = 1 / (1 + exp(−ΔEk / T))

where ΔEk is the energy change (i.e., the change in the energy function of the machine) resulting from such a flip. Notice that T is not a physical temperature, but rather a pseudo-temperature, as explained in Chapter 1. If this rule is applied repeatedly, the machine will reach thermal equilibrium.

The neurons of a Boltzmann machine partition into two functional groups: visible and hidden. The visible neurons provide an interface between the network and the environment in which it operates, whereas the hidden neurons always operate freely. There are two modes of operation.

Clamped condition, in which the visible neurons are all clamped onto specific
states determined by the environment.
Free-running condition, in which all the neurons (visible and hidden) are
allowed to operate freely.

According to the Boltzmann learning rule, the change Δwkj applied to the synaptic weight wkj from neuron j to neuron k is defined by

Δwkj = η (ρkj⁺ − ρkj⁻), j ≠ k,

where η is a learning-rate parameter, ρkj⁺ is the correlation between the states of neurons j and k in the clamped condition, and ρkj⁻ is the corresponding correlation in the free-running condition. Note that both ρkj⁺ and ρkj⁻ range in value from −1 to +1.
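The operation of the machine at a fixed temperature can be sketched as follows; the two-neuron weight matrix and temperature are assumed example values, and the sign convention for ΔEk is chosen so that energy-lowering flips are favored:

```python
import numpy as np

def energy(W, x):
    """E = -(1/2) * sum over j != k of w_kj * x_k * x_j (W has zero diagonal)."""
    return -0.5 * x @ W @ x

def flip_step(W, x, T=1.0, rng=np.random.default_rng(0)):
    """Choose a neuron at random and flip it with probability 1/(1 + exp(-dE/T))."""
    k = rng.integers(len(x))
    x_new = x.copy()
    x_new[k] = -x_new[k]                     # flip x_k to -x_k
    dE = energy(W, x) - energy(W, x_new)     # energy drop produced by the flip
    if rng.random() < 1.0 / (1.0 + np.exp(-dE / T)):
        return x_new                         # flip accepted
    return x                                 # flip rejected

W = np.array([[0.0, 0.5],
              [0.5, 0.0]])                   # symmetric weights, no self-feedback
x = np.array([1.0, -1.0])                    # binary states +1 ("on") / -1 ("off")
for _ in range(20):
    x = flip_step(W, x)                      # repeated flips approach equilibrium
print(x)
```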

3) Develop McCulloch-Pitts neuron model to implement AND, OR logics for 2 inputs?

Answer:
The AND function gives the response "true" if both input values are "true"; otherwise the response is "false." If we represent "true" by the value 1 and "false" by 0, this gives the following four training input, target output pairs: (1, 1) → 1; (1, 0) → 0; (0, 1) → 0; (0, 0) → 0.

The OR function gives the response "true" if either of the input values is "true"; otherwise the response is "false." This is the "inclusive or," since both input values may be "true" and the response is still "true." Representing "true" as 1 and "false" as 0, we have the following four training input, target output pairs: (1, 1) → 1; (1, 0) → 1; (0, 1) → 1; (0, 0) → 0.
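Both gates can be realized by a McCulloch-Pitts neuron with weights w1 = w2 = 1 and a suitable threshold θ (θ = 2 for AND, θ = 1 for OR); these weight/threshold values are one common choice, sketched below:

```python
def mcp_neuron(x1, x2, w1, w2, theta):
    """McCulloch-Pitts neuron: fire (output 1) if the weighted sum
    of the inputs reaches the threshold theta, else output 0."""
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2,
              "AND:", mcp_neuron(x1, x2, 1, 1, theta=2),
              "OR:", mcp_neuron(x1, x2, 1, 1, theta=1))
```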

4. Name different activation functions used in neural networks and explain them?

Answer:

The different neural networks use a considerable number of different activation functions. Here we shall introduce some of the most common ones, and indicate which neural networks employ them.

1. Sign, or Threshold Function
2. Piecewise-linear Function
3. Linear Function
4. Sigmoid Function
5. Hyperbolic tangent
6. Gaussian functions
7. Spline functions

1. Sign, or Threshold Function

For this type of activation function we have (in its sign form):

f(x) = +1 if x ≥ 0, and f(x) = −1 if x < 0.

This activation function is represented in the figure.

2. Piecewise-linear Function

This activation function is a ramp that saturates at its extremes; one common form is:

f(x) = 1 if x ≥ +1/2; f(x) = x + 1/2 if −1/2 < x < +1/2; f(x) = 0 if x ≤ −1/2.

3. Linear Function

This is the simplest activation function. It is given simply by:

f(x) = x.

A linear function is illustrated in the figure.


4. Sigmoid Function

This is by far the most common activation function. It is given by:

f(x) = 1 / (1 + e^(−ax)), where a is a slope parameter.

It is represented in the following figure.


5. Hyperbolic tangent

Sometimes this function is used instead of the original sigmoid function. It is defined as:

f(x) = tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)).

It is represented in the following figure.

6. Gaussian functions

This type of activation function is commonly employed in RBF (radial basis function) networks. The activation function can be denoted as:

f(x) = exp(−(x − c)² / (2σ²)), where c is the center and σ the width of the basis function.
7. Spline functions

These functions, as the name indicates, are found in B-spline networks. The figure illustrates univariate basis functions of orders o = 1 to 4. For all graphs, the same point x = 2.5 is marked, so that it is clear how many functions are active for that particular point, depending on the order of the spline.
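The functions listed above can be collected in a short sketch; the slope, center, and width parameters are assumed example values:

```python
import numpy as np

def threshold(x):            # sign/threshold function
    return np.where(x >= 0.0, 1.0, -1.0)

def piecewise_linear(x):     # ramp saturating at 0 and 1
    return np.clip(x + 0.5, 0.0, 1.0)

def linear(x):               # the simplest activation: identity
    return x

def sigmoid(x, a=1.0):       # logistic sigmoid with slope parameter a
    return 1.0 / (1.0 + np.exp(-a * x))

def hyperbolic_tangent(x):
    return np.tanh(x)

def gaussian(x, c=0.0, sigma=1.0):   # used in RBF networks
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

x = np.linspace(-2.0, 2.0, 5)
print(sigmoid(x), hyperbolic_tangent(x))
```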
5) Name different architectures of neural networks and explain them with the help of a neat diagram?

Answer:

According to the flow of the signals within an ANN, we can divide the
architectures into feedforward networks, if the signals flow just from input to
output, or recurrent networks, if loops are allowed. Another possible classification is
dependent on the existence of hidden neurons, i.e., neurons which are not input nor
output neurons. If there are hidden neurons, we denote the network as a multilayer
NN, otherwise the network can be called a singlelayer NN. Finally, if every neuron
in one layer is connected with the layer immediately above, the network is called
fully connected. If not, we speak about a partially connected network.

1. Singlelayer feedforward network

The simplest form of an ANN is represented in the figure below. On the left there is the input layer, which is nothing but a buffer and therefore does not implement any processing. The signals flow to the right through the synapses or weights, arriving at the output layer, where computation is performed.

2. Multilayer feedforward network

In this case there are one or more hidden layers. The output of each layer constitutes the input to the layer immediately above. For instance, an ANN [5, 4, 4, 1] has 5 neurons in the input layer, two hidden layers with 4 neurons each, and one neuron in the output layer.
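A sketch of a forward pass through such an ANN [5, 4, 4, 1], assuming random weights and sigmoid activations (both are assumptions made for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(layers, x):
    """Propagate x through a fully connected feedforward network.

    layers : list of (W, b) pairs, one per processing layer
    """
    for W, b in layers:
        x = sigmoid(W @ x + b)   # output of each layer feeds the next
    return x

sizes = [5, 4, 4, 1]             # input layer, two hidden layers, output
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
print(forward(layers, rng.standard_normal(5)))
```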
3. Recurrent networks

Recurrent networks are those where there are feedback loops. Notice that any feedforward network can be transformed into a recurrent network just by introducing a delay and feeding back this delayed signal to one input, as represented in the figure.

6) What is meant by perceptron model and how can a perceptron model be classified?

Answer:

The Perceptron Model

The Perceptron model is the simplest type of neural network, developed by Frank Rosenblatt in 1962. This type of simple network is rarely used now, but it is significant in terms of its historical contribution to neural networks. A very simple form of the Perceptron model is shown in the figure below. It is very similar to the MCP model discussed in the last section. It has more than one input connected to a node that sums the linear combination of those inputs. The resulting sum then goes through a hard limiter, which produces an output of +1 if the input to the hard limiter is positive, and −1 if the input is negative. It was first developed to classify a set of externally applied inputs into two classes, C1 or C2, with an output of +1 signifying C1 and −1 signifying C2.

A 2-layer Perceptron network can be used for classifying linearly non-separable problems. First, we consider the classification regions in a 2-dimensional space. Fig. 2 shows the 2-D input space in which a line is used to separate the two classes C1 and C2. Taking the boundary line as w1x1 + w2x2 + b = 0 (an assumed labeling of the figure), the region below or on the right of the line is where w1x1 + w2x2 + b < 0 (class C2); thus the region above or on the left of the line is where w1x1 + w2x2 + b ≥ 0 (class C1).

7. What do you mean by knowledge representation? Where is it used in neural networks?

Answer:

Knowledge is the information about a domain that can be used to solve problems in that domain. Solving many problems requires much knowledge, and this knowledge must be represented in the computer. As part of designing a program to solve problems, we must define how the knowledge will be represented.

Figure: Role of representations in solving Problems.

A representation scheme is the form of the knowledge that is used in an agent. A representation of some piece of knowledge is the internal representation of that knowledge.
A representation scheme specifies the form of the knowledge.
A knowledge base is the representation of all of the knowledge that is stored by
an agent.
The general framework for solving problems by computer is given in the figure above. Once you have some requirements on the nature of a solution, you must represent the problem so a computer can solve it.
Computers and human minds are examples of physical symbol systems.
A symbol is a meaningful pattern that can be manipulated.
Examples of symbols are written words, sentences, gestures, marks on paper, or
sequences of bits.
A symbol system creates copies, modifies, and destroys symbols.
Essentially, a symbol is one of the patterns manipulated as a unit by a symbol
system.
The term physical is used, because symbols in a physical symbol system are
physical objects that are part of the real world, even though they may be
internal to computers and brains.
They may also need to physically affect action or motor control.
8. How do you differentiate between learning process and learning tasks?

Answer:

Learning is the ability of an agent to improve its behavior based on experience. This could mean the following:

The range of behaviors is expanded; the agent can do more.
The accuracy on tasks is improved; the agent can do things better.
The speed is improved; the agent can do things faster.

The ability to learn is essential to any intelligent agent. As Euripides pointed out, learning involves an agent remembering its past in a way that is useful for its future.

Machine learning tasks are typically classified into three broad categories,
depending on the nature of the learning "signal" or "feedback" available to a
learning system.
They are

1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning

Supervised Learning

The machine is presented with examples of inputs and their desired outputs, given by a teacher, and the goal of learning is to learn the general rule that maps inputs to outputs.

Unsupervised Learning

No labels are given to the learning system; the learning system has to find structure in its input on its own. Unsupervised learning can be a goal in itself or a means towards an end.

Reinforcement Learning

The machine program continuously interacts with a dynamic environment in which it must achieve a certain goal, without a teacher explicitly telling it whether it has come close to its goal or not.

Example: driving a car or a vehicle, or learning to play a game by playing against an opponent.
9. In what way are humans better than computers? Explain.

Answer:

Humans and computers behave in completely different manners in information processing, analysis, and decision making.
Humans use knowledge in the form of rules of thumb or heuristics to solve problems in a narrow domain, whereas computers process data and use algorithms, a series of well-defined operations, to solve general numerical problems.
In a human brain, knowledge exists in a compiled form, whereas in computers it is not possible to separate knowledge from the control structure that processes this knowledge.
Human beings are capable of explaining a line of reasoning and providing the details, while computers do not explain how a particular result was obtained and why input data was needed.
Humans use inexact reasoning and can deal with incomplete, uncertain and fuzzy information, whereas computers work only on problems where data is complete and exact.
Humans can make mistakes when information is incomplete or fuzzy; computers provide no solution at all, or a wrong one, when data is incomplete or fuzzy.
Humans enhance the quality of problem solving via years of learning and practical training; this process is slow, inefficient and expensive. Computers enhance the quality of problem solving by changing the program code, which affects both the knowledge and its processing, making changes difficult.
From the points listed above we can clearly say that humans are better than computers.
Neural Networks and Fuzzy Logic
UNIT VI Fuzzy Sets

Introduction to classical sets: properties, operations, and relations

Consider a universe of discourse, X, as a collection of objects all having the same characteristics. The individual elements in the universe X will be denoted as x. The features of the elements in X can be discrete, countable integers, or continuous valued quantities on the real line.

Examples of elements of various universes might be as follows:

The clock speeds of computer CPUs;
The operating currents of an electronic motor;
The operating temperature of a heat pump (in degrees Celsius);
The operating temperature of a heat pump (in degrees Celsius);
The Richter magnitudes of an earthquake;
The integers 1 to 10.

A useful attribute of sets and the universes on which they are defined is a metric
known as the cardinality, or the cardinal number.
The total number of elements in a universe X is called its cardinal number, denoted nX, where x again is a label for individual elements in the universe. Discrete universes that are composed of a countably finite collection of elements will have a finite cardinal number; continuous universes comprising an infinite collection of elements will have an infinite cardinality.
Collections of elements within a universe are called sets, and collections of
elements within sets are called subsets. Sets and subsets are terms that are often used
synonymously, since any set is also a subset of the universal set X.
The collection of all possible sets in the universe is called the power set (defined below).

Example

For crisp sets A and B consisting of collections of some elements in X, the following notation is defined:

x ∈ X ⇒ x belongs to X
x ∈ A ⇒ x belongs to A
x ∉ A ⇒ x does not belong to A

For sets A and B on X, we also have

A ⊂ B ⇒ A is fully contained in B (if x ∈ A, then x ∈ B)
A ⊆ B ⇒ A is contained in or is equivalent to B
A ↔ B ⇒ A ⊂ B and B ⊂ A (A is equivalent to B)

The null set, ∅, is defined as the set containing no elements, and the whole set, X, as the set of all elements in the universe.
All possible sets of X constitute a special set called the power set, denoted P(X).
The cardinality of the power set, denoted nP(X), is found as nP(X) = 2^nX.
If the cardinality of the universe is infinite, then the cardinality of the power set is also infinite.
Operations on Classical Sets

Let A and B be two sets on the universe X.


Union:
The union of the two sets, denoted A ∪ B, represents all those elements in the universe that reside in (or belong to) the set A, the set B, or both sets A and B. (This operation is also called the logical or; another form of the union is the exclusive-or operation.)
Intersection
The intersection of the two sets, denoted A ∩ B, represents all those elements in the universe X that simultaneously reside in (or belong to) both sets A and B.
Complement of a set
The complement of a set A, denoted A′, is defined as the collection of all elements in the universe that do not reside in the set A.
Difference of a set
The difference of a set A with respect to B, denoted A | B, is defined as the collection of all elements in the universe that reside in A and do not simultaneously reside in B.

Union: A ∪ B = {x | x ∈ A or x ∈ B}
Intersection: A ∩ B = {x | x ∈ A and x ∈ B}
Complement: A′ = {x | x ∉ A, x ∈ X}
Difference: A | B = {x | x ∈ A and x ∉ B}

These four operations are shown in terms of Venn diagrams as

Union of sets A and B (logical or)

Intersection of sets A and B.

Complement of set A.
Difference operation A | B.
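For discrete universes, Python's built-in set type mirrors these four operations directly; the universe (the integers 1 to 10) and the sets A and B below are example values:

```python
X = set(range(1, 11))    # universe: the integers 1 to 10
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

print(A | B)             # union:        {x | x in A or x in B}
print(A & B)             # intersection: {x | x in A and x in B}
print(X - A)             # complement of A with respect to X
print(A - B)             # difference A | B: in A and not in B
```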

Properties of Classical or Crisp Sets

Another name for classical sets is crisp sets.

The most appropriate properties for defining classical sets and showing their similarity
to fuzzy sets are:

Commutativity: A ∪ B = B ∪ A; A ∩ B = B ∩ A
Associativity: A ∪ (B ∪ C) = (A ∪ B) ∪ C; A ∩ (B ∩ C) = (A ∩ B) ∩ C
Distributivity: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C); A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Idempotency: A ∪ A = A; A ∩ A = A
Identity: A ∪ ∅ = A; A ∩ X = A; A ∩ ∅ = ∅; A ∪ X = X
Transitivity: If A ⊆ B and B ⊆ C, then A ⊆ C
Involution: (A′)′ = A (where the prime denotes the complement)

Two special properties of set operations are known as the excluded middle axioms and De Morgan's principles. These properties are enumerated here for two sets A and B. The excluded middle axioms are very important because these are the only set operations that are not valid for both classical sets and fuzzy sets.

The first, called the axiom of the excluded middle, deals with the union of a set A and its complement:
A ∪ A′ = X
The second, called the axiom of contradiction, represents the intersection of a set A and its complement:
A ∩ A′ = ∅

De Morgan's principles:
(A ∩ B)′ = A′ ∪ B′
(A ∪ B)′ = A′ ∩ B′
De Morgan's principles can be stated for n sets as well:
(A1 ∩ A2 ∩ ... ∩ An)′ = A1′ ∪ A2′ ∪ ... ∪ An′
(A1 ∪ A2 ∪ ... ∪ An)′ = A1′ ∩ A2′ ∩ ... ∩ An′
Mapping of Classical Sets to Functions

Suppose X and Y are two different universes of discourse (information). If an element x contained in X corresponds to an element y contained in Y, this is generally termed a mapping from X to Y, or f : X → Y. As a mapping, the characteristic (indicator) function χA(x) is defined as

χA(x) = 1 if x ∈ A, and χA(x) = 0 if x ∉ A,

where χA expresses membership in set A for the elements x in the universe.

Fuzzy Sets

In classical, or crisp, sets the transition for an element in the universe between membership and nonmembership in a given set is abrupt and well defined (said to be crisp).
A fuzzy set is a set containing elements that have varying degrees of membership in the set.
Elements of a fuzzy set are mapped to a universe of membership values using a function-theoretic form.
Fuzzy sets are denoted in this text by a set symbol with a tilde understrike; A~ would be the fuzzy set A. The membership function maps elements of a fuzzy set to a real numbered value on the interval 0 to 1.
If an element in the universe, say x, is a member of fuzzy set A~, then this mapping is given by μA(x) ∈ [0, 1].
When the universe, X, is continuous and infinite, the fuzzy set A~ is denoted as

A~ = { ∫ μA(x) / x }

The numerator in each term is the membership value in set A~ associated with the element of the universe indicated in the denominator.

Fuzzy Set Operations

A fuzzy set is mapped to a real numbered value in the interval 0 to 1. If an element of the universe, say x, is a member of fuzzy set A~, then the mapping is given by μA(x) ∈ [0, 1]. Consider three fuzzy sets A~, B~, and C~ on the universe X. For a given element x of the universe, the function-theoretic operations for the set-theoretic operations union, intersection, and complement are defined for A~, B~, and C~ on X as follows.

Union:

The union of these two sets in function-theoretic form is given as

μ(A∪B)(x) = μA(x) ∨ μB(x)

Here ∨ indicates the maximum operator.

Intersection

The intersection of the two sets in function-theoretic form is given as

μ(A∩B)(x) = μA(x) ∧ μB(x)

Here ∧ indicates the minimum operator.

Complement

The complement of a single set on universe X, say A~, is given by

μĀ(x) = 1 − μA(x)
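For discrete fuzzy sets stored as arrays of membership grades, the three operations reduce to elementwise max, min, and complement, as in this sketch (the membership values are made-up data):

```python
import numpy as np

# Membership grades of fuzzy sets A and B over the same universe
mu_A = np.array([0.2, 0.5, 1.0, 0.3])
mu_B = np.array([0.6, 0.4, 0.7, 0.0])

union        = np.maximum(mu_A, mu_B)   # mu_{A union B}(x) = max(mu_A, mu_B)
intersection = np.minimum(mu_A, mu_B)   # mu_{A inter B}(x) = min(mu_A, mu_B)
complement_A = 1.0 - mu_A               # mu_{not A}(x)     = 1 - mu_A(x)

print(union, intersection, complement_A)
```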

Properties of Fuzzy Sets

Fuzzy sets follow the same properties as crisp sets. Because of this fact and because the
membership values of a crisp set are a subset of the interval [0, 1], classical sets can be
thought of as a special case of fuzzy sets.
The properties of fuzzy sets include commutativity, associativity, distributivity, idempotency, identity, transitivity, involution, and De Morgan's principles; the excluded middle axioms, however, do not hold for fuzzy sets.
Classical Relations and Fuzzy Relations

Relations represent the mapping of the sets. Relations are intimately involved in logic,
approximate reasoning, classification, rule-based systems, pattern recognition, and control.
In the case of a crisp relation there are only two degrees of relationship between the elements of the sets, i.e., completely related and not related. A crisp relation represents the presence or absence of association, interaction, or interconnectedness between the elements of two or more sets.
Fuzzy relations allow an infinite number of degrees of relationship between the extremes of completely related and not related for the elements of the two or more sets considered. Degrees of association can be represented by membership grades in a fuzzy relation in the same way as degrees of set membership are represented in the fuzzy set.
Crisp set can be viewed as a restricted case of the more general fuzzy set concept.

Cartesian product of Crisp Relation

An ordered sequence of n elements is called an ordered n-tuple. The ordered sequence is written in the form (a1, a2, ..., an). An unordered sequence is a collection of n elements without restrictions on the order. The n-tuple is called an ordered pair when n = 2. For the crisp sets A1, A2, ..., An, the set of all n-tuples (a1, a2, ..., an), where a1 ∈ A1, a2 ∈ A2, ..., an ∈ An, is called the Cartesian product of A1, A2, ..., An.
The Cartesian product is denoted by A1 × A2 × ... × An. In the Cartesian product X × Y, the first element in each pair is a member of X and the second element is a member of Y; formally,
X × Y = {(x, y) | x ∈ X and y ∈ Y},
and if X ≠ Y then X × Y ≠ Y × X.
If all the An are identical and equal to A, then the Cartesian product A1 × A2 × ... × An becomes A^n.

Cardinality of Crisp Relation

Suppose n elements of the universe X are related to m elements of the universe Y. If the cardinality of X is nX and the cardinality of Y is nY, then the cardinality of the relation R between these two universes is nX×Y = nX · nY. The cardinality of the power set describing this relation, P(X × Y), is then nP(X×Y) = 2^(nX nY).
Fuzzy Cartesian product and Composition

Let A~ be a fuzzy set on universe X and B~ be a fuzzy set on universe Y; then the Cartesian product between the fuzzy sets A~ and B~ will result in a fuzzy relation R~, which is contained within the full Cartesian product space:

A~ × B~ = R~ ⊂ X × Y

where the fuzzy relation R~ has membership function μR(x, y) = min(μA(x), μB(y)).

Features of Membership Function

Membership function describes the information contained in a fuzzy set and is useful to
develop a lexicon of terms to describe various special features of this function.
For purposes of simplicity, the functions shown in the figures will all be continuous, but
the terms apply equally for both discrete and continuous fuzzy sets.
The sketch of a membership function consists of 3 regions
1. Core
2. Support
3. Boundary

Core
The core of a membership function for some fuzzy set A~ is defined as that region of the universe that is characterized by complete and full membership in the set A~. That is, the core comprises those elements x of the universe such that μA(x) = 1.
Support

The support is the region of the universe that is characterized by nonzero membership in the set A~. That is, the support comprises those elements x of the universe such that μA(x) > 0.

Boundaries

The boundaries of a membership function for some fuzzy set A~ are defined as that region of the universe containing elements that have a nonzero membership but not complete membership. The boundaries comprise those elements x of the universe such that 0 < μA(x) < 1. These elements of the universe are those with some degree of fuzziness, or only partial membership, in the fuzzy set A~.
Crossover point

The crossover points of a membership function are the elements in the universe whose membership value is equal to 0.5, i.e., μA(x) = 0.5.
Height

The height of the fuzzy set is the maximum value of the membership function, max(μA(x)). The membership functions can be symmetrical or asymmetrical. Membership values are between 0 and 1.

Based on the membership functions fuzzy sets are classified into two categories

Normal fuzzy set: If the membership function has at least one element in the universe whose membership value is equal to 1, then that set is called a normal fuzzy set.
Subnormal fuzzy set: If the membership function has all membership values less than 1, then that set is called a subnormal fuzzy set.

Convex fuzzy set: If the membership values are monotonically increasing, or monotonically decreasing, or monotonically increasing and then decreasing with increasing values of the elements in the universe, the fuzzy set is called a convex fuzzy set.

Nonconvex fuzzy set: If the membership values are not strictly monotonically increasing, or monotonically decreasing, or monotonically increasing and then decreasing with increasing values of the elements in the universe, then the set is called a nonconvex fuzzy set.
The membership functions can have
different shapes like triangle, trapezoidal,
Gaussian, etc.
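For a discrete fuzzy set, the core, support, boundaries, crossover points, and height can be read directly off the membership grades, as in this sketch (the universe and grades are made-up data):

```python
import numpy as np

x  = np.array([1, 2, 3, 4, 5])            # elements of the universe
mu = np.array([0.0, 0.5, 1.0, 0.5, 0.2])  # membership grades mu_A(x)

core      = x[mu == 1.0]                  # complete membership
support   = x[mu > 0.0]                   # nonzero membership
boundary  = x[(mu > 0.0) & (mu < 1.0)]    # partial membership
crossover = x[mu == 0.5]                  # membership exactly 0.5
height    = mu.max()                      # maximum membership value

print(core, support, boundary, crossover, height)
# This set is normal, since its height equals 1.
```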
UNIT - VII
Fuzzy Logic System Components

Fuzzification: Fuzzification is the process whereby crisp quantities are converted to fuzzy ones (crisp to fuzzy). By identifying some of the uncertainties present in the crisp values, fuzzy values are formed.
The conversion to fuzzy values is represented by membership functions. Thus the fuzzification process involves assigning membership values to the given crisp quantities.
In the real world, hardware such as a digital voltmeter generates crisp data, but these data are subject to experimental error. The representation of imprecise data as fuzzy sets is a useful but not mandatory step when those data are used in fuzzy systems.
For representing this kind of data, the data is considered crisp, and the crisp data is compared with a fuzzy set.

Membership Value Assignments

There are various methods to assign the membership values or the membership
functions to fuzzy variables.
Intuition
Inference,
Rank ordering,
Angular fuzzy sets,
Neural networks,
Genetic algorithms, and
Inductive reasoning

Intuition

Intuition is based on the human's own intelligence and understanding to develop the membership functions. Thorough knowledge of the problem has to be known, and the knowledge regarding the linguistic variable should also be known.
For example, consider the speed of a DC motor. The shape of the universe of speed, given in rpm, is shown in the figure. The curves represent membership functions corresponding to various fuzzy variables. The range of speed is split into low, medium, and high. The curves differentiate the ranges, as judged by humans. The placement of the curves over the universe of discourse is approximate; the number of curves and the overlapping of curves are important criteria to be considered while defining membership functions.

Inference

This method involves using knowledge to perform deductive reasoning. The membership function is formed from the known facts and knowledge.
Rank Ordering

The polling concept is used to assign membership values by the rank ordering process. Preferences are obtained by pairwise comparisons, and from these the ordering of the membership is done.

Angular Fuzzy Sets

The angular fuzzy sets are different from the standard fuzzy sets in their coordinate description. These sets are defined on the universe of angles, and hence are shapes repeating every 2π cycles. Angular fuzzy sets are applied in the quantitative description of linguistic variables known as truth-values. When a membership value of 1 is true and a value of 0 is false, then a value in between 0 and 1 is partially true or partially false.

Neural Networks

The fuzzy membership function may be created for fuzzy classes of an input data set. For a given problem the number of input data values is selected. The data is then divided into a training data set and a testing data set. The training data set is used to train the network. After the full training and testing process is completed, the neural network is ready, and it can be used to determine the membership values of any input data in the different regions.

Genetic Algorithm

Genetic algorithm (GA) uses the concept of Darwin's theory of evolution. Darwin's theory is based on the rule of "survival of the fittest".

The steps involved in computing membership functions using GA are:


1. For the given functional mapping of a system, some membership functions and their
shapes are assumed for various fuzzy variables to be defined.
2. These membership functions are then coded as bit strings.
3. These bit strings are then concatenated (joined).
4. Similar to activation function in neural networks, GA has a fitness function.
5. This fitness function is used to evaluate the fitness of each set of membership functions.
6. These membership functions are the parameters that define that functional mapping of
the system

Inductive Reasoning

The membership can also be generated by the characteristics of inductive reasoning.


The induction is performed by the entropy minimization principle, which clusters the
parameters corresponding to the output classes.

Formation of Rules

For any linguistic variable, there are three general forms in which the canonical rules
can be formed.
(1) Assignment statements
(2) Conditional statements
(3) Unconditional statements
Assignment statements
These statements are those in which a variable is assigned a value. The variable and the value assigned are combined by the assignment operator "=". The assignment statements are necessary in forming fuzzy rules. The value to be assigned may be a linguistic term. The assignment statement is found to restrict the value of a variable to a specific equality.

Conditional statements
In these statements, some specific conditions are mentioned; if the conditions are satisfied, then the statements that follow, called restrictions, apply.

Unconditional statements
There is no specific condition that has to be satisfied in this form of statements.

Fuzzy (Rule-Based) Systems

In the field of artificial intelligence (machine intelligence), there are various ways to
represent knowledge. Perhaps the most common way to represent human knowledge is to
form it into natural language expressions of the type
IF premise (antecedent), THEN conclusion (consequent).
The form given in the above expression is commonly referred to as the IF-THEN rule-based form; this form is generally referred to as the deductive form. It typically expresses an inference such that if we know a fact (premise, hypothesis, antecedent), then we can infer, or derive, another fact called a conclusion (consequent).
This form of knowledge representation, characterized as shallow knowledge, is quite
appropriate in the context of linguistics because it expresses human empirical and heuristic
knowledge in our own language of communication.
It does not, however, capture the deeper forms of knowledge usually associated with intuition, structure, function, and behavior of the objects around us, simply because these latter forms of knowledge are not readily reduced to linguistic phrases or representations.
The fuzzy rule-based system is most useful in modeling some complex systems that can be observed by humans, because it makes use of linguistic variables as its antecedents and consequents; as described here, these linguistic variables can be naturally represented by fuzzy sets and logical connectives of these sets.

The canonical form for a fuzzy rule-based system.


Rule 1: IF condition C1, THEN restriction R1
Rule 2: IF condition C2, THEN restriction R2
...
Rule r : IF condition Cr, THEN restriction Rr.

Decomposition of Rules An Example

An example for a compound rule structure is


IF x = y THEN both are equal
ELSE IF x ≠ y THEN
IF x > y THEN X is highest
ELSE IF y > x THEN Y is highest
The fuzzy rule-based system may involve more than one rule. The process of obtaining the overall conclusion from the individual consequents contributed by each rule in the fuzzy rule base is known as aggregation of rules.

Properties of Set of Rules

The properties for the sets of rules are


Completeness,
Consistency,
Continuity, and
Interaction.
(a) Completeness
A set of IF-THEN rules is complete if any combination of input values results in an appropriate output value.
(b) Consistency
A set of IF-THEN rules is inconsistent if there are two rules with the same rule-antecedent but different rule-consequents.
(c) Continuity
A set of IF-THEN rules is continuous if it does not have neighboring rules with output fuzzy sets that have empty intersection.
(d) Interaction
In the interaction property, suppose there is a rule "IF x is A~ THEN y is B~", whose meaning is represented by a fuzzy relation R~; then the composition of A~ and R~ does not necessarily deliver B~.

Defuzzification

Defuzzification means fuzzy-to-crisp conversion. The fuzzy results generated cannot be used as such in applications, hence it is necessary to convert the fuzzy quantities into crisp quantities for further processing.
This can be achieved by using the defuzzification process. Defuzzification has the capability to reduce a fuzzy quantity to a crisp single-valued quantity or to a set, or to convert it to the form in which the fuzzy quantity is present.
Defuzzification can also be called "rounding off"; it reduces the collection of membership function values into a single scalar quantity.
Defuzzification of fuzzy sets is done with the help of lambda cuts for fuzzy sets.

Lambda Cuts for Fuzzy Sets

Consider a fuzzy set A~; the lambda-cut set, denoted Aλ, where λ ranges between 0 and 1 (0 ≤ λ ≤ 1), is a crisp set. This crisp set is called the lambda-cut set of the fuzzy set A~, where

Aλ = {x | μA(x) ≥ λ}

An element x belongs to the lambda-cut set when the membership value corresponding to x is greater than or equal to the specified λ. This set is also called the alpha-cut set. Lambda ranges in the interval [0, 1].
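A sketch of the lambda cut on a discrete fuzzy set; the universe and membership grades below are made-up data:

```python
import numpy as np

def lambda_cut(x, mu, lam):
    """Return the crisp lambda-cut set A_lambda = {x | mu_A(x) >= lambda}."""
    return x[mu >= lam]

x  = np.array([10, 20, 30, 40])
mu = np.array([0.1, 0.6, 0.9, 0.4])
print(lambda_cut(x, mu, 0.5))   # -> [20 30]
```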
Defuzzification methods

There are seven methods used for defuzzifying the fuzzy output functions.
They are:
1. Max-membership principle,
2. Centroid method,
3. Weighted average method,
4. Mean max membership,
5. Centre of sums,
6. Centre of largest area, and
7. First of maxima or last of maxima

Max-membership-principle

This method is given by the expression

μC(z*) ≥ μC(z) for all z ∈ Z,

where z* is the defuzzified value. This method is also referred to as the height method.

Centroid method

This is the most widely used method. It can also be called the center of gravity or center of area method. It is defined by the algebraic expression

z* = ∫ μC(z) · z dz / ∫ μC(z) dz

where ∫ denotes algebraic integration.
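A sketch of the centroid method on a sampled output membership function, approximating the two integrals numerically with the trapezoidal rule; the triangular output set is an assumed example:

```python
import numpy as np

def centroid_defuzzify(z, mu):
    """z* = integral(mu(z) * z dz) / integral(mu(z) dz)."""
    return np.trapz(mu * z, z) / np.trapz(mu, z)

z  = np.linspace(0.0, 10.0, 101)                   # output universe
mu = np.maximum(0.0, 1.0 - np.abs(z - 6.0) / 2.0)  # triangular output set
print(centroid_defuzzify(z, mu))                   # crisp value near 6.0
```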

Weighted average method

This method is formed by weighting each membership function in the obtained output by its largest membership value. It cannot be used for asymmetrical output membership functions; it is valid only for symmetrical ones. The evaluation expression for this method is

z* = Σ μC(zi) · zi / Σ μC(zi)

where zi is the point of maximum membership of each symmetric membership function.

Mean max-membership

This method is related to the max-membership principle, but the location of the maximum membership need not be unique, i.e., the maximum membership need not occur at a single point; it can be a range. This method is also called the middle-of-maxima method, and the expression is given as

z* = (a + b) / 2

where a and b are the endpoints of the interval of maximum membership.
Centre of sums

It involves the algebraic sum of the individual output fuzzy sets. The intersecting areas of the fuzzy sets are added twice. The defuzzified value z* is given as

z* = ∫ z · Σk μk(z) dz / ∫ Σk μk(z) dz

Center of largest area

If the output fuzzy set has two or more convex subregions, then the center of gravity of the convex subregion with the largest area can be used to calculate the defuzzification value. The equation is given as

z* = ∫ μCm(z) · z dz / ∫ μCm(z) dz

where Cm is the convex subregion with the largest area. The value z* is the same as that obtained by the centroid method when the output region is convex; this method can be used even for non-convex regions.

First of maxima or last of maxima

This method uses the overall output of the union of all individual output fuzzy sets and selects the smallest value of the domain at which the membership is maximized (first of maxima) or the largest such value (last of maxima).

Answers to Important questions

1. Write the mathematical expression of the membership function and sketch of the
membership function.

Membership function describes the information contained in a fuzzy set and is useful to
develop a lexicon of terms to describe various special features of this function.
For purposes of simplicity, the functions shown in the figures will all be continuous, but
the terms apply equally for both discrete and continuous fuzzy sets.
The sketch of a membership function consists of 3 regions
1. Core
2. Support
3. Boundary

Core
The core of a membership function for some fuzzy set A~ is defined as that region of the universe that is characterized by complete and full membership in the set A~. That is, the core comprises those elements x of the universe such that μA(x) = 1.

Support

The support is the region of the universe that is characterized by nonzero membership in the set A~. That is, the support comprises those elements x of the universe such that μA(x) > 0.

Boundaries

The boundaries of a membership function for some fuzzy set A~ are defined as that region of the universe containing elements that have a nonzero membership but not complete membership. The boundaries comprise those elements x of the universe such that 0 < μA(x) < 1. These elements of the universe are those with some degree of fuzziness, or only partial membership, in the fuzzy set A~.

2. Write the properties of fuzzy set theory and explain

Fuzzy sets follow the same properties as crisp sets. Because of this fact and because the
membership values of a crisp set are a subset of the interval [0, 1], classical sets can be
thought of as a special case of fuzzy sets.
The properties of fuzzy sets include commutativity, associativity, distributivity, idempotency, identity, transitivity, involution, and De Morgan's principles; the excluded middle axioms, however, do not hold for fuzzy sets.
3. Give and explain the properties of crisp sets

The most appropriate properties for defining classical sets and showing their similarity
to fuzzy sets are:

Commutativity: A ∪ B = B ∪ A; A ∩ B = B ∩ A
Associativity: A ∪ (B ∪ C) = (A ∪ B) ∪ C; A ∩ (B ∩ C) = (A ∩ B) ∩ C
Distributivity: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C); A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Idempotency: A ∪ A = A; A ∩ A = A
Identity: A ∪ ∅ = A; A ∩ X = A; A ∩ ∅ = ∅; A ∪ X = X
Transitivity: If A ⊆ B and B ⊆ C, then A ⊆ C
Involution: (A′)′ = A (where the prime denotes the complement)

Two special properties of set operations are known as the excluded middle axioms and De Morgan's principles. These properties are enumerated here for two sets A and B. The excluded middle axioms are very important because these are the only set operations that are not valid for both classical sets and fuzzy sets.

The first, called the axiom of the excluded middle, deals with the union of a set A and its complement:
A ∪ A′ = X
The second, called the axiom of contradiction, represents the intersection of a set A and its complement:
A ∩ A′ = ∅

4. What do you mean by crisp relations? Explain max-min composition relation with an example.

Classical (crisp) relation structures are the relations or structures that represent the presence or absence of correlation, interaction, or propinquity between the elements of two or more crisp sets.
There are only two degrees of relationship between elements of the sets in a crisp relation: the relationships "completely related" and "not related". Fuzzy relations are then developed by allowing the relationship between elements of two or more sets to take on an infinite number of degrees of relationship between the extremes of "completely related" and "not related".
Fuzzy relations are to crisp relations as fuzzy sets are to crisp sets; crisp sets and
relations are constrained realizations of fuzzy sets and relations.
Operations, properties, and cardinality are defined for fuzzy relations, as are Cartesian products and compositions of fuzzy relations.
Let R be a relation that relates, or maps, elements from universe X to universe Y, and
let S be a relation that relates, or maps, elements from universe Y to universe Z. A useful
question we seek to answer is whether we can find a relation, T, that relate the same
elements in universe X that R contains to the same elements in universe Z that S contains. It
turns out that we can find such a relation using an operation known as composition.
From the Sagittal diagram in the figure, the only path joining relation R and relation S consists of the two routes that start at x1 and end at z2 (i.e., x1 → y1 → z2 and x1 → y3 → z2). Hence, we wish to find a relation T that relates the ordered pair (x1, z2); that is, (x1, z2) ∈ T.
In this example, R ={(x1, y1), (x1, y3), (x2, y4)} . S ={(y1, z2), (y3, z2)} .
The max-min composition is defined by the set-theoretic and membership function-theoretic expressions

T = R ∘ S
χT(x, z) = max over y of [ min( χR(x, y), χS(y, z) ) ]
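A sketch of the max-min composition for the crisp relations of this example, with X = {x1, x2}, Y = {y1, ..., y4}, Z = {z1, z2} encoded as 0/1 matrices:

```python
import numpy as np

def max_min_composition(R, S):
    """T(x, z) = max over y of min(R(x, y), S(y, z))."""
    return np.array([[np.max(np.minimum(R[i, :], S[:, j]))
                      for j in range(S.shape[1])]
                     for i in range(R.shape[0])])

# R on X x Y and S on Y x Z from the example:
# R = {(x1, y1), (x1, y3), (x2, y4)}, S = {(y1, z2), (y3, z2)}
R = np.array([[1, 0, 1, 0],
              [0, 0, 0, 1]])
S = np.array([[0, 1],
              [0, 0],
              [0, 1],
              [0, 0]])
print(max_min_composition(R, S))   # T contains the pair (x1, z2) only
```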
Uncertainty

In the traditional view of science, uncertainty represents an undesirable state, a state that must be avoided at all costs.
The situation of uncertainty is avoided with the help of statistical mechanics or
probability theory in scientific research.
Probability theory dominated the mathematics of uncertainty for over five centuries.
The twentieth century saw the first developments of alternatives to probability theory
and to classical Aristotelian logic as paradigms to address more kinds of uncertainty than just
the random kind.
Jan Lukasiewicz developed a multivalued, discrete logic (circa 1930). In the 1960s,
Arthur Dempster developed a theory of evidence, which, for the first time, included an
assessment of ignorance, or the absence of information. In 1965, Lotfi Zadeh introduced his
seminal idea in a continuous-valued logic that he called fuzzy set theory.
In the 1970s, Glenn Shafer extended Dempster's work to produce a complete theory of evidence dealing with information from more than one source, and Lotfi Zadeh illustrated a possibility theory resulting from special cases of fuzzy sets.
Later, in the 1980s, other investigators showed a strong relationship between evidence
theory, probability theory, and possibility theory with the use of what was called fuzzy
measures (Klir and Wierman, 1996), and what is now being termed monotone measures.
Uncertainty can be thought of in an epistemological sense as being the inverse of
information.
Information about a particular engineering or scientific problem may be incomplete,
imprecise, fragmentary, unreliable, vague, contradictory, or deficient in some other way (Klir
and Yuan, 1995). When we acquire more and more information about a problem, we become
less and less uncertain about its formulation and solution.
