
Learning
Learning is a relatively durable change in behavior or knowledge that is due to experience
(Wayne Weiten)
OR
It is the relatively permanent modification of behavior through experience.
OR
A relatively permanent change in knowledge, behavior or understanding that results from
experience. Innate behaviors, maturation and fatigue are excluded. (Dictionary of psychology)
OR
The process leading to a permanent behavioral change or potential behavioral change.
IMPORTANCE OF LEARNING / WHY WE NEED LEARNING
i. To adapt to the environment.
ii. A source of survival.
iii. Enhances our ability to change.
iv. Through learning we find food and shelter, make friends, etc.
ASPECTS OF LEARNING
i. Learning is a change in behavior, for better or worse.
ii. It is a relatively permanent change.
iii. Only change that comes through experience counts; changes through growth, maturation, injury, etc. are excluded.

MEASUREMENT
Learning is an intangible phenomenon. Some have tried to measure it by measuring performance, but this is not an objective test because performance can be tainted by fatigue, motives, emotions, etc. Learning can only be inferred from behavior.
An important point here is that short-term changes in behavior are not due to learning; rather, they are caused by fatigue, lack of effort, etc.
Permanent change -> learning
Temporary change -> fatigue, emotions, motives, etc.

TYPES OF LEARNING

i. Habituation
ii. Associative learning
   a. Classical conditioning
   b. Instrumental / operant conditioning
iii. Cognitive learning
   a. Insight learning
   b. Latent learning
   c. Observational learning / imitation / modeling
iv. Verbal learning
v. Imprinting
vi. Motor learning
vii. Shaping

CLASSICAL CONDITIONING / PAVLOVIAN OR RESPONDENT CONDITIONING


CONDITIONING
Conditioning refers to the ways in which events, stimuli, and behavior become associated with one another.
OR
Conditioning involves learning associations between events that occur in an organism's environment.
CLASSICAL CONDITIONING
It is a type of learning in which a neutral stimulus acquires the ability to evoke a response that
was originally evoked by another stimulus.
OR
A type of learning in which a behavior (conditioned response) comes to be elicited by a stimulus (conditioned stimulus) that has acquired its power through an association with a biologically significant stimulus (unconditioned stimulus).
Ivan Pavlov (1900), a Russian physiologist, is considered to be the father of this concept. He discovered this phenomenon through an experiment which he conducted on dogs.
The term conditioning comes from Pavlov's determination to discover the conditions that produce this kind of learning.
THE EXPERIMENT

While studying the role of saliva in the digestive processes of dogs, Pavlov stumbled onto what he called "psychic reflexes." His subjects were restrained dogs whose saliva was collected by means of a tube implanted in their salivary glands. Pavlov found that dogs accustomed to the procedure started salivating before the meat powder was presented, in response to the clicking sound made by the device used to present it.
To clarify what was happening, he paired the presentation of the meat powder with various stimuli, such as an auditory tone, presented together a number of times. Interestingly, when the tone was then presented alone, it also caused the dog to salivate.
Thus, the tone started out as a neutral stimulus, but it came to produce salivation after being paired with a stimulus that did produce the salivation response automatically.
What Pavlov had demonstrated was how learned associations-- which were viewed as the basic building blocks of the entire learning process-- were formed by events in an organism's environment. Based on this insight, he built a broad theory of learning that attempted to explain aspects of emotion, temperament, neuroses, and language.
TERMINOLOGY AND PROCEDURES
There is a special vocabulary associated with classical conditioning.
UNCONDITIONED ASSOCIATION
The bond Pavlov noted between the meat powder and salivation was a natural, unlearned
association. It did not have to be created through conditioning. It is therefore called an unconditioned
association.
UNCONDITIONED STIMULUS (UCS)
In classical conditioning, the UCS is a stimulus that evokes an unconditioned response without previous conditioning.
UNCONDITIONED RESPONSE (UCR)
In classical conditioning, the UCR is an unlearned reaction to an unconditioned stimulus (UCS) that occurs without previous conditioning.
CONDITIONED ASSOCIATION
The link between the tone and salivation was established through conditioning. It is therefore
called conditioned association.
CONDITIONED STIMULUS (CS)
In classical conditioning, the conditioned stimulus is a previously neutral stimulus that has, through conditioning, acquired the capacity to evoke a conditioned response.
CONDITIONED RESPONSE (CR)

The CR is a learned reaction to a conditioned stimulus that occurs because of previous conditioning.
NEUTRAL STIMULUS
A stimulus that does not elicit any response prior to conditioning is known as a neutral stimulus.
TRIAL
In classical conditioning, a trial consists of any presentation of a stimulus or pair of stimuli.
Psychologists are interested in how many trials are required to establish a particular conditioned bond.
BASIC PROCESSES IN CLASSICAL CONDITIONING
Classical conditioning is often portrayed as an irresistible force -- a mechanical process that inevitably leads to a certain result. This view reflects the fact that most conditioned responses are reflexive and difficult to control: phobias are difficult to suppress, and it is hard to make dogs stop salivating. But this view is misleading, as it fails to consider many factors involved in classical conditioning. Some basic processes of classical conditioning include the following:
ACQUISITION: FORMING NEW RESPONSES
Acquisition refers to the initial stage of learning something.
Acquisition can be defined as the initial stage of learning during which a response is established and
gradually strengthened. In classical conditioning it refers to the phase in which a stimulus comes to
evoke a conditioned response.
Pavlov theorized that the acquisition of a conditioned response depends on stimulus contiguity. Stimuli are contiguous if they occur together in space and time.
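Pavlov's contiguity idea was later formalized by learning theorists. One standard formal model, the Rescorla-Wagner rule (not part of Pavlov's original account; the parameter values below are illustrative), treats acquisition as error-correction, and a minimal Python sketch shows the resulting negatively accelerated learning curve:

    # Minimal sketch of acquisition using the Rescorla-Wagner update:
    # delta_V = alpha_beta * (lam - V), where lam is the maximum associative
    # strength the UCS supports. All values here are illustrative.

    alpha_beta = 0.3   # combined salience / learning-rate parameter
    lam = 1.0          # asymptote of associative strength supported by the UCS
    V = 0.0            # associative strength of the tone (CS); starts neutral

    for trial in range(1, 11):          # ten tone + meat-powder pairings
        V += alpha_beta * (lam - V)     # error-correction update
        print(f"trial {trial:2d}: V = {V:.3f}")
    # V climbs rapidly at first and then levels off near lam: the familiar
    # negatively accelerated acquisition curve.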
EXTINCTION: WEAKENING CONDITIONED RESPONSES
A newly formed stimulus-response bond doesn't necessarily last indefinitely. If it did, learning would be inflexible and organisms would have difficulty adapting to new situations. Instead, the right circumstances produce extinction, the gradual weakening and disappearance of a conditioned response. The time it takes to extinguish a conditioned response depends on many factors, particularly the strength of the conditioned bond when extinction begins.
SPONTANEOUS RECOVERY: RESURRECTING RESPONSES
Some conditioned responses display the ultimate in tenacity by "reappearing from the dead" after having been extinguished. Learning theorists use the term spontaneous recovery to describe such a resurrection. Spontaneous recovery, in classical conditioning, is the reappearance of an extinguished response after a period of non-exposure to the conditioned stimulus (a rest period).
RENEWAL EFFECT

Recent studies have demonstrated that if a response is extinguished in a different environment than the one in which it was acquired, the extinguished response will reappear if the organism is returned to the original environment where acquisition took place. This phenomenon is called the renewal effect.
The renewal effect and spontaneous recovery suggest that extinction somehow suppresses a conditioned response rather than erasing the learned association, i.e., extinction does not appear to lead to unlearning.
STIMULUS GENERALIZATION
Stimulus generalization occurs when an organism that has learned a response to a specific
stimulus responds in the same way to new stimuli that are similar to the original stimulus.
Generalization has adaptive value given that organisms rarely encounter the exact same stimulus more
than once.
The likelihood and amount of generalization to a new stimulus depend on the similarity between the new stimulus and the original CS. For example, Little Albert, in an experiment performed by John B. Watson, generalized his conditioned fear of a white rat to similar white or furry things, including a rabbit, a dog, a fur coat, a Santa Claus mask, and Watson's hair.
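The graded fall-off of responding with decreasing similarity is called a generalization gradient. A toy Python sketch of the idea follows (the Gaussian similarity function, the tone frequencies, and the gradient width are illustrative assumptions, not data from Watson's experiment):

    import math

    def generalized_response(cs_value, test_value, width, strength=1.0):
        # Response falls off with the test stimulus's distance from the
        # original CS; a Gaussian fall-off is a common illustrative choice.
        distance = test_value - cs_value
        return strength * math.exp(-(distance ** 2) / (2 * width ** 2))

    # Suppose the original CS was a 1000 Hz tone (values are illustrative).
    for test_hz in [1000, 1050, 1100, 1200]:
        r = generalized_response(1000, test_hz, width=100)
        print(f"{test_hz} Hz tone -> response strength {r:.2f}")
    # Responding is strongest to the original CS and weakens as similarity
    # drops; discrimination training would narrow (sharpen) this gradient.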
STIMULUS DISCRIMINATION
Stimulus discrimination is just the opposite of stimulus generalization. Stimulus discrimination occurs when an organism learns to respond differently to stimuli that are similar to the conditioned stimulus (i.e., that differ from it on some dimension).
HIGHER ORDER CONDITIONING
Higher-order conditioning is a type of conditioning in which a conditioned stimulus functions as if it were an unconditioned stimulus.

OPERANT CONDITIONING
INTRODUCTION
Classical conditioning best explains reflexive responding that is largely controlled by stimuli that precede the response. But organisms make many responses that don't fit this description. For instance, studying is not a reflex. The stimuli that govern it (exams, grades, etc.) don't precede it. Instead, studying is mainly influenced by stimulus events that follow the response -- specifically, its consequences.
In the 1930s, this kind of learning was christened operant conditioning by B. F. Skinner. The term was derived from his belief that in this type of responding, an organism "operates" on the environment instead of simply reacting to stimuli. Learning occurs because responses come to be influenced by the outcomes that follow them.
OPERANT CONDITIONING
Operant conditioning is a form of learning in which responses come to be controlled by their consequences. It is a type of learning in which behavior is strengthened if followed by reinforcement or diminished if followed by punishment.
THORNDIKE'S LAW OF EFFECT
Another name for operant conditioning is instrumental learning, a term introduced earlier by
Edward L. Thorndike who wanted to emphasize that this kind of responding is often instrumental in
obtaining some desired outcome.
He experimented on cats. A hungry cat was placed in a small cage or puzzle box with food
available just outside. The cat could escape to obtain the food by performing a specific response such
as pulling a wire or depressing a lever. After each escape, the cat was rewarded with a small amount of
food and then returned to the cage for another trial. Thorndike monitored how long it took the cat to
get out of the box over a series of trials. If the cat could think, Thorndike reasoned, there would be a
sudden drop in the time required to escape when the cat recognized the solution to the problem.
Instead of a sudden drop, Thorndike observed a gradual, uneven decline in the time it took cats
to escape from his puzzle boxes. The decline in solution time showed that the cats were learning. But
the gradual nature of this decline suggested that this learning did not depend on thinking and
understanding. Instead, Thorndike attributed this learning to a principle he called the law of effect.
According to the law of effect, if a response in the presence of a stimulus leads to satisfying effects,
the association between the stimulus and the response is strengthened.
Thorndike viewed instrumental learning as a mechanical process in which successful responses
are gradually stamped in by their favorable effects.
His law of effect became the cornerstone of Skinner's theory of operant conditioning, although Skinner used different terminology.

LAW OF EFFECT
A response that is followed by satisfying consequences becomes more probable and a response
that is followed by dissatisfying consequences becomes less probable.
SKINNER'S DEMONSTRATION: IT'S ALL A MATTER OF CONSEQUENCES
B. F. Skinner embraced Thorndike's view that environmental consequences exert a powerful effect on behavior. He outlined a program of research whose purpose was to discover, by systematic variation of stimulus conditions, the ways that various environmental conditions affect the likelihood that a given response will occur. Skinner's analysis was experimental rather than theoretical.

To analyze behavior experimentally, Skinner developed operant conditioning procedures, in which he manipulated the consequences of an organism's behavior in order to see what effect they had on subsequent behavior.
An operant is any behavior that is emitted by an organism and can be characterized in terms of the observable effects it has on the environment; literally, operant means affecting the environment, or operating on it (Skinner, 1938). Operants are not elicited by specific stimuli, as classically conditioned behaviors are. Pigeons peck, rats search for food, babies cry and coo, etc. The probability of these behaviors occurring in the future can be increased or decreased by manipulating the effects they have on the environment. If, for example, a baby's coo prompts desirable parental contact, the baby will coo more in the future. Operant conditioning, then, modifies the probability of different types of operant behavior as a function of the environmental consequences they produce.
REINFORCEMENT
Reinforcement occurs when an event following a response increases an organism's tendency to make that response.
BASIC PROCESSES IN OPERANT CONDITIONING
Although the principle of reinforcement is strikingly simple, many other processes involved in operant conditioning make this form of learning just as complex as classical conditioning. In fact, some of the same processes are involved in both types of conditioning.
ACQUISITION AND SHAPING
In Operant conditioning, acquisition refers to the initial stage of learning during which a
reinforced response is established and strengthened. Operant responses are usually established through
a gradual process called shaping which consists of the reinforcement of closer and closer
approximations of a desired response.
Shaping is necessary when an organism doesn't, on its own, emit the desired response; e.g., a rat in a Skinner box is shaped to press a lever by releasing food with every closer step it takes toward the lever.
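Described procedurally, shaping amounts to reinforcing any response within a criterion distance of the target and then tightening the criterion. A minimal Python sketch under that assumption (the random-variation model of behavior and all numbers are illustrative, not an established training protocol):

    import random

    random.seed(1)
    target = 10.0       # the desired response, coded as a position on one scale
    criterion = 8.0     # at first, reinforce anything this close to the target
    behavior = 0.0      # the organism's current typical behavior

    for step in range(300):
        response = behavior + random.uniform(-2, 2)   # behavior varies randomly
        if abs(response - target) <= criterion:       # close enough? reinforce
            behavior = response                       # reinforced variants recur
            criterion = max(0.5, criterion * 0.95)    # demand closer approximations
    print(f"final behavior ~ {behavior:.2f} (target {target}), "
          f"criterion tightened to {criterion:.2f}")
    # Reinforcing successively closer approximations pulls behavior toward a
    # target response the organism never emitted spontaneously.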

EXTINCTION (OPERANT)
In operant conditioning, extinction refers to the gradual weakening and disappearance of a response tendency because the response is no longer followed by a reinforcer. Extinction begins whenever previously available reinforcement is stopped. When the extinction process begins, a brief surge in responding often occurs, followed by a gradual decline in response rate until it approaches zero.
RESISTANCE TO EXTINCTION
A key issue in operant conditioning is how much resistance to extinction an organism will display when reinforcement is halted. Resistance to extinction occurs when an organism continues to make a response after delivery of the reinforcer has been terminated. Resistance to extinction depends on a variety of factors, chief among them the schedule of reinforcement used during acquisition.
STIMULUS CONTROL: GENERALIZATION AND DISCRIMINATION
Operant responding is ultimately controlled by its consequences, as organisms learn response-outcome (R-O) associations. However, stimuli that precede a response can also exert considerable influence over operant behavior.
When a response is consistently followed by a reinforcer in the presence of a particular stimulus, that stimulus comes to serve as a signal indicating that the response is likely to lead to reinforcement. Once an organism learns the signal, it tends to respond accordingly. For example, a pigeon's disk pecking may be reinforced only when a small light behind the disk is lit. When the light is out, pecking does not lead to reward. Pigeons quickly learn to peck the disk only when it is lit.
The light that signals the availability of reinforcement is called a discriminative stimulus. Thus, discriminative stimuli are cues that influence operant behavior by indicating the probable consequences (reinforcement or non-reinforcement) of a response.
Discriminative stimuli play a key role in the regulation of operant behavior. For example,
children learn to ask for sweets when their parents are in a good mood. Drivers learn to slow down
when the highway is wet.
STIMULUS GENERALIZATION AND STIMULUS DISCRIMINATION
Reactions to a discriminative stimulus are governed by the processes of stimulus generalization and stimulus discrimination, just like reactions to a CS in classical conditioning. For instance, envision a cat that comes running into the kitchen whenever it hears the sound of a can opener, because that sound has become a discriminative stimulus signaling a good chance of its getting fed. If the cat also responded to the sound of a new kitchen appliance (say, a blender), this response would represent generalization-- responding to a new stimulus as if it were the original. Discrimination would occur if the cat learned to respond only to the can opener and not to the blender.
REINFORCEMENT: CONSEQUENCES THAT STRENGTHEN BEHAVIOR
In operant conditioning, reinforcement refers to any event following a response that
strengthens the tendency to make that response.
Something that is clearly reinforcing for an organism at one time may not function as a reinforcer later, and may also not be reinforcing for another organism; e.g., food will reinforce only a hungry organism.

DELAYED REINFORCEMENT
In operant conditioning, if a delay in reinforcement occurs, the response may not be strengthened. The longer the delay between the designated response and the delivery of the reinforcer, the more slowly conditioning proceeds.
SECONDARY /CONDITIONED REINFORCEMENT
Reinforcers can be either unlearned (primary) or learned/ conditioned (secondary).
Primary reinforcers are events that are inherently reinforcing because they satisfy biological needs, e.g. food, water, warmth, etc.
Secondary reinforcers are events that acquire reinforcing qualities by being associated with
primary reinforcers. The events that function as secondary reinforcers vary among members of a
species because they depend on learning.
INTERMITTENT REINFORCEMENT: EFFECTS OF BASIC SCHEDULES
In the real world, most responses are reinforced only some of the time. This affects the potency of reinforcers. To study this, operant psychologists have devoted an enormous amount of attention to how intermittent schedules of reinforcement influence operant behavior.
SCHEDULE OF REINFORCEMENT
A schedule of reinforcement determines which occurrences of a specific response result in the
presentation of a reinforcer. These are the patterns of delivering and withholding reinforcement. The
simplest pattern is continuous reinforcement.
CONTINUOUS REINFORCEMENT
Continuous reinforcement occurs when every instance of a designated response is reinforced. In the laboratory, it is used in shaping a new response before moving on to more realistic schedules involving intermittent reinforcement.
PARTIAL (INTERMITTENT) REINFORCEMENT
Partial or intermittent reinforcement occurs when a designated response is reinforced only some of the time. Studies have shown that partial reinforcement makes a response more resistant to extinction than continuous reinforcement. Reinforcement schedules come in many varieties, but four particular types of intermittent schedules have attracted the most interest.

A) RATIO SCHEDULES
Ratio schedules require the organism to make the designated response a certain number of times to gain each reinforcer.
I. FIXED-RATIO (FR) SCHEDULE
With a fixed-ratio schedule, the reinforcer is given after a fixed number of non-reinforced responses, e.g. a rat is reinforced for every tenth lever press.
II. VARIABLE-RATIO (VR) SCHEDULE
With a variable-ratio schedule, the reinforcer is given after a variable number of non-reinforced responses. The number of non-reinforced responses varies around a predetermined average, e.g. a slot machine in a casino pays off once every six tries on average. The number of non-winning responses between payoffs varies greatly from one time to the next.
B) INTERVAL SCHEDULES
Interval schedules require a time period to pass between the presentation of reinforcers.
III. FIXED-INTERVAL (FI) SCHEDULE
With a fixed-interval schedule, the reinforcer is given for the first response that occurs after a fixed time interval has elapsed, e.g. a rat presses a lever, gets a reinforcer, and then must wait 2 minutes before being able to earn the next reinforcer.
IV. VARIABLE-INTERVAL (VI) SCHEDULE
With a variable-interval schedule, the reinforcer is given for the first response after a variable time interval has elapsed. The interval length varies around a predetermined average, e.g. a person repeatedly dials a busy phone number (getting through is the reinforcer).
Variable-interval schedules create more resistance to extinction than their fixed counterparts. On a fixed-interval schedule there is usually a pause in responding after each reinforcer (e.g. in studying, right after exams), and responding then becomes very rapid as each interval comes to an end (e.g. as exams approach). Variable-ratio schedules likewise yield steady responding and greater resistance to extinction.
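Since each schedule is just a rule deciding whether the current response earns a reinforcer, the four basic schedules can be sketched compactly. The following Python simulation is purely illustrative (the response rates, ratio requirements, and interval lengths are made up for the example):

    import random
    random.seed(0)

    # Each schedule is a rule deciding whether the current response earns a
    # reinforcer, given the response count and the seconds elapsed since the
    # last reinforcer. All numbers are illustrative.

    def fr(n):                                   # fixed ratio: every n-th response
        return lambda count, elapsed: count >= n

    def vr(mean_n):                              # variable ratio around mean_n
        state = {"need": random.randint(1, 2 * mean_n - 1)}
        def rule(count, elapsed):
            if count >= state["need"]:
                state["need"] = random.randint(1, 2 * mean_n - 1)
                return True
            return False
        return rule

    def fi(t):                                   # fixed interval of t seconds
        return lambda count, elapsed: elapsed >= t

    def vi(mean_t):                              # variable interval around mean_t
        state = {"hold": random.expovariate(1.0 / mean_t)}
        def rule(count, elapsed):
            if elapsed >= state["hold"]:
                state["hold"] = random.expovariate(1.0 / mean_t)
                return True
            return False
        return rule

    def earned(rule, responses_per_min=30, minutes=10):
        # Reinforcers a steadily responding organism earns under one schedule.
        gap = 60.0 / responses_per_min
        t, count, last, total = 0.0, 0, 0.0, 0
        while t < minutes * 60:
            t += gap
            count += 1
            if rule(count, t - last):
                total, count, last = total + 1, 0, t
        return total

    for name, rule in [("FR-10", fr(10)), ("VR-10", vr(10)),
                       ("FI-60s", fi(60)), ("VI-60s", vi(60))]:
        print(f"{name}: {earned(rule)} reinforcers in 10 min")
    # Ratio schedules pay for sheer output, so they sustain high response
    # rates; interval schedules cap earnings by time no matter how fast the
    # organism responds.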
CONCURRENT SCHEDULES OF REINFORCEMENT
To gain insight into how organisms make choices among operant responses, researchers have studied concurrent schedules of reinforcement. Concurrent schedules of reinforcement consist of two or more reinforcement schedules that operate simultaneously and independently, each for a different response. E.g., a pigeon placed in a Skinner box and required to peck either of two disks, one reinforced on a variable-interval schedule and the other on a fixed-ratio schedule, will distribute its responses to each disk in close correspondence to the relative amount of overall reinforcement each disk can yield. This is called the matching law.
THE MATCHING LAW
The matching law states that under concurrent schedules of reinforcement, an organism's relative rate of responding to each alternative tends to match that alternative's relative rate of reinforcement. Moreover, if the magnitude or quality of the reinforcement earned on each alternative is manipulated, organisms will adjust their responding to match these factors as well.
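In symbols, the matching law for two alternatives is B1 / (B1 + B2) = R1 / (R1 + R2), where B is the response rate and R is the reinforcement rate on each alternative. A quick numerical illustration in Python (the reinforcement rates are invented for the example):

    # Matching law: relative response rate matches relative reinforcement rate.
    # The reinforcement rates below are made up for illustration.
    r1, r2 = 30, 10            # reinforcers per hour on disk 1 and disk 2
    share1 = r1 / (r1 + r2)    # predicted share of responses to disk 1
    print(f"predicted share of pecks on disk 1: {share1:.0%}")  # -> 75%
    # A pigeon matching perfectly would direct about 75% of its pecks to the
    # disk that yields 75% of the total reinforcement.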
REINFORCEMENT

An event following a response that strengthens the tendency to make that response.
REINFORCER
Any stimulus that, when made contingent upon a response, increases the probability of that
response.
POSITIVE REINFORCEMENT
Positive reinforcement occurs when a response is strengthened because it is followed by the presentation of a rewarding stimulus, e.g. good grades, tasty meals, paychecks, etc.
NEGATIVE REINFORCEMENT
Negative reinforcement occurs when a response is strengthened because it is followed by the removal of an aversive (unpleasant) stimulus, e.g. painkiller tablets remove pain -- an unpleasant stimulus.
NEGATIVE REINFORCEMENT AND AVOIDANCE BEHAVIOR
We often notice that many people tend to avoid facing awkward situations, difficult challenges, and sticky personal problems. Consistent reliance on avoidance is not a very effective coping strategy. How do people learn to rely on such a strategy? In large part, it may be through negative reinforcement.
ESCAPE LEARNING
The roots of avoidance lie in escape learning. In escape learning an organism acquires a
response that decreases or ends some aversive stimulation e.g. raising an umbrella during a heavy
downpour.
EXPERIMENT
Psychologists often study escape learning in the laboratory with rats that are conditioned in a shuttle box. The shuttle box has two compartments connected by a doorway, which can be opened and closed by the experimenter.
In a typical study, an animal is placed in one compartment and an electric current in the floor of that chamber is turned on, with the doorway open. The animal learns to escape the shock by running to the other compartment. This escape response leads to the removal of an aversive stimulus (shock), so it is strengthened through negative reinforcement. If somebody were to leave a party where he was getting picked on by peers, he would be engaging in an escape response. Escape learning does not necessarily entail leaving the scene of the aversive stimulation. Any behavior that decreases or ends an ongoing aversive stimulation (for example, turning on the air conditioner to get rid of stifling heat) represents escape learning. It is a "get me out of here" or "shut this off" reaction, aimed at escape from pain or annoyance.
How can escape conditioning be converted into avoidance conditioning?
AVOIDANCE LEARNING / CONDITIONING

Escape learning often leads to avoidance learning. In avoidance learning, an organism acquires a response that prevents some aversive stimulation from occurring, e.g. buckling your seatbelt every time so that the buzzer will not sound.
In laboratory studies of avoidance learning, the experimenter simply gives the animal a signal that an aversive stimulus (such as a shock) is forthcoming. The typical signal is a light that goes on a few seconds prior to the shock. At first the rat runs only when shocked (escape learning). Gradually, however, the animal learns to run to the safe compartment as soon as the light comes on, demonstrating avoidance learning. Similarly, if somebody were to quit going to parties because of his concern about being picked on, this would represent avoidance learning; turning on the A.C. before a room gets hot would also represent avoidance learning.
Avoidance learning presents an interesting puzzle for learning theorists. Avoidance responses tend to be long-lasting even though the mechanism of continuing reinforcement is obscure.
For example, when an animal in a shuttle box learns to avoid shock entirely, it seems to have no opportunity for continued negative reinforcement. After all, the animal can't remove a shock that never occurs. In theory, the avoidance response should gradually extinguish, because it is no longer followed by the removal of an aversive stimulus. However, avoidance responses usually remain strong. The best explanation of this paradox appears to be the two-process theory of avoidance, which integrates the processes of classical and operant conditioning.
EXPERIMENTAL SETTINGS
Avoidance learning belongs to negative reinforcement schedules: the subject learns that a certain response will result in the termination or prevention of an aversive stimulus. There are two commonly used experimental settings:
i. Discriminated avoidance learning
ii. Free-operant avoidance learning
DISCRIMINATED AVOIDANCE LEARNING
In discriminated avoidance learning, a novel stimulus such as a light or a tone is followed by an aversive stimulus such as a shock (CS-UCS, similar to classical conditioning). Whenever the animal performs the operant response, the CS, and with it the upcoming UCS, is removed. During the first trials (called escape trials) the animals usually experience both the CS and the UCS, performing the operant response to terminate the aversive stimulus (UCS). Over time, the animal learns to perform the response already during the presentation of the CS, thus preventing the aversive UCS from occurring. Such trials are called avoidance trials.
FREE OPERANT AVOIDANCE LEARNING
In this experimental setting, no discrete stimulus is used to signal the occurrence of the aversive stimulus. Rather, the aversive stimuli (such as shocks) are presented without explicit warning stimuli. Two crucial time intervals determine the rate of avoidance learning: the interval between successive shocks in the absence of a response, and the interval by which a response postpones the next shock.
TWO-PROCESS THEORY OF AVOIDANCE
According to the two-process theory of avoidance, the warning light that goes on in the shuttle box becomes a CS (through classical conditioning) eliciting conditioned fear in the animal. The behavior of fleeing to the other side of the box is an operant response that produces negative reinforcement, even after shock is no longer experienced, because it reduces conditioned fear. In other words, avoidance responses are reinforced by removing an internal aversive stimulus (conditioned fear) rather than an external one (shock), and this is what makes avoidance behavior so persistent.
Many other theories have been proposed against the two-process theory, but it remains the benchmark against which all other explanations of avoidance behavior are measured. One of its strengths is that it provides a simple and compelling account of why avoidance behaviors-- such as phobias-- are so resistant to extinction. According to the two-process theory, phobias are highly resistant to extinction for two reasons:
a) A phobia usually leads to an avoidance response that earns negative reinforcement each time it is made.
b) Avoidance behavior prevents opportunities to extinguish the phobic conditioned response, because the person does not get much exposure to the conditioned (phobic) stimulus.
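The persistence the theory predicts is easy to see in a toy simulation: because the avoidance response cuts exposure to the feared CS short, the fear that sustains the response never gets a chance to extinguish. A minimal Python sketch (the fear dynamics and all parameter values are illustrative assumptions, not an established model):

    # Toy two-process account: (1) classical conditioning gives the warning
    # light a conditioned fear value; (2) the operant avoidance response is
    # negatively reinforced because it reduces that fear.

    fear = 1.0            # conditioned fear elicited by the warning light
    avoidance = 0.5       # tendency to flee when the light comes on

    for trial in range(20):
        if avoidance > 0.3:          # the animal avoids on this trial
            avoidance += 0.1 * fear  # fear reduction reinforces the response
            fear *= 0.99             # CS exposure is cut short; fear barely fades
        else:                        # full CS exposure without shock would
            fear *= 0.7              # extinguish the conditioned fear
    print(f"after 20 trials: fear = {fear:.2f}, avoidance = {avoidance:.2f}")
    # Each avoidance response is still reinforced by cutting fear short, and
    # full CS exposure never happens, so both fear and avoidance stay high --
    # the two-process explanation of why phobic avoidance resists extinction.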

PUNISHMENT: CONSEQUENCES THAT WEAKEN RESPONSES


In Skinner's model of operant behavior, punishments are consequences that decrease an organism's tendency to make a particular response. Punishment occurs when an event following a response weakens the tendency to make that response.
Punishments can be either positive or negative.
POSITIVE PUNISHMENT
When a behavior is followed by the delivery of an aversive stimulus, the event is called
positive punishment.
NEGATIVE PUNISHMENT
When a behavior is followed by the removal of an appetitive stimulus, the event is referred to
as negative punishment.
Punishment, by definition, always reduces the probability of a response occurring again;
reinforcement, by definition, always increases the probability of a response recurring.
PROPERTIES OF REINFORCERS
Reinforcers are the power brokers of operant conditioning: they change or maintain behavior.
Biological Constraints on Conditioning:
1. Instinctive drift
2. Imprinting
3. Preparedness
1. Instinctive drift:
One biological constraint on learning is instinctive drift. Instinctive drift occurs when an animal's innate response tendencies interfere with conditioning processes. Instinctive drift was first described by the Brelands, operant psychologists who went into the business of training animals for commercial purposes. They described many amusing examples of their failures to control behavior through conditioning. For instance, they once were training some raccoons to deposit coins in a piggy bank. They were successful in shaping the raccoons to pick up a coin and put it into a small box, using food as the reinforcer. However, when they gave the raccoons a couple of coins, an unexpected problem arose: the raccoons wouldn't give the coins up! In spite of the reinforcers available for depositing the coins, they would sit and rub the coins together like so many misers.
What had happened to disrupt the conditioning program? Apparently, associating the coins with food had brought out the raccoons' innate food-washing behavior. Raccoons often rub things together to clean them. The Brelands reported that they ran into this sort of instinct-related interference on many occasions with a wide variety of species.
Preparedness:
Preparedness refers to a species-specific predisposition to learn certain associations more readily than others; for example, taste aversions are acquired far more easily than fears of lights or tones.
Factors affecting learning / economy in learning:
Internal / personal / subjective factors:
I. Motivation
II. Age
III. Intelligence
IV. Attention
V. Interest
VI. Mental and physical health
VII. Fatigue
VIII. Active involvement
IX. Ability to manipulate things
X. Emotion
XI. Personality
XII. Maturation
External factors / methods of learning:
I. Nature of knowledge (interesting, etc.)
II. Value of reciting
III. Meaningful vs. meaningless material
IV. Exercise and repetition
V. Whole learning vs. part learning
VI. Reward and punishment
VII. Feedback
VIII. Conceptualization
IX. Auxiliary relevant information
X. Guidance
XI. Association and rhythm
XII. Observation and review
XIII. Distributed practice vs. massed practice
XIV. Zeigarnik effect
Phobia:
Phobias are irrational fears of specific objects or situations, e.g. of cats, dogs, or doctors. Chances are, they are acquired through classical conditioning.
