In the first two experiments, subjects' choices to earn points (exchangeable for money) either by
competing with a fictitious opponent or by not competing were studied. Buskist, Barry, Morgan, and
Rossi's (1984) competitive fixed-interval schedule was modified to include a second response option,
a noncompetitive fixed-interval schedule. After choosing to enter either option, the opportunity for
reinforcers became available after the fixed-interval's duration had elapsed. Under the no-competition
condition, points were always available after the interval had elapsed. Under the competition condition,
points were available based on a predetermined probability of delivery. Experiments 1 and 2 examined
how reinforcer probabilities and reinforcer magnitudes affected subjects' choices to compete. Several
general conclusions can be made about the results: (a) Strong preferences to compete were observed
at high and moderate reinforcer probabilities; (b) competing was observed even at very low reinforcer
probabilities; (c) response rates were always higher in the competition component than in the no-competition component; and (d) response rates and choices to compete were insensitive to reinforcer-magnitude manipulations. In Experiment 3, the social context of this choice schedule was removed to
determine whether the high levels of competing observed in the first two experiments were due to a
response preference engendered by the social context provided by the experimenters through instructions. In contrast to the first two experiments, these subjects preferred the 60-s fixed-interval schedule
(formerly the no-competition option), indicating that the instructions themselves were responsible for
the preference to compete. This choice paradigm may be useful to future researchers interested in the
effects of other independent variables (e.g., drugs, social context, instructions) on competitive behavior.
Key words: competition, choice, reinforcer probability, reinforcer magnitude, social context, instructions, button press, humans
Operant researchers have studied competitive behavior in the laboratory by using either a single reinforcement schedule (Buskist, Barry, Morgan, & Rossi, 1984; Buskist & Morgan, 1987; Schmitt, 1987) or a concurrent reinforcement schedule in which subjects choose between competing and another alternative (Hake, Olvera, & Bell, 1975; Olvera & Hake, 1976; Schmitt, 1976, 1987). In the single-schedule case (Buskist et al., 1984; Buskist & Morgan, 1987), a pair of subjects compete for a single reinforcer in the absence of other alternatives. Usually a fixed-interval (FI) or fixed-ratio (FR) schedule is in effect, and the 1st subject to complete the schedule's requirement receives the reinforcer; the other subject does not receive a reinforcer. In the concurrent-schedule case, researchers have studied subjects' preference to compete for reinforcers when alternatives are concurrently available. Three choice paradigms have been studied: (a) competing or working alone (Schmitt, 1987); (b) competing or cooperating (Schmitt, 1976, 1987); and (c) competing or sharing (Olvera & Hake, 1976; Hake, Olvera, & Bell, 1975). Both the single- and concurrent-schedule approaches have proven to be useful in the determination of variables that control competing behavior and choices to compete.
Several generalizations can be made about the variables that have been found to control competitive responding in studies using these paradigms. In general, subjects respond at high rates on all competition schedules, including FI (Buskist et al., 1984), variable-interval (VI) (Schmitt, 1987), and FR schedules (Olvera & Hake, 1976; Hake, Olvera, & Bell, 1975), with responses on the FI schedules distributed in a break-and-run pattern (unlike conventional human FI schedule performance; see Buskist et al., 1984; Weiner, 1964, 1969). A number of other variables affect competitive responding and subjects' preference to compete when given other response alternatives. Elaborated
This research was supported by a grant from the National Institute on Drug Abuse (DA 05154). Donald M. Dougherty was supported by a postdoctoral fellowship (DA 07247) from the National Institute on Drug Abuse. Reprint requests should be directed to Don R. Cherek at the Department of Psychiatry and Behavioral Sciences, University of Texas, Houston Health Science Center, 1300 Moursund Street, Houston, Texas 77030.
CHOICES TO COMPETE
role instructions (or social context) played in
subjects' preference for the competition option.
It became apparent during our initial experiments that probability and reinforcer magnitude could not adequately account for subjects' strong preference for competing. Previous
research has shown that providing either a
social or nonsocial context through instructions
not only can influence responding but also can
modulate the effects of drugs on responding
(Cherek, Spiga, Roache, & Cowan, 1991;
Cherek, Steinberg, Kelly, Robinson, & Spiga,
1990; Kelly & Cherek, 1993).
In the first two experiments, a discrete-choice
procedure was used in which subjects could
choose to earn monetary reinforcers either by
competing with a fictitious opponent or by not
competing. Buskist et al.'s (1984) competitive
FI schedule procedure was modified to include
another response option, a noncompetitive FI
schedule. On either schedule, a reinforcer was produced by the subject's first response after the interval had elapsed and the reinforcer had become available. Under no-competition
conditions, reinforcers were always available
after every interval had elapsed; under competition conditions, reinforcers were available
after an interval had elapsed according to a
predetermined probability of delivery. In Experiment 1, interest was in how subjects would
distribute their choices between competing and
not competing when given the opportunity to
compete under three reinforcer-probability
(.25, .50, and .75) conditions. In Experiment
2, interest was in whether or not subjects'
choices to compete were sensitive to gradually
changing reinforcer probabilities that occurred
within a single session at different reinforcer
magnitudes (similar to the procedure used by
Buskist & Morgan, 1987, Experiment 3). And
in Experiment 3, a nonsocial version of our
concurrent-choice schedule was used to see how
subjects would distribute their choices between
the two alternative reinforcement schedules (FI
60 s and probability FI 30 s) without the stimuli and instructions that provided a social context in the previous experiments.
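The choice contingencies just described can be sketched as a small simulation. This is our own illustration of the contingencies, not the authors' software; the function names, the resolution of the fictitious opponent by a random draw, and the number of intervals per entry are our assumptions:

```python
import random

def competition_entry(p_win, magnitude, intervals=1):
    """One entry into the competition option: an FI 30-s schedule on which,
    after each interval elapses, the reinforcer is available only with a
    predetermined probability p_win (the chance of 'winning')."""
    earned = 0.0
    for _ in range(intervals):
        if random.random() < p_win:  # outcome against the fictitious opponent
            earned += magnitude
    return earned

def no_competition_entry(magnitude, intervals=1):
    """One entry into the no-competition option: an FI 60-s schedule on which
    a reinforcer is always available after each interval elapses."""
    return magnitude * intervals

# Expected earnings per competition entry approach p_win * magnitude * intervals.
random.seed(1)
n = 100_000
mean = sum(competition_entry(0.50, 0.10, intervals=2) for _ in range(n)) / n
print(round(mean, 2))  # ≈ 0.1
```

The simulation makes the trade-off concrete: the no-competition option pays with certainty, while the competition option pays the same magnitude only on a proportion of intervals.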
EXPERIMENT 1
The main purpose of Experiment 1 was to
determine how subjects would distribute their
choices between the competition and no-competition alternatives at different reinforcer
probabilities (i.e., the probability of "winning"
Subjects
Three males, between 19 and 40 years old,
participated in the experiment. Two subjects
(S-815 and S-816) reported that they had not
participated in research previously, and the
other (S-407) reported previous participation
in a tobacco-cigarette smoking study in our
laboratory several years earlier. These subjects' educational levels ranged between 10 and
12 years.
Apparatus
During experimental sessions, subjects sat
in a sound-attenuating chamber (1.32 m by
1.62 m by 2.23 m) that contained a response
panel and a computer monitor. The response
panel was a metal box (43.2 cm by 26.0 cm
by 10.2 cm) containing three push buttons labeled A, B, and C. The panel's wire lead was
of sufficient length to allow the subject to move
the panel onto his lap or to place it on a shelf
(28.0 cm wide) that extended across the full
length of the wall (83.5 cm above the chamber's floor) in front of the subject's chair. Located just behind this shelf, at the subject's eye
level, was an Apple monochrome monitor.
Also located in the chamber was a ventilation
fan (its noise served to mask extraneous sounds)
and a ceiling light. Experimental events were
controlled and responses were monitored by
an Apple IIGS computer equipped with an
Table 1
Parameter values of the competition/no-competition schedules used in each of the three experiments.

                                      Competition component              No-competition component
                        Probability    Reinforcer   Average per          Reinforcer   Total per
Experiment              conditions     magnitudes   component entry      magnitudes   component entry
                                       ($)          ($)                  ($)          ($)
1                       .25            0.10         0.10                 0.10         0.20
                        .50            0.10         0.20                 0.10         0.20
                        .75            0.10         0.30                 0.10         0.20
2  Baseline sessions    probability    0.05         0.14                 0.05         0.10
                        held constant  0.10         0.28                 0.10         0.20
                        at .70         0.20         0.56                 0.20         0.40
   Probe sessions       probability    0.05                              0.05         0.10
                        decreased      0.10                              0.10         0.20
                        from .70       0.20                              0.20         0.40
                        to .10
3                       .25            0.10         0.10                 0.10         0.20
                        .50            0.10         0.20                 0.10         0.20
                        .75            0.10         0.30                 0.10         0.20

Sessions of exposure for each subject and condition:

Experiment    Subject    Condition    Sessions of exposure
1             S-407      .25          23
                         .50          15
                         .75          20
              S-815      .25          17
                         .50          20
                         .75          15
              S-816      .25          16
                         .50          15
                         .75          15
2             S-812      $0.10        33
                         $0.20        30
                         $0.05        31
              S-820      $0.10        30
                         $0.20        35
                         $0.05        30
              S-847      $0.10        31
                         $0.20        30
                         $0.05        34
3             S-885      .25          15
                         .50          15
                         .75          18
              S-898      .25          15
                         .50          18
                         .75          17
              S-909      .25          16
                         .50          15
                         .75          19
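The competition column's "average per component entry" values are consistent with simple expected-value arithmetic, assuming four FI 30-s intervals per competition entry and two FI 60-s intervals per no-competition entry (an inference from the tabled figures; the interval counts are our assumption, not restated here):

```python
def competition_avg_per_entry(p, magnitude, intervals=4):
    # Expected earnings per competition entry: each of the assumed four
    # FI 30-s intervals pays `magnitude` with probability p.
    return round(p * magnitude * intervals, 2)

def no_competition_total_per_entry(magnitude, intervals=2):
    # Earnings per no-competition entry: each of the assumed two
    # FI 60-s intervals always pays `magnitude`.
    return round(magnitude * intervals, 2)

# Experiment 1 ($0.10 reinforcers at p = .25, .50, .75):
print([competition_avg_per_entry(p, 0.10) for p in (0.25, 0.50, 0.75)])  # [0.1, 0.2, 0.3]
# Experiment 2 baseline (p held at .70, magnitudes $0.05, $0.10, $0.20):
print([competition_avg_per_entry(0.70, m) for m in (0.05, 0.10, 0.20)])  # [0.14, 0.28, 0.56]
```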
EXPERIMENT 2
In Experiment 1 choices to compete were
affected by different reinforcer probabilities
that were held constant across a minimum of
15 hour-long sessions. In Experiment 2 we
were interested in determining whether choices
to compete and rates of responding would be
sensitive to reinforcer-probability changes occurring within a single session. To do this, we
modified our competition/no-competition
schedule to include moment-to-moment control over the reinforcer probability in the competition option. The subject's behavior was first
stabilized in a constant reinforcer-probability
condition in which most of the reinforcers were
earned in the competition option, and then a
"probe session" was introduced in which the
probability began at the baseline level and was
then gradually lowered throughout the session.
Of interest were two aspects of sensitivity to
the gradual changes in reinforcement probability: the point at which subjects switched to
the no-competition option and the rates of responding during the probe sessions. This procedure was conducted twice at three different
reinforcer magnitudes. We manipulated reinforcer magnitude because we have previously observed this variable to produce systematic increases in rates of responding and changes
in choices between progressive-ratio and fixed-time schedules (Cherek & Dougherty, in press). Specifically, we were interested in whether subjects would choose to spend more time in the competition option as the reinforcer magnitudes became larger.
[Figure 1 appears here: bar graphs for subjects S-407, S-815, and S-816 at reinforcement probabilities .25, .50, and .75.]
Fig. 1. The mean percentage of choices (between the competition and no-competition options) for each subject is shown on the left. Each bar in these graphs represents the mean percentage of the subject's total choices to enter the competition component; the error bars represent the standard error of the mean. In the right panel, each subject's rates of responding in both the competition and no-competition components are shown for each of these same reinforcer-probability conditions. Data are taken from the last three sessions of exposure.
METHOD
Subjects and Apparatus
Three experimentally naive males, between
19 and 30 years old, participated in this experiment. All were recruited and treated in a
manner identical to that described in Experiment 1. The apparatus used was also identical
to that described previously.
Procedure
All subjects began in a baseline condition in
which the reinforcer probability in the competition component was fixed at .70. This
probability level was selected during a preliminary pilot study for two reasons: (a) This
value reliably produced high levels of competitive responding, and (b) this value allowed
a sufficient range of lower probability for comparisons within a probe session (see below).
After a minimum of 15 sessions, and after
meeting the same stability requirements as in
Experiment 1, a probe session was introduced.
In a probe session, the probability began at
.70 and was decreased by .01 after every minute of the 60-min session (from .70 to .10 in
a session). After exposure to the probe session,
the subject's behavior was again stabilized on
the baseline schedule and another probe session was introduced. This procedure was repeated until subjects experienced two probe
sessions at each of three different reinforcer
magnitudes: $0.05, $0.10, and $0.20. All subjects were exposed to the three reinforcer magnitudes in the following order (twice): $0.10,
$0.20, and $0.05. This order of exposure was
used because initial use of $0.05 might not
maintain participation. The no-competition
schedule was the same as in the previous experiment (two FI 60-s schedules); at any given
time, the reinforcer magnitudes present in this
component were equal to the magnitude of the
reinforcers available in the competing component. A summary of this experiment's parameter manipulations appears in Table 1.
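The probe-session schedule described above amounts to a simple linear ramp. The sketch below (our own paraphrase of the procedure, not the authors' code) returns the reinforcement probability in effect after a given number of elapsed minutes:

```python
def probe_probability(minutes_elapsed, start=0.70, step=0.01):
    """Competition-component reinforcement probability in a probe session:
    begins at .70 and is lowered by .01 after every minute of the
    60-min session, reaching .10 by the session's end."""
    return round(start - step * minutes_elapsed, 2)

print(probe_probability(0))   # 0.7  (session start, the baseline value)
print(probe_probability(30))  # 0.4  (halfway through the session)
print(probe_probability(60))  # 0.1  (session end)
```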
RESULTS
The 3 subjects met the stability requirement
at or shortly after the minimum 15 sessions of
exposure both times the three reinforcer magnitudes were used. The total number of sessions at each reinforcer magnitude ranged from
30 to 35 sessions. The number of sessions of
[Figure 2 appears here: bar graphs for subjects S-812, S-820, and S-847 at reinforcer magnitudes of 5, 10, and 20 cents, showing baseline and the two probe sessions.]
Fig. 2. The percentages of time each subject spent in the competition option and their rates of responding during
baseline and the two probe sessions in Experiment 2. The reinforcement probability in the competition component
remained at .70 during baseline conditions. In probe sessions the reinforcement probability was gradually lowered
from .70 to .10 (.01 per minute). In the left panel is the percentage of the session's time each subject spent in the
competition component during baseline and probe sessions, and in the right panel are the rates of responding in the
competition component for these same sessions. Four bars appear at each reinforcer magnitude: White bars represent
the mean percentage of time spent in the competing component during baseline, and the solid gray and black bars
represent the percentage of time spent in the competition component during two probe sessions (bars represent one
standard error of the mean).
EXPERIMENT 3
In Experiment 3, a nonsocial version of our discrete-choice schedule was used to see how subjects would distribute their choices between the two alternative reinforcement schedules without the social context present in the previous experiments. The stimuli and instructions referring to other subjects and to competition were omitted. This study examined whether the differences between the two schedules used in the competition/no-competition components may have been responsible for subjects' preference for the component labeled "competing." Subjects may simply prefer the less predictable outcomes in the competition component over the more predictable outcomes in the no-competition component. In addition, the difference between the durations of the two schedules may have contributed to the observed preference for competition. In the competition component, a stimulus change occurred after every 30 s; in the no-competition component, a stimulus change occurred after every 60 s. To assess possible preference for either of the schedules, we replicated Experiment 1 using two reinforcement schedules and did not provide instructions relating to the social context. The reinforcement probability in the FI 30-s schedule (formerly the competition component) was either .25, .50, or .75. In this experiment, the most important probability was .50, because at this level both options were equally profitable. At the .50 probability level in the previous competition experiments, subjects preferred the competition option. In the absence of competition-related instructions or stimuli, how would subjects distribute their choices between the two options?
METHOD
Subjects received the following instructions:

    You can choose to earn points, later exchangeable for money, in one of two ways: (a) you can earn points by choosing Option C; or (b) you can earn points by choosing Option B. Every few minutes you will be given the choice to enter either option. Your earnings will be displayed on a counter appearing in the middle of your monitor, and the large letters appearing on your screen correspond with the buttons that are effective.

RESULTS
The 3 subjects met the stability requirement at or shortly after the minimum 15 sessions of
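The statement that the two options were equally profitable at the .50 level can be verified with a quick expected-rate comparison (a back-of-the-envelope check, not an analysis from the original report):

```python
def expected_earnings_per_min(p, interval_s, magnitude):
    # One reinforcement opportunity per interval, paid with probability p.
    return p * magnitude * (60 / interval_s)

fi30 = expected_earnings_per_min(0.50, 30, 0.10)  # probability FI 30-s option
fi60 = expected_earnings_per_min(1.00, 60, 0.10)  # certain FI 60-s option
print(fi30 == fi60)  # True: at p = .50 the options pay equally per minute
```

At .25 the FI 30-s option pays less per minute than the FI 60-s option, and at .75 it pays more, which is why .50 is the indifference point.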
[Figure 3 appears here: bar graphs for subjects S-885, S-898, and S-909, showing the FI 30-s and FI 60-s components at reinforcement probabilities .25, .50, and .75.]
Fig. 3. The left panel shows the mean percentage of total choices for Option C, in which reinforcers were sometimes available at the completion of an FI 30-s schedule and delivered according to a predetermined probability: either .25, .50, or .75. In the alternative, Option B, reinforcers were always available at the completion of an FI 60-s schedule.
The error bars represent the standard error of the mean. In the right panel, each subject's rate of responding in both
Option C and Option B is shown for each of these same reinforcer-probability conditions.
DISCUSSION
Using a concurrent-choice procedure with
competition and no-competition response options, we investigated how subjects' choices to
engage in either option were affected by manipulations of reinforcer magnitude and probability (i.e., the probability of "winning" while
competing). Reinforcer probability systematically affected subjects' percentage of choices
to compete. In Experiment 1, three reinforcer
probabilities-.25, .50, and .75-were manipulated in the competition component and were
most of our subjects not only chose to compete
most of the time but also continued to compete
for many sessions.
Some of these observed differences may be due to procedural differences. In Olvera and Hake's (1976) study, session-to-session fluctuations in the number of trials won
ied 3 additional subjects on a comparable nonsocial schedule that matched our competition/
no-competition schedule in terms of the reinforcement contingencies but differed in terms
of the instructions and stimuli provided. On
this schedule, all discriminative stimuli and
instructions concerning competing and not
competing were omitted. The only stimuli that
appeared on the monitor were large letters,
and the instructions were limited to telling the
subject that he could earn money in either of
two alternatives. Under conditions in which
the alternatives were equally profitable, subjects preferred the former no-competition component.
The dramatically different results obtained
from the social and nonsocial versions of the
competition/no-competition schedule suggest
a powerful effect of the instructions relating
to the social context on choice and response
characteristics. In the social version of this procedure, manipulations of reinforcer probability and reinforcer magnitude alone cannot fully
account for the results observed. Using this
version of the procedure, subjects preferred the
competition option even when it was disadvantageous monetarily in the long run: High
levels of competing were maintained at low
reinforcement probabilities. Alternatively, in
the nonsocial version of this schedule, subjects
preferred the schedule identical to the former no-competition schedule. This difference illustrates the impact of the social context (or instructions) provided by the experimenters.
As an aside, it is worth noting that the results obtained from Experiments 1 and 2, in
which social instructions were provided, have
much in common with the research examining
preference between variable and fixed schedules of reinforcement. Studies in this area have
demonstrated that organisms show preference
for variable schedules over fixed schedules of
reinforcement, given equal average rates of reinforcement (e.g., Herrnstein, 1964; Killeen,
1968; Trevett, Davison, & Williams, 1972).
The results from Experiment 3 were not consistent with this generalization.
The social context provided by the experimenters has been found to be crucially important in determining patterns of responding
and even in modulating the effects of drugs on
behavior in previous experiments. For example, using an operant paradigm to measure
ing against another individual, but in our experiments the subject was competing against
a fictitious subject.
Besides investigating the effects of reinforcement magnitude and probability on choices to
compete in the first two experiments, we were
also interested in the effects of these two variables on rates of responding. We found that
response rates within subjects were always
higher in the competition component than in
the no-competition component. Between subjects, however, response rates varied considerably. Some subjects responded at high rates,
emitting more than 300 responses per minute,
and some subjects responded at lower rates,
emitting fewer than 25 responses per minute.
The effects of the various manipulations of
reinforcer probability and magnitude on rates
of responding can be summarized as follows:
(a) The reinforcer-probability manipulations
in Experiment 1 had no consistent effect on
rates of responding, and (b) the reinforcer-magnitude manipulations in Experiment 2 had
no consistent effect on rates of responding. In
Experiment 3, using the nonsocial version of
the competition/no-competition schedule, no
orderly relationships were found between these
variables and rates of responding.
REFERENCES
Appel, J. B. (1963). Aversive aspects of a schedule of
positive reinforcement. Journal of the Experimental
Analysis of Behavior, 6, 423-428.
Azrin, N. H. (1961). Time-out from positive reinforcement. Science, 133, 382-383.
Buskist, W. F., Barry, A., Morgan, D., & Rossi, M.
(1984). Competitive fixed interval performance in humans: Role of "orienting" instructions. The Psychological Record, 34, 241-257.
Buskist, W., & Morgan, D. (1987). Competitive fixed-interval performance in humans. Journal of the Experimental Analysis of Behavior, 47, 145-158.
Cherek, D. R., & Dougherty, D. M. (in press). Motivational effects of marijuana: Humans' choices to work
or not work for reinforcers. Proceedings of the Committee
on Problems of Drug Dependence, NIDA Research Monograph-Problems of Drug Dependence 1993. Washington, DC: U.S. Government Printing Office.
Cherek, D. R., Spiga, R., Roache, J. D., & Cowan, K.
A. (1991). Effects of triazolam on human aggressive,
escape and point-maintained responding. Pharmacology
Biochemistry and Behavior, 40, 835-839.
Cherek, D. R., Steinberg, J. L., Kelly, T. H., Robinson,
D. E., & Spiga, R. (1990). Effects of acute diazepam
and d-amphetamine administration on aggressive and
escape responding of normal male subjects. Psychopharmacology, 100, 173-181.
Galizio, M. (1979). Contingency-shaped and rule-governed behavior: Instructional control of human loss
avoidance. Journal of the Experimental Analysis of Behavior, 31, 450-459.
Hake, D. F., Olvera, D., & Bell, J. C. (1975). Switching
from competition to sharing or cooperation at large
response requirements: Competition requires more responding. Journal of the Experimental Analysis of Behavior, 24, 343-354.
Hake, D., Vukelich, R., & Olvera, D. (1975). The measurement of sharing and cooperation as equity effects and some relationships between them. Journal of the Experimental Analysis of Behavior, 23, 63-79.
Harzem, P., Lowe, C. F., & Bagshaw, M. (1978). Verbal control in human operant behavior. The Psychological Record, 28, 405-423.
Herrnstein, R. J. (1964). Aperiodicity as a factor in
choice. Journal of the Experimental Analysis of Behavior,
7, 179-184.
Kaufman, A., Baron, A., & Kopp, R. E. (1966). Some
effects of instruction on human operant behavior. Psychonomic Monograph Supplements, 1, 243-250.
Kelly, T. H., & Cherek, D. R. (1993). The effects of
alcohol on free-operant aggressive behavior. Journal of
Studies on Alcohol, 11, 40-52.
Killeen, P. (1968). On the measurement of reinforcement frequency in the study of preference. Journal of
the Experimental Analysis of Behavior, 11, 263-269.
Lowe, C. F. (1979). Determinants of human operant
behaviour. In M. D. Zeiler & P. Harzem (Eds.), Advances in analysis of behaviour: Vol. 1. Reinforcement and
the organization of behaviour (pp. 159-192). Chichester,
England: Wiley.
Matthews, B. A., Shimoff, E., Catania, A. C., & Sagvolden, T. (1977). Uninstructed human responding: