
Why use experiments for program evaluation?

David W. Nickerson
david.w.nickerson@gmail.com

Jared P. Barton
jared_p_barton@yahoo.com

Broad Overview and Explanation


One of the challenges of managing fieldwork is evaluating the success of the
program. Program implementers want to replicate successful programs and to learn
from (and avoid repeating) unsuccessful ones. It is also valuable to know how much
achieving a goal costs, so organizations can plan cost-effectively in the future.
Experiments can help answer both the question of success and the question of
cost-effectiveness, but to do so the research protocols must be designed and
implemented before the fieldwork begins.

When nonprofit organizations evaluate their operations, they often rely upon
observational data passively collected during the course and at the end of their programs.
For instance, suppose an organization selected precincts in which to mobilize voters for
an upcoming election. The group chose precincts on the basis of the density of the Asian-
American population and the scarcity of secure apartment buildings, which makes door-to-door
canvassing easier.
A typical observational study would take careful notes on the activities in the selected
precincts and then compare the rates of turnout of these particular precincts to the
neighboring precincts. Suppose the group discovered that in its selected precincts turnout
was 5 percentage points higher than in neighboring precincts -- even after "controlling
for" age and past turnout. Can the group safely conclude that its program was a success?

Not necessarily. There may be other reasons that these precincts had higher
turnout that are completely unrelated to the mobilization efforts of the group. For
instance, perhaps the requirement that the precincts have few secure apartment buildings
pushed the campaign into slightly wealthier neighborhoods with more single-family
housing. Since income is positively correlated with voter turnout, it is possible that the
type of neighborhood selected is responsible for the 5 percentage point difference. Thus,
the difference in turnout could be the result of unobserved factors and not the result of the
mobilization campaign. The potential causes could be even more subtle and unseen. For
instance, suppose the Asian-American neighborhoods targeted by the campaign are
gentrifying. The higher voting rate might be a result of the changing demographics of the
community rather than the mobilization campaign. In short, it is impossible to know from
this comparison alone whether the boost in turnout was the result of the mobilization campaign.

Moreover, the campaign managers do not know how much of the success can be
attributed to each technology. Was it the door-to-door knocking? Posters? Telephone
calls? How much did each technology contribute to the success of the campaign? Were
the early efforts conducted in September as effective as the efforts in the week prior to the
election? Was it even worth working in the field a month prior to the election? These are
questions that an observational study is unable to answer.

In contrast, a properly designed experiment can provide answers to each of these
questions. At its most basic, an experiment manipulates the application of campaign
techniques using the following steps:
1) The targets are identified and listed -- these are the people the campaign
would like to contact with the technique;
2) The people listed as targets are randomly divided into a treatment group, where
people are contacted using the technique, and a control group that is not contacted
by the campaign (note: the division need NOT be 50-50);
3) The campaign then goes about its business contacting the treatment group (and
leaving the control group alone);
4) The outcome variable (e.g., registration or turnout or support for a ballot
proposition) is then measured.
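
To make the random assignment in step 2 concrete, here is a minimal sketch in Python.
The file name targets.csv and the column name voter_id are hypothetical; in practice the
target list would come from your voter file or VAN export.

    import csv
    import random

    random.seed(20101102)  # fix the seed so the assignment is reproducible

    # Read the target list (hypothetical file with one voter ID per row).
    with open("targets.csv") as f:
        targets = [row["voter_id"] for row in csv.DictReader(f)]

    # Shuffle and split; the division need not be 50-50 (here 60% are treated).
    random.shuffle(targets)
    n_treat = int(0.60 * len(targets))
    assignment = [(vid, "treatment" if i < n_treat else "control")
                  for i, vid in enumerate(targets)]

    # Write the assignment back out for the campaign to use.
    with open("assignment.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["voter_id", "group"])
        writer.writerows(assignment)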

Because the assignment to treatment and control groups is random, the two groups
should be composed identically: the same age, education, vote history, and income, on
average. Due to randomization, the treatment and control groups should also be identical
with regard to things the campaign cannot observe, such as interest in the election,
personal connections to the campaign, or the convenience of driving to the polls on
Election Day. The only difference between the two groups should be who receives the
treatment. We say "should be" because the balance produced by random assignment
depends on the size of the subject pool; the groups must be sufficiently large before we
can rely on this sameness. Thus, to measure the effectiveness of the campaign, the
manager need only compare the rate of turnout in the treatment group to that in the
control group. If the treatment group voted at 45% and the control group at 37%, then the
campaign boosted turnout in the treatment group by 8 percentage points. Moreover, we can
then calculate the expenses of the campaign and determine how much it cost to generate
each "new" vote. For instance, say the campaign that boosted turnout by 8 percentage
points spent $5 on each attempt to contact a member of the treatment group. Dividing the
$5 per attempt by the 0.08 additional votes produced per attempt, each new vote cost
$62.50. The manager can now decide whether to employ the same strategy in the next
election knowing roughly how cost-effective it was in the past election.
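
The arithmetic behind that figure is easy to check. A small sketch, using the same
illustrative numbers from the paragraph above (45% treatment turnout, 37% control
turnout, $5 per contact attempt), makes the division explicit:

    treatment_turnout = 0.45   # turnout in the treatment group
    control_turnout = 0.37     # turnout in the control group
    cost_per_attempt = 5.00    # dollars spent per treatment-group member

    effect = treatment_turnout - control_turnout   # 0.08 extra votes per attempt
    cost_per_vote = cost_per_attempt / effect      # 5 / 0.08 = 62.50

    print(f"Effect: {100 * effect:.1f} percentage points")
    print(f"Cost per new vote: ${cost_per_vote:.2f}")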

Requirements
The requirements to run an experiment are not onerous:
1) An identifiable subject population. For instance, voters flagged as Asian-American
in the VAN, or the streets in neighborhoods with a high density of Asian-Americans,
are clearly defined. People at a street fair or festival are not a well-defined group.
2) A treatment that can be randomized. Mail, phone, and doors are very easy to
manipulate. A national TV campaign or web broadcasts would be difficult to
study.
3) The ability to measure the outcome variable of interest. Registration and turnout
are public records that can be gathered. Vote choice would probably require a
post-election survey, as would attitudes about the electoral process.
4) The willingness to extract a control group and abide by the agreed upon protocol.

Sadly, it is this last requirement that trips up most organizations and causes the loss of
many opportunities to learn from past fieldwork. We recognize that your desire to learn
from experiments needs to be balanced against the goals of your program. We will work with
you through the series of trade-offs and compromises that will inevitably come up.
Experiments that are difficult to implement (politically or logistically) are often
executed improperly, and the learning opportunity is lost for both the organization and
the evaluators. Thus, our overriding goal when designing experiments to test your field
operations is to tailor the study to fit your organization as closely as possible.

What does this mean in terms of how you would go about conducting an
experiment with us? The process can be broken into a series of steps (note: the steps
below assume that you are using the VAN and conducting phone or door-to-door
canvassing):
1) Tell us what you plan to do, how your organization is structured, the logistical
details behind your field operation, and any political constraints imposed by
coalition members, donors, or past commitments (e.g., we have to hit 1000 doors in
Precinct 5). This information will help us tailor an experiment to your needs. You
should also tell us what questions you would like answered in this election cycle
and what concerns you have about the experimental process, so we can address
these issues up front.
2) We will work with you to come up with an evaluation plan that imposes as few
burdens on you as possible.
3) You email a list of the places or people that you are targeting and will be subjects
in the experiment. If you are using the VAN, you simply request an export of
your target list.
4) We randomly assign the experimental subjects to the treatment and control groups
and return the assignments to you. If you are using the VAN, we will return a file
with the VANID, a flag for the treatment group, and perhaps a message
assignment as well.
5) Upload the file onto the VAN.
6) When pulling your walk or phone lists, in addition to the targeting (e.g., precinct
5, Asian, under 50), select only the treatment individuals (you just check a box).
7) If there is a messaging component to the experiment, make sure the walk or call
sheet you formatted includes the randomly assigned message flag (e.g., “E” for
early voting and “D” for Election Day) in addition to name and address/phone.
8) Print the walk or call sheets.
9) Perform the walking and calling.
10) Be sure to enter the contact information (e.g., not home, left message,
contacted).
11) The day after Election Day, send us: a) any information on how you targeted
neighborhoods; b) any information on what canvassers/callers said; c) copies of
printed materials like flyers, mailers, or pamphlets; d) all the contact information;
e) reports of any problems or colorful anecdotes.
12) Once vote history is released, we will check registration and turnout for both the
treatment and control groups to perform the analysis (a minimal sketch of that
comparison appears after this list).
13) We will write up the results and send you a draft.
14) Send us your edits. Everything except the results is fair game.
15) We will return an official white paper. You can distribute the white paper as you
see fit.
16) You tell us whether you would like to be identified in our use of the experiments
or remain anonymous.
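
As an illustration of the comparison in step 12, here is a minimal sketch of the analysis
once vote history is available. The file name and column names are hypothetical, and a
full analysis would treat statistical uncertainty more carefully, but the core calculation
is just a difference in turnout rates:

    import csv
    from math import sqrt

    # Hypothetical file: one row per subject, with the assigned group
    # ("treatment" or "control") and whether the person voted (1) or not (0).
    voted = {"treatment": [], "control": []}
    with open("assignment_with_turnout.csv") as f:
        for row in csv.DictReader(f):
            voted[row["group"]].append(int(row["voted"]))

    t_rate = sum(voted["treatment"]) / len(voted["treatment"])
    c_rate = sum(voted["control"]) / len(voted["control"])
    effect = t_rate - c_rate

    # Simple standard error for a difference in two proportions.
    se = sqrt(t_rate * (1 - t_rate) / len(voted["treatment"])
              + c_rate * (1 - c_rate) / len(voted["control"]))

    print(f"Treatment turnout: {t_rate:.1%}")
    print(f"Control turnout:   {c_rate:.1%}")
    print(f"Estimated effect:  {effect:.1%} (standard error {se:.1%})")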

The list looks daunting, but each step is straightforward, and many of them are standard
actions for any voter mobilization campaign. The steps for mail can be even easier, since
we can send the names and addresses directly to the mail house on your behalf (or format
them for a mail merge as you require). For projects where the VAN is not an option, such
as voter registration (or for groups without a VAN subscription), we can format your call
and walk sheets for you so that all you have to do is press "print" and hand the sheets
out to your canvassers. We can tailor an experiment to fit your program and organization.

Why are you performing this service?


We are academics who need to publish articles in academic journals for career
advancement. Our consulting services are very expensive, but our services are free to you
if you allow us to use the data for academic publishing. You decide whether your group is
identified or anonymous in any publication. The nature of the review process and the
publishing cycle means that any academic work will likely be published no earlier than
2011. Furthermore, academic articles are not read by many people, so our academic
writing will not be good advertising for your group, but it will secure you top-notch
consulting services for free.
