
Quantitative or Qualitative, That is the Question

As the supply of funds from foundations, donors, and government entities decreases, the demand for
accountability increases. This heightened accountability has led to more stringent evaluation requirements:
non-profit agencies must demonstrate that their funded programs are actually benefiting the populations they
serve. The non-profit should also be concerned with whether the program is effective and serves the agency's
mission. An evaluation can range from the distribution of surveys to comprehensive research projects that
measure long-term outcomes or the effectiveness of a program. The two most common types of evaluation are
qualitative and quantitative.
Qualitative Methods
These methods assess a program's effectiveness and provide feedback for improving the program. They
include:
Narrative Interviewing – Interviewing a group of people using open-ended questions. This method tells a story
about a problem or need.
Example: Interviewing Native Americans regarding barriers to accessing social services.
Pros: The “richness” of the data. Ability to collect anecdotal stories. Allows for learning outside of
established paradigms or theories.
Cons: Time-consuming to gather and analyze. Researchers can impose their own biases and opinions on the
data collected.
Participant Observation – Evaluators listen, observe, and record data but also participate in the events and
activities as they happen.
Example: Travel with a social worker to observe the impact of AA meetings on alcoholics. Document
observations and experiences by participating in the meeting.
Pros: Ability to collect multiple sources of data and verify their credibility throughout the process.
Participating in the agency's program lets the evaluator experience the program from a client's perspective.
Cons: Time-consuming to gather and analyze. The researcher runs the risk of becoming so immersed in the
activities or population that objectivity is lost.

Secondary Content Data – Use of existing text data to answer newly developed evaluation questions.
Example: Using academic records to compare truancy rates between suburban and inner-city children.
Pros: Information already exists.
Cons: Data is recorded for other purposes, so it will be limited in scope. It may not contain the level of detail
required to answer the evaluation questions.
Qualitative research is often criticized. Opponents claim that it is too subjective; that it is difficult to replicate
because the procedures are too vague; that broad conclusions are drawn from too little evidence; and that it is
difficult to tell whether the conclusions are truly supported by the data. Qualitative reports are also generally
lengthier, and the data is more difficult to summarize. If you choose to include this type of evaluation in your
grant, it is best to use it only within the initial years of the project or in conjunction with quantitative methods.

Quantitative Evaluations
These assess a program's outcomes and effectiveness, as well as its impact on the population it serves.
Quantitative methods can compare multiple variables and simultaneously answer specific questions related to
outcomes and performance. They include:
Survey Questionnaires – Collect data from a population in order to describe them as a group and to obtain
their perceptions of a program or need. The most common are standardized survey questionnaires that provide
the participant with a fixed set of answer choices (e.g., satisfied, not satisfied). Usually administered through
the mail, over the telephone, or in person.
Example: A satisfaction survey provided to seniors who participate in a non-profit’s transportation program.
Pros: Relatively inexpensive. Easily reach a large number of people. Provide specific data. Efficient
collection of data. Responses can be entered into a computer for analysis; a minimal tallying sketch appears
after this list.
Cons: Low response rate. Shallow data.
Structured Observation – A trained observer records the interactions of others in a specific place over an
agreed-upon time using specific procedures and measurements.
Example: Observing employees in state assistance divisions interact with minorities to determine if the
interactions are a barrier to receiving state services.
Pros: Provides precise and reliable data for complex interactions. Little contamination of the data by the
observer. Provides a factual account.
Cons: Time and resources are needed to train the observers and collect the data. The data is limited in scope.
Secondary Data – Use of existing data to answer new research questions.
Example: The use of census data to determine if a state program has influenced the housing rates of
minorities.
Pros: Unobtrusive. Inexpensive. The data is easily accessed and analyzed.
Cons: Research questions are limited to what is available in the data sets.
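
To illustrate the kind of computer analysis of survey responses mentioned above, the following is a minimal
sketch in Python, assuming a hypothetical file named responses.csv with a "satisfaction" column that holds
standardized answers such as "satisfied" or "not satisfied". It simply tallies the answers into the kind of
summary a funder could read.

# Minimal sketch: tally standardized survey answers into a simple summary.
# The file name "responses.csv" and the column name "satisfaction" are
# hypothetical; any column of standardized answers would work the same way.
import csv
from collections import Counter

def summarize_satisfaction(path: str) -> None:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["satisfaction"].strip().lower()] += 1
    total = sum(counts.values())
    if not total:
        print("No responses found.")
        return
    for answer, count in counts.most_common():
        print(f"{answer}: {count} of {total} ({count / total:.0%})")

summarize_satisfaction("responses.csv")

Even a summary this small gives the agency concrete figures to report rather than general impressions.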

Quantitative methods come in a variety of formats: one-time surveys, longitudinal studies, comparisons, and
statistical research, to name a few. The ability to show a funder that the agency gathered concrete data to
support its claim that a program positively affects a community or population is a strong argument for
ongoing funding.

Challenges in Program Evaluation


Evaluation has its challenges beyond writing the proposal. Implementing a strong evaluation plan means
anticipating the challenges that lie ahead. The non-profit needs to consider the attitudes, behaviors, culture,
and capabilities of the populations the program serves when building an evaluation plan. You do not want to
propose handing out written surveys to an illiterate population or using overly obtrusive methods in a quiet,
reserved neighborhood. It is not difficult to create a sense of apathy or resentment toward the evaluation,
which can skew the data and make it difficult to evaluate the program in the future.

An organization also needs to consider its capacity to implement the evaluation plan. If current staffing cannot
manage it because of workload issues, consider writing an evaluator into the grant rather than compromising
the integrity of the evaluation. Alternatively, explore other possibilities such as volunteers, interns, temporary
workers, or collaboration with another agency. An evaluation plan does not need to be grandiose to measure
the effectiveness of a program or its effects on the community. If the agency has limited capacity to evaluate
its program, review the grant requirements and scale the evaluation down to the main attributes of the
program: measure client satisfaction, report usage figures, ask staff to provide anecdotes about the program's
impact, and keep a record of staff activities in implementing the program. An evaluation plan does not have to
be a research project; it just needs to show funders that their money is positively affecting the community.

2010 Copyrighted material. Please seek permission before reprinting.
