
Participant-oriented Evaluation Approaches: Stake's Countenance


Emily Howard
Program Evaluation and Policy Analysis

Responsive Evaluation
Grew out of dislike for mechanical and
preordinate evaluation methods in the
late 1960s.
Characteristics include:

1. Depends on inductive reasoning

2. Uses a multiplicity of data

3. Does not follow a standard plan

4. Records multiple rather than
single realities

Fitzpatrick, Sanders, Worthen 2004
Quick Vocabulary Lesson
Antecedent: A condition existing prior to instruction that may relate to
outcomes. (Inputs, resources, etc.) Example: Teacher background.
Transaction: Successive engagements or dynamic encounters constituting
the process of instruction. (Activities, processes, etc.) Example: Behavioral
interactions.
Outcomes: The effects of the instructional experience. (Including
unintentional outcomes.) Example: Teacher performance.
Stake and his Countenance
The two basic acts of evaluation are
description and judgment.
[Countenance matrix graphic]
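As a rough illustration only, the countenance framework can be pictured as a small data structure: rows for antecedents, transactions, and outcomes, and columns for the description side (intents, observations) and the judgment side (standards, judgments). A minimal Python sketch, with all field names and example values hypothetical rather than taken from the case:

```python
# Minimal sketch of Stake's description/judgment matrices as a data structure.
# All names and example values are illustrative, not drawn from the case study.
from dataclasses import dataclass, field

@dataclass
class Cell:
    intents: list[str] = field(default_factory=list)       # description: what was planned
    observations: list[str] = field(default_factory=list)  # description: what actually occurred
    standards: list[str] = field(default_factory=list)     # judgment: criteria applied
    judgments: list[str] = field(default_factory=list)     # judgment: verdicts reached

@dataclass
class CountenanceMatrix:
    antecedents: Cell = field(default_factory=Cell)   # conditions existing prior to instruction
    transactions: Cell = field(default_factory=Cell)  # engagements during instruction
    outcomes: Cell = field(default_factory=Cell)      # effects of the instructional experience

# Hypothetical antecedent row for a course like the one in the case study
matrix = CountenanceMatrix()
matrix.antecedents.intents.append("Teachers arrive with a basic ecology background")
matrix.antecedents.observations.append("Teachers knew basic concepts but not advanced techniques")
```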
What Does it Do?
Stresses the importance of being
responsive to the realities of the program
and the concerns of participants rather
than relying on preconceptions.
The ultimate test of an evaluation's validity is
the extent to which it increases the audience's
understanding of the entity that was evaluated.
Responsive evaluators stay in continuous
communication with stakeholders.
Places less emphasis on formal objectives and
formal data collection.
Affords the evaluator the information
needed to analyze levels of congruence
between intents and observations.
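Continuing the hypothetical sketch above, congruence analysis asks whether what was intended actually occurred in each row of the matrix. The exact-match rule below is only a stand-in for the evaluator's qualitative comparison:

```python
# Sketch of a congruence check: compare intents with observations row by row.
# String matching is a placeholder for the evaluator's qualitative judgment.
def congruence_report(matrix: CountenanceMatrix) -> dict[str, dict[str, list[str]]]:
    report = {}
    for row_name in ("antecedents", "transactions", "outcomes"):
        cell = getattr(matrix, row_name)
        observed = set(cell.observations)
        intended = set(cell.intents)
        report[row_name] = {
            "congruent": [i for i in cell.intents if i in observed],       # intended and seen
            "discrepant": [i for i in cell.intents if i not in observed],  # intended, not seen
            "unintended": [o for o in cell.observations if o not in intended],  # seen, not intended
        }
    return report
```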
Events in Stake's Countenance
[Clock diagram of events]
Advantages
Evaluators look at the needs of
those whom the program serves.
Attempts to reflect the
complexity of the
program as realistically
as possible.

Has great potential for gaining new
insights and theories about the field and
program it evaluates.
Disadvantages
Approach accused of being too
subjective.
May underplay the importance of
data collection instruments and
quantitative evaluation.
Can be cost prohibitive and labor
intensive.
Case Study:
Evaluating an Environmental Education
Professional Development Course
Purpose: Evaluate an environmental education professional development
course using Stake's Countenance Model as the organizational framework.


Case Background
Evaluation of a Chesapeake Watershed
Ecology course.

Course designed to educate teachers about
research and instructional strategies used
to investigate community environmental
issues.

Course included laboratory procedures,
data collection trips, and data analysis.

Evaluation Methodology
Criterion levels were established to judge discrepancies between what was
intended and what was observed to occur (see the sketch after the instrument list below).
Antecedents:
Teacher background
Appropriate curriculum
Resource availability


Transactions:
Component participation
Behavioral interactions
Course choreography




Outcomes:
Improved performance
Teacher attitudes
Intent to use




Data Collection Instruments:
1. Pretest
2. Posttest
3. Teacher opinion survey
4. Expert opinion questionnaire
5. Attendance records
6. Background information
7. Teacher journals
8. Instructor journal
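The pretest and posttest above are the kind of instruments a criterion level can be applied to. The sketch below only illustrates the idea of judging an observed result against an intended criterion; the threshold and scores are invented, not the case's actual criterion levels:

```python
# Hypothetical criterion-level check for one intended outcome.
# The criterion gain and the scores are invented for illustration only.
def meets_criterion(pretest_score: float, posttest_score: float, criterion_gain: float) -> bool:
    """Judge whether the observed gain reaches the intended criterion level."""
    return (posttest_score - pretest_score) >= criterion_gain

# Example usage with made-up numbers
print(meets_criterion(pretest_score=45.0, posttest_score=72.0, criterion_gain=20.0))  # True
```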



Unexpected Outcomes:
Enhanced professional confidence
Not enough time to study and reflect
Administrative barriers to
implementing what they learned
Countenance Matrix
The table shows the outstanding characteristics of the course. It compares
intents to observations and describes the judgment standards and the
judgment of the evaluator.
Evaluation Results & Summary
Benefit of using Stake's
Countenance:

Facilitated in-depth understanding
of the course.

Revealed unanticipated consequences,
as well as the reasons for and
consequences of the observed effects.
Results of Evaluation:

1. Teachers were familiar with basic
concepts but not advanced
techniques.

2. Established the importance of ties
between perceived resource availability,
class participation, and curricular
choices.

3. Linked knowledge gains to the
improved professional confidence
expressed by the teachers.

Quality of the Case Study
Questions:
Would different techniques have yielded different results? Would other
techniques have been more or less helpful?
Does the evaluator do more than facilitate? Does the evaluator make
big-picture observations?
Is the technique more than the matrix, and is an evaluator necessary?
Observations:
Some of the judgments could possibly have been culled from survey
results as well.
The voice of the evaluator was not visible; judgments were largely a
result of participant experience and rating.
The case study did not tackle a complex issue, which makes the
technique hard to judge.
The tool seemed well suited to the case; in education, evaluation
should be participant-oriented.
Questions?
Emily Howard
Participant-oriented Evaluation: Stake's Countenance
