
Evaluation

What is Evaluation?
 Brown (1989) defines evaluation as “the systematic collection and analysis of all relevant information necessary to promote the improvement of a curriculum and assess its effectiveness within the context of the particular institutions involved.”
Approaches to program evaluation

Brown (1989) points out that these approaches can be categorized as:

 Goal-attainment approaches
 Static-characteristic approaches
 Process-oriented approaches
 Decision-facilitation approaches
Goal-attainment approaches
 Also called “product-oriented approaches”
 The focus of the evaluation is on the goals and instructional objectives
 Its purpose is to determine whether these goals and objectives have been achieved
 One of the main advocates was Tyler (1942)
 From his perspective, the purpose of evaluation is to determine the degree to which the program’s goals and objectives are being achieved
Static-characteristic approaches

 Evaluation is also performed to determine the effectiveness of a particular program.
 It is conducted by outside experts who inspect a program by examining various accounting and academic records.
 It also looks at static characteristics, for example the number of library books, the number and types of degrees held by the faculty, the student-teacher ratio, and the number and seating capacity of classrooms.
 These aspects and criteria are taken into account in order for an institution to be certified.
Process-oriented approaches

Goal-free evaluation (Scriven, 1967):
• Limits are not set on studying the expected effects of the program.
• The focus on products is dropped in favor of process-oriented evaluation.

Countenance model (Stake, 1967)

1. Begin with a rationale.
2. Fix on descriptive operations (intents and observations).
3. End with judgement operations (standards and judgements).

Data categories: antecedents, transactions and outcomes.


Decision-facilitation approaches

Evaluators attempt to avoid making judgements. Instead, they gather information that will help the administrators make their own judgements and decisions.
CIPP

• Context: rationale for objectives.
• Input: utilization of resources for achieving objectives.
• Process: periodic feedback to decision makers.
• Product: measurement and interpretation of attainments during and at the end of a program.
Key elements in performing program evaluation (Stufflebeam, 1974)

1. Evaluation is performed in the service of decision making.
2. Evaluation is cyclic.
3. The evaluation process includes delineating, obtaining and providing.
4. The delineating and providing steps in the evaluation process are interface activities requiring collaboration.
CSE model (Center for the Study of Evaluation), Alkin (1969)

Evaluations should provide information for different categories of decisions:

1. Systems assessment
2. Program planning
3. Program implementation
4. Program improvement
5. Program certification
Discrepancy model (Provus, 1971)

1. Program description stage
2. Program installation stage
3. Treatment adjustment stage (process)
4. Goal achievement analysis stage
5. Cost-benefit analysis
THREE DIMENSIONS THAT SHAPE THE POINT OF VIEW ON EVALUATION

 Formative vs summative
 Process vs product
 Quantitative vs qualitative
PURPOSE OF THE INFORMATION (formative vs summative)

Formative evaluation:

 An ongoing process
 To collect and analyze information
 The decisions made result in modifications to the curriculum
PURPOSE OF THE INFORMATION (formative vs summative)

Summative evaluation:

 It occurs at the end of the program
 It looks at success, efficiency and effectiveness
 To make important changes in the program
Formative vs summative

• Both of them can be used in combination
• Formative: to gather information to change, develop and upgrade the program
• Summative: it can be a “pause” to assess success, effectiveness and efficiency
TYPES OF INFORMATION (process vs product)

• Process evaluation: it focuses on the workings of the program
• Product evaluation: it focuses on whether the goals have been achieved

Consider including both.
TYPES OF DATA AND ANALYSES (quantitative vs qualitative)

• Quantitative data:
• Countable bits of information
• Results are in the form of numbers
• E.g.: tests, quizzes, grades, number of students in a class, number of males and females, and so forth (a brief sketch of summarizing such data follows below).
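As a small illustration of how quantitative evaluation data can be reduced to numbers, the following sketch (with invented scores, not taken from the source) computes basic descriptive statistics for a set of hypothetical test results using only the Python standard library.

```python
# Minimal sketch: summarizing hypothetical quantitative evaluation data
# (end-of-course test scores). The numbers below are invented examples.
from statistics import mean, median, stdev

test_scores = [72, 85, 91, 68, 77, 88, 95, 81]  # hypothetical grades

print(f"n       = {len(test_scores)}")
print(f"mean    = {mean(test_scores):.1f}")
print(f"median  = {median(test_scores):.1f}")
print(f"st.dev. = {stdev(test_scores):.1f}")
```

Summaries of this kind (counts, means, spreads) are the usual form in which quantitative results are reported and compared.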
TYPES OF DATA AND ANALYSES (quantitative vs qualitative)

• Qualitative data:
• Holistic information based on observations
• It cannot be reduced to quantities or numbers
• E.g.: journals, minutes from meetings, classroom observations, and so forth.
INTERACTION AMONG
DIMENSIONS
The stance taken in
one dimension will
affect the others

They tend to
interact
DOING PROGRAM
EVALUATION
• Based on the approaches and
dimensions, evaluators decide which
combinations of both will work best
GATHERING EVALUATION DATA

Quantitative evaluation studies

• Importance: the analysis of the data should be carried out in a way that lets patterns emerge
• These patterns will help to make sense of the results and assess the quality of the program
• Subjects: students
• Subjects are divided into two groups (experimental group and control group)
• Experimental: receives the treatment
• Control: no treatment
• The aim is to determine whether the treatment has been effective
• Observations: tally, comparison of rankings, test scores, and so forth (see the group-comparison sketch below)
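A minimal sketch (with invented scores, not from the source) of one common way to compare an experimental and a control group: Welch's t-test on post-treatment test scores, here via SciPy. A small p-value suggests the difference between the groups is unlikely to be due to chance alone.

```python
# Minimal sketch: comparing hypothetical post-treatment test scores of an
# experimental group (received the treatment) and a control group (no treatment).
from statistics import mean
from scipy import stats  # assumes SciPy is installed

experimental = [82, 90, 78, 85, 88, 91, 76, 84]  # hypothetical scores
control      = [74, 80, 71, 77, 69, 83, 72, 75]  # hypothetical scores

result = stats.ttest_ind(experimental, control, equal_var=False)  # Welch's t-test
print(f"mean difference = {mean(experimental) - mean(control):.1f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

Other comparisons mentioned above, such as tallies or rankings, would call for different procedures (e.g. chi-square or rank-based tests), but the overall logic of contrasting the two groups is the same.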
Program components as data sources

Components: needs, objectives, testing, materials, teaching, evaluation.

Each component can be examined with three questions: Is it effective? Is it efficient? What are the attitudes toward it?
Example: Needs
 What were the original perceptions of the students’ needs?
 How accurate was this initial thinking (now that we have more experience
with the students and their relationship to the present program)?
 Which of the original needs, especially as reflected in the goals and
objectives, are useful and which are not useful?
 What newly perceived needs must be addressed? How do these relate to
those perceptions that were found to be accurate? How must the goals and
objectives be adjusted accordingly?
Program Effectiveness

Needs
Question: Which of the needs that were identified turned out to be accurate (now that the program has more experience with students and their relationship to the program) in terms of what has been learned in testing, developing materials, teaching and evaluation?
Primary data sources: all original needs analysis documents.

Objectives
Question: Which of the original objectives reflect real student needs in view of the changing perceptions of those needs and all of the other information gathered in testing, materials development, teaching and evaluation?
Primary data sources: criterion-referenced tests (diagnostic).

Testing
Questions: To what degree are the students achieving the objectives of the courses? Were the norm-referenced and criterion-referenced tests valid?
Primary data sources: criterion-referenced (achievement) tests; test evaluation procedures.

Materials
Question: How effective are the materials (whether adopted, developed, or adapted) at meeting the needs in the objectives?
Primary data sources: materials evaluation procedures.

Teaching
Question: To what degree is instruction effective?
Primary data sources: classroom observation and student evaluation.
Efficient?

Needs
Question: Which of the original student needs turned out to be most efficiently learned? Which are superfluous?
Primary data sources: original needs analysis documents and criterion-referenced tests (both diagnostic and achievement).

Objectives
Question: Which objectives turned out to be needed by the students and which did they already know?
Primary data sources: criterion-referenced tests (diagnostic).

Testing
Question: Were the norm-referenced and criterion-referenced tests efficient and reliable?
Primary data sources: test evaluation procedures.

Materials
Question: How can materials resources be reorganized for more efficient use by teachers and students?
Primary data sources: materials blueprint and scope-and-sequence charts.

Teaching
Question: What types of support are provided to help teachers? Are they efficient?
Primary data sources: orientation documents and administrative support structure.
Attitudes

Needs
Question: What are the students’, teachers’, and administrators’ attitudes and feelings about the situational and language needs of students? Before the program? After?
Primary data sources: needs analysis questionnaires and any resulting documents.

Objectives
Question: What are the students’, teachers’, and administrators’ attitudes and feelings about the usefulness of the objectives as originally formulated? Before the program? After?
Primary data sources: evaluation interviews and questionnaires.

Testing
Question: What are the students’, teachers’, and administrators’ attitudes and feelings about the usefulness of the tests as originally developed? Before the program? After?
Primary data sources: evaluation interviews and questionnaires.

Materials
Question: What are the students’, teachers’, and administrators’ attitudes and feelings about the usefulness of the materials as originally adopted and/or adapted? Before the program? After?
Primary data sources: evaluation interviews and questionnaires.

Teaching
Question: What are the students’, teachers’, and administrators’ attitudes and feelings about the usefulness of the teaching as originally delivered? Before the program? After?
Primary data sources: evaluation interviews and questionnaires.
Steps in an evaluation
All of the early steps in an evaluation aim at deciding why the evaluation is being done and whether it is possible to do it.
STEPS

1. Find out who the evaluation is for and what kind of evaluation they need.

2. Find out what the results of the evaluation will be used for.

3. Decide whether the evaluation is necessary or whether the needed information is already available.

4. Find out how much time and money are available to do the evaluation.

5. Decide what kinds of information will be gathered:
 Amount of learning
 Quality of learning
 Quality of teaching
 Quality of curriculum design
 Quality of course administration
 Quality of support services
 Teacher satisfaction
 Learner satisfaction
 Sponsor satisfaction
 Later success of graduates of the course
 Financial profitability of the course

6. Try to gain the support of the people involved in the evaluation.

7. Decide how to gather the information and who will be involved in gathering it.

8. Decide how to present the findings.

9. Decide whether a follow-up evaluation is planned to check the implementation of the findings.
Why is an evaluation being done?
 At the end of the preparatory stage the evaluator should be able to tell the person commissioning the evaluation:

- Whether the evaluation is worth doing.

- Whether the evaluation is possible.

- How long it might take.

- Whether the evaluator is willing to do it.

- What kind of evidence the evaluator will gather.
The type and focus of the evaluation
Distinctions

Formative/summative evaluation

Long or short term

Process/product

Cognitive, affective and resource factors


• Formative: forming or shaping the course to improve it.

• Summative: making a summary of the quality/adequacy of the course.
Formative vs summative

 Purpose: formative improves the course; summative judges the course.

 Type of data: formative looks at causes, processes and individuals; summative looks at results, standards and groups.

 Use of data: formative data are used for counselling, mentoring and setting goals; summative data are used to make decisions on adequacy.

 Presentation of findings: formative findings are presented and discussed with individuals; summative findings are presented in a report.
• Long term: planned as part of curriculum design.

• Short term: the quality of teaching and learner achievement cannot be validly assessed in the short term.
Process:
• How engaged learners are in their tasks.
• The quality of interaction.
• The language used.

Product:
• What was learned and how much was learned.
Cognitive:
• Learning, teaching and the gaining of knowledge.
• Application.

Affective:
• Feelings of satisfaction and attitudes.

Resource:
• Costs, profit, availability and quality of teaching and learning resources.
A full-scale evaluation could be an enormous undertaking. It is therefore important to decide what the evaluation will focus on and to gather a small amount of relevant data.
Gaining support for the evaluation

• Effective evaluation: honest data must be available.
• People involved: they need to see the evaluation as worthwhile and not as a personal attack.
• Wide range of staff: involving them leads to a better informed evaluation.
Assumptions behind an evaluation

1. The course can be improved.

2. The people involved are capable of improving it.

3. There is freedom and flexibility to make changes to the course.

4. Improvements will make it a better course for all.
Gathering the
information
Evaluation can look at…
 The environment
 Administrative procedures
 Resources
 An outsider’s view
Interviews
Interviews are usually conducted on a one-to-one basis, but it is sometimes useful to interview a committee or to use a staff meeting as a way of gathering data.

Structured: the interviewer has a procedure and a set of questions to follow and generally keeps to these.

Unstructured: the course of the interview depends on the wishes of the interviewer and interviewee and is largely unpredictable.
Self-report scales
These are open-ended.
When are self-report scales very
efficient?
Need to survey a large number of people
Large number of pieces of information
There are very clear focuses for the information
Need to summarize the data to get a general picture
Dangers of self-report scales
Result in average results
Involve pre-determined questions and types of answers
Misinterpretation
The influence of what has immediately preceded.
OBSERVATION AND CHECKLISTS

Observation can include:
 Observing teaching
 Analysing the course book
 Observing learning in lessons
 Observing students’ performance after the course

Observation can be structured or unstructured.
STRUCTURED
OBSERVATION
• A checklist is needed

• It makes sure that everything that was


thought to be important is looked at
UNSTRUCTURED
OBSERVATION
• To see what is there without many preconceptions

• It allows the observer to pick up important features that might not


be included in checklists

Both of them are important.
CHECKLISTS

• Yes/No answers
• Scaled responses
• A space for comments on each item (see the sketch below)

A checklist should be reliable, valid and practical.
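As an illustration only (the item texts and field names below are hypothetical, not from the source), a structured observation checklist of this kind can be represented as a small data structure that combines a yes/no answer, a scaled response and a comment space per item:

```python
# Minimal sketch of a structured observation checklist item: a yes/no answer,
# a scaled response (e.g. 1-5) and a free comment field. Items are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChecklistItem:
    prompt: str
    yes_no: Optional[bool] = None   # answered yes (True) or no (False)
    scale: Optional[int] = None     # e.g. 1 (poor) to 5 (excellent)
    comment: str = ""               # space for comments on the item

observation = [
    ChecklistItem("Lesson objectives were stated", yes_no=True, scale=4,
                  comment="Stated orally, not written on the board."),
    ChecklistItem("Learners used the target language", yes_no=True, scale=3),
]

answered = sum(1 for item in observation if item.yes_no is not None)
print(f"{answered}/{len(observation)} items answered")
```

Keeping the item wording short and the scale consistent is one concrete way of working toward the reliability and practicality discussed in the next slides.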
RELIABLE
CHECKLISTS
• The items must be clearly understood

• A balanced number of items


VALID CHECKLISTS
• Based on a well-thought-out and well-researched system of knowledge
PRACTICAL
CHECKLISTS
• Length must be considered

• The checklist must be easy to


use and to interpret

A pilot can
be useful
Advantages of checklists:

1. Systematic coverage
2. They allow comparisons
3. They provide a basis for improvement

Disadvantages of checklists:

1. They might “blind” the observer
2. They tend to get out of date
3. They assume that summing the parts is equal to the whole
FORMATIVE EVALUATION

When curriculum design is seen as a continual process:

1. Parts of the course can be negotiated (teacher and students)
2. Periodic and systematic observation of classes (teacher peers)
3. Regular meetings to discuss the progress of the course
4. Self-evaluation forms for teachers
5. Evaluation forms filled in by students
6. Some class time can be set aside for students to give feedback to the teacher
7. An outside evaluator (occasionally)
EXAMPLE
RESULTS OF THE EVALUATION
ETHICAL ISSUES

 Confidentiality
 Feelings
A report of the evaluation indicates the quality of the course:

 One or two written reports and an oral one

However, this is not necessarily the end of the evaluation.
