
MED7083 - Evaluation in Education

Tutorial 1
Dr. Noradzimah Bt. Abdul Majid

• MUHAMMAD KHAIRUL HAFIZA ABU HANAFFI B. MD ZAINAL ABIDIN S18100525
• NOOR SUHAIDA BINTI OMAR S18100512
What Makes A Good Evaluation

1) Tailored to your program and builds on existing evaluation knowledge and resources
• Educators have created and field-tested similar evaluation designs and instruments.
2) Inclusive
• All perspectives are taken into account so that results are as complete and unbiased as possible.
• Input should be sought from all of those involved in and affected by the evaluation, such as students, parents, teachers, program staff or community members.
3) Honest
• Should not be a simple declaration of program success or failure.
• Helps you learn where to best put your limited resources.
4) Replicable
• The higher the quality of your evaluation design, its data collection methods and its data analysis, the more accurate its conclusions and the more confident others will be in its findings.
Similarities And Differences Between Measurement And Evaluation

Similarities
• Both are means to monitor progress for individuals or groups in the workplace or in the educational environment.
• Both use specified instruments to produce scores or results about a given situation.
• Measurement and evaluation are processes used to provide information about a person or object and their performance.
• The outcomes of measurement and evaluation help determine potential and effective systems that may be put in place to ensure key performance in businesses and learning institutions.
• Both processes are used in educational research. For instance, to carry out research on the effect of increased classroom temperature on the effectiveness of the teaching-learning process, measurements and evaluations of classroom temperature in the study area must be taken to establish theories.
Differences
• While evaluation is a new concept, measurement is an old concept.
• While evaluation is a technical term, measurement is a simple word.
• While the scope of evaluation is wide, the scope of measurement is narrow.
• In evaluation, a pupil's qualitative progress and behavioral changes are tested. In measurement, only the quantitative progress of pupils can be explored.
• In evaluation, qualities are measured as a whole. In measurement, qualities are measured as separate units.
• Evaluation is the process by which the interests, attitudes, tendencies, mental abilities, ideals, behaviors, social adjustment, etc. of pupils are tested. By measurement, interests, attitudes, tendencies, ideals and behaviors cannot be tested.
• Evaluation aims at the modification of the education system by bringing about a change in behavior. Measurement aims at measurement only.
Theory Related To Measurement And Evaluation

• Measurement Theory is the study of how numbers are assigned to objects and phenomena, and its concerns include the kinds of things that can be measured, how different measures relate to each other, and the problem of error in the measurement process.
• Various systems of axioms, or basic rules and
assumptions, have been formulated as a basis
for measurement theory. Some of the most
important types of axioms include axioms of
order, axioms of extension, axioms of
difference, axioms of conjointness, and axioms
of geometry.
• One prominent theory-based evaluation is Realistic Evaluation.
• Realistic evaluation is a form of theory-based
evaluation developed by Pawson and Tilley
(1997, 2006). They argue that whether
interventions work depends on the underlying
mechanisms at play in a specific context. For
Pawson and Tilley,
Outcome = Mechanism + Context
• Mechanisms describe what it is about the
intervention that triggers change to occur.
• In a smoking cessation intervention, for
example, mechanisms might include peer
pressure to stop or to not stop, fear of health
risks, and economic considerations.
• For realistic evaluators, the key evaluation questions are: What works? For whom? In what circumstances? In what respects? How?
Model Of Evaluation With Relevant Example

Kirkpatrick Model of Evaluation
• Kirkpatrick developed his training evaluation model in 1959.
• Arguably the most widely used approach.
• Simple, flexible and complete.
• A 4-level model.
• Used for analyzing and evaluating the results of training and educational programs.
The Four Levels of Evaluation
* Level I: Evaluate Reaction
* Level II: Evaluate Learning
* Level III: Evaluate Behavior
* Level IV: Evaluate Results
* A fifth level was recently "added" for return on investment ("ROI"), but this was not in Kirkpatrick's original model.
Relationship Between Levels

Level 4 - Results: Was it worth it?
Level 3 - Behavior: Is KSA being used on the job?
Level 2 - Knowledge: Did they learn anything?
Level 1 - Reaction: Was the environment suitable for learning?

• Each subsequent level is predicated upon doing evaluation at the lower level.
• A Level 3 evaluation will be of marginal use if a Level 2 evaluation is not conducted.
• Only by assessing each level can we yield actionable results.

Types of Assessments Used at Each Level

Level 4 - Results (Was it worth it?)
  Type: Summative. Form: correlation of business results with other assessment results.
Level 3 - Behavior (Is KSA being used on the job?)
  Type: Summative. Form: observation of performance, 360 survey.
Level 2 - Knowledge (Did they learn anything?)
  Type: Diagnostic / Summative. Form: self-assessment, test.
Level 1 - Reaction (Was the environment suitable for learning?)
  Type: Formative. Form: reaction survey, real-time polling, quizzing.
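The level-by-level structure above can be sketched as a simple lookup table. This is only an illustrative sketch of the mapping described in this deck (the variable names are hypothetical):

```python
# Illustrative sketch: Kirkpatrick's four levels mapped to the assessment
# types and forms listed above. Names and structure are hypothetical.
kirkpatrick_levels = {
    1: {"name": "Reaction",
        "question": "Was the environment suitable for learning?",
        "type": "Formative",
        "forms": ["reaction survey", "real-time polling", "quizzing"]},
    2: {"name": "Knowledge",
        "question": "Did they learn anything?",
        "type": "Diagnostic / Summative",
        "forms": ["self-assessment", "test"]},
    3: {"name": "Behavior",
        "question": "Is KSA being used on the job?",
        "type": "Summative",
        "forms": ["observation of performance", "360 survey"]},
    4: {"name": "Results",
        "question": "Was it worth it?",
        "type": "Summative",
        "forms": ["correlation of business results with other assessment results"]},
}

# Each level is predicated on the one below it, so evaluate in ascending order.
for level in sorted(kirkpatrick_levels):
    info = kirkpatrick_levels[level]
    print(f"Level {level} - {info['name']}: {info['question']} [{info['type']}]")
```

Iterating in ascending order mirrors the point made earlier: a Level 3 evaluation is of marginal use unless the Level 2 evaluation below it has been conducted.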
Example of Kirkpatrick Model
Thank You.
