By:
AFRIANI INDRIA PUSPITA
RRA1B214029
Practicality
Practicality refers to the logistical,
down-to-earth, administrative issues
involved in making, giving and
scoring an assessment instrument.
A PRACTICAL TEST . . .
An effective test is practical when it meets the
following requirements:
stays within budgetary limits
can be completed within appropriate time constraints
has clear directions for administration
appropriately utilizes human resources
does not exceed available material resources
considers time and effort for design and
scoring.
Validity
The validity of a test is the extent to which it
measures what it is supposed to measure.
1) Content validity
The correlation between the contents of the test
and the language skills, structures, etc. that it
is meant to measure has to be crystal clear.
The importance of content validity
a) To be an accurate measure of what it is supposed
to measure, i.e. to have construct validity.
b) To avoid a harmful backwash effect.
2) Criterion-Related validity
how far the results on a test agree with those
provided by an independent and highly dependable
assessment of the candidate's ability.
This kind of validity comes in two forms:
1. Concurrent validity
is when the test and the criterion are administered
at roughly the same time. Suppose, for example,
there is an oral component as part of some final
achievement test; its results can be checked against
a more thorough independent assessment, such as a
full oral interview, given at about the same time.
2. Predictive validity
concerns the degree to which a test can predict
future performance. For example: how well a
proficiency test can predict a student's ability to
cope with a graduate course at a specific university.
3) Construct validity
A test's construct validity refers to whether it
can be demonstrated that it measures the ability
it is supposed to measure. For example, one
might hypothesize that reading involves many
sub-abilities, such as the ability to guess the
meanings of words from context.
4) Face Validity
means that a test looks as if it measures what it
is supposed to measure. An example of a test
which lacks face validity would be one which
claims to measure pronunciation ability but does
not require the candidates being tested to speak.
Reliability
Test reliability refers to the degree
to which a test is consistent and
stable in measuring what it is
intended to measure. Most simply
put, a test is reliable if it is
consistent within itself and across
time.
Alternate forms method: The disadvantages of the
previously mentioned method can be reduced by using
two different forms of the same test; the correlation
between scores on the two forms gives the
reliability coefficient.
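As a minimal sketch of the alternate-forms method (all scores invented for illustration), the reliability coefficient can be computed as the Pearson correlation between the scores the same candidates obtain on the two equivalent forms:

```python
# Alternate-forms reliability: correlate candidates' scores on two
# equivalent forms of the same test. Scores here are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

form_a = [72, 65, 88, 54, 79, 61]  # hypothetical scores on Form A
form_b = [70, 68, 85, 50, 81, 64]  # the same candidates on Form B

reliability = pearson(form_a, form_b)
print(f"alternate-forms reliability: {reliability:.2f}")
```

A coefficient close to 1 indicates that the two forms rank the candidates almost identically, which is what "equivalent forms" is meant to guarantee.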
Split half method: It is the most
common method. In this method only one
test is administered but it is split into two
halves and both halves are scored
separately. The two sets of scores are then
used to obtain the reliability coefficient.
But the two halves should be equivalent.
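The split-half procedure above can be sketched as follows, with invented item scores: each candidate's test is split into odd- and even-numbered items, the two half-scores are correlated, and the half-test correlation is stepped up to full-test length with the Spearman-Brown formula r_full = 2r / (1 + r):

```python
# Split-half reliability: score the two halves of one test separately,
# correlate them, then apply the Spearman-Brown correction.
# The answer matrix is hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# 1 = item answered correctly, 0 = incorrectly; rows are candidates
answers = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 0, 1, 1],
]
odd_half = [sum(row[0::2]) for row in answers]   # items 1, 3, 5, 7
even_half = [sum(row[1::2]) for row in answers]  # items 2, 4, 6, 8

r_half = pearson(odd_half, even_half)
r_full = 2 * r_half / (1 + r_half)  # Spearman-Brown correction
print(f"half-test r: {r_half:.2f}, full-test reliability: {r_full:.2f}")
```

Splitting by odd/even items rather than first half versus second half is one common way to keep the two halves equivalent, since adjacent items tend to be similar in difficulty.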
SCORER RELIABILITY:
Scorer reliability refers to the consistency with
which different people who score the same test
agree. To quantify the level of agreement we use
the scorer reliability coefficient.
If the scoring of a test is not reliable, then the
test results cannot be reliable either; the test
reliability coefficient will be lower than the
scorer reliability coefficient.
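A minimal sketch of the scorer reliability coefficient, with invented marks: two raters independently mark the same set of scripts, and the coefficient is the correlation between their two sets of marks:

```python
# Scorer reliability: correlate the marks two raters independently
# award to the same scripts. All marks are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

rater_1 = [14, 11, 17, 9, 15, 12]  # marks awarded by rater 1
rater_2 = [13, 12, 16, 8, 15, 11]  # same six scripts, rater 2

scorer_reliability = pearson(rater_1, rater_2)
print(f"scorer reliability: {scorer_reliability:.2f}")
```

A high coefficient here is a precondition, not a guarantee, of overall test reliability: as noted above, the test reliability coefficient cannot exceed it.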