
Stages of Test Development

By Lily Novita - 69090007


Make a full and clear statement of the testing problem.
Write complete specifications for the test.
Write and moderate items.
Trial the items informally on native speakers and reject or modify
problematic ones as necessary.
Trial the test on a group of non-native speakers similar to those for
whom the test is intended.
Analyse the results of the trial and make any necessary changes.
Calibrate scales.
Validate.
Write handbooks for test takers, test users and staff.
Train any necessary staff (interviewers, raters, etc.).

1. Stating the Problem
The essential first step in any testing project is to be perfectly clear about what one wants to know and for what purpose.
What kind of test is to be constructed?
What is the precise purpose?
What abilities are to be tested?
How detailed must the results be?
How accurate must the results be?
How important is backwash?
What constraints are set by limited expertise, facilities and time for construction, administration and scoring?
2. Writing specifications for the test: Content
Operations
Types of text
Addressees of texts
Length of text(s)
Topics
Readability (illustrated in the sketch after this list)
Structural range
Vocabulary range
Dialect, accent, style
Speed of processing
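
The specifications do not prescribe how readability should be measured. As an illustration only, the sketch below applies the widely used Flesch Reading Ease formula with a naive vowel-group syllable counter; the function names and the counting heuristic are assumptions, not part of the original material.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels; assume at least one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher scores indicate easier texts.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```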


2. Writing specifications for the test: Structure, timing, medium/channel and techniques
Test structure
Number of items
Medium / channel
Timing
Techniques

2. Writing specifications for the test: Criterial levels of performance
Accuracy
Appropriacy
Range
Flexibility
Size


2. Writing specifications for the test: Scoring procedures
Subjectivity
Achievement of high reliability &
validity in scoring
What rating scale is to be used?
How many people will rate each piece of work?
How will disagreements between raters be resolved? (A simple approach is sketched below.)
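
The slides leave the exact scoring procedure open. Purely as an illustrative sketch, the code below averages two raters' band scores and flags any script on which the raters differ by more than one band, so it can be passed to a third rater; the one-band threshold and the function names are assumptions.

```python
def combine_ratings(scores_a, scores_b, max_gap=1):
    """Average two raters' band scores and flag scripts whose
    ratings differ by more than max_gap for third-rater adjudication."""
    results = []
    for candidate, (a, b) in enumerate(zip(scores_a, scores_b), start=1):
        results.append({
            "candidate": candidate,
            "average": (a + b) / 2,
            "adjudicate": abs(a - b) > max_gap,
        })
    return results

# Example: rater A and rater B score five scripts on a 0-9 band scale.
rater_a = [6, 7, 5, 8, 4]
rater_b = [6, 5, 5, 8, 6]
for row in combine_ratings(rater_a, rater_b):
    print(row)
```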

3. Writing and moderating items
Sampling
Writing items
Moderating items
4. Informal trialling of items on native speakers
Moderation checklist for a grammar test (each question is answered Yes or No):
1. Is the English grammatically correct?
2. Is the English natural and acceptable?
3. Is the English in accordance with the specifications?
4. Does the item test what it is supposed to test, as specified?
5. Can the correct response be obtained only with the appropriate knowledge of grammar (rather than by random guessing)?
6. Is the item economical?
7. (a) Multiple choice: is there just one correct response?
   (b) Gap filling: are there just one or two correct responses?
8. Multiple choice: are all the distractors likely to distract?
9. Is the key complete and correct?
5. Trialling of the test on a group of non-native speakers similar to those for whom the test is intended
Trials are designed to help ensure that the items function appropriately and are not confusing for the students.
This is accomplished by embedding field-test items in the operational test, to ensure that the items are taken by a representative group of motivated students under standard conditions.

6. Analysis of the results of the trial and making of any necessary changes
Two kinds of analysis should be carried out:
Statistical analysis: reveals the qualities of the test as a whole (such as reliability) and of individual items, namely how difficult they are and how well they discriminate between stronger and weaker candidates (a simple item analysis is sketched below).
Qualitative analysis: responses are examined to discover misinterpretations, unanticipated but possibly correct answers, and other indicators of faulty items.
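
The slides do not name specific statistics. A minimal sketch of two classical item statistics is given below: the facility value (the proportion of candidates answering an item correctly) and a simple discrimination index (facility in the top-scoring third of candidates minus facility in the bottom-scoring third). The thirds-based split and the function name are assumptions for illustration.

```python
def item_analysis(responses):
    """responses: one row per candidate, 1 = correct, 0 = incorrect.
    Returns (facility, discrimination) for each item."""
    n_items = len(responses[0])
    # Rank candidates by total score, then compare top and bottom thirds.
    ranked = sorted(responses, key=sum, reverse=True)
    third = max(1, len(ranked) // 3)
    top, bottom = ranked[:third], ranked[-third:]

    stats = []
    for i in range(n_items):
        facility = sum(r[i] for r in responses) / len(responses)
        disc = (sum(r[i] for r in top) / len(top)
                - sum(r[i] for r in bottom) / len(bottom))
        stats.append((round(facility, 2), round(disc, 2)))
    return stats

# Example: six candidates, four items.
trial = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
]
for item, (fac, disc) in enumerate(item_analysis(trial), start=1):
    print(f"Item {item}: facility={fac}, discrimination={disc}")
```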
7. Calibration of scales
Calibrating a scale means collecting samples of performance which cover the full range of the scale.
More generally, a calibration test is a procedure in which an instrument, tool or device is checked to confirm that it conforms to a standard. Calibration matters because it ensures that the instrument is working properly. It may be undertaken because something appears not to be working as it should, or in preparation for an occasion where very precise measurement is required, and it can be carried out in a number of ways.
8. Validation
Essential validation for high-stakes or published tests.
Small-scale validation for low-stakes tests used within an institution.
9. Writing handbooks for test takers, test users and staff (contents)
The rationale for the test
An account of how the test was developed and
validated
A description of the test
Sample items
Advice on preparing for taking the test
An explanation of how test scores are to be interpreted
Training materials
Details of test administration
10. Training Staff
All staff who will be involved in the testing process should be trained: interviewers, raters, scorers, computer operators and invigilators.
