
TD – January 22nd, 2010

Designing a Good Test


(or, Testing Positive... ly)

Introduction –

This session is intended to give you a few ideas about how to make
the most of the testing we do in the classroom. Usually, every other
Friday morning it’s time to dig out the relevant test from the
Teacher’s book and let your students get on with it while you get on
with counselling them. Of course, there’s nothing wrong with this;
it’s often the most practical and efficient means of creating a test.
However, there are some times when there isn’t a relevant test in
the teacher’s book, you want to test something that wasn’t taken
from your coursebook, or you simply want to add some variety to
the testing process.

Part 1 – What’s the point?

When thinking about whether a test is good or not, it’s impossible to disregard the way in which a test fits into the broader aim of language teaching. With that in mind, here are a few reasons why we test:

• To gain concrete evidence of students’ progress (or lack thereof)
• To measure our own success at teaching (or…)
• To consolidate
• To encourage study habits like revision
• To motivate students by challenging them
• To inform our future lesson planning

These points can be summarised in two (wonderfully descriptive) bits of jargon:

backwash – this means the effect a test has on the learning/teaching that precedes the test. In a general English context, this essentially means that it gives the learner/teacher something to work towards. If a student knows they will be tested on the language they are being taught, they will (hopefully) strive just a little harder to absorb it and then to retain it through revision.

spin-off – this means the effect a test has on the learning/teaching that follows a test; what the teacher/student learns from the test and the way it informs future work and planning, e.g. that one area needs more work or that a student is ready to move up a level.
I’m mentioning all this because I think the idea of a ‘good’ test is,
fundamentally, a test that is drawn from, and feeds into, the two-
week cycle we have here at St Giles. If a ‘test’ is simply something
that fulfils most of those functions, I think there’s quite a lot of room
for creativity.

Anyway, with that said, let’s look at the test itself…

Part 2 – Fine, now what?

Evaluating a test is not always as straightforward as it may initially seem. It’s perhaps easier to describe what makes a test bad rather than what makes it good. So, here are some things I thought of:

• It looks rubbish – handwritten, or hastily cut-out-and-glued workbook exercises
• The instructions aren’t clear or, worse, the task is unfamiliar
• It doesn’t actually test what it is supposed to test
• It takes ages to create/mark
• The marking (and therefore a student’s performance) is
subjective
• Etc, etc…

Again, these points can be boiled down into three essential bits of
jargon:

validity – this basically means ‘fairness’; that it tests what it says it’s testing and that it tests only material that’s been covered in class.

reliability – this means that there’s little or no chance element (e.g. a 50/50 multiple choice question is not very reliable) and that the marking is not subjective.

practicality – this refers to how easy a test is to create, administer and mark.

Keep these three in mind whenever you’re making a test or evaluating one.

In short, tests fall into one of two categories: direct and indirect. A direct test is one that tests a learner’s ability to formulate and use language for a purpose; an example would be a writing test in which the student must write a letter of complaint. An indirect test tests a specific grammatical or lexical item, not necessarily a student’s ability to use it correctly or appropriately; a gapfill, for example.

Part 3 – What should I do with this?


Any test is a balance of these three ideas, with practicality ever at
odds with reliability and validity. The majority of tests we give are
gapfill or matching-type tests, which are high on practicality but
perhaps lacking in the other two areas. Any move towards direct
testing would go some way towards addressing this imbalance.
Therefore, some ideas:

Say you’d spent a lot of time teaching conditionals. You could:

- set a writing task, e.g. ‘Write a paragraph about a big decision you have made and describe how your life would be different if you hadn’t made it.’
- use pictures/photos as a stimulus, e.g. [photo of a woman], and instruct students to write a conditional sentence describing the woman’s feelings

- set students a productive task. So much of what we do is skills-based, yet so little of our testing is. For example: give a short presentation about how your life would be different if you had been born in Britain. Give them a model, leave them to prepare while you do counselling, then let them present and assign them a mark.

Some general test design tips:

- always end with some kind of production task, e.g. a piece of writing or a brief talk to plan. Not only is it good variation, it also gives you a timing cushion to help with those students who rush through the test in 5 minutes.
- Try and keep track of incidental vocabulary and grammar
items that cropped up during the week. Make a note of
them in your register and stick them on the end of your
test. This makes things a bit more relevant and personal.
- An extra credit section of content-based questions about
topics you’ve talked about during the week can be
engaging and fun; e.g. if you read a text that mentioned
that Tirana was the capital of Albania, throw it in as a
question.
- When dealing with vocabulary, try whenever possible not to give the item itself but to elicit it through its meaning or function; e.g. Can you remember three verbs that mean ‘steal’?
- Also with vocabulary: we seem to get through so much of it in a two-week period that it’s impossible to cover it all in a test. Instead, try a vocabulary review sheet: simply write out 15–20 items and ask students to tick the ones they think they can confidently define (or, better, use in a sentence), then compare with a partner. This won’t give you an objective mark, but it satisfies all the other functions of testing, is a valid supplement to a test, takes no time to produce, and provides a useful reference for the student.

Happy testing!
John
