
TSL3112-Language Assessment

QUESTION 1
A test is defined as a systematic procedure for measuring a sample of behavior by posing a set of
questions in a unified manner (Linn & Gronlund, 1995:6).
Briefly explain the term systematic procedure.
[1 mark]
Systematic procedure - the test is constructed, administered and scored according to prescribed rules, so that every test taker responds to the same sample of questions under the same conditions.

QUESTION 2
Compare assessment and measurement.
[1 mark]
Assessment is a comprehensive process of planning, collecting, analysing, reporting and using information on students over time.

Measurement is the assigning of numbers to certain attributes of objects, events or people according to a rule-governed system.

QUESTION 3
Provide ONE (1) reason why the relationship between curriculum specifications, instruction, and testing
is not always a linear process in reality.
[1 mark]

QUESTION 4
There are six (6) levels in Bloom's Taxonomy of Educational Objectives
(Cognitive Domain). Name the lowest level and the highest level.
[1 mark]
Lowest: Knowledge
Highest: Evaluation

(The six levels, from lowest to highest: Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation.)

QUESTION 5
Briefly define validity.
[1 mark]
Validity refers to the evidence base that can be provided about the appropriateness of the inferences made on the basis of test results.
QUESTION 6
Explain why a test can be reliable but not valid.
[1 mark]

QUESTION 7
Name ONE (1) way in which a teacher can try to achieve positive backwash.
[1 mark]

QUESTION 8
Briefly explain the term alternative assessment.
[1 mark]
Alternative assessment refers to assessment procedures that differ from traditional notions and practices of testing with respect to format, performance or implementation. It has its roots in writing assessment, where there was a need to provide continuous assessment rather than a single impromptu evaluation.

QUESTION 9
Name TWO (2) ways in which peer assessment can be beneficial to students.
[1 mark]

QUESTION 10
What are TWO (2) advantages of portfolio assessment for the teacher?
[1 mark]

Read up
1. Basic principles:
- Validity: refers to how well a test measures what it is purported to measure.
- Reliability: the degree to which an assessment tool produces stable and consistent results. Scorer reliability refers to the degree to which equally competent scorers obtain the same results.
- Practicality:
  It is not expensive.
  The time is appropriate.
  It does not exceed available material resources.
- Washback effect: refers to the impact that tests have on teaching and learning. Teachers can provide information that washes back to students in the form of a useful diagnosis of strengths and weaknesses.
- Authenticity: the degree of correspondence of the characteristics of a given language test task to the features of a target language task. Language learners are motivated to perform when they are faced with tasks that reflect real-world situations and contexts.

Developing a Standardized Test
1. Determine the purpose and objectives of the test
- evaluate general English ability in the skills of listening, speaking, reading and writing
- make decisions about English language proficiency
- assess spoken English ability (Test of Spoken English)
- admission into appropriate courses (placement)
- diagnostic purposes
2. Design Test Specifications (the specs act as the blueprint in determining the number and types of items to be created; a sketch of such a blueprint follows this list)
- identify a set of constructs underlying the test (construct validation), e.g. the construct of language proficiency. Language competence can be broken down into subsets of listening, speaking, reading and writing.
- Each can be examined on a continuum of linguistic units: phonology (pronunciation), orthography (spelling), words (lexicon), sentences (grammar), discourse (beyond sentence level), and pragmatics (sociolinguistic, contextual).
- Oral production test: conversational fluency, pronunciation
- Listening comprehension test: listening for general meaning
- Reading test: comprehension of long or short passages, sentences, phrases, words
- Writing test: ranges from structured tasks that elicit correct spelling to open-ended free composition at the discourse level
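
A minimal, purely illustrative sketch (in Python; not a format prescribed by the module) of what such a blueprint might look like: each skill is mapped to an item type, a number of items, and the levels or scoring approach to be targeted. All field names and numbers below are hypothetical.

    # Hypothetical test-specification blueprint: skills mapped to item types and counts.
    test_specs = {
        "purpose": "placement - general English proficiency",
        "reading": {
            "item_type": "MCQ on short passages",
            "num_items": 20,
            "passage_length_words": (150, 300),
            "bloom_levels": ["knowledge", "comprehension", "application"],
        },
        "listening": {
            "item_type": "MCQ, listening for general meaning",
            "num_items": 15,
        },
        "writing": {
            "item_type": "guided composition",
            "num_items": 1,
            "scoring": "analytic rubric",
        },
        "speaking": {
            "item_type": "oral interview - conversational fluency, pronunciation",
            "num_items": 1,
            "scoring": "holistic band scale",
        },
    }

    # The blueprint then drives item writing: how many items per skill and which tasks to draft.
    total_items = sum(spec["num_items"] for spec in test_specs.values() if isinstance(spec, dict))
    print(total_items)  # 37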

3. Design, select, and arrange test tasks / items
- e.g. the specs state the length of the reading passage, the number of comprehension questions, and the level (e.g. Bloom's taxonomy)
- selecting passages and questions
- arranging the items in the right order, etc.
- pre-test on a sample audience

4. Make appropriate evaluations of different kinds of items (see the sketch below)
- Apply concepts of item facility / item difficulty (IF), item discrimination (ID), and distractor analysis for MCQs
- Principles of practicality and reliability need to be applied (validity would have been addressed in (2) above: does the test measure what it proposes to measure?)
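
As a rough illustration of these concepts (a sketch with invented data, not a procedure prescribed by the module), the Python snippet below computes item facility, item discrimination between upper- and lower-scoring groups, and a simple distractor count for one hypothetical MCQ item.

    from collections import Counter

    def item_facility(responses, key):
        """IF: proportion of test takers who answered the item correctly."""
        return sum(1 for r in responses if r == key) / len(responses)

    def item_discrimination(responses, totals, key, group_fraction=1/3):
        """ID: facility in the top-scoring group minus facility in the bottom-scoring group."""
        ranked = sorted(zip(responses, totals), key=lambda pair: pair[1], reverse=True)
        n = max(1, int(len(ranked) * group_fraction))
        upper, lower = ranked[:n], ranked[-n:]
        if_upper = sum(1 for resp, _ in upper if resp == key) / n
        if_lower = sum(1 for resp, _ in lower if resp == key) / n
        return if_upper - if_lower

    def distractor_analysis(responses):
        """How many test takers chose each option, including the key."""
        return Counter(responses)

    # Hypothetical answers to one item (key = "B") and each taker's total test score.
    responses = ["B", "B", "A", "C", "B", "D", "B", "A", "B", "C"]
    totals    = [48, 45, 20, 18, 44, 15, 40, 22, 39, 17]

    print(item_facility(responses, "B"))                # 0.5
    print(item_discrimination(responses, totals, "B"))  # 1.0 - item separates strong and weak takers
    print(distractor_analysis(responses))               # Counter({'B': 5, 'A': 2, 'C': 2, 'D': 1})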

5. Specify scoring procedures and reporting formats (a brief sketch follows this list)
- The use of rubrics for scoring the different components of the language:
  Objective scoring
  Analytic scoring
  Holistic scoring
- Report the scores to the test-takers and the institution:
  Norm-referenced assessment and reporting
  Criterion-referenced assessment and reporting
  Outcomes approach
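
A brief, hedged sketch of how these scoring and reporting ideas could be expressed in code; the rubric weights, cohort scores and cut scores are invented for illustration and are not prescribed by the module.

    from statistics import mean, pstdev

    # Analytic scoring: separate criterion scores combined by weight (hypothetical rubric).
    analytic = {"content": (8, 0.4), "organisation": (7, 0.3), "language": (6, 0.3)}
    analytic_total = sum(score * weight for score, weight in analytic.values())  # 7.1

    # Holistic scoring: a single overall impression mark on a band scale.
    holistic_band = 4

    # Norm-referenced reporting: position relative to the cohort (here, a z-score).
    cohort_scores = [52, 61, 48, 70, 66, 55, 59]
    candidate = 66
    z_score = (candidate - mean(cohort_scores)) / pstdev(cohort_scores)

    # Criterion-referenced reporting: the same raw score judged against fixed cut scores.
    cut_scores = {"distinction": 65, "credit": 50, "pass": 40}
    band = next((label for label, cut in sorted(cut_scores.items(), key=lambda kv: -kv[1])
                 if candidate >= cut), "fail")

    print(round(analytic_total, 2), holistic_band, round(z_score, 2), band)
    # 7.1 4 1.02 distinction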
