Education.com - print
achievement. Another selected-response format is the multiple-choice test, which has long been the most widely used of the objective test formats. Multiple-choice test items require the examinee to select one or more responses from a set of options (in most cases, 3–7). The correct alternative in each item is called the answer (or the key), and the remaining alternatives are called distracters. Examinees have less chance of guessing the correct answer to a multiple-choice question than to a true/false question. In addition, the distracter an examinee selects may provide useful diagnostic information. Related to the multiple-choice test is the matching test, which consists of a list of premises, a list of responses, and directions for matching the two. Examinees must match each premise with one of the responses on the basis of the criteria described in the directions. A major strength of the matching test is that it is space-saving and can therefore be used to assess several important learning targets at once.
Figure 1. Illustration by GGS Information Services, Cengage Learning, Gale.

A typical example of a constructed-response format is the short-answer test, which asks examinees to supply a word, phrase, or number that answers a question or completes a sentence. It is sometimes called a completion or fill-in-the-blank test. Although what a short-answer item can assess is generally limited to factual information, the format does not require the development of plausible distracters. Moreover, short-answer items are much less susceptible to guessing than selected-response items.
be developed (i.e., instructional objectives) as the column headings (Figure 2). After the content and abilities covered by the test have been specified in the table of specifications, the appropriate item format is selected for each item. At this point, not only objective test items but also other types of test items, such as essay tests or performance assessments, should be considered, depending on the learning outcomes to be measured.

The next step is to write the specific test items. It is particularly important for objective test items to be written in clear and unambiguous language so that examinees can demonstrate their attainment of the learning objectives; if the wording is complex, the item simply reflects reading comprehension ability. It is also important for each objective test item to focus on an important aspect of the content area rather than on trivial details. Asking about trivial details not only makes the items unnecessarily difficult, it also obscures what the test constructor really wants to measure. Similarly, relatively novel material should be used when creating items that measure understanding or the ability to apply principles; items created by copying sentences verbatim from a textbook reflect only rote memory rather than higher-order cognitive skills.

Many other specific rules exist for constructing objective test items. Test constructors must be careful that examinees with little or no content knowledge cannot arrive at the correct answer by exploiting characteristics of the test format that are independent of specific content knowledge. Jason Millman and his colleagues called this skill test-wiseness. For example, in multiple-choice items, all options should be grammatically consistent with the stem (the question or incomplete statement preceding the options), and key words from the stem, or their synonyms, should not be repeated in the correct option.
Any violation of these rules provides an obvious advantage to testwise examinees. Test constructors should also equalize the length of an item's options and avoid specific determiners such as all, always, and never, because some testwise examinees know that the correct option is frequently the longest and rarely contains such determiners. Robert Thorndike and Anthony Nitko have provided more comprehensive guidelines, with detailed explanations, for constructing objective test items.
Kou Murayama, in a series of studies, investigated the effects of objective test items on the use of learning strategies. In one study, junior high school students participated in a history class for five days and took either an essay test or a short-answer test at the end of each day. On the last day, those who took the short-answer tests used more rote learning strategies and fewer deep-processing strategies than those who took the essay tests. George Madaus reviewed the literature on the effects of standardized testing on what is taught in schools and found that teachers pay particular attention to the form of the questions and adjust their instruction accordingly, suggesting that objective tests can narrow instruction to the detriment of higher-order skills. Madaus argued that high-stakes tests (tests used to make important decisions, such as the ranking of schools) have much more influence on teaching. Educators should be reminded, however, that objective test items are not limited to testing specific factual knowledge. Well-written items need not have such negative effects on students' use of learning strategies or on teachers' teaching styles. Thus, it is not the objective test item per se that should be changed. What is important is to change the stereotypical belief that objective test items require only rote learning of factual knowledge, and to avoid poorly constructed objective test items.

See also: Standardized Testing
BIBLIOGRAPHY
Ebel, R. L., & Frisbie, D. A. (1991). Essentials of educational measurement (5th ed.). Englewood Cliffs, NJ: Prentice Hall.

Madaus, G. F. (1998). The influence of testing on the curriculum. In L. N. Tanner (Ed.), Critical issues in curriculum (pp. 83–121). Chicago: University of Chicago Press.

Millman, J., Bishop, C. H., & Ebel, R. L. (1965). An analysis of test-wiseness. Educational and Psychological Measurement, 25(3), 707–726.

Murayama, K. (2003). Test format and learning strategy use. Japanese Journal of Educational Psychology, 51(1), 1–12.

Nitko, A. J. (2004). Educational assessment of students (4th ed.). Upper Saddle River, NJ: Merrill.

Thorndike, R. M. (1997). Measurement and evaluation in psychology and education (6th ed.). Upper Saddle River, NJ: Merrill.
www.education.com/print/objective-test-items/