
Annotated Bibliography for Technology-Based Assessment

Allan, J. M., Bulla, N. & Goodman, S.A. (2003). Test Access: Guidelines for Computer-Administered Testing. Louisville, KY: American Printing House for the Blind. Retrieved December 23, 2003 from http://www.aph.org
This document highlights and addresses problems in all aspects of test accessibility for individuals with disabilities, particularly those with low vision or who are blind. Despite advances in technology (e.g., computer-administered tests), it is clear that these individuals are still unable to perform on equal footing with their sighted peers. The authors put forth the "Principles of
Inclusive Design" and argue that tests must be made accessible to all potential test-takers,
regardless of format and/or disability. Further, this is only possible by initiating the process at the
design stage; accommodations at the point of test administration are not enough. This document
provides information about the educational impact of visual disabilities and a general overview
of present testing accommodations for paper and pencil tests, as well as computer-based tests.
Austin, J. (2000). Technology and assessment. Washington, D.C.: U.S. Dept. of Education,
Office of Educational Research and Improvement.
Balajthy, E. (2002). Information technology and literacy assessment. Reading & Writing
Quarterly: Overcoming Learning Difficulties, 18(4), 369-373.
This column discusses information technology and literacy assessment in the past and
present. The author also describes computer-based assessments today including the following
topics: computer-scored testing, computer-administered formal assessment, Internet formal
assessment, computerized adaptive tests, placement tests, informal assessment, electronic
portfolios, information management, and Internet information dissemination. A model of the
major present-day applications of information technologies in reading and literacy assessment is
also included.
Beevers, C.E., Fiddes, D.J., McGuire, G.R. & Youngson, M.A. (1999). The emerging philosophy
behind computer-based assessment. Teaching Mathematics and Its Applications, 18(4),
147-149.
The computer can play a role in several forms of assessment, including diagnostic testing, self-testing, continuous assessment, and grading. The article focuses on the educational aspects of an assessment engine for mathematics.
Bennett, R. E. (2001). How the Internet will help large-scale assessment reinvent itself.
Education Policy Analysis Archives, 9(5).
Bennett, R. E. (2002). Using electronic assessment to measure student performance (Issue Brief):
NGA Center for Best Practices.
Bullis, M., Reiman, J., Davis, C. & Thorkildsen, R. (1994). Structure and videodisc adaptation of
the Transition Competence Battery (TCB) for deaf adolescents and young adults.

Exceptional Children, 61(2), 159-173.
Investigated group versus individual administration of a Level 1 videodisc version of the TCB; multiple-choice versus true/false TCB formats were also tested. The TCB consists of 3 subtests related to employment and 3 related to independent living. It includes simply worded and illustrated test booklets and videotapes that use sign language to present questions in a standardized format. The 2 differently formatted versions of the videodisc TCB were tested on 208 hearing-impaired (HI) subjects (aged 12-32 yrs) in a group and 62 HI subjects (aged 16-29 yrs) individually. Correlations for subtests on job-related social skills and money-management skills favored group and individual administration modes and the multiple-choice format. The multiple-choice format and group administration were appropriate for the majority of subjects.
Bunderson, C.V. Computers in educational assessment: An opportunity to restructure
educational practice (Report No. TM018025). Washington, D.C.: Office of Technology
Assessment (ERIC Document Reproduction Service No: ED340771)
Bushweller, K. (2000). Electronic exams. Electronic School, June, 20-24.
Calhoon, M. B., Fuchs, L. S., & Hamlett, C. L. (2000). Effects of computer-based test
accommodations on mathematics performance assessments for secondary students with
learning disabilities. Learning Disability Quarterly, 23(4), US: Council for Learning
Disabilities.
Cheek, D.W. & Arguso, S. (1995). Gender and equity issues in computer-based science
assessment. Journal of Science Education and Technology, 4(1), 57-59.
Suggests that computer and related technologies as tools for teaching, learning, and
assessment are neither gender neutral nor benign in effect. Examines computers, equity, and
access issues, computers as a technology, and the implications for computer-based assessment.
Chung, G. K. W. K., & Baker, E. L. (1997). Year 1 technology studies: Implications for
technology in assessment (Technical Report 459). Los Angeles: CRESST/UCLA.
Clariana, R. & Wallace, P. (2002). Paper-based versus computer-based assessment: Key factors
associated with the test mode effect. British Journal of Educational Technology, 33(5),
593-602.
This investigation seeks to confirm several key factors in computer-based versus paper-based assessment. Based on earlier research, the factors considered here include content familiarity, computer familiarity, competitiveness, and gender. Following classroom instruction, freshman business undergraduates (N = 105) were randomly assigned to either a computer-based or identical paper-based test. ANOVA of test data showed that the computer-based test group outperformed the paper-based test group. Gender, competitiveness, and computer familiarity were NOT related to this performance difference, though content familiarity was. Higher-attaining students benefited most from computer-based assessment relative to higher-attaining students under paper-based testing. With the current increase in computer-based assessment, instructors and institutions must be aware of and plan for possible test mode effects.
Constantine, A. (2000). Computers mightier than the pen in tests. Times Educational
Supplement, 4379, III.
Reveals that computer-based assessment (CBA) methods have increased students'
performance and boosted correct responses in examinations. Attitudes of lecturers on computer-based tests; Advantages of CBA to further and higher education; Comparison between pencil/paper methods in tests and CBA.
Cutshall, S. (2001). Just click here. Techniques: Connecting Education & Careers, 76(5), 35-37.
Focuses on technological advancements in education and assessment systems in the
United States. General categories of Computer based assessment (CBA); Benefits of CBA to
students; Online testing developed by the National Occupational Competency Testing Institute.
Erwin, T.D., & DeMars, C. (2002). Advancing Assessment: Why not computer-based
assessment? Assessment Update, 14(2), 1-4.
Technologically delivered instruction is a topic of ubiquitous discussion in higher
education these days. The excitement is contagious, and there is little doubt that the delivery of
instruction has been changed forever. This article focuses on the application of technology to
methods of assessment. The multimedia capabilities of the microcomputer offer video and audio
functions that are much less cumbersome than the large-scale group movies and sound stages of
old. Electronic databases and the Internet offer information access that traditional classroom tests
and institutional assessment instruments cannot. Interactive tests, or smart tests, administer test
items based on prior responses. The most common of these are called computer adaptive tests
(CATs). An additional application of technology in assessment is computer automated scoring. A
variety of software programs that automatically rate writing ability is currently available. Any of
these applications has its advantages. Scoring is immediate, which can lead to better learning.
Frase, L.T., Almond, R.G., Burstein, J., Kukich, K., Shaheen, K.M., Steinberg, L.S., et al. (2003).
Technology and assessment. In H.F. O'Neil, Jr. & R.S. Perez (Eds.), Technology
applications in education: A learning view (pp. 213-244). Mahwah, NJ: Lawrence
Erlbaum Associates.
Despite its successful use in selection, placement, and instruction, traditional educational
assessment faces more difficult challenges today than ever. To the assessment community,
confronted by these many challenges, modern process, psychometric, and computational
technologies promise some relief. This chapter reviews technologies that are laying the
groundwork for improvement in the creation, delivery, interpretation, and use of educational
assessments. These technologies are explored within the context of critical assessment activities
and, at the end of this chapter, consequences that arise from the use of technology in assessment
are considered.

Hoff, D.L. (1999). Testing. Education Week, 19(2), 6.
Presents information on a study comparing test scores of students on computer-based
assessments and traditional printed tests. Results of the study; Performance of highly skilled
computer users on open-ended math questions.
Gallagher, A., Bennett, R. E., Cahalan, C., & Rock, D. A. (2002). Validity and fairness in
technology-based assessment: detecting construct-irrelevant variance in an open-ended,
computerized mathematics task. Educational Assessment, 8(1), 27-41.
The purpose of this study was to evaluate whether variance due to computer-based presentation was associated with performance on a new constructed-response item type, Mathematical Expression (ME), that requires examinees to enter mathematical expressions. Participants took parallel computer-based and paper-based tests consisting of ME items, plus a test of their skill in editing and entering data using the computer interface. Analysis of variance was used to assess differences in ME performance across delivery modes, with and without controlling for edit/entry performance. Comparisons of reliability, speededness, and relations with external indicators were also conducted. Although no statistical evidence of construct-irrelevant variance was detected, some examinees reported mechanical difficulties in responding and indicated a preference for the paper-and-pencil test.
Gerber, M. M., Semmel, D. S. & Semmel, M. I. (1994). Computer-based dynamic assessment of
multidigit multiplication. Exceptional Children, 61(2), 114-125.
Reports design details, operation, and initial field-test results for DynaMath (DYN), a
computer-based, dynamic assessment system that provides individually tailored, instructionally
useful assessment of students with disabilities. In designing DYN it was hypothesized that
performance of students who have not mastered problem-solving in declarative and procedural
knowledge domains would decline as demand in these domains increased. Operating principles
were related to empirical knowledge about the psychosocial characteristics of learners with mild
disabilities. DYN organizes and outputs data on student performance on multidigit multiplication
problems varying in difficulty and demand. It graphically shows the zone of proximal
development (those problems most likely to prove susceptible to the next phase of instruction).
DYN provides a qualitative record of student performance via annotated real-time replay.
Goldberg, L. B., & O'Neill, L. M. (2000, July). Computer technology can empower
students with learning disabilities. Exceptional Parent, 30, 72-74.
Goldstein, L.F. (2003). Special ed. tech. sparks ideas. Education Week, 22(35), 27-29.
The article focuses on testing tools for children with disabilities. Testing experts say that
what educators learn from tailoring assessments to the needs of special education students could
shape how they test regular students, who have more subtle individual needs. As of May 8, 2003, Oregon is developing computerized tests in which teachers would set individualized parameters for special education students, according to Gerald Tindal, the director of behavioral
research and teaching at the University of Oregon's college of education. Special education

students are likely to be the canaries in the coal mine as educators experiment with new
technological assessments.
Greenwood, C.R. (1994). Advances in technology-based assessment within special education.
Exceptional Children, 61(2), 102-104.
This article introduces a special issue on technology-based assessment within special
education. It notes the effects of the 1988 Technology-Related Assistance for Individuals with
Disabilities Act and briefly reviews the nine papers in the issue.
Greenwood, C. R. & Rieth, H. J. (1994). Current dimensions of technology-based assessment in
special education. Exceptional Children, 61(2), 105-113.
Provides a current perspective on technology-based assessment (TBA) and its role in
special education and school psychology. The unique contribution of TBA has been to improve
quality through the merger of assessment methods and electronic technology, including
networking, expert systems, authoring software, and multimedia. The success of this
collaboration can be measured in terms of improvements in the ability to describe and treat
special education problems with greater conceptual clarity, methodological expertise, and
generality. Current TBA includes at least 4 dimensions: technology (e.g., the integration of text,
graphics, audio, and video), assessment (e.g., behavioral assessment), method, and utilization
(e.g., access, training, support, and cost).
Hamilton, L. S., Klein, S. P., & Lori, W. (2000). Using Web-based testing for large-scale
assessment (IP-196). Santa Monica, CA: RAND Corporation.
Helgeson, S.L. (1992, April). Assessment of science teaching and learning outcomes. Paper presented at the meeting of the American Educational Research Association, San Francisco, CA.
Indicators of current issues and concerns in the assessment of science teaching and
learning outcomes are discussed. The greatest public attention to science test scores is devoted to
scores from assessments that are external to schools, including national assessments such as the National Assessment of Educational Progress (NAEP), international assessments, and state-mandated assessments. Interest in attitudes toward science is an area drawing increased attention,
although the field is hampered by a lack of a theoretical base. Another area of current interest is
computer applications in assessment. Literature on computer assisted testing is reviewed,
emphasizing: (1) the use of item banks; (2) evaluation of student data from laboratory exercises;
(3) computer-generated homework problems and quizzes; (4) computerized feedback in the
classroom; (5) advantages and drawbacks to computerized assessment; (6) computerized
diagnostic testing; (7) computer managed instruction; and (8) the impact of interactive video.
Alternative assessments have been the focus of several studies. The data collecting, storage, and
analysis capabilities of the computer make alternative technology-based assessments
increasingly attractive. There is a 70-item list of references.

Helgeson, S. L. & Kumar, D. D. (1993). A review of educational technology in science assessment. Journal of Computers in Mathematics and Science Teaching, 12(3/4), 227-243.
Reviews emerging applications of microcomputers and hypermedia to assessment. Use
consists mainly of computerized test administration of multiple-choice tests drawn from item
banks. Significant advantages are possible in several areas including immediate feedback to
students, formative evaluation with remediation possibilities, and monitoring homework and
laboratory activities. Gains in assessment techniques include adaptive testing that matches the student's level of performance.
Huff, K.L. & Sireci, S.G. (2001). Validity issues in computer-based testing. Educational
Measurement: Issues and Practice, 20(3), 16-25.
Advances in technology are stimulating the development of complex, computerized
assessments. The prevailing rationales for developing computer-based assessments are improved
measurement and increased efficiency. In the midst of this measurement revolution, test
developers and evaluators must revisit the notion of validity. In this article, the authors discuss
the potential positive and negative effects computer-based testing could have on validity, review
the literature regarding validation perspectives in computer-based testing, and provide
suggestions regarding how to evaluate the contributions of computer-based testing to more valid
measurement practices. They conclude that computer-based testing shows great promise for
enhancing validity, but at this juncture, it remains equivocal whether technological innovations in
assessment have led to more valid measurement.
Irvin, L.K. & Walker, H.M. (1994). Assessing children's social skills using video-based
microcomputer technology. Exceptional Children, 61(2), 182-196.
Reviews the content and procedural requirements of social-competence assessment for
children with disabilities and presents information on multiperspective prototype assessments
using a videodisc and a microcomputer with a touch screen. Assessment content focus is on
"joining in" with peers at work or play, dealing with teasing and provocation, and complying
with teacher directives. In each of these domains, cue recognition, response alternatives, and
awareness of consequences are assessed. Preliminary psychometric data from 105 students are
presented on sensitivity, reliability, and construct validity. Data indicate that the prototype
assessments discriminate among students with different social knowledge and perceptions, are
consistent internally and across time, and can be interpreted meaningfully across constructs.
Anomalous validity data reveal response biases related to assessment content and technology.
Jacobson, R. L. (1993). New computer technique seen producing a revolution in educational
testing. The Chronicle of Higher Education, September 15, A22-23, 26.
Discusses computer adaptive tests (CATs). Educational technology; Graduate Record
Examinations Board's General Test; Use of a computer's power to customize a test for each
person; Principles of item response theory; Benefits and disadvantages; CAT-related activities;

Debate on using CATs for college admissions; Impact of public-relations skills of test producers
on use and acceptance.
Kentucky Department of Education. (2003, February 19). Background on CATS Online Assessment. Retrieved September 15, 2003 from
http://www.kentuckyschools.net/KDE/Administrative+Resources/Testing+and+Reporting
+/District+Support/CATS+Online+Assessment/Background+on+the+CATS+Online+Fall
+Pilot.htm
An article published in the February issue of the Kentucky Teacher newsletter that summarizes the development of the CATS Online Assessment and provides some of the evaluation data collected from the CATS Online Fall Pilot.
Maccini, P., Gagnon, J. C. & Hughes, C. A. (2002). Technology-based practices for secondary
students with learning disabilities. Learning Disability Quarterly, 25(4), 247-262.
The researchers conducted a comprehensive review of the literature on technology-based
practices for secondary students identified as having learning disabilities (LD) involving
instruction and/or assessment that measured some aspect of performance on a general education
task or expectation (i.e., test). Technology-based practices included computer- or video-based
interventions, multimedia programs, technology-based assessment, and verbatim audio
recordings. Three practices appear promising for educating students with LD: (a) hypertext and
hypermedia software programs; (b) videodisc instruction involving contextualized learning; and
(c) multimedia software. Educational recommendations and directions for future research are
offered based upon results.
McCain, G. (1995). Technology-based assessment in special education. T.H.E. Journal, 23(1), 57-59.
Outlines approaches in technology-based assessment in special education. Includes a
video-based computer-assisted test able to learn the language preference of students; Video
segments from popular movies used as elements of a moral dilemma; Viewing of video segments
of peers who are interacting in various social situations, through computer screen.
McDonald, A. (2002). The impact of individual differences on the equivalence of computer-based and paper-and-pencil educational assessments. Computers and Education, 39(3),
299-312.
Computer-based assessment (CBA) is yet to have a significant impact on high-stakes
educational assessment, but the equivalence between CBA and paper-and-pencil (P&P) test
scores will become a central concern in education as CBA increases. It is argued that as CBA and
P&P tests provide test takers with qualitatively different experiences, the impact of individual
differences on the testing experience, and so statistical equivalence of scores, needs to be
considered. As studies of score equivalence have largely ignored individual differences such as
computer experience, computer anxiety and computer attitudes, the purpose of this paper is to
highlight the potential effects of these. It is concluded that each of these areas is of significance

to the study of equivalence and that the often inconsistent findings result from the rapid changes
in exposure to technology.
Means, B., Penuel, B. & Quellmalz, E. (2000, September). Developing assessments for tomorrow's classrooms. Report presented at The Secretary's Conference on Educational Technology, Alexandria, VA.
This paper begins with a discussion of technology-supported activities to support
meaningful learning and planning for a new research agenda. The remainder of the paper is a
description of two prototype technology-based assessments developed to help address the dearth
of appropriate student learning measures available to inquiry-oriented, technology-supported
projects. The first prototype assessment task, designed for middle and secondary school students,
presents an engaging, problem-based learning task that integrates technology use with
investigation of an authentic problem, i.e., that a group of foreign exchange students wants to
come to the United States for the summer and needs to choose one of two cities to visit. The
second prototype, tested with a fourth/fifth-grade class, is a palm-top collaboration assessment.
Approach, pilot testing, results, and next steps are described for each prototype. Excerpts from
the Internet Research Task Scoring Rubric, a list of dimensions of collaboration, and a
description of scoring classroom interactions with the collaboration rubric are attached.
National Center for Education Statistics. (2003, January 24). Special Studies: The Technology-Based Assessment Project. Retrieved August 11, 2003 from
http://nces.ed.gov/nationsreportcard/studies/tbaproject.asp
Neill, D. M. (1997). Transforming student assessment. Phi Delta Kappan, 79(1), 34-40, 58.
Widespread support for the National Forum on Assessment's 1995 "Principles and
Indicators" shows deep desire for a radical reconstruction of student assessment practices.
Assessment to enhance student learning should rest on certain foundations: an understanding of
how student learning occurs; clear statements of desired, universal learning standards; adequate
learning resources; and appropriate school structures and practices.
Park, J. (2003). A test-taker's perspective. Education Week, 22(35), 11-15.
An informal survey of students about Oregon's online testing system shows they find
computer-based testing faster and more enjoyable than the paper-and-pencil variety, and they
report feeling that they perform better on computerized assessments than on traditional tests. While some U.S. states have just begun to contemplate the possibility of computerized testing, Oregon piloted a program in spring 2001 that is now being expanded across the state as of May 8, 2003. To assess the pilot phase of the online testing program,
Oregon state education officials surveyed 740 3rd graders and 730 high school students from
around the state about their experiences taking the state assessment online compared with taking
the hard-copy version. Third graders were especially positive about Web-based testing.

Pellegrino, J.W., Chudowsky, N. & Glaser, R. (Eds.). (2001). Knowing what students know: The
science and design of educational assessment. Retrieved September 9, 2003, from The
National Academies Press Web site: http://www.nap.edu/books/0309072727/html/
Knowing What Students Know provides a compelling view of the future of educational
assessment, a future that includes better information about student learning and performance
consistent with our understandings of cognitive domains and of how students learn. That future
also promises a much tighter integration of instruction and assessment. Realizing these ambitions
depends on progress in the fields of cognition, technology, and assessment, as well as significant
changes in educational policy at local and national levels.
Pine, J., Baxter, G. P., & Shavelson, R. J. (1991, April). Computer simulations for assessment.
Paper presented at the Annual Meeting of the American Educational Research
Association, Chicago, IL.
Pitcher, N., Goldfinch, J. & Beevers, C. (2002). Aspects of computer-based assessment in
mathematics. Active Learning in Higher Education, 3(2), 159-176.
In this article a form of computer assessment in mathematics is discussed and the
progress of its use in teaching and learning is tracked. The work spans several different projects,
all using similar computer-based assessment engines, which have been progressively updated in
the light of successive evaluation results. The engines include facilities to randomize questions,
to choose feedback levels, to allow partial credit and to input mathematical expressions. The
software incorporates a facility to set and mark questions containing algebra. Evaluation results
arising from use with students are presented. The benefits and drawbacks of computer-based
assessment are catalogued in the context of working projects.
Rabinowitz, S. & Brandt, T. (2001). Computer-based assessment: Can it deliver on its promise?
Knowledge brief. Washington, D.C.: Department of Education, Office of Educational
Research and Improvement. (ERIC Document Reproduction No. ED 462447)
Computer-based assessment appears to offer the promise of radically improving both how
assessments are implemented and the quality of the information they can deliver. However, as
many states consider whether to embrace this new technology, serious concerns remain about the
fairness of the new systems and the readiness of states (and districts and schools) to support
them. This brief describes the potential advantages of a fully implemented computer-based
assessment system and then lays out questions states must address as they consider the next
generation of high-stakes assessments. Computer-based assessments promise to make obsolete
many of the shortcomings of current high-stakes, statewide assessment systems and to expand
the capacity of such systems to measure rigorous standards in truly innovative ways. They offer
answers to the usual concerns about current assessments with regard to logistics,
content/methodologies, and value. Standing in the way of computer-based assessments are other
logistical problems, such as the need for back-up machines and materials, delivery of materials
and training of test administrators, and scheduling. Security is a problem in computer-based
testing as it is in conventional testing. Test equivalence and computer equivalence must be
considered, and the largest hurdle to realizing the promise of computer-based assessment is

access. Moving ahead without addressing the issues raised in this brief will almost certainly
result in flawed assessments.
Ricketts, C. & Wilks, S.J. (2002). Improving student performance through computer-based
assessments: Insights from recent research. Assessment & Evaluation in Higher Education, 27(5), 475-479.
Compared student performance on computer-based assessment to machine-graded
multiple choice tests. Found that performance improved dramatically on the computer-based
assessment when students were not required to scroll through the question paper. Concluded that
students may be disadvantaged by the introduction of online assessment unless care is taken with
the student-assessment interface.
Russell, M. (2000). It's Time to Upgrade: Tests and Administration Procedures for the New
Millennium. Paper presented at the Secretary's Conference on Education Technology
2000.
Salyers, F. (2003). New era of accessibility: Eligible students can take state tests online this spring. Kentucky Teacher Newsletter, (Feb 2003). Retrieved August 20, 2003, from http://www.kentuckyschools.net/cgi-bin/MsmGo.exe?grab_id=98749710&EXTRA_ARG=SUBMIT%3DSearch&host_id=1&page_id=113&query=CATS+Online&hiword=CATS+ONLINE
Sandals, L.H. (1992). An overview of the uses of computer-based assessment and diagnosis.
Canadian Journal of Educational Communication, 21(1), 67-78.
Presents an overview of the applications of microcomputer-based assessment and
diagnosis for both educational and psychological placement and interventions. Advantages of
computer-based assessment (CBA) over paper-based testing practices are described, the history
of computer testing is reviewed, and the construct validity of computer-based tests is discussed.
Sclater, N. & Howie, K. (2003). User requirements of the ultimate online assessment engine.
Computers & Education, 40(3), 285-306.
As online computer assisted assessment (CAA) is adopted throughout education, the
number of CAA systems proliferates. While a number of commercial systems are gaining in
sophistication, no single package is universally appropriate. For those implementing online
assessment, selecting appropriate systems or indeed building them, it may be helpful to consider
the ultimate online CAA system. This combination of web server software, middleware and
database package does everything required of it for all possible users of the system. In this paper
we take a step back from developments and re-evaluate the requirements of CAA systems for
users with 21 possible roles. These user requirements are then mapped onto two leading online
assessment systems to analyze how close we are to achieving the ultimate CAA system.
Smith, C. (2001, May 7). No longer exempt from exams, students with disabilities must
take online versions of N.C. tests. The Charlotte Observer.

Technology and Assessment: Thinking Ahead -- Proceedings from a Workshop. (2002). Retrieved September 9, 2003 from The National Academies Press Web site: http://www.nap.edu/books/0309083206/html/
The Technology-Based Assessment Project. (2003). Techniques: Connecting Education and
Careers, 78 (2), 15.
Explores the use of computer technology to enhance the quality and efficiency of
educational assessments under the Technology-based Assessment (TBA) project. Effect of
computer technology on the National Assessment of Education Progress; Implications of
technology-based assessment for equity and operation; Empirical studies included in TBA.
Thompson, S., Thurlow, M., & Moore, M. (2002). Using computer-based tests with students with
disabilities (Policy Directions No. 15). Minneapolis, MN: University of Minnesota,
National Center on Educational Outcomes. Retrieved August 20, 2003, from
http://education.umn.edu/NCEO/OnlinePubs/Policy15.htm
Thompson, S. J., Thurlow, M. L., Quenemoen, R. F., & Lehr, C. A. (2002). Access to computer-based testing for students with disabilities (Synthesis Report 45). Minneapolis, MN: University
of Minnesota, National Center on Educational Outcomes. Retrieved August 20, 2003, from
http://education.umn.edu/NCEO/OnlinePubs/Synthesis45.html
Trotter, A. (2002). Testing computerized exams. Education Week, 20(37), 30-35.
Van Horn, R. (2003). Computer adaptive tests and computer-based tests. Phi Delta Kappan,
84(8), 567-569.
Presents the thoughts of the author about the intersection of assessment and technology in
student evaluation. Assessment fundamentals for evaluating students; Improvement-referenced
evaluation; Grade-level testing; Computer Adaptive Testing (CAT); Production by the Northwest
Evaluation Association of computer adaptive Measures of Academic Progress (MAP) test;
Number of students who have taken the MAP test; Opinion that one of the best references is the
'Mental Measurements Yearbook'; Evaluation of the CAT Lexia Comprehensive Reading Test.
Vitale, M.R. & Romance, N.R. (1995). Technology-based assessment in science: Issues
underlying teacher advocacy of testing policy. Journal of Science Education and
Technology, 4(1), 65-74.
Focuses on the development of informed teacher advocacy for new advancements in
technology-based assessment. Suggests key evaluative questions for teachers to ask about any
forms of science assessment that would amplify the potential value of new technology-based
forms of classroom science assessment applications.
Walsh, M. (2003). Marketing to the test. Education Week, 22(35), 35-37.
Since the 1920s, the business of school testing has largely been a province of educational
publishing. The same companies that published American textbooks also distributed such
well-known assessments as the Stanford Achievement Test, the California Achievement Test, and the
Iowa Tests of Basic Skills. The big publishers cornered the market for decades largely because of
the high upfront costs of developing and validating test content as well as the complicated nature
of distributing and scoring paper tests. The market for computer-based school testing should be
about as wide open as other technology and Internet business niches. Eduventures estimates that
educational testing in the U.S. was a $925 million industry in 2002. But revenues related to
online and computer-based assessment represented no more than $50 million.
Welch, R. E., & Frick, T. (1993). Computerized adaptive testing in instructional settings.
Educational Technology Research & Development, 41(3), 47-62.
Discusses the use of computerized adaptive testing (CAT) in the classroom. Highlights
include item response theory; sequential probability ratio test (SPRT); combining SPRT with
expert system reasoning, resulting in EXSPRT; and a study of college students that examined the
efficiency and accuracy of the various CAT methods discussed.
Wiig, E.H. & Jones, S.S. (1996). Computer-based assessment of word knowledge in teens with
learning disabilities. Language, Speech, & Hearing Services in Schools, 27(1), 21-28.
Examines word knowledge in teenagers with learning disabilities. Computer-based and
standard administrations of the Test of Word Knowledge. Accessibility of computer technology;
Importance of word and concept knowledge for academic and lifetime achievement.
Woodward, J. & Rieth, H. (1997). A historical review of technology research in special
education. , Review of Educational Research, 67(4), 503-536.
Reviews the research literature since 1980 on uses of technology in special education.
Unlike past reviews, which have typically focused on academically related issues and the
effectiveness of computer-assisted instruction, this review also summarizes the extensive
observational and naturalistic studies, as well as research efforts in technology-based assessment.
This diversity of research in special education stems from the multiple roles of the special
education teacher, who, in addition to bearing instructional responsibilities, often determines
eligibility for services, tracks progress toward Individual Education Plan goals, and facilitates a
student's day-to-day participation in general education settings.
Zakrzewski, S. & Steven, C. (2000). A model for computer-based assessment: The Catherine
wheel principle. Assessment & Evaluation in Higher Education, 25(2), 201-215.
The success of computer-based assessment systems is based on a structured approach to
design and implementation together with a model that generates efficient and effective standards
and procedures. This paper proposes a model that utilizes a step-wise approach to assessment
design and implementation within which the management and assessment of operational,
technical, pedagogic and financial risks are made explicit. It is the strategies for risk elimination
that form the basis for the standards and procedures adopted.

Annotated Bibliography for test accommodations


The Alliance for Technology Access. (1994). Computer resources for people with disabilities: A
guide to exploring today's assistive technology. Alameda, CA: Hunter House, Inc.
Anderson-Inman, L., Horney, M., Chen, D., & Lewin, L. (1994). Hypertext literacy:
Observations from the ElectroText Project. Language Arts, 71(April), 279-287.
Arditi, A. (1999). Making text legible: Designing for people with partial sight. New York:
Lighthouse. Retrieved August 20, 2003 from http://www.lighthouse.org/print_leg.htm
Barton, K.E.W. (2002). Stability of constructs across groups of students with different disabilities
on a reading assessment under standard and accommodated administrations (Doctoral
dissertation, University of South Carolina, 2002). Dissertation Abstracts International,
51, 417.
Federal legislation requires the inclusion of students with disabilities (SDs) in
assessments, who may need testing accommodations. The stability of reading constructs on a test
of reading was investigated to address the following: Is the construct of reading being measured
similarly across groups with and without disabilities on a regular and orally accommodated (OA)
form? The study was based on two data sets, which consisted of SDs and students without
disabilities in grades 10-12 who took either the regular or OA forms in a large-scale assessment
program in the South-East. An exploratory factor analysis (EFA) of student responses for
students without a disability was conducted on both forms. Based on the unrotated factor patterns
estimated by the EFA, confirmatory factor analyses (CFA) of responses given by SDs were
conducted and goodness of fit assessed. The results of these analyses suggest the constructs of
reading are measured similarly across various disability groups in both the regular and oral
administration forms.
Behuniak, P. (2002). Types of commonly requested accommodations. In R.B. Ekstrom &
D.K. Smith (Eds.), Assessing individuals with disabilities in educational, employment, and
counseling settings (pp. 45-58). Washington, DC: American Psychological
Association.
Describes some of the many possible types of testing accommodations that can increase
access to test participation by test takers with disabilities. Topics include: physical environment
changes as accommodations; changes in testing time as accommodations; changes in test
directions as accommodations; test accommodations involving changes in test format and
response mechanisms; response aids as accommodations; alternative measures; and computer-based testing.
Burk, M. (1999). Computerized test accommodations: A new approach for inclusion and success
for students with disabilities. Washington, D.C.: A.U. Software, Inc.

Burns, E. (1998). Test accommodations for students with disabilities. Springfield, IL: Charles C.
Thomas.
Identifying and implementing valid test accommodations involves the interaction of
several important legal, practical and theoretical considerations. How test accommodations are
conceptualized, and when and if accommodations are implemented depends on how each of
these components is viewed. The purpose of this book is to consider legal questions, theoretical
issues, and practical methods for meeting the assessment needs of students with disabilities.
Chapters address test accommodation topics relating to Federal and State regulations (including
the IDEA Amendments Act of 1997), problems concerning reliability and validity, and practical
strategies for planning test accommodations and adapting and modifying tests. It is intended for
psychologists, special educators, reading and speech specialists, regular education teachers,
multidisciplinary and individualized education program team members, parents, and
professionals involved in educational decision making.
Cardman, M. (Ed.) (1999). Classroom inclusion not carried through to tests. Special Education
Report, 25(18), 1-2.
Discloses the findings of a study on the accommodations that special education students
in Illinois receive during testing. Percentage of students that have participated in the Illinois
Standards Achievement Tests; Training program for teachers to bring test accommodations and
participation more closely in line with classroom accommodations.
Dyck, N. & Pemberton, J.B. (2002). A model for making decisions about text adaptations.
Interventions in School and Clinic, 38(1), 28-35.
This article examines a process for teachers to use when deciding whether to adapt a text
for a student. The following five options for text adaptations are described: bypass reading,
decrease reading, support reading, organize reading, and guide reading. Adaptations for student
work products and for tests are also addressed.
Ediger, M. (2001). Taking tests: More time for the handicapped? (ERIC Document
Reproduction Service No. ED455276)
This paper discusses issues related to testing accommodations for the disabled, focusing
on the provision of extra time in testing. Recent research on learning styles and multiple
intelligences makes the case for allowing for student individuality in instruction, but considering
these theories in designing test accommodations could lead to endless changes. Among the
questions that must be considered when state mandated tests are given is whether the student
would be hindered in indicating what had been learned if no accommodations were made. It is
also important to determine whether accommodations are to be considered on a case-by-case
basis and who will decide the accommodations to be made. Other considerations are the threat of
litigation if accommodations are not made and general issues of fairness. A complete review of
standardized testing practices should be conducted to determine the proper place of testing and
the proper use of tests before appropriate accommodations can be designed.

Elliott, S.N. & Kratochwill, T.R. (1998). The assessment accommodation checklist. Teaching
Exceptional Children, 31(2), 10-15.
Presents some perspectives, answers and practical suggestions for United States
educators in the area of inclusive assessment. Testing accommodations for students with
disabilities; Inclusion and integrity. INSETS: The Assessment Accommodation Checklist:
Content overview; Standards, equity, and accommodation.
Elliott, J., Kratochwill, T.R., & McKevitt, B.C. (2001). Experimental analysis of the effects of
testing accommodations on the scores of students with and without disabilities. Journal
of School Psychology, 39(1), 3-24.
Focuses on the use and effects of testing accommodations on the scores of fourth-grade
students with disabilities on challenging mathematics and science performance assessment tasks.
Results indicate that slightly more than 75% of the testing accommodation packages suggested
by students' individual education plan teams had a moderate to large effect on their test scores.
Elliott, S.N., McKevitt, B.C., & Kettler, R.J. (2002). Testing accommodations research and
decision making: The case of good scores being highly valued but difficult to achieve
for all students. Measurement & Evaluation in Counseling & Development, 35(3), 153-166.
Evaluates definitional, legal and validity issues on testing accommodations confronting
educators in the United States. Uses and likely effects of testing accommodations; Importance of
the inclusion of students with disabilities in the assessment to improve the qualities of
educational opportunities; Psychometric issues associated with the use of testing
accommodations.
Elliott, J., Thurlow, M., Ysseldyke, J. & Erickson, R. (1997). Providing assessment
accommodations for students with disabilities in state and district assessments. NCEO
Policy Directions, 7.
This report examines issues concerning the provision of accommodations for students
with disabilities participating in state and district assessments. The report considers what an
accommodation is, what kinds of accommodations are available, who should make the decision
regarding accommodations, when accommodations should be used, and how accommodations
affect test results. Analysis of state written guidelines has resulted in identification of the
following principles to guide decisions: (1) base decisions on the student's needs; (2) use a form
identifying variables in accommodation decisions; (3) have people who know the student make
decisions about accommodations; (4) align instruction, classroom testing, and district or state
assessment; and (5) consider the type of test. States are urged to have a written assessment policy
which reflects inclusive practices for student participation in assessment and clear assessment
accommodation policies. Two tables list types of assessment accommodations and sample
questions to consider in the decision process.

Elliott, J., Ysseldyke, J., Thurlow, M. & Erickson, R. (1998). What about assessment and
accountability?: Practical implications for educators. Teaching Exceptional Children,
31(2), 20-27.
Highlights specific questions, concerns, and challenges faced by educators, parents, and
students concerning assessment, accountability, accommodations, and alternate assessment.
Examples of state assessment-accommodation policies are provided and reasons for providing
test accommodations, requirements of Individualized Education Programs, eligibility for
alternate assessment, and parent participation are explained.
Fuchs, L. (2001). Helping teachers formulate sound test accommodations decisions for students
with learning disabilities. Learning Disabilities Research & Practice, 16(3), 174-181.
This paper introduces a data-based approach as an alternative way to help teachers
formulate decisions about the validity of test accommodations for students with LD. Three
rationales for the approach are provided: (a) an inadequate research base to guide decision-making; (b) the heterogeneity of the LD population; and (c) problems with teachers' use of
subjective judgment. Well-controlled studies on test accommodations are too scarce to draw firm
conclusions about effects for the group of students labeled learning disabled (LD). Moreover, in
light of the heterogeneity of learning disabilities, the individual, rather than the LD label, may be
the more appropriate unit for deciding which test accommodations preserve the validity of test
scores for students with LD. In this paper, we provide a rationale for a data-based approach to
help teachers formulate decisions about the validity of test accommodations for individual
students with LD. Then we describe an objective assessment process teachers may use in
determining valid test accommodations. We conclude with recommendations for practitioners.
Fuchs, L.S. & Fuchs, D. (1999). Fair and unfair testing accommodations. School Administrator,
56(10), 24-27.
Test accommodations are changes in standardized test conditions to equalize
opportunities between students with or without disabilities by achieving valid scores. The
Individuals with Disabilities Act 1997 amendments require states and districts to include disabled
students in accountability programs. Assumptions, practical implications, methodologies, and
resources are discussed.
Fuchs, L.S., Fuchs, D., Eaton, S.B., Hamlett, C., & Karns, K.M. (2000). Supplementing teacher
judgments of mathematics test accommodations with objective data sources. School
Psychology Review, 29(1), 65-85.
Examines the use of data-based assessment process to supplement teacher judgments of
mathematics test accommodations for learning disabled students in the United States. Provisions
of the 1997 Individuals with Disabilities Education Act; Exclusion of learning disabled in
educational accountability programs; Lack of consensus on appropriate accommodations.

Fuchs, L.S., Fuchs, D., Eaton, S.B., Hamlett, C., Binkley, E. & Crouch, R. (2000). Using
objective data sources to enhance teacher judgments about test accommodations.
Exceptional Children, 67(1), 67-81.
181 students with learning disabilities (LD) and 184 students without LD completed 4
brief, parallel tests under 4 conditions (standard, extended time, large print, students reading
aloud). The authors (a) examined whether students with LD benefited from accommodations
more than students without LD; (b) estimated "typical" accommodation boosts among
nondisabled students; and (c) awarded accommodations to students with LD whose boosts
exceeded the "typical" boost. Teachers provided independent accommodation judgments, and
students with LD completed a large-scale assessment with and without accommodations.
Students with LD, as a group, profited differentially from reading aloud, not from extended time
or large print; teachers' decisions did not correspond to benefits students derived; the data-based
assessment predicted differential performance on large-scale assessment better than teacher
judgments.
Gajria, M. (1994). Teacher acceptability of testing modifications for mainstreamed students.
Learning Disabilities Research and Practice, 9(4), 236-243.
Sixty-four general secondary education teachers provided data on 32 test adaptations for
students with disabilities. Data included awareness, use, integrity, effectiveness, and ease of use.
Results suggest that teachers are likely to use modifications that they perceive to maintain
academic integrity, be effective, and require little individualization in terms of planning,
resources, and extra time.
Gordon, R.P., Stump, K. & Glaser, B. (1996). Assessment of individuals with hearing
impairments: Equity in testing procedures and accommodations. Measurement &
Evaluation in Counseling & Development, 29(2), 111-118.
Reviews research related to testing procedures and accommodations for individuals with
hearing impairments, with a special focus on the specific issues that may influence standardized
psychological, educational, and work-related testing procedures for people with hearing
impairments. Includes considerations for professionals conducting assessments with this
population.
Griffith, D. (1990). Computer access for persons who are blind or visually impaired: Human
factors issues. Human Factors, 32(4), 467-475.
Provides an overview of the problems confronting the blind or visually impaired (VI)
computer user. The generic types of computer systems adapted for use by blind and/or VI
persons are described. General human factors problems confronting VI computer users are
reviewed, along with potential solutions. Instructive research on issues concerning blind and VI
persons is outlined. Problems created by icon-based interfaces are discussed, and commonalities
between the problems faced by the blind or VI user and the sighted user are considered.

Hanson, E.G., Lee, M.J., & Forer, D.C. (2002). A self-voicing test for individuals with visual
impairments. Journal of Visual Impairment & Blindness, 96(4), 273-275.
A study involving 17 individuals (ages 17-55) who were blind examined the use of a
prototype testing system that employs synthesized speech to deliver questions on reading and
listening comprehension tests. In general, the usability of the system was evaluated positively.
The most serious problem was synthesized speech quality.
Helwig, R. & Tindal, G. (2003). An experimental analysis of accommodation decisions on
large-scale mathematics tests. Exceptional Children, 69(2), 211-225.
Reports on an investigation of teachers and students within special education to
determine the accuracy with which teachers recommend read-aloud accommodations for
mathematics tests. Development of a profile of students who benefit from this type of
accommodation; Administration of a mathematics achievement test; Failure of teachers at
predicting which students would benefit from the accommodation.
Helwig, R., Rozek-Tedesco, M.A. & Tindal, G. (2002). An oral versus a standard administration
of a large-scale mathematics test. Journal of Special Education, 36(1), 39-47.
Students (n=1,343) were administered either a standard format mathematics test or one in
which questions were read aloud. Performance of students with learning disabilities (LD) was
compared to others on specific "difficult reading" items. Elementary school students with LD
(but not middle school students with LD or general education students) performed better when
these items were read aloud.
Henry, S. (1999). Accommodating practices. School Administrator, 56(10), 32-34, 36-38.
Accommodations are test-administration changes that do not change the underlying
construct being measured. The Individuals with Disabilities legislation and regulations provide
little guidance on implementing assessment accommodations for students with learning
disabilities. Certain research-based checklists or rating scales help link instructional and testing
accommodations.
Hoener, A., Salend, S., & Kay, S. I. (1997). Creating readable handouts, worksheets, overheads,
tests, review materials, study guides, and homework assessments through effective
typographic design. Teaching Exceptional Children, 29 (3), 32-35.
Hollenbeck, K., Tindal, G. & Almond, P. (1998). Teachers' knowledge of accommodations as a
validity issue in high-stakes testing. Journal of Special Education, 32(3), 175-183.
A survey of 166 regular and special-education teachers concerning allowable accommodations
on statewide assessment tests found only 21% reported that they used allowed accommodations.
Teachers' knowledge of allowable accommodations was low and suggests some students are
unnecessarily exempted from participation. Results support preservice and inservice training in
allowable accommodations.

Jayanthi, M. & Epstein, M. (1996). A national survey of general education teachers' perceptions
of testing adaptations. Journal of Special Education, 30(1), 99-115.
Respondents (n=401) to a national survey of general education teachers identified
specific testing adaptations seen to be either helpful for students with disabilities or easy to
make. Two-thirds of respondents indicated that it was not fair to make testing adaptations only
for students with disabilities, whereas one-third indicated that this was fair.
Johnson, E.S. (2000). The effects of accommodations on performance assessments. Remedial
and Special Education, 21(5), 261-267.
A study examined the effects of providing the accommodation of reading the
mathematics items to 115 fourth-grade students, 38 with learning disabilities, on the Washington
Assessment of Student Learning. Students with learning disabilities benefited from having the
mathematics items read to them, although typical students had higher test scores.
Keiser, S. (1998). Test accommodations: An administrator's view. In M. Gordon & S.
Keiser (Eds.), Accommodations in higher education under the Americans with Disabilities Act
(ADA): A no-nonsense guide for clinicians, educators, administrators, and lawyers (pp.
46-69). Philadelphia: Office of Test Accommodations.
Covers test accommodations for students with disabilities. The author discusses the role
of the test accommodations administrator, how to determine which students are covered under
the Americans with Disabilities Act, and how to determine effective accommodations. Also
discussed is the role of the diagnostician, as well as the documentation review process, and
examples of accommodations and their functions. The National Board of Medical Examiners'
documentation guidelines are also included.
Lombardi, T.P. & Burke, D. (1999). To test or not to test? Teaching Exceptional Children,
32(1), 26-29.
Discusses the participation of students with disabilities on the Stanford Achievement Test
in West Virginia and the testing guidelines and testing accommodations used. The effects of the
inclusion of students with disabilities on class standings and teacher attitudes towards the
participation of students with disabilities are addressed.
Luckner, J. & Denzin, P. (1998). In the mainstream: Adaptations for students who are deaf or
hard of hearing. Perspectives in Education and Deafness, 17(1), 8-11.
Discusses the need to provide specific adaptations in instruction and assessments to
students who are deaf or hard of hearing in general education classrooms. It provides a list of
adaptations used in general education classrooms and includes adaptations for the environment,
input, output, social, behavioral, evaluation, and grading.

Meloy, L.L., Deville, C. & Frisbie, D.A. (2002). The effect of a read aloud accommodation on
test scores of students with and without a learning disability in reading. Remedial &
Special Education, 23(4), 248-255.
The purpose of this study was to examine the effect of a read aloud testing
accommodation on students with and without a learning disability in reading. A sample of 260
midwestern middle school students (24% with a learning disability in reading, and 76% without
such a disability) were randomly assigned to two experimental conditions for testing with four
tests of the Iowa Tests of Basic Skills. The test conditions were standard administration and
reading the tests aloud to the students. Based on a two-way (2 × 2) analysis of variance, with test
administration and student status as the two fixed factors, the students with learning disabilities
in reading, as well as those without, exhibited statistically significant gains with the read aloud
test administration. Interaction effects were not significant. Implications of these results for the
read aloud accommodation are presented.
National Center on Educational Outcomes. (1993). Accommodating students with disabilities in
national and state testing programs. (Brief Report 9). Minneapolis, MN.
The challenge of making educational programs accessible to people with disabilities,
combined with the increasing emphasis on testing and other forms of assessment, has translated
into a need for accommodations in the testing and assessment of people with disabilities.
Typically, test accommodations are provided for people with sensory and/or physical
impairments, but there is less agreement about test accommodations for people with learning
disabilities and other less visible disabilities. Making decisions about test accommodations is
challenging because changing testing procedures or materials may change the technical adequacy
of a test. Characteristics of technical adequacy include test reliability, accuracy in making
predictions about test takers, and test validity. Accommodations allowed in national and state
assessment programs are inconsistent, involving widespread variation in practice. A
comprehensive set of guidelines is needed that state and national agencies can use in decision
making. Guidelines for making decisions about accommodations need to address: inclusion and
exclusion criteria, when and how to modify tests or testing procedures, how to report scores, and
how to summarize data. Identifying accommodations that are both fair and technically adequate
requires a delicate balancing of individual and societal rights.
Nelson, J.S., Jayanthi, M., Epstein, M.H. & Bursuck, W.D. (2000). Student preferences for
adaptations in classroom testing. Remedial and Special Education, 21(1), 41-52.
A study investigated the preferences of 158 middle school students with and without
disabilities for specific adaptations in general education classroom testing. Open-notes and
open-book tests were among the adaptations most preferred. Students with disabilities and/or students
with low achievement indicated significantly higher preference than others for several
adaptations.
Nester, M.A. (1994). Psychometric testing and reasonable accommodation for persons with
disabilities. In Bruyere, S.M. & O'Keeffe, J. (Eds.), Implications of the Americans

with Disabilities Act for psychology (pp. 25-36). Washington, D.C.: American
Psychological Association.
The issue of nondiscrimination against persons with disabilities is a very new one in
mainstream psychometrics and industrial/organizational psychology. Testing accommodations
will be discussed under three broad categories: testing medium, time limits, and test content.
Legal and regulatory requirements [for employment testing] and the role of rehabilitation
psychologists are discussed.
Olson, L. (2002). Testing. Education Week, 21(20), 6.
Offers a look at educational testing in the United States. Publication of 'Data-Based
Decision-Making,' released by the National Association of Elementary School Principals, to help
teachers understand the testing research data; Website on the use of testing accommodations for
students with disabilities.
Online testing accommodations database unveiled. (2002). Education Week, 44(2), 5.
Reports on the development of an online searchable database by the National Center for
Education Outcomes in the U.S. Purpose of the database; Reauthorization of the 1997
Individuals With Disabilities Education Act in the U.S; List of testing accommodations for
disabled students made by U.S. states.
Phillips, S.E. (1994). High-stakes testing accommodations: Validity versus disabled rights.
Applied Measurement in Education, 7(2), 93-120.
This article explores the measurement problems associated with granting
accommodations for mental disabilities, uses existing case law to construct a framework for
considering such accommodations, and discusses the advantages and disadvantages of alternate
strategies for handling testing accommodation requests. Key questions that must be addressed
by measurement specialists include: Why are accommodations for mental disabilities more
problematic than those for physical disabilities? What are the characteristics of a valid
accommodation? What are the legal standards for denying a requested test accommodation?
Phillips, S.E. (2002). Legal issues affecting special populations in large-scale testing programs.
In G. Tindal & T.M. Haladyna (Eds.), Large-scale assessment programs for all
students: Validity, technical adequacy, and implementation (pp. 109-148). Mahwah, NJ:
Lawrence Erlbaum Associates.
Notes that while legal challenges to state assessment programs have focused primarily on
the issues of adverse impact, parental rights, and testing accommodations, once a case goes to
trial, issues of test validity, reliability, passing standards, and adherence to other professional
standards are typically raised. The Debra P. v. Turlington (1984) and G. I. Forum et al v. Texas
Education Agency et al (2000) cases are briefly described and provide precedent for many of
the testing standards reviewed in subsequent sections of the chapter. The major focus of this
chapter is on the area of testing accommodations, where there are few legal precedents and tough

policy questions. Other topics discussed in this chapter include professional standards,
accommodations vs modifications, reporting and interpreting of scores, and recommendations
based on legal requirements and psychometric properties.
Pitoniak, M.J. & Royer, J.M. (2001). Testing accommodations for examinees with disabilities: A
review of psychometric, legal, and social policy issues. Review of Educational Research,
71(1), 53-104.
Reviews several issues related to the provision of testing accommodations to examinees
with disabilities. United States laws that act as catalysts for researching and implementing testing
accommodations; Discussion on learning disabilities; Strategies for test accommodation; Legal
cases related to the provision of testing accommodations for examinees with disabilities;
Categories of difficulties in conducting research on testing accommodations.
Schulte, A.A.G., Elliott, S.N. & Kratochwill, T.R. (2000). Educators' perceptions and
documentation of testing accommodations for students with disabilities. Special Services
in the Schools, 16(1-2), 35-56.
This investigation focused on educators' use of the Assessment Accommodations
Checklist (AAC) to facilitate selection of assessment accommodations for two hypothetical
students with disabilities. Results suggest that educators did not recommend significantly more
accommodations for use with a student with a severe disability compared with a student with a
milder disability, except on performance assessment tasks.
Shriner, J.G., & DeStefano, L. (2003). Participation and accommodation in state assessment: The
role of the Individualized Education Program. Exceptional Children, 69(2), 147-161.
In this intervention study, training was found to increase the quality and extent of
participation and accommodation documentation on the Individualized Education Program (IEP)
of students with disabilities in three Illinois school districts. Correlations between what was
documented on the IEP and what happened on testing day were highly variable.
Students with disabilities and high-stakes testing [Special issue]. (2001). Assessment for
Effective Intervention, 26(2).
Thurlow, M., House, A.L., Scott, D.L. & Ysseldyke, J. (2000). Students with disabilities in
large-scale assessments: State participation and accommodation policies. Journal of Special
Education, 34, 154-163.
Thurlow, M., Ysseldyke, J., Bielinski, J. & House, A. (2000). Instructional and assessment
accommodations in Kentucky (State Assessment Series Report 7). Minneapolis, MN:
University of Minnesota, National Center on Educational Outcomes.
Tindal, G. & Fuchs, L. (1999). A summary of research on test changes: An empirical basis for
defining accommodations. (Contract No. H326R980003). Lexington, KY: Mid-South
Regional Resource Center. (ERIC Document Reproduction Service No. ED442245).

This document summarizes the research on test changes to provide an empirical basis for
defining accommodations for students with disabilities. It begins by providing an historical
overview of special education accountability. It describes how separate special education
accountability systems have evolved and summarizes information on the participation of students
with disabilities in general education accountability systems. The role of the Individualized
Education Program as the main vehicle for expressing the need for test accommodations is
emphasized. The paper then summarizes the research on test changes using a taxonomy from the
National Center on Educational Outcomes. Testing accommodations are reviewed relating to
timing and scheduling of testing, test settings, computer presentation of tests, examiner
familiarity, multiple changes in presentation, dictation to a proctor or scribe, using an alternative
response, marking responses in test booklets, working collaboratively with other students, using
word processors, using calculators, reinforcement, and instruction on test-taking strategies.
Tindal, G., Heath, B., Hollenbeck, K., Almond, P. & Harniss, M. (1998). Accommodating
students with disabilities on large-scale tests: An experimental study. Exceptional
Children, 64(4), 439-450.
Seventy-eight special-education students (ages 8-13) and 403 typical students took a
large-scale statewide test using standard test administration procedures and two major
accommodations addressing response conditions and test administration. No differences were
found for the response conditions; however, students with disabilities to whom the test was read
aloud performed better.
Tindal, G., Hollenbeck, K., Heath, B., & Almond, P. (1998). The effect of using computers as an
accommodation in a statewide writing test. Paper presented at the roundtable of the AERA
Including Students with Disabilities in Large Scale Assessments SIG.
USDOE (2001). Clarification of the role of the IEP team in selecting individual
accommodations, modifications in administration, and alternate assessments for state and
district-wide assessments of student achievement. United States Department of
Education.
Weston, T.J. (1999). Investigating the validity of the accommodation of oral presentation in
testing (learning disabilities, fourth-grade). Dissertation Abstracts International, 60,
1083A.
Ysseldyke, J., Thurlow, M., Bielinski, J., House, A., Moody, M. & Haigh, J. (2001). The
relationship between instructional and assessment accommodations in an inclusive state
accountability system. Journal of Learning Disabilities, 34(3), 212-220.
We investigated the kinds of instructional and assessment accommodations students with
disabilities receive, and the extent to which instructional accommodations match assessment
accommodations. Most students who had IEPs in specific content areas received instructional
accommodations in those areas, and there were no differences by disability type. We provide data

on the specific types of accommodations used. Overall, students' assessment accommodations


matched their instructional accommodations, though many students received testing
accommodations that had not been received in instruction. Implications are discussed for IEP
teams who make decisions about instructional and assessment accommodations.
www.doe.state.de.us/aab/DSTP_research.html
National Center on Educational Outcomes. (1993). Accommodating students with disabilities in
national and state testing programs. (Brief Report 9). Minneapolis, MN.
The challenge of making educational programs accessible to people with disabilities,
combined with the increasing emphasis on testing and other forms of assessment, has translated
into a need for accommodations in the testing and assessment of people with disabilities.
Typically, test accommodations are provided for people with sensory and/or physical
impairments, but there is less agreement about test accommodations for people with learning
disabilities and other less visible disabilities. Making decisions about test accommodations is
challenging because changing testing procedures or materials may change the technical adequacy
of a test. Characteristics of technical adequacy include test reliability, accuracy in making
predictions about test takers, and test validity. Accommodations allowed in national and state
assessment programs are inconsistent, involving widespread variation in practice. A
comprehensive set of guidelines is needed that state and national agencies can use in decision
making. Guidelines for making decisions about accommodations need to address: inclusion and
exclusion criteria, when and how to modify tests or testing procedures, how to report scores, and
how to summarize data. Identifying accommodations that are both fair and technically adequate
requires a delicate balancing of individual and societal rights.

Annotated Bibliography for universal design of assessment
Bar, L., & Galluzzo, J. (1999). The accessible school: Universal design for educational settings.
Berkeley, CA: MIG Communications.
Bowe, F.G. (2000). Universal design in education. Westport, CT: Bergin & Garvey.
Thoroughly but informally describes and applies the "seven principles of universal
design" to various aspects of formal schooling systems. The aspects include audio-visual and
other resources for teaching and learning, computer rooms, email and web pages, so the book has
at least something to say to us all. Offers cost-effective ways to respond to the special needs of
today's diverse students.
Burgstahler, S. (2001). Universal design of instruction. Seattle: DO-IT, University of
Washington. Retrieved August 20, 2003, from
http://www.washington.edu/doit/Brochures/Academics/instruction.html
Burgstahler, S. (2003). Equal access: Computer labs. Seattle: DO-IT, University of Washington.
Retrieved August 20, 2003, from
http://www.washington.edu/doit/Brochures/Technology/comp.access.html
Cawley, J.F., Foley, T.E. & Miller, J. (2003). Science and students with mild disabilities.
Intervention in School & Clinic, 38(3), 160-171.
This article presents a framework for science programming at the elementary school level
by incorporating the principles of universal design. Adherence to the principles of universal
design shows promise for (a) increasing access to the general education curriculum, (b)
enhancing student progress in science, and (c) framing the general education curriculum to make
it more appropriate for students with disabilities (Orkwis, 1999). Five models of elementary
school science are reviewed, with an emphasis on the principles of universal design: spiral,
intensified, theme-based, integrated, and multiple-option. The multiple-option curriculum, as
exemplified by Science for All Children, both meets the criteria and expands on the principles of
universal design.
Connell, B.R., Jones, M., Mace, R., Mueller, J., Mullick, A., Ostroff, E., Sanford, J., Steinfeld,
E., Story, M., & Vanderheiden, G. (1997). The principles of universal design. Raleigh,
NC: North Carolina State University, Center for Universal Design. Retrieved August 20,
2003, from http://www.design.ncsu.edu/cud/univ_design/princ_overview.htm.
Dolan, R., & Hall, T. (2001). Universal design for learning: Implications for large-scale
assessment. International Dyslexia Association Perspectives, 27(4), 22-25.

Dolan, R. P., & Rose, D. H. (2000). Accurate assessment through Universal Design for Learning.
Journal of Special Education Technology, 15(4).
ERIC/OSEP (Educational Resources Information Center & Office of Special
Education Programs). (1998, Fall). Topical Report. Washington, DC: Author. Retrieved July
2002, from the World Wide Web: www.cec.sped.org/osep/ud-sec3.html.
Goldberg, L. (1999). Making learning accessible. Exceptional Parent, 29(11), 34-40.
Hehir, T. (2002). Eliminating ableism in education. Harvard Educational Review, 72(1), 1-32.
Reviews research on education of students with deafness, blindness, visual impairments,
or learning disabilities. Finds that ableist assumptions reinforce prejudices and contribute to low
educational attainment. Advocates including disabilities in diversity efforts, encouraging
disability-specific modes of learning, specializing special education, focusing on results over
placement, promoting high standards, and employing universal design.
Higbee, J.L. (2001). Implications of universal instructional design for developmental education.
Research and Teaching in Developmental Education, 17(2), 67-70.
States that many of the accommodations typically made for students with disabilities can
benefit all students--most, for example, can benefit from extended time or alternative test
formats. Asserts that Universal Instructional Design (UID) can be applied to all students, as it
takes into consideration the differences in learning styles of both disabled and non-disabled
learners.
Hitchcock, C., Meyer, A., Rose, D. & Jackson, R. (2002). Providing new access to the general
curriculum. Teaching Exceptional Children, 35(2), 8-17.
Discusses the evolution of the general curriculum and special education in the U.S., a
framework for curriculum reform, principles of a Universal Design for Learning-based
curriculum, and the diversity of learning needs among students.
Mace, R. (1991). Definitions: Accessible, adaptable, and universal design (Fact Sheet). Raleigh,
NC: Center for Universal Design, NCSU.
Meyer, A., & O'Neill, L. (2000). Beyond access: Universal Design for Learning.
Exceptional Parent, 30(3), 59-61.
O'Neill, L. (1999). eReader: A technology key for reading success. Exceptional Parent, 29(12),
54.
O'Neill, L. (2001). The Universal Learning Center: Helping teachers and parents find accessible
electronic learning materials for students with disabilities. Exceptional Parent, 31(9), 56-59.

For students with disabilities, electronic (or digital) text offers many advantages over
print-based materials. A one-stop resource where teachers could quickly find electronic versions
of curricular materials and the resources and supports they need to use them effectively, even for
the next day's lesson, seems logical. CAST is creating such a resource with its new Universal
Learning Center (ULC), a Web-based service (http://www.ulc.cast.org) that will enable teachers,
students, administrators, and parents to locate and acquire accessible digital content and software
tools to help them meet the needs of individual learners, especially those with disabilities.
Pisha, B. & Coyne, P. (2001). Smart from the start: The promise of Universal Design for
Learning. Remedial and Special Education, 22(4), 197-203.
Rose, D. (2001). Universal design for learning: Associate editor column. Journal of Special
Education Technology, 15(2), 1-8.
Rose, D. & Dolan, B. (2000). Universal design for learning: Associate editor column. Journal
of Special Education Technology, 15(4), 47-51.
Rose, D. & Meyer, A. (2002). Teaching every student in the digital age: Universal design for
learning. Alexandria, VA: Association for Supervision and Curriculum Development.
Rose, D. & Meyer, A. (2000). Universal design for individual differences. Educational
Leadership, 58(3), 39-43.
Applied to instruction, the principles of universal design can guide the development of
educational tools to accommodate the diverse needs of all learners, including those with
disabilities.
Rose, D., Sethuraman, S. & Meo, G. (2000). Universal Design for Learning. Journal of
Special Education Technology, 15(2), 56-60.
Thompson, S. J., Johnstone, C. J., & Thurlow, M. L. (2002). Universal design applied to large
scale assessments (NCEO Synthesis Report 44). Minneapolis, MN: University of
Minnesota, National Center on Educational Outcomes. Retrieved August 20, 2003, from
http://education.umn.edu/NCEO/OnlinePubs/Synthesis44.html
Vanderheiden, G.C. (1990). Thirty-something million: Should they be exceptions? Human
Factors, 32(4), 383-396.
Wehmeyer, M.L., Lance, G.D. & Bashinski, S. (2002). Promoting access to the general
curriculum for students with mental retardation: A multi-level model. Education and
Training in Mental Retardation and Developmental Disabilities, 37(3), 223-234.
This article presents a multi-step process and multi-level model to promote access to the
general curriculum for students with mental retardation. It discusses the incorporation of the
following universal design principles: equitable use, flexible use, simple and intuitive use,
perceptible information, tolerance for error, and low physical and cognitive effort.

www.adaptenv.org/universal/index.php
www.cast.org/udl
www.design.ncsu.edu/cud
www.education.umn.edu/nceo
http://ncam.wgbh.org
www.trace.wisc.edu
The Trace Research & Development Center is a part of the College of Engineering,
University of Wisconsin-Madison. Founded in 1971, Trace has been a pioneer in the field of
technology and disability. Its mission is to prevent the barriers and capitalize on the opportunities
presented by current and emerging information and telecommunication technologies, in order to
create a world that is as accessible and usable as possible for as many people as possible.
