Evaluating the CLA
University of Cincinnati
Introduction
State legislatures across the United States are "under increasing pressure to hold higher education responsible for student learning" (Klein, Kuh, Chun, Hamilton, & Shavelson, 2005), reflecting increased demands placed on educational quality. The applicability and relevance of such accountability measures are being given wide attention not only in the United States and Great Britain but also in other countries (Chapman, 1996; Gorard, 2002; Hess, 2006; Hippel, 2004; Stromquist, 2005). Interestingly, Klein et al. also point out that there is no consensus on the outcomes of student learning and how they are to be measured. In response to the growing call for accountability, the University System of Ohio recently committed all of its public four-year institutions to a system-wide accountability effort in higher education (University System of Ohio, 2008).
The Collegiate Learning Assessment (CLA) was developed by the Council for Aid to Education (Klein, Benjamin, Shavelson, & Bolus, 2007). Unlike traditional multiple-choice instruments, the CLA measures four broad variables: (1) critical thinking, (2) written communication, (3) problem solving, and (4) analytic reasoning (Klein et al., 2007). These variables are broad enough to measure a student's application of these skills in the development of a response without placing undue emphasis on specific content knowledge.
An interesting feature of the CLA that has garnered attention is its approach of using the institution as the primary unit of analysis. According to Klein et al., the CLA is designed to assess the value added by "[the] instructional and other programs (taken as a whole) with respect to certain important learning outcomes" (2007, p. 418). By shifting the focus of analysis away from the individual student, this approach allows institutions to assess small samples of the campus population rather than administer the CLA widely. Over time, an institution's involvement with the CLA should produce information illustrating how student learning is improving.
The CLA comprises two task components: the performance task and the analytical writing task. Administered via the Internet, the tasks feature open-ended prompts that require an essay response from the student (Council for the Aid to Education, n.d.). Both tasks uniquely assess critical thinking, written communication, analytic reasoning, and problem solving, and they are constructed such that a student can formulate a response without being penalized for a lack of content knowledge. In the performance task, questions will either ask the student to compare and contrast various points in the scenario or to resolve the conflicting issue(s) presented. No matter what the scenario entails, the student is provided (via a split screen and drop-down menu) a series of documents that may include charts, tables, opinions, and other information that may be pertinent to the situation. Part of the student's responsibility is to discern any credible evidence and to synthesize and justify a response based on the information provided.
Although not as extensive as the performance task, the analytical writing task requires the student to evaluate the rationale of a presented argument, whether they agree with it or not. Like the performance task, the analytical writing prompts require the student to connect evidence to support a position. What differentiates the analytical writing task is the key evaluative element of how well the student maintains sentence structure and organization in communicating that position.
CLA Reliability
Although the concepts of assessment and evaluation have been around for many years, their study has risen to prominence and developed as an academic discipline primarily in recent times. The discipline seeks to formally assess and evaluate the acquisition of, impact of, or change in a certain effect over time. For the purposes of this review, assessment and evaluation involve the evaluation of classroom learning that occurs at all levels, from pre-kindergarten through higher education. Without assessment and evaluation practices and procedures, there would be no way of knowing how well students are acquiring knowledge in a particular classroom setting. In addition, there would be little information for the teacher to use to improve instructional practices and ensure that students are learning at the highest level. Evaluation and assessment can occur both formally and informally.
Informal assessment can and should occur on an ongoing basis, as the teacher constantly monitors his or her own instructional practices and techniques and gauges the level of students' understanding from the comments, questions, and responses that occur throughout the learning process in the classroom (Knight, 2002; Klein, Kuh, Chun, Hamilton, & Shavelson, 2005). Student learning progresses best when a variety of instruction and various educational tools are employed so that students can learn to their fullest potential. The five senses should be used as much as possible in the learning process. Teachers should use technology as much as possible, use hands-on approaches to learning, and provide as many modes of instruction as possible so that the greatest amount of learning can take place. Formal assessments are best conducted when they are evaluated with well-designed formal tests that employ written essays, short answers, and requirements that students define certain concepts and objectives. In addition, most classroom assessment should employ the four language arts of reading, writing, speaking, and listening, with the teacher evaluating student performance based on structured rubrics; whenever possible, the instructor should use some type of peer assessment and evaluation to arrive at final conclusions when monitoring students' progress. The test currently under review, the Collegiate Learning Assessment, effectively measures such learning and holds up well when reviewed for reliability and internal and external validity (Kyriakides, 2002; McCaffrey, Lockwood, Koretz, Louis, & Hamilton, 2004).
Because responses are open-ended and scored with rubrics, the scoring process necessarily involves some degree of subjectivity. The test can be, and has been, replicated at more than 33 institutions, consisting mostly of small public and private colleges. Since the tests are scored by human evaluators, some flexibility and subjectivity enter from one test evaluator to another. This is controlled through common rubrics and score averaging, which enable the results to be compared on similar terms. Replication of consistent findings within this review process has been repeatedly demonstrated. Because of the flexibility of the test format, it has been administered across a wide range of ethnicities, both domestic and international. Similar processes have also been applied to such collegiate tests as the ACT, SAT, GRE, GMAT, MCAT, LSAT, and other assessments that contain open-ended questions to give a fuller picture of a student's critical thinking and analytical skills; these employ a rubric applied by different evaluators, and an average of the scores is taken to represent the student's achievement. Problems concerning validity and reliability with other types of standardized tests that have arisen in the media and literature in recent years have primarily concerned other aspects of those tests and are not applicable to their replicability and open-ended response items (Schagen, Sainsbury, & Strand, 1999; Tekwe et al., 2004).
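To make the scoring logic concrete, the following minimal Python sketch illustrates the rubric-averaging approach described above: each response is scored by multiple evaluators, the reported score is the average across evaluators, and a simple correlation serves as a rough check on rater agreement. The rater labels and scores are hypothetical and invented for illustration; the CLA's actual scoring procedures are more involved.

from statistics import mean, correlation  # correlation requires Python 3.10+

# Hypothetical rubric scores (1-6 scale) given to the same six essays
# by two independent evaluators.
rater_a = [4, 5, 3, 6, 2, 4]
rater_b = [5, 5, 3, 5, 3, 4]

# The reported score for each essay is the average across evaluators,
# which dampens evaluator-to-evaluator subjectivity.
reported = [mean(pair) for pair in zip(rater_a, rater_b)]
print("Reported scores:", reported)

# A rough agreement check: the Pearson correlation between evaluators.
# High agreement suggests the rubric constrains subjective judgment.
print("Inter-rater correlation:", round(correlation(rater_a, rater_b), 2))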
CLA Validity
The CLA's focus on critical thinking, analytic reasoning, and problem solving is shared with other assessments. What is unique about the test is its focus on value-added assessment of student learning and the administration of the test longitudinally over the course of the time students are enrolled in their baccalaureate degree programs. The internal validity of the conclusions that can be drawn about the value-added dimensions of the test is very strong. The internal validity of the CLA as compared to other existing tests is substantially greater because the same test is administered throughout the entire college experience: a student can be assessed for the critical thinking and analytic skills they possess when entering a program, and empirical evidence is provided that allows scholars to observe the amount of knowledge gained over the college experience. This is stronger evidence than mechanisms that compare a student's SAT or ACT scores, for example, to their achievement levels on the GRE, LSAT, or another test, since an "apples to oranges" approach has to be taken. Threats to internal validity can also be counterbalanced because comparisons can be made to tests administered at the more than 33 other universities currently administering the CLA.
External validity concerns are evident, but significant controls on their influence are also apparent, given the longitudinal nature of the test and its administration across a wide array of cultures and populations. Statisticians often describe the assessment of predictive validity as a process in which all individuals taking an assessment are tested at the beginning and then evaluated a second time so that their achievement levels may be compared. The CLA holds up well regarding predictive validity because almost all participants are subsequently reassessed. Criterion, content, and construct validity measures hold up well under scrutiny as well. The tests are reviewed and revised to ensure that they effectively measure the elements that "we say we are measuring" (McCaffrey, Lockwood, Koretz, Louis, & Hamilton, 2004; Tekwe, Carter, Ma, Algina, Lucas, Ariet, et al., 2004).
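As a concrete illustration of the pre/post logic behind predictive validity described above, consider the following minimal Python sketch. It checks whether scores from a first administration predict scores for the same students on a later administration; all numbers are invented for demonstration and do not come from CLA data.

from statistics import correlation  # Python 3.10+

# Hypothetical scores for the same six students, tested at entry
# and again near graduation.
entry_scores = [1020, 1150, 980, 1230, 1090, 1005]
later_scores = [1130, 1275, 1050, 1360, 1180, 1095]

# A strong positive correlation indicates that the first administration
# predicts performance on the second, one facet of predictive validity.
r = correlation(entry_scores, later_scores)
print(f"Predictive correlation: {r:.2f}")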
The CLA is useful for measuring higher-order skills and for assessing established programs that may need programmatic improvement. The test measures primarily higher-order skills and provides a measurement for institutions relative to their peers. The tasks present realistic problems, and the outcomes measure a student's ability to think critically. In addition to thinking critically, students are assessed on their ability to communicate clearly and cogently. Educators who administer this test receive aggregated test scores that appraise the institution's student performance. This approach allows the institution to serve as the unit of assessment. The directions are sufficiently clear but are stated primarily for the test taker; because the test is administered by computer, administrators may have a difficult task in duplicating it. According to Benjamin (2005), faculty are challenged to "buy into" assessment campus-wide. One reason for this lack of buy-in is doubt that assessment results can propel the improvement of teaching and learning institutionally. This test allows faculty to get results from the computer-based test with very little effort on their part. The test cannot be scored locally: because it is computer-based, it is scored centrally rather than at the participating CLA institution. The procedures for scoring the tests are explicit and virtually self-explanatory. Since scoring is handled by the Council for the Aid to Education (CAE), one may question the analysis of local norms and validity evidence at the institutional level.
The CLA test results are accompanied by a PowerPoint presentation that contains recommendations for linking the results with other data sources (such as surveys of student satisfaction and engagement and major-specific tests) and for in-depth sampling in experimental areas in subsequent years. The CLA encourages institutions to (1) communicate results institutionally, (2) link student-level CLA results with other data sources, (3) pursue in-depth sampling, and (4) participate in CLA in the classroom (CLA institutional report, 2007). The report also presents the institution's data outcomes. The comparative test groups are freshmen and seniors; these groups are compared to measure improvement in higher-order skills: critical thinking, analytical reasoning, problem solving, and written communication. The most effective way to interpret these results is through the value-added assessment. The value-added assessment, along with the averages from the freshmen and seniors, is next compared with those of other participating institutions; as of the 2007-2008 school year there were 176 institutions. The results are not meant to rank the institutions but to highlight the differences between them (CLA institutional report, 2007, p. 4). The norms are reported in the concluding materials given to the institution at the end of the test period. The populations to which the norms refer are clearly defined and described as the freshmen and seniors of one's particular institution. The norms are explicitly stated, and faculty members should find comparing their test subjects (freshmen and seniors) to the existing norms exceptionally straightforward. The test materials do not discuss the possibility of local norms; the test refers to population-wide norms rather than local norms, so an institution wanting local norms must create its own system. The test is completely computer-based but does not come with a computer program to assist with writing.
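The freshman-to-senior comparison at the heart of the reporting described above can be sketched in a few lines of Python. The example below computes a naive institution-level value-added figure as the difference between cohort means; the scores are invented, and the CLA's actual value-added model is more elaborate (it adjusts for entering academic ability), so this is only a simplified illustration.

from statistics import mean

# Hypothetical CLA-style scale scores for two cross-sectional cohorts
# at one institution.
freshman_scores = [1050, 1110, 980, 1200, 1020]
senior_scores = [1190, 1240, 1105, 1310, 1150]

# A naive value-added estimate: the senior cohort mean minus the
# freshman cohort mean. Institutions are then compared on this figure
# rather than ranked by raw scores.
value_added = mean(senior_scores) - mean(freshman_scores)
print(f"Freshman mean: {mean(freshman_scores):.1f}")
print(f"Senior mean: {mean(senior_scores):.1f}")
print(f"Naive value added: {value_added:.1f} points")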
Conclusions
The CLA's strengths lie in its focus on outcomes-based learning and its ability to assess value-added learning over time. However, there are concerns about the validity of the test. One area of concern lies in the comparison among institutions of similar sizes, which raises questions about generalizability. A second concern lies in how the test is administered. Because the CLA is administered solely via the Internet, there is a significant assumption that all student test-takers are computer literate, which raises a socio-economic issue that further limits the generalizability of the results. A strength of the CLA is that students can formulate responses without being penalized for a lack of content knowledge. But there are concerns related to how a student's progress over time is accurately measured and portrayed. In the end, the institution serving as the unit of analysis creates an artificial picture of individual learning. While the CLA offers a direct measurement of learning, one that can be compared with other institutions and over time, it does not offer a sufficient measure of the student's learning environment.
References
Council for the Aid to Education. (n.d.). Architecture of CLA Tasks. Retrieved July 9 from http://www.collegiatelearningassessment.org/.
Gorard, S. (2002). Political Control: A Way Forward for Educational Research? British Journal of Educational Studies.
Hess, F.M. (2006). Accountability Without Angst? Public Opinion and No Child Left Behind. Harvard Educational Review, 275-276.
Klein, S., Benjamin, R., Shavelson, R., & Bolus, R. (2007). The Collegiate Learning Assessment: Facts and Fantasies. Evaluation Review, 31(5), 415-439. doi: 10.1177/0193841X07303318.
Klein, S.P., Kuh, G.D., Chun, M., Hamilton, L., & Shavelson, R. (2005). An Approach to Measuring Cognitive Outcomes Across Higher Education Institutions. Research in Higher Education, 46(3), 251-276.
Knight, P.T. (2002). Learning from Schools. Higher Education, 44(2), 283-298.
McCaffrey, D.F., Lockwood, J.R., Koretz, D., Louis, T.A., & Hamilton, L. (2004). Models for Value-Added Modeling of Teacher Effects. Journal of Educational and Behavioral Statistics, 29(1), 67-101.
Schagen, I., Sainsbury, M., & Strand, S. (1999). Assessment and Its Relationship to End of Key Stage One Assessment. Oxford Review of Education.
Tekwe, C.D., Carter, R.L., Ma, C.X., Algina, J., Lucas, J.R., Ariet, M., et al. (2004). An Empirical Comparison of Statistical Models for Value-Added Assessment of School Performance. Journal of Educational and Behavioral Statistics, 29(1), 11-35.
University System of Ohio. (2008, June 11). University System of Ohio Colleges Sign … [media release]. Retrieved from http://uso.edu/newsUpdates/media/releases/2008/06/MediaRel_11Jun08.php.