

Running Head: EVALUATING THE COLLEGIATE LEARNING ASSESSMENT

Test Evaluation: Collegiate Learning Assessment

Derrick Jenkins, Steven Napier, and Richard Robles

University of Cincinnati

Introduction

State legislatures across the United States are “under increasing pressure to hold higher education responsible for student learning” (Klein, Kuh, Chun, Hamilton, & Shavelson, 2005). Governments are increasingly responsive to public opinion and to rising demands on educational quality, and the applicability and relevance of standardized testing for meeting those demands is receiving wide attention not only in the United States and Great Britain but in other countries as well (Chapman, 1996; Gorard, 2002; Hess, 2006; Hippel, 2004; Stromquist, 2005). Interestingly, Klein et al. also point out that there is no consensus on what the outcomes of student learning are or how they should be measured. In response to the growing call for accountability, the University System of Ohio recently committed all of its public four-year institutions to participate in the national Voluntary System of Accountability (VSA) and mandated the implementation of a standardized test to measure value-added learning (University System of Ohio, 2008). One of the standardized tests identified for this purpose is the Collegiate Learning Assessment (CLA). The following is a description of the CLA and of how it is administered at an institution of higher education.

About the Collegiate Learning Assessment

Released in spring 2004, the Collegiate Learning Assessment is a product of the RAND Corporation’s Value-Added Assessment Initiative and was developed by the Council for Aid to Education (Klein, Benjamin, Shavelson, & Bolus, 2007). Unlike the Measure of Academic Proficiency and Progress (MAPP) and the Collegiate Assessment of Academic Proficiency (CAAP) standardized tests, the CLA measures institutional contributions to undergraduate learning outcomes through four variables: (1) critical thinking, (2) written communication, (3) problem solving, and (4) analytic reasoning (Klein et al., 2007). These variables are broad enough to measure how a student applies them in developing a response without placing significant focus on discipline-specific content knowledge.

An interesting feature of the CLA that has garnered attention is its use of the institution as the primary unit of analysis. According to Klein et al., “[the] goal is to provide a summative assessment of the value added by the school’s instructional and other programs (taken as a whole) with respect to certain important learning outcomes” (2007, p. 418). By shifting the focus of analysis away from the individual student, this approach allows institutions to assess small samples of the campus population rather than administer the CLA widely. Over time, an institution’s involvement with the CLA should produce information illustrating how students’ learning is improving. Additionally, the overall score can be used as a benchmark and point of comparison against other institutions (Klein et al., 2007).

The CLA Instrument

The CLA comprises two task components: the performance task and the analytic writing task. Administered via the Internet, the tasks feature open-ended prompts that require an essay response from the student (Council for Aid to Education, n.d.). Both tasks assess critical thinking, written communication, analytic reasoning, and problem solving, and they are constructed such that a student could take one or both tasks.

The performance task asks students to answer a series of open-ended questions related to a realistic hypothetical situation within 90 minutes. These questions either ask the student to compare and contrast various points in the information presented or ask the student to suggest or select a course of action to resolve the issue(s) presented. Whatever the scenario entails, the student is provided (via a split screen and drop-down menu) a series of documents that may include charts, tables, opinions, and other information that may be pertinent to the situation. Part of the student’s responsibility is to discern which evidence is credible, and to synthesize and justify a response based on the information provided.

Although not as extensive as the performance task, the analytic writing task asks students to “critique-an-argument” or “make-an-argument” within 30 or 45 minutes, respectively. For the “critique-an-argument” prompt, students must discuss the rationale of the presented argument, whether or not they agree with it. The “make-an-argument” prompt asks students to write a persuasive essay on an issue. Like the performance task, the analytic writing prompts require the student to connect evidence in support of a position. What differentiates the analytic writing task is a key evaluative element: how well the student uses sentence structure and grammar to transition between arguments.



CLA Reliability

Although the concepts of assessment and evaluation have been around for many years, their study has risen to prominence and developed as an academic discipline primarily in post-modern times. The discipline formally assesses and evaluates the acquisition of, the impact of, or the change over time in a certain effect produced by a specific curriculum, policy change, or administration of a certain procedure. For the purposes of this class, assessment and evaluation refer to the assessment and evaluation of classroom learning that occurs at all levels, from pre-kindergarten through higher education. Without assessment and evaluation practices and procedures, there would be no way of knowing how well students are acquiring knowledge in a particular classroom setting. In addition, there would be little information the teacher could use to improve instructional practices and ensure that students are learning at the highest level. Evaluation and assessment can occur both formally and informally. Although this class concentrates on formal classroom assessment, informal assessment can and should occur on an ongoing basis, as the teacher constantly monitors his or her own instructional practices and techniques and gauges students’ understanding from the comments, questions, and responses that arise throughout the learning process in the classroom (Knight, 2002; Klein, Kuh, Chun, Hamilton, & Shavelson, 2005).

Human intelligence is varied and can involve a variety of factors. In addition, student learning progresses best when varied instruction and various educational tools are employed so that students can learn to their fullest potential. The five senses should be used as much as possible in the learning process. Teachers should use technology as much as possible, use hands-on approaches to learning, and provide as many modes of instruction as possible so that the greatest amount of learning takes place. Formal assessments are likewise conducted best when they encompass a variety of techniques and approaches. Students should be assessed and evaluated with well-designed formal tests that employ written essays, short answers, and requirements that students define certain concepts and objectives. In addition, most classroom assessment should employ the four language arts of reading, writing, speaking, and listening, with the teacher evaluating student performance against structured rubrics; whenever possible, the instructor should use some type of peer assessment and evaluation in arriving at final conclusions when monitoring students’ progress. The test currently under review, the Collegiate Learning Assessment, effectively measures these effects and holds up well when reviewed for reliability and for internal and external validity (Kyriakides, 2002; McCaffrey, Lockwood, Koretz, Louis, & Hamilton, 2004).

Reliability of the test results, as in most quality assessments, involves some degree of subjectivity. The test can be, and has been, replicated at the more than 33 participating institutions, which consist mostly of small public and private colleges. Because the tests are collected and blind-reviewed by professionals using rubrics, a certain degree of flexibility and subjectivity enters from one test evaluator to the next. This is counterbalanced across evaluators in order to maintain a degree of fairness and allow results to be compared on similar terms. Consistent findings have been repeatedly replicated within this review process. Because of the flexibility of the test, it is replicable and applicable to a variety of cultures, groups, countries, and ethnicities, both domestic and international. Similar processes have been used with such collegiate tests as the ACT, SAT, GRE, GMAT, MCAT, and LSAT, and with other assessments that include open-ended questions to give a fuller picture of a student’s critical thinking and analytical skills: a rubric is applied by different evaluators, and an average of their scores is taken to represent the student’s achievement. Problems concerning the validity and reliability of other standardized tests that have arisen in the media and literature in recent years have primarily concerned other aspects of those tests and do not apply to their replicability or open-ended response items (Schagen, Sainsbury, & Strand, 1999; Tekwe et al., 2004).
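
To make the multi-rater scoring just described concrete, the following is a minimal sketch in Python. The rubric dimension names, the score range, and the two-rater setup are illustrative assumptions, not the CAE’s actual rubric or procedure.

```python
# Minimal sketch of rubric-based scoring with multiple raters, as described
# above. Dimension names, score range, and the two-rater setup are
# illustrative assumptions, not the CAE's actual rubric or procedure.
from statistics import mean

# Each rater scores one response on several hypothetical rubric dimensions.
rater_scores = [
    {"critical_thinking": 5, "analytic_reasoning": 4, "written_communication": 5},
    {"critical_thinking": 4, "analytic_reasoning": 4, "written_communication": 5},
]

def average_rubric_scores(scores):
    """Average each rubric dimension across raters, then across dimensions."""
    dimensions = scores[0].keys()
    per_dimension = {d: mean(s[d] for s in scores) for d in dimensions}
    overall = mean(per_dimension.values())
    return per_dimension, overall

per_dimension, overall = average_rubric_scores(rater_scores)
print(per_dimension)  # e.g. {'critical_thinking': 4.5, ...}
print(overall)        # single averaged score representing the achievement
```

Averaging across independently applied rubrics is the counterbalancing step described above: no single evaluator’s leniency or severity dominates the reported score.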

CLA Validity

The CLA’s focus on critical thinking, analytic reasoning, and problem solving is rigorous and measures up to the standards of other standardized collegiate assessments. What is unique about the test is its focus on value-added assessment of learning and its longitudinal administration over the course of students’ enrollment in their baccalaureate degree programs. The internal validity of the conclusions that can be drawn about the value-added dimensions of the test is very strong. The internal validity of the CLA compared with other existing tests is substantially greater because the same test is administered throughout the entire college experience: students can be assessed for the critical thinking and analytic skills they possess when entering a program, and empirical evidence is provided that allows scholars to observe the amount of knowledge students gain over their college experience. This is stronger evidence than that from mechanisms that compare a student’s SAT or ACT scores, for example, to achievement levels on the GRE, LSAT, or another test, since those comparisons require an “apples to oranges” approach. Threats to internal validity can also be counterbalanced because comparisons can be made to tests administered at the more than 33 other universities currently administering the Collegiate Learning Assessment (ibid.).

External validity concerns are evident, but significant controls on their influence are also apparent given the longitudinal nature of the test and its administration across a wide array of cultures and populations. Statisticians typically assess predictive validity by testing all individuals at the outset and then evaluating all of them a second time so that their achievement levels can be compared on similar terms. The CLA holds up well regarding predictive validity because almost all participants are subsequently reassessed. Criterion, content, and construct validity hold up well under scrutiny as well. The tests are reviewed and revised to ensure that they effectively measure the prescribed elements that “we say we are measuring” (McCaffrey, Lockwood, Koretz, Louis, & Hamilton, 2004; Tekwe et al., 2004).
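
As a concrete illustration of that pre/post design, the short Python sketch below correlates entry scores with retest scores for the same students; a strong positive correlation would support predictive validity. All scores and the sample size are fabricated for illustration only.

```python
# Illustrative sketch of the pre/post check described above: test every
# participant at entry, retest later, and see how strongly entry scores
# predict later achievement. All scores here are fabricated examples.
from statistics import mean, stdev

entry_scores = [1050, 980, 1130, 1200, 890, 1010]    # hypothetical entry scores
retest_scores = [1140, 1020, 1190, 1310, 970, 1080]  # same students, retested

def pearson_r(x, y):
    """Pearson correlation between two paired lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# A value near 1.0 indicates entry scores strongly predict retest scores.
print(round(pearson_r(entry_scores, retest_scores), 3))
```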

Administrative Use of the Collegiate Learning Assessment

Faculty, academic advisory staff, department heads, and school administrators will find this instrument particularly useful for strengthening higher-order skills and for assessing established programs that may need programmatic refinement. The Collegiate Learning Assessment (CLA) positions itself as a complementary tool, to be used alongside other evaluative benchmarks. The CLA focuses primarily on higher-order skills and provides institutions a measurement of higher-order competencies. The assessment is built around realistic analysis of problems, and the outcomes measure a student’s ability to think critically. In addition to thinking critically, students are assessed on their ability to communicate their reasoning clearly in writing. Educators who administer this test receive aggregated test scores that appraise the institution’s student performance. This approach allows the administering institution to recognize itself as the primary stakeholder in the assessment. The directions are sufficiently clear but are stated primarily for the test taker; because the test is computer administered, administrators may find it difficult to duplicate. According to Benjamin (2008), faculty are challenged to “buy into” assessment campus-wide. One reason for this lack of willingness to embrace the CLA is that researchers are inherently biased against standardized tests. Benjamin goes on to offer incentives regarding the importance of faculty propelling the improvement of teaching and learning institutionally. The test allows faculty to get results from the computer-based test with very little effort on their part.

The test cannot be scored locally; responses are scored electronically or, since 2007-2008, in-house by trained scorers at the national CLA institution. The procedures for scoring the tests are explicit and virtually self-explanatory. Since scoring is handled by the Council for Aid to Education (CAE), one may question the analysis of local norms and validity evidence at the institutional level. The CLA test results are accompanied by a PowerPoint presentation containing the results, an internal analysis (course-taking patterns, grades, portfolio assessments, student satisfaction and engagement, and major-specific tests), and in-depth sampling in experimental areas for subsequent years. The CLA encourages institutions to (1) communicate results institutionally, (2) link student-level CLA results with other data sources, (3) pursue in-depth sampling, and (4) participate in CLA in the classroom (CLA institutional report, 2007, p. 3).

CLA Test Scales and Norms


The CLA test scales are clearly stated so as to remove ambiguity from the data outcomes. The comparison groups are freshmen and seniors; these groups are compared to measure improvement in higher-order skills: critical thinking, analytic reasoning, problem solving, and written communication. The most effective way to measure improvement is to compare the averages of freshmen and seniors, culminating in the value-added assessment. The value-added result, along with the freshman and senior averages, is then compared with those of other participating institutions; as of the 2007-2008 school year there were 176 such institutions. The results are intended not to rank the institutions but to highlight the differences between them (CLA institutional report, 2007, p. 4).

2007, p.4). The norms are reported in the concluding materials given to the institution

at the end of the test period. The populations to which the norms refer are clearly

defined and described as freshman and seniors of one’s particular institution. The

norms are stated and faculty members should find comparing their test subjects

(freshman and seniors) to existing norms exceptionally explicit. The test does not
Evaluating the CLA 11

discuss the possibility of local norms. The test refers to populous norms and not local

norms the institution has to create a system of norms. The test is completely computer

based, but does not come with a computer program in which to assist with writing.

Conclusions

The Collegiate Learning Assessment is a helpful tool for measuring outcomes-based student learning, particularly in its ability to assess value-added learning over time. However, there are concerns about the validity of the test. One area of concern lies in comparisons among institutions of similar size, which raises consistency issues in how learning is contextualized across institutions. Another issue lies in how the test is administered. Because the CLA is administered solely via the Internet, there is a significant assumption that all student test-takers are computer literate, which raises a socio-economic issue that further complicates comparisons among institutions.

Because the CLA is a self-contained assessment tool, the student has every opportunity to formulate responses without being penalized for a lack of content knowledge. But there are concerns about how accurately a student’s progress over time is measured and portrayed. In the end, using the institution as the unit of analysis creates an artificial layer of measurement in which an individual student’s learning is diffused. Even though the CLA is a direct measurement of learning, one that can be compared across institutions and over time, it does not offer a sufficient measure of the student’s learning environment and the context of learning.



References

Benjamin, R. (2008). The contribution of the Collegiate Learning Assessment to teaching and learning. Council for Aid to Education, 1-20.

Chapman, K. (1996). Entry qualifications, degree results and value-added in UK universities. Oxford Review of Education, 22(3), 251-264.

Council for Aid to Education. (n.d.). Architecture of CLA tasks. Retrieved July 9, 2009, from the Collegiate Learning Assessment web site: http://www.collegiatelearningassessment.org/

Fitz-Gibbon, C.T., & Vincent, L. (1997). Difficulties regarding subject difficulties: Developing reasonable explanations for observable data. Oxford Review of Education, 23(3), 291-298.

Gorard, S. (2002). Political control: A way forward for educational research? British Journal of Educational Studies, 50(3), 378-389.

Hess, F.M. (2006). Accountability without angst? Public opinion and No Child Left Behind. Harvard Educational Review, 76(4), 587-612.

Hippel, P.V. (2004). School accountability. Journal of Economic Perspectives, 18(2), 275-276.

Klein, S., Benjamin, R., Shavelson, R., & Bolus, R. (2007). The Collegiate Learning Assessment: Facts and fantasies. Evaluation Review, 31(5), 415-439. doi:10.1177/0193841X07303318

Klein, S.P., Kuh, G.D., Chun, M., Hamilton, L., & Shavelson, R. (2005). An approach to measuring cognitive outcomes across higher education institutions. Research in Higher Education, 46(3), 251-276.

Knight, P.T. (2002). Learning from schools. Higher Education, 44(2), 283-298.

Kyriakides, L. (2002). A research-based model for the development of policy on baseline assessment. British Educational Research Journal, 28(6), 805-826.

McCaffrey, D.F., Lockwood, J.R., Koretz, D., Louis, T.A., & Hamilton, L. (2004). Models for value-added modeling of teacher effects. Journal of Educational and Behavioral Statistics, 29(1), 67-101.

Schagen, I., Sainsbury, M., & Strand, S. (1999). Statistical aspects of baseline assessment and its relationship to end of Key Stage One assessment. Oxford Review of Education, 25(3), 359-367.

Stromquist, N.P. (2005). Comparative and international education: A journey toward equality and equity. Harvard Educational Review, 75(1), 89-112.

Tekwe, C.D., Carter, R.L., Ma, C.X., Algina, J., Lucas, J.R., Ariet, M., et al. (2004). An empirical comparison of statistical models for value-added assessment of school performance. Journal of Educational and Behavioral Statistics, 29(1), 11-35.

University System of Ohio. (2008, June 11). University System of Ohio colleges sign accountability agreement. Retrieved July 11, 2009, from http://uso.edu/newsUpdates/media/releases/2008/06/MediaRel_11Jun08.php
