
Cynthia Taskesen

Analytic Personal and Professional Essay


Since the Portfolio Two presentation, I have focused on issues of test validity both in my
professional work and academic studies.
In my position in the Defense Language and National Security Education Program, one
of my responsibilities is overseeing our Advisory Panel meetings, which convene to review
progress in the development of both the Defense Language Proficiency Test (DLPT), a foreign
language test, and the English Comprehension Level test, which is administered to International
Military Students. In meetings regarding these tests, Panel members, who are experts in
language testing and psychometrics, stress the importance of understanding and developing
a validity framework. The Panel members are enthusiastic about the approach to test validity
set forth by Kane (2006, 2013), so I have become very familiar with the method of establishing
validity arguments and claims, and approaching validation as a process of collecting evidence for
each use of the test. However, I have also had the opportunity to see how difficult the
Kane approach is to implement in practice: the collection of evidence is time-consuming and
expensive, and it can unfortunately be treated by test developers as a way of proving the
validity of their tests rather than as a means of objectively examining potential counter-arguments to test
validity.
The advantages and disadvantages of the Kane methodology, compared both with
those of earlier validity theorists such as Messick (1989, 1995) and with more controversial
contemporary theorists such as Borsboom (e.g., Borsboom, Cramer, Kievit, Scholten, & Franić,
2009), are a topic I have begun to address in my Portfolio 3 essay. Because the issue of test
validity is so critical, and has important practical implications for test development, it is one I plan
to explore further and expand on in the dissertation proposal.
I had the opportunity to complete an Independent Study with Prof. Dimitrov in the
Spring of 2014, where I explored the topic of Cognitive Diagnostic Assessment (CDA). I was
impressed by the range of methods of conducting CDA (cf. DiBello, Roussos & Stout, 2007;
Rupp, Templin & Henson, 2010), many of which are very complex. I also noticed that in spite of
the extensive literature on CDA computational methods, very few articles described
how these methods had been used either to provide diagnostic feedback in an instructional
setting or to analyze test content for validation purposes. My impression was that the
techniques were far too complex for practitioners to implement. This is why I became interested
in Prof. Dimitrov's Least Squares Distance Method (LSDM) of cognitive diagnosis (Dimitrov,
2007), since it is easily understood as analogous to a regression with weighted predictors,
and thus more accessible to practitioners. There are several examples of the use of LSDM in the
literature; I hope its use will expand, since it is well suited to addressing issues of test validity.
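The regression analogy can be made concrete with a small sketch. As I understand the core of LSDM, the IRT-based probability of success on each item is modeled as the product of the success probabilities of the attributes the item requires (per the Q-matrix), so taking logarithms turns the problem into a linear least-squares fit. The toy Q-matrix and probabilities below are purely illustrative, not from the published study:

```python
import numpy as np

# Q-matrix: rows = items, columns = attributes (1 = item requires the attribute).
# These values are hypothetical, for illustration only.
Q = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
])

# Hypothetical true attribute success probabilities
p_true = np.array([0.9, 0.7, 0.8])

# Item success probabilities under the multiplicative (conjunctive) model
P_items = np.prod(p_true ** Q, axis=1)

# Recover attribute probabilities by least squares in log space:
# ln P_j = sum_k q_jk * ln p_k  -- the Q-matrix plays the role of
# the (weighted) predictors in a regression.
log_p_hat, *_ = np.linalg.lstsq(Q, np.log(P_items), rcond=None)
p_hat = np.exp(log_p_hat)

print(np.round(p_hat, 3))  # recovers values close to [0.9, 0.7, 0.8]
```

Because the regression structure is so transparent, a practitioner can read the recovered attribute probabilities directly, which is what makes the method accessible compared with more complex CDA models.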
In the 797 class this semester, I began to explore the issue of the effect of native language
on DLPT scores. This is important because, while the DLPT was designed for native speakers
of English, it is regularly administered to native speakers of the tested language. It is possible
that non-native speakers of English are at a disadvantage for reasons unrelated to their
foreign language proficiency. In particular, the fact that the multiple-choice items are presented
in English may challenge the English reading ability of non-native speakers of English, resulting
in construct-irrelevant variance for native speakers of the foreign language. I proposed using
confirmatory factor analysis to investigate the invariance of the DLPT factor structure across the
native and non-native English-speaking groups. I hope to continue with this study, and with
similar studies, to investigate the validity of the DLPT for the various purposes for which it
is used.
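The logic of the proposed invariance check can be illustrated with a simplified simulation: generate two examinee groups whose item scores follow the same one-factor structure, estimate each group's loading pattern, and check that the patterns agree. Here the leading eigenvector of the covariance matrix serves as a crude stand-in for a full CFA fit; all names and numbers are hypothetical, and a real study would use CFA software with formal fit statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
# Same true loadings for both groups, i.e. an invariant factor structure
loadings = np.array([0.8, 0.7, 0.6, 0.5])

def simulate_group(n):
    factor = rng.standard_normal((n, 1))        # latent proficiency
    noise = rng.standard_normal((n, 4)) * 0.5   # item-specific error
    return factor @ loadings[None, :] + noise   # observed item scores

def leading_eigvec(data):
    cov = np.cov(data, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    v = vecs[:, -1]                             # eigenvector of the largest eigenvalue
    return v * np.sign(v[0])                    # fix the sign for comparability

v1 = leading_eigvec(simulate_group(5000))
v2 = leading_eigvec(simulate_group(5000))

# If the factor structure is invariant across groups, the two estimated
# patterns should be nearly identical (cosine similarity close to 1).
print(round(float(v1 @ v2), 3))
```

In the actual study, a lack of invariance between native and non-native English speakers would show up as group-specific loading patterns, which is exactly the kind of construct-irrelevant variance described above.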
References
Borsboom, D., Cramer, A. O. J., Kievit, R. A., Scholten, A. Z., & Franić, S. (2009). The end of
construct validity. In R. W. Lissitz (Ed.), The concept of validity: Revisions, new directions
and applications (pp. 135–170). Charlotte, NC: Information Age Publishing.
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity.
Psychological Review, 111(4), 1061–1071.
DiBello, L. V., Roussos, L. A., & Stout, W. (2007). Review of cognitive diagnostic assessment
and a summary of psychometric models. In C. R. Rao & S. Sinharay (Eds.), Handbook of
statistics (Vol. 26). Amsterdam: Elsevier.
Dimitrov, D. M. (2007). Least squares distance method of cognitive validation and analysis for
binary items using their item response theory parameters. Applied Psychological
Measurement, 31(5), 367–387. doi:10.1177/0146621606295199
Kane, M. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64).
Westport, CT: American Council on Education/Praeger.
Kane, M. (2013). Validating the interpretations and uses of test scores. Journal of Educational
Measurement, 50(1), 1–73.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103).
New York: American Council on Education/Macmillan.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons'
responses and performances as scientific inquiry into score meaning. American
Psychologist, 50(9), 741–749.
Rupp, A. A., Templin, J., & Henson, R. A. (2010). Diagnostic measurement: Theory, methods,
and applications. New York: Guilford Press.
