Carmelo M. Callueng
University of Florida
& Miller, 2005; Mertler, 2003, 2009). Additionally, assessment data are used
by education policy makers and practitioners for accountability (how well
students have learned) and instruction (how to promote higher levels of
learning) (Bennet & Gitomer, 2009; Danielson, 2008; Sato et al., 2008;
Vardar, 2010).
Classroom assessment practices and preferences have been widely
studied (e.g., Alkharusi, 2010; Birenbaum, 1997, 2000; Birenbaum &
Feldman, 1998; Bliem & Davinroy, 1997; Zhang & Burry-Stock, 2003).
However, there is less evidence on the impact of professional development
programs on teachers’ classroom assessment preferences (Sato et al., 2008).
Hence, this study examines the role of professional development in
teachers’ classroom assessment preferences and practices.
teachers conduct assessments with a clear purpose in mind and believe that
their assessment promotes excellence in students (Astin, 1991; Earl & Katz,
2006; Hill, 2002; Murray, 2006; Sanchez & Brisk, 2004). On the other hand,
effective schools need to rethink the roles of assessment. Here, “effective”
means maximizing learning and well-being for students. Hence, two
questions need to be answered. First, what uses of assessment are most
likely to maximize student learning and well-being? Second, how can
assessment best be used to facilitate student learning and well-being?
The usual response to these questions would be to provide
professional development and training to teachers on classroom assessment
and on how to maximize the information gathered from assessment. Research
studies report that teachers’ assessment and evaluation practices are still
incongruent with recommended best practices (Galluzzo, 2005; Mertler,
2003, 2009; Volante & Fazio, 2007; Zhang & Burry-Stock, 2003). Hence,
there is a need to determine the implications of classroom assessment
practices for professional development programs and the impact of training
programs on improving teachers’ assessment practices.
Professional development is commonly understood as a
comprehensive, sustained and intensive approach to improving teachers’ and
administrators’ effectiveness in raising students’ achievement. In North
America, there is relatively little emphasis on assessment in the professional
development of teachers (Stiggins, 2002). For example, out of 10 Canadian
provinces and 50 US states, only Hawaii and Nebraska allocated a significant
sum of money that is specifically targeted to improve assessment and
evaluation practices within schools (Volante & Fazio, 2007). In the
Philippines, systematic educational assessment has been increasingly
institutionalized within schools. However, teachers’ level of confidence and
competence in classroom assessment still need attention (Magno & Gonzales,
2011). Evidently, there is a strong clamor for professional development
programs to address the assessment literacy needs of teachers and school
administrators.
Effective professional development is considered the center of
educational reforms (Dilworth & Imig, 1995), but only a few studies have
documented its cost and effectiveness (Lowden, 2005). Stakeholders
including policy makers, boards of education, legislators, funding agencies,
and even taxpayers all want to know whether professional development
makes a difference. Hence, this study assesses the impact of professional
development on the classroom assessment practices of teachers.
Method
Participants
Measure
Procedure
Data Analysis
were computed between the CAPSQ total score and demographic and teaching-
related variables to determine possible moderating variables. Hierarchical
regression analysis was conducted in the following manner. The Step 1
regression equation comprised the moderating variables as predictors of
classroom assessment practices. In Step 2, classroom assessment pre-service
training was added to the Step 1 predictors. In Step 3, classroom assessment
in-service training was added to the Step 2 predictors. Finally, Step 4
included the Step 3 predictors as well as the two-way interactions of the
moderating variables and professional development training activities. An
alpha level of p ≤ .05 was used for all stages of the regression analyses.
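The four-step build-up described above can be sketched numerically with ordinary least squares. The following is a minimal illustration, assuming hypothetical variable names and simulated data, since the actual dataset is not reproduced here.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(0)
n = 200
moderators = rng.normal(size=(n, 2))               # hypothetical moderators
pre_service = rng.integers(0, 2, n).astype(float)  # pre-service training (0/1)
in_service = rng.integers(0, 2, n).astype(float)   # in-service training (0/1)
y = (moderators @ np.array([0.3, 0.2]) + 0.5 * pre_service
     + 0.6 * in_service + rng.normal(size=n))      # simulated CAPSQ total score

X1 = moderators                                    # Step 1: moderators only
X2 = np.column_stack([X1, pre_service])            # Step 2: + pre-service training
X3 = np.column_stack([X2, in_service])             # Step 3: + in-service training
interactions = np.column_stack([moderators * pre_service[:, None],
                                moderators * in_service[:, None]])
X4 = np.column_stack([X3, interactions])           # Step 4: + two-way interactions

r2 = [r_squared(X, y) for X in (X1, X2, X3, X4)]
delta_r2 = np.diff(r2)  # unique contribution of each added block
```

Because the models are nested, R² is non-decreasing across steps; in the actual analysis the significance of each ΔR² would be evaluated at p ≤ .05.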
Results
Table 1
Initial Communalities and Pattern Matrix of the Final 18 Items of the CAPSQ

Abbreviated Item                               1     2     3     4   Communalities
Factor 1 (50.27%)
  Item 2: Monitor learning progress           .87  -.03   .03  -.01      .72
  Item 4: Do self-assessment                  .86  -.08   .10  -.10      .68
  Item 6: How students can learn              .74   .01  -.04   .04      .54
  Item 3: Getting personal feedback           .71   .01  -.05   .20      .68
  Item 1: Develop clear criteria              .63   .08   .05   .08      .65
  Item 5: Set the criteria                    .59   .13   .03   .09      .60
Factor 2 (8.02%)
  Item 8: Measure extent of learning         -.02   .95  -.21   .03      .64
  Item 9: Evaluate level of competence       -.04   .84   .04  -.06      .61
  Item 11: Determine desired outcomes         .02   .82  -.10   .11      .69
  Item 12: Make final decision                .10   .59   .18  -.01      .60
Factor 3 (5.98%)
  Item 19: Inform other school officials      .01  -.31   .94   .12      .60
  Item 20: Provide information to parents     .12   .20   .64  -.18      .56
  Item 22: Compare relative to others         .14   .28   .45  -.11      .59
  Item 23: Supply information to teachers     .03  -.06   .44   .14      .61
Factor 4 (5.45%)
  Item 17: Help students improve learning     .08  -.06  -.06   .86      .59
  Item 15: Strengths and weaknesses           .01  -.02   .18   .68      .61
  Item 13: Identify better learning           .11   .13  -.06   .58      .47
  Item 14: Collect learning data             -.14   .16   .26   .50      .48
Initial communalities of the final 18 items ranged from .47 to .72, with
a mean coefficient of .61. Results of the PAF for this solution suggest that a
four-factor solution best describes the structure of the CAPSQ. Total
variance explained by these four factors was approximately 62%: factor 1
accounted for the majority of the variance, 50.27%; factor 2 accounted for
8.02%; factor 3 for 5.98%; and factor 4 for 5.45%. Each of these four factors
also contributed at least 3% of the sum of squared loadings. All items had
pure loadings of at least .40 on only one factor. Initial communalities and
the pattern matrix of the final items are presented in Table 1.
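The “pure loading” criterion used here (a salient loading of at least .40 on exactly one factor) can be verified mechanically from the pattern matrix; a small sketch using four illustrative rows from Table 1:

```python
import numpy as np

# Pattern-matrix rows for four Table 1 items (columns = factors 1-4)
loadings = np.array([
    [ .87, -.03,  .03, -.01],  # Item 2: Monitor learning progress
    [-.02,  .95, -.21,  .03],  # Item 8: Measure extent of learning
    [ .01, -.31,  .94,  .12],  # Item 19: Inform other school officials
    [ .08, -.06, -.06,  .86],  # Item 17: Help students improve learning
])

salient = np.abs(loadings) >= .40         # which loadings reach the .40 cutoff
is_pure = salient.sum(axis=1) == 1        # exactly one salient loading per item
primary_factor = np.argmax(np.abs(loadings), axis=1) + 1  # 1-based factor index
```

Applied to the full 18-item matrix, the same check confirms the simple structure the PAF solution reports.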
Internal consistency of the factor scores and total score was calculated
using Cronbach’s alpha (α). The four factors demonstrated high internal
consistency, with α = .92 for factor 1, α = .88 for factor 2, α = .83 for factor 3,
and α = .85 for factor 4. Internal consistency for the total score was also high,
α = .95. Inter-factor correlations ranged from .57 (moderate) to .72 (high).
Correlations between the CAPSQ factors and the total score were all very high
(r = .82-.92), indicating that the total score may serve as the most accurate
and valid estimate of classroom assessment practices.
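Cronbach’s alpha as used here is the standard ratio α = (k/(k−1))(1 − Σs_i²/s_T²), where s_i² are the item variances and s_T² is the total-score variance. A minimal implementation on simulated Likert responses (the actual CAPSQ data are not reproduced):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated 5-point responses to a 6-item factor driven by one common trait
rng = np.random.default_rng(1)
trait = rng.normal(size=(300, 1))
noise = rng.normal(size=(300, 6))
items = np.clip(np.rint(3 + trait + 0.8 * noise), 1, 5)
alpha = cronbach_alpha(items)
```

With a strong common trait the six simulated items should yield a high alpha, in the same range as the coefficients reported above; the real values depend entirely on the observed responses.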
Discussion
to teachers in small classes and who did not attend in-service training. While
in-service training clearly increases assessment competence, teachers
handling large classes also have more opportunities to hone their knowledge
and skills in assessment because of the diversity of learning needs in large
classes.
References
Bennet, R. E., & Gitomer, D. H. (2009). Transforming K-12 assessment:
        Integrating accountability testing, formative assessment and
        professional support. In C. Wyatt-Smith & J. J. Cumming (Eds.),
        Educational assessment in the 21st century (pp. 43-62). Dordrecht,
        Heidelberg, London and New York: Springer.
Biggs, J. (1995). Assessing for learning: Some dimensions underlying new
        approaches to educational assessment. The Alberta Journal of
        Educational Research, 41(1), 1-17.
Birenbaum, M. (1997). Assessment preferences and their relationship to
        learning strategies and orientations. Higher Education, 33, 71–84.
Birenbaum, M. (2000). New insights into learning and teaching and the
        implications for assessment. Keynote address at the 2000 conference
        of the EARLI SIG on Assessment and Evaluation, September 13,
        Maastricht, The Netherlands.
Birenbaum, M., & Feldman, R. A. (1998). Relationships between learning
        patterns and attitudes towards two assessment formats. Educational
        Research, 40(1), 90–97.
Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards
        through classroom assessment. Phi Delta Kappan, 80(2), 139-148.
Bliem, C.L. & Davinroy, K.H. (1997). Teachers’ beliefs about assessment and
instruction in literacy. CSE Technical Report 421, National Center for
Research on Evaluation, Standards, and Student Testing (CRESST),
Graduate School of Education and Information Studies, University of
California, Los Angeles.
Bond, L. A. (1995). Critical issue: Rethinking assessment and its role in
supporting educational reforms. Oaks Brooks, IL: North Central
Regional Education Laboratory. [on-line] Available:
http://www.ncrel.org/sdrs/areas/issues/methods/assessment/as70
0.htm.
Borko, H., Mayfield, V., Marion, S., Flexer, R., & Cumbo, K. (1997). Teachers’
developing ideas and practices about mathematics performance
assessment: Successes, stumbling blocks, and implications for
development, Teaching and Teacher Education, 13(3), 259-278.
Boston, C. (2002). The concept of formative assessment. Practical Assessment,
Research & Evaluation, 8(9). [on-line] Available:
http://PAREonline.net/getvn.asp?v=8&n=9.
Brown, G. T. L. (2002). Teachers’ conceptions of assessment. Unpublished
        doctoral dissertation, University of Auckland, New Zealand.
Raty, H., Kasanen, K., & Honkalampi, K. (2006). Three years later: A follow-up
        study of parents’ assessments of their children’s competencies.
        Journal of Applied Social Psychology, 36(9), 2079-2099.
Rust, C. (2002). Purposes and principles of assessment. Oxford Center for Staff
and Learning Development, Learning and Teaching Briefing Paper
Series.
Sadler, R. (1989). Formative assessment and the design of instructional
systems. Instructional Science, 18, 119-144.
Sanchez, M. T., & Brisk, M. E. (2004). Teachers’ assessment practices and
        understandings in a bilingual program. NABE Journal of Research and
        Practice, 2(1), 193-208.
Sato, M., Wei, R. C., & Darling-Hammond, L. (2008). Improving teachers’
assessment practices through professional development: The case of
National Board Certification. American Educational Research Journal,
45(3), 669-700.
Schafer, W. D. (1991). Essential assessment skills in professional education of
        teachers. Educational Measurement: Issues and Practice, 10(1), 3-6.
Segers, M., Dochy, F., & Cascallar, E. (2003). The era of assessment
        engineering: Changing perspectives on teaching and learning and the
        role of new modes of assessment. In M. Segers, F. Dochy, & E. Cascallar
        (Eds.), Optimising new modes of assessment: In search of qualities
        and standards (pp. 1–12). Dordrecht: Kluwer Academic Publishers.
Shultz, K. S., & Whitney, D. A. (2005). Measurement theory in action: Case
        studies and exercises. Thousand Oaks, CA: SAGE Publications.
Sparks, D. (2005). Learning for results. Thousand Oaks, California: Corwin
Press.
Stake, R. E. (2004). Standards-based and responsive evaluation. Thousand
Oaks, CA: Sage Publications.
Stephenson, W. (1953). The study of behavior. Chicago: University of Chicago
Press.
Stiggins, R.J. (1997). Student-centered classroom assessment. Englewood Cliffs,
NJ: Prentice-Hall.
Stiggins, R. J. (2002). Where is our assessment future and how can we get
there from here? In R. W. Lissitz & W.D. Schafer (Eds), Assessment in
educational reform: Both means and ends (pp. 18-48). Boston: Allyn &
Bacon.
Stiggins, R. J. (2008). An introduction to student-involved assessment FOR
        learning. New Jersey: Pearson Merrill Prentice Hall.