TL1080
Workshop Goals
Considering the various rubric options
Creating a customized rubric
Evaluating a rubric for effective assessment
Calibrating rubric scoring for reliable assessment data
Rubric Overview
Rubrics provide the criteria for classifying products or behaviors into categories that vary along a continuum.
They can be used to classify virtually any product or behavior, such as essays, research reports, portfolios, works of art, recitals, oral presentations, performances, and group activities. Judgments can be self-assessments by students, or they can be made by others, such as faculty, other students, supervisors, or external reviewers. Rubrics can be used to provide formative feedback to students, to grade students, and/or to assess programs.
Rubric Options
Considering the various rubric options
Nature of the rubric
Assessment of student
Knowledge Performance
Assigning grades
Feedback for students
Assessment of curriculum
Feedback for faculty
Rubric Options
Considering the various rubric options (continued):
Nature of the rubric
Aligned with the course syllabus and student learning outcomes
Designed to assess what is being taught
User friendly
Can faculty scoring be calibrated to achieve reliable data? Can students benefit from the rubric's feedback?
Holistic: summative
Analytic: formative feedback, quantitative or qualitative
Rubric Options
What do you want to assess with your rubric?
Student
Knowledge: recall of facts, terms, formulas
Skills: ability to use acquired knowledge
Performance: ability to produce a performance-based product
Student Learning Outcomes and Learning Objectives
Creating Rubrics
What do you want to assess with your rubric?
Student
Knowledge: recall of facts, terms, and formulas. For this, a Scantron test may be more suitable than a rubric.
Creating Rubrics
Assigning Grades
Effective assessment and feedback for students
What are the outcomes and supporting objectives you want to assess? Have the students been given the opportunity to practice these outcomes and objectives in class under the instructor's supervision? Are these outcomes and objectives present in the course syllabus?
Creating Rubrics
Assigning Grades
Effective assessment and feedback for students
Look for meaningful assessment categories, wording, and terminology that not only point out weaknesses and deficiencies but also give students the information they need to understand why they received the grade they did and to improve their performance in the future. Avoid including in your rubric any outcomes or objectives that have not been included in the syllabus or taught in the class.
Creating Rubrics
Assessing course curriculum
Effective assessment and feedback for faculty
Is the rubric instructionally aligned with the course curriculum? Does the rubric adequately address the student learning outcomes and supporting objectives assigned to that course? Does the rubric provide enough information to allow faculty to pinpoint problems and weaknesses in the curriculum that will need to be adjusted in the future?
Creating Rubrics
Assessing course curriculum
Effective assessment and feedback for faculty
For curriculum purposes, a rubric should assess a course's student learning outcomes and their supporting learning objectives, preferably one student learning outcome and its objectives per rubric instrument. A holistic rubric should be given first consideration, as it is less time-consuming and is not designed to provide detailed feedback to students.
Creating Rubrics
Course curriculum
Syllabus and student learning outcomes are aligned with actual classroom student performance products.
For rubric-produced data to be reliable, the syllabus and its student learning outcomes should be aligned with what is actually being taught in the classroom. Are full-time faculty and adjuncts teaching the same material and using the same assignments? Are the textbooks and syllabi aligned with the curriculum?
Rubric Options
Course curriculum
Syllabus and student learning outcomes are aligned with actual student performance. If there are too many variables among the curriculum, the textbooks, the syllabi, the full-time faculty, and the adjuncts, rubric-produced data will be unreliable. It is critical to ensure alignment of course and teaching variables, as well as calibration of grading and scoring.
Rubric Options
Course curriculum
User friendly
Can faculty scoring be calibrated to achieve reliable data? Are wide variations in grading unusual? Can students benefit from the rubric's feedback? Should the learning process continue throughout the testing or assessing process?
Rubric Options
Course curriculum
User friendly
Can faculty scoring be calibrated to achieve reliable data?
Faculty and adjuncts can be quickly trained to grade or score student work with standardized results.
Can students benefit from the rubric's feedback?
Using clear and easily understandable wording in the rubric provides students with helpful feedback that will encourage improved performance.
Creating Rubrics
Course curriculum
Holistic
Summative
The focus of a score reported using a holistic rubric is on the overall quality, proficiency, or understanding of the specific content and skills. Holistic rubrics can make the scoring process less time-consuming than analytic rubrics do.
Creating Rubrics
Course curriculum
Analytic
Student feedback: quantitative or qualitative
An analytic rubric articulates levels of performance for each criterion so the instructor can assess student performance on each one. It provides detailed feedback to students on their strengths and weaknesses. Most important, it continues the learning process.
Creating Rubrics
Course curriculum
Analytic
Student feedback: quantitative or qualitative
The various levels of student performance can be defined using either quantitative (i.e., numerical) or qualitative (i.e., descriptive) labels. In some instances, both quantitative and qualitative labels can be used. If a rubric contains four levels of proficiency or understanding on a continuum, quantitative labels would typically range from "1" to "4." Qualitative labels allow much more flexibility and creativity. A common qualitative scale might include these labels: master, expert, apprentice, and novice. Nearly any qualitative scale will suffice, provided it "fits" the task.
Creating Rubrics
Rubrics support data-driven decision making
Data-driven decision making (DDDM):
uses data on the function, quantity, and quality of inputs
examines how students learn in order to suggest educational solutions
is based on the assumption that scientific methods can effectively evaluate educational programs and instructional methods
Break
Creating Rubrics
Creating a customized rubric
Step 1: Re-examine the learning objectives to be addressed by the task. This allows you to match your scoring guide with your objectives and actual instruction.
Step 2: Identify specific observable attributes that you want to see (as well as those you don't want to see) your students demonstrate in their product, process, or performance. Specify the characteristics, skills, or behaviors that you will be looking for, as well as common mistakes you do not want to see.
Step 3: Brainstorm characteristics that describe each attribute. Identify ways to describe above average, average, and below average performance for each observable attribute identified in Step 2.
Creating Rubrics
Creating a customized rubric
Step 4a: For holistic rubrics, write thorough narrative descriptions for excellent work and poor work, incorporating each attribute into the description. Describe the highest and lowest levels of performance, combining the descriptors for all attributes.
Step 4b: For analytic rubrics, write thorough narrative descriptions for excellent work and poor work for each individual attribute. Describe the highest and lowest levels of performance using the descriptors for each attribute separately.
Creating Rubrics
Creating a customized rubric
Step 5a: For holistic rubrics, complete the rubric by describing the other levels on the continuum that ranges from excellent to poor work for the collective attributes. Write descriptions for all intermediate levels of performance.
Step 5b: For analytic rubrics, complete the rubric by describing the other levels on the continuum that ranges from excellent to poor work for each attribute. Write descriptions for all intermediate levels of performance for each attribute separately.
Creating Rubrics
Creating a customized rubric
Step 6: Collect samples of student work that exemplify each level. These will help you score in the future by serving as benchmarks.
Step 7: Revise the rubric, as necessary. Be prepared to reflect on the effectiveness of the rubric and revise it prior to its next implementation.
Creating Rubrics
Table 2: Template for Analytic Rubrics
Columns: Criteria | Beginning (1) | Developing (2) | Accomplished (3) | Exemplary (4) | Score
For each criterion (Criteria #1 through Criteria #5), the levels are described as follows:
Beginning (1): beginning level of performance
Developing (2): movement toward mastery level of performance
Accomplished (3): achievement of mastery level of performance
Exemplary (4): highest level of performance
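The analytic template above maps naturally onto a small data structure, with one level score per criterion summed into an overall score. A minimal Python sketch, assuming a simple sum of per-criterion levels (the student scores below are hypothetical examples, not workshop data):

```python
# Sketch of the Table 2 analytic-rubric template: four levels with the
# template's descriptors, applied to five criteria.
LEVELS = {
    1: ("Beginning", "beginning level of performance"),
    2: ("Developing", "movement toward mastery level of performance"),
    3: ("Accomplished", "achievement of mastery level of performance"),
    4: ("Exemplary", "highest level of performance"),
}

def total_score(scores):
    """Sum the per-criterion levels (1-4) into an overall score."""
    for criterion, level in scores.items():
        if level not in LEVELS:
            raise ValueError(f"{criterion}: level must be 1-4, got {level}")
    return sum(scores.values())

# Hypothetical scores for one student across the five criteria.
student = {"Criteria #1": 3, "Criteria #2": 4, "Criteria #3": 2,
           "Criteria #4": 3, "Criteria #5": 4}
print(total_score(student))               # 16 (out of a possible 20)
print(LEVELS[student["Criteria #2"]][0])  # Exemplary
```

Keeping each criterion's level separate, rather than recording only the total, is what lets an analytic rubric report strengths and weaknesses per criterion.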
Creating Rubrics
Table 1: Template for Holistic Rubrics
Score 5: Demonstrates complete understanding of the problem. All requirements of the task are included in the response.
Score 4: Demonstrates considerable understanding of the problem. All requirements of the task are included.
Score 3: Demonstrates partial understanding of the problem. Most requirements of the task are included.
Score 2: Demonstrates little understanding of the problem. Many requirements of the task are missing.
Score 1: Demonstrates no understanding of the problem.
Creating Rubrics
Assigning scores to the rubric
One potentially frustrating aspect of scoring student work with rubrics is converting the scores to "grades." It is not a good idea to think of rubrics in terms of percentages (Trice, 2000). For example, if a rubric has six levels (or "points"), a score of 3 should not be equated to 50% (an "F" in most letter grading systems). Converting rubric scores to grades or categories is more a process of logic than a mathematical one. Trice (2000) suggests that in a rubric scoring system, there are typically more scores in the average and above-average categories (i.e., equating to grades of "C" or better) than in the below-average categories. For instance, if a rubric consisted of nine score categories, the equivalent grades and categories might look like the following:
Creating Rubrics
Table 3: Sample Grades and Categories
Score 8: Grade A+, Excellent
Score 7: Grade A, Excellent
Score 6: Grade B+, Good
Score 5: Grade B, Good
Score 4: Grade C+, Fair
Score 3: Grade C, Fair
Score 2: Grade U, Unsatisfactory
Score 1: Grade U, Unsatisfactory
Score 0: Grade Y, Unsatisfactory
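The conversion above is a logic-based table lookup, not a percentage calculation. A minimal Python sketch of the Table 3 mapping (the function name is illustrative, not from the workshop materials):

```python
# Logic-based (not percentage-based) conversion of a nine-point rubric
# score into a grade and category, following Table 3 (Trice, 2000).
SCORE_TO_GRADE = {
    8: ("A+", "Excellent"), 7: ("A", "Excellent"),
    6: ("B+", "Good"),      5: ("B", "Good"),
    4: ("C+", "Fair"),      3: ("C", "Fair"),
    2: ("U", "Unsatisfactory"), 1: ("U", "Unsatisfactory"),
    0: ("Y", "Unsatisfactory"),
}

def to_grade(score):
    """Look up the grade and category for a rubric score of 0-8."""
    return SCORE_TO_GRADE[score]

# Note the asymmetry: a mid-scale score of 4 maps to "C+" (Fair),
# not to 50% (an "F" in most letter grading systems).
print(to_grade(4))  # ('C+', 'Fair')
```

Because the mapping is explicit, a program or department can shift the grade boundaries by editing the table, without touching any arithmetic.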
Hands-on Practice
Create a holistic or analytic rubric for a course in your discipline.
Break
Evaluating A Rubric
Evaluating A Rubric for effective assessment
A Rubric for Rubrics: A Tool for Assessing the Quality and Use of Rubrics in Education
Dr. Bonnie B. Mullinix, Monmouth University, December 2003
Downloaded January 22, 2010 from http://webpages.charter.net/bbmullinix/Rubrics/A%20Rubric%20for%20Rubrics.htm
Evaluating A Rubric

Criterion: Clarity of Criteria, from lowest to highest level:
Criteria being assessed are unclear, inappropriate, and/or have significant overlap.
Criteria being assessed can be identified, but are not clearly differentiated or are inappropriate.
Acceptable (3): Criteria being assessed are clear, appropriate, and distinct.
Exemplary (4): Each criterion is distinct, clearly delineated, and fully appropriate for the assignment(s)/course.

Some distinction between levels is made, but it is not totally clear how well.
Rubric is shared and provides some idea of the assignment/expectations.
Evaluating A Rubric

Criterion: Reliability of Scoring
Acceptable (3): There is general agreement between different scorers when using the rubric (e.g., scores differ by less than 5-10%, or by less than a level).
Acceptable (3): Rubric is referenced, i.e., used to introduce an assignment and guide learners.
Exemplary: Rubric serves as the primary reference point for discussion and guidance for assignments, as well as evaluation of assignment(s).
Evaluating A Rubric

Criterion: Support of Metacognition (Awareness of Learning), from lowest to highest level:
Rubric is not shared with learners.
Rubric is shared but not discussed or referenced with respect to what is being learned through the assignment(s)/course.
Learners are offered the rubric and may choose to use it for self-assessment.
Rubric is shared and identified as a tool for helping learners understand what they are learning through the assignment/course; learners discuss the design of the rubric, offer feedback/input, and are responsible for the use of rubrics in peer and/or self-evaluation.
Exemplary: Rubric is regularly referenced and used to help learners identify the skills and knowledge they are developing throughout the course/assignment(s); faculty and learners are jointly responsible for the design of rubrics, and learners use them in peer and/or self-evaluation.
Calibrating a Rubric
Calibrating rubric scoring for reliable assessment data.
The validity of your rubric and assessment depends in part on the validation of the rubric scores; therefore, calibration is an essential process. Calibration training for a group of instructors who can then score rubrics and produce valid data is critical. The process for calibration will be determined by each discipline or program and may involve paired scoring or open-table scoring. Rubric scores are determined by consensus, and part of the calibration expert's role is to resolve discrepancies in scoring.
Designate a recorder to note any issues and to record the discussions and the initial and final scores for each box.
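As a hypothetical illustration of paired scoring, the agreement between two scorers can be summarized and large discrepancies flagged for the consensus discussion. The function name and sample scores below are illustrative, not part of the workshop materials:

```python
# A sketch of a calibration check: compare two scorers' rubric scores,
# report their exact-agreement rate, and flag items where the scores
# differ by more than one level for discussion and consensus.

def calibration_report(scorer_a, scorer_b):
    """Return (exact-agreement rate, indices needing discussion)."""
    assert len(scorer_a) == len(scorer_b), "scorers must rate the same items"
    exact = sum(a == b for a, b in zip(scorer_a, scorer_b))
    discrepancies = [i for i, (a, b) in enumerate(zip(scorer_a, scorer_b))
                     if abs(a - b) > 1]  # differ by more than one level
    return exact / len(scorer_a), discrepancies

# Two scorers rate five student artifacts on a four-level rubric.
rate, flagged = calibration_report([4, 3, 2, 4, 1], [4, 2, 2, 1, 1])
print(rate)     # 0.6
print(flagged)  # [3] -> the item where scores differ by more than one level
```

A discipline or program could run a report like this after each calibration round; a rising agreement rate and a shrinking discrepancy list would indicate that scoring is converging.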
Glossary of Terms
Analytic Rubric: An analytic rubric articulates levels of performance for each criterion so the teacher can assess student performance on each criterion.
Glossary of Terms
Authentic Task:
An assignment given to students designed to assess their ability to apply standards-driven knowledge and skills to real-world challenges. A task is considered authentic when 1) students are asked to construct their own responses rather than to select from ones presented; and 2) the task replicates challenges faced in the real world. Good performance on the task should demonstrate, or partly demonstrate, successful completion of one or more standards. The term task is often used synonymously with the term assessment in the field of authentic assessment.
Glossary of Terms
Bloom's Taxonomy: A classification of learning behaviors or categories used as a way of determining learning progression in a course or program. The revised taxonomy includes lower-level skills and higher-order thinking skills. Bloom's Taxonomy is used to align Student Learning Outcomes and their subsequent objectives.
Calibration: Training faculty to score rubrics in a similar fashion to ensure the validity of scores and subsequent data.
Closing the Loop: Primarily regarded as the last step in the assessment process, closing the loop actually starts the process over again if the data are analyzed and the desired results are not achieved. Closing the loop refers specifically to using the data to improve student learning.
Glossary of Terms
Course Goals: Generally phrased, non-measurable statements about what is included and covered in a course.
Course Guide: A booklet and online resource that helps students select subjects.
Curriculum Guide: A practical guide designed to aid teachers in planning and developing a teaching plan for specific subject areas.
Glossary of Terms
Course Objectives: A subset of student learning outcomes, course objectives are the specific teaching objectives detailing course content and activities.
Criteria: Characteristics of good performance on a particular task. For example, criteria for a persuasive essay might include well organized, clearly stated, and sufficient support for arguments. (The singular of criteria is criterion.)
Data-driven decision making: A process of making decisions about curriculum and instruction based on the analysis of classroom data, rubric assessment, and standardized test data.
Glossary of Terms
Descriptors: Statements of expected performance at each level of performance for a particular criterion in a rubric, typically found in analytic rubrics.
Direct Assessment Method: The assessment is based on an analysis of student behaviors or products in which they demonstrate how well they have mastered learning outcomes.
Glossary of Terms
Indirect Assessment Method: The assessment is based on an analysis of reported perceptions about student mastery of learning outcomes or the learning environment.
Instructional Alignment: The process of ensuring that Student Learning Outcomes and the subsequent objectives use the same learning behaviors or categories in Bloom's Taxonomy.
Glossary of Terms
Program Assessment: An ongoing, systematic process designed to evaluate and improve student learning by identifying strengths and areas for improvement. The data from the evaluation are used to guide decision making for the program.
Reliability: The degree to which a measure yields consistent results.
Rubric: The criteria for classifying products or behaviors into categories that vary along a continuum; used as a way of assessing a Student Learning Outcome. A rubric is a scoring scale used to evaluate student work, composed of at least two criteria by which student work is to be judged on a particular task and at least two levels of performance for each criterion.
Glossary of Terms
Student Learning Outcomes: The specific knowledge, skills, or attitudes students should be able to demonstrate effectively at the end of a particular course or program. Student Learning Outcomes are measured and give students, faculty, and staff the ability to assess student learning and instruction. Each course should have four to seven Student Learning Outcomes.
Validity: The degree to which a certain inference from a test is appropriate and meaningful.
Program Assessment
Course Objectives
Bloom's Taxonomy
The Taxonomy is often used as a way of determining the progression of learning or intellectual skills in a program. More advanced course work, for example, should rely on higher-order skills, while developmental or elementary course work could focus on lower-level skills.
Bloom's Pyramid
Level 6: Evaluation
Level 5: Synthesis
Level 4: Analysis
Level 3: Application
Level 2: Comprehension
Level 1: Knowledge
Level 4, Analysis: analyze, organize, deduce, choose, contrast, compare, distinguish
Level 5, Synthesis: design, hypothesize, support, schematize, write, report, discuss, plan, devise, compare, create, construct
Level 6, Evaluation: evaluate, choose, estimate, judge, defend, criticize, justify
Instructional Alignment
SLOs are the foundation for teaching and learning district-wide.
Align Course Goals, Student Learning Outcomes, Course Objectives, and Rubric Assessment. A level-five SLO should be aligned with level-five course objectives as well as a level-five rubric assessment.
Resource Links
http://www.hccs.edu/hccs/faculty-staff/studentlearning-outcomes--01