
Program Assessment Handbook

906 Lacey Ave, Suite 206 Lisle, IL 60532 (630) 737-1067 www.pcrest.com

Pacific Crest

Program Assessment Handbook


Copyright 2010 Pacific Crest, 906 Lacey Avenue, Suite 206, Lisle, IL 60532

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the publisher. Portions of this handbook are excerpts from the Faculty Guidebook, 4th edition, Copyright 2007, published by Pacific Crest.

Project Directors: Steven Beyerlein (University of Idaho), Carol Holmes, Daniel Apple (Pacific Crest)

February 2010


Table of Contents
Section 1: Pre-Institute Reading ... 1
  Faculty Guidebook: 4.1.1 Overview of Assessment ... 3
  Faculty Guidebook: 4.1.2 Distinctions between Assessment and Evaluation ... 7
  Faculty Guidebook: 1.5.2 Methodology for Designing a Program Assessment System ... 11
  Faculty Guidebook: 4.1.9 SII Method for Assessment Reporting ... 15
Section 2: Introductions ... 17
  Tentative Agenda ... 18
  Pacific Crest: A Brief Overview ... 19
  Broad Learning Goals for a Program Assessment Institute ... 19
  Overview of a Program Assessment Institute ... 20
  Learning Outcomes for a Program Assessment Institute ... 20
Section 3: The Transformation of Education ... 21
  The Transformation of Education (introduction) ... 22
  The Transformation of Education (table) ... 23
  Overview of Process Education ... 25
  Principles of Process Education ... 25
  Compass of Higher Education ... 26
  Group Exploration Activity: Exploring Educational Transformation ... 27
  Key Performance Areas and Performance Criteria ... 29
  Recommended Sequencing of Pacific Crest Institutes ... 31
Section 4: Course Design for the Program Assessment Institute ... 33
  Completed Course Design Form ... 35
Section 5: Specifying and Defining the Program ... 41
  Faculty Guidebook: 1.5.3 Defining a Program ... 43
Section 6: Establishing Program Quality ... 45
  Performance Criteria background ... 46
  Faculty Guidebook: 1.5.4 Writing Performance Criteria for a Program ... 47
Section 7: Performing Annual Program Assessment ... 51
Section 8: Constructing a Table of Measures ... 53
  Faculty Guidebook: 1.4.1 Overview of Measurement ... 55
  Faculty Guidebook: 1.5.5 Identifying Performance Measures for a Program ... 59
  Faculty Guidebook: 1.5.6 Constructing a Table of Measures ... 63
Section 9: Documenting Program Quality ... 67
  Faculty Guidebook: 1.5.7 Writing an Annual Assessment Report ... 69
  Faculty Guidebook: 1.5.8 Assessing Program Assessment Systems ... 71
Appendix A: Models of Program Assessment ... 75
Appendix B: Glossary of Useful Terms ... 113
Appendix C: Useful Forms ... 125


There is a Learning Object on Program Assessment available at:

www.pcrest.com/LO/PA/0.htm


Section 1
Preparation Reading
Faculty Guidebook: 4.1.1 Overview of Assessment
Faculty Guidebook: 4.1.2 Distinctions between Assessment and Evaluation
Faculty Guidebook: 1.5.2 Methodology for Designing a Program Assessment System
Faculty Guidebook: 4.1.9 SII Method for Assessment Reporting

4.1.1 Overview of Assessment


Faculty Development Series
by Marie Baehr (Vice President for Academic Affairs, Coe College) and Steven W. Beyerlein (Mechanical Engineering, University of Idaho)

Simply put, assessment is a process used for improving quality. Assessment is critical for growing lifelong learning skills and elevating performance in diverse contexts. However, the value of assessment is not always apparent, nor is the process always understood. Because there has not always been agreement on a specific definition, there has been some confusion on how to approach assessment to ensure that the feedback is valuable. This overview outlines a purpose and use of assessment that is consistent throughout the entire Faculty Guidebook. Elements of quality assessment feedback are identified and discussed. Methods for implementing assessment in a variety of teaching/learning contexts are detailed in companion modules.

The Nature of Assessment

Assessment leads to improvement. Both the assessor (person giving feedback) and the assessee (performer) must trust the process. Although the assessor gives the feedback to the assessee, the assessee is always in control. The assessee may use the assessor's feedback for improvement. Although a well-designed assessment process yields high-quality improvements in a timely manner, any assessment process can lead to some improvement. Assessment is an area in which assessors can start simple and increase the complexity as the process is better implemented (4.1.4 Assessment Methodology).

One can use assessment to improve a performance or an outcome. For example, a composition instructor might assess a student's writing by looking at a completed assignment draft (outcome) and finding strengths and areas to improve in the writing. The instructor might also observe the student as he or she writes the paper to assess strengths and areas to improve in using the writing process (to develop the written sample).

Principles of Quality Assessment

Table 1 outlines ten principles for undertaking assessment in any teaching/learning situation. These principles address the mindset under which assessment is conducted, the circumstances surrounding assessment activities, and the nature of the dialogue between the assessor and assessee. A brief discussion of each principle follows.

1. Assessment focuses on improvement, not judgment. It is important that both the assessee and assessor understand that the purpose of assessment is to add to quality, not to judge the level of quality or to give interesting feedback that will not be used (Stiggins, 1996).

Table 1
Principles of Quality Assessment

1. Assessment focuses on improvement, not judgment.
2. Assessment focuses on performance, not the performer.
3. Assessment is a process that can improve any level of performance.
4. Assessment feedback depends on who both the assessor and the assessee are.
5. Improvement based on assessment feedback is more effective when the assessee seeks assessment.
6. Assessment requires agreed-upon criteria.
7. Assessment requires analyses of the observations.
8. Assessment feedback is accepted only when there is mutual trust and respect.
9. Assessment should be used only where there is a strong opportunity for improvement.
10. Assessment is effective only when the assessee uses the feedback.

2. Assessment focuses on the performance, not the performer. Assessment is only about improving a performance. It is not meant to judge the quality of the performance, nor does it in any way judge the qualities of the performer. One may use assessment to give feedback on how a performer's skills could be improved in order to improve a performance. It should never be used to point out weaknesses in the performer, because doing so would undermine both the purpose of assessment and the building of trust needed for effective assessment.

3. Assessment is a process that can improve any level of performance. There are always areas to improve in a performance, regardless of the level of quality, and there are always areas that made the performance as good as it was. So assessment can always be used to give feedback that can be used to improve a performance.

4. Assessment feedback depends on who both the assessor and assessee are. Although it focuses on the performance alone, assessment is much more effective when both the assessor and assessee understand their own abilities as well as those of the other. This understanding helps in creating realistic performance criteria and feedback that can be used effectively (4.1.6 Performance Levels for Assessors).

5. Improvement based on assessment feedback is more effective when the assessee seeks assessment. As in most things in life, feedback is useful only when it is valued. One of the components of valuing assessment feedback is the assessee's desire to obtain it. When the assessee seeks assessment, it is clear that he or she sees the need for improvement and has plans to act on the given feedback.

6. Assessment requires agreed-upon criteria. Both the assessor and assessee must have a common understanding of what will be assessed. In any performance, the purpose lends itself to numerous areas in which to look for strengths and areas to improve. The involved parties should decide in advance on the criteria that will be used in the assessment. These criteria can focus on the performance itself (performance criteria) and/or the final outcome (outcome criteria). Both types of criteria can be used in assessing a performance or in assessing a product (Pellegrino, Chudowsky & Glaser, 2001). The chosen criteria should focus on areas that both the assessee and assessor believe are important; they must be appropriate to the performance; and they must be appropriate for the assessment abilities of the assessor (Astin et al., 1992).

7. Assessment requires analyses of the observations. Once performance criteria are set, the assessor must collect information germane to the set criteria by observing the performance. During the actual performance, or after the information is collected, the assessor must identify the strengths of the performance and why the strengths contribute to the quality of the performance. In addition, the assessor must identify the areas where improvement could occur and how the improvements could be made (1.4.2 Fundamentals of Rubrics).

8. Assessment feedback is accepted only when there is mutual trust and respect. The assessee must trust in the assessment process and in the assessor's abilities. The assessor must trust in the assessee's willingness to accept and use feedback. Often this trust takes time to build, but it builds quickly once the assessee sees improvement. To help build the trust, an assessor should be sure to follow these guidelines in the feedback report:
- Use only positive language; for example, "area to improve" instead of "weakness"
- Include no judgmental statements (4.1.2 Distinctions Between Assessment and Evaluation)
- Focus only on agreed-upon criteria
- Describe real strengths and why they are strengths
- Provide substantial supporting evidence for both strengths and areas to improve
- Offer specific suggestions about how to improve
- Provide interesting and relevant insights
- Convey support and encouragement for change

9. Assessment should be used only where there is a strong opportunity for improvement. It makes sense to carry through an assessment process only if there is the opportunity for improvement. If assessment feedback is given during the performance (formative assessment), the performer has the opportunity to use the feedback to improve the current performance. If the feedback is given at the end of the performance (summative assessment), the feedback can be used to improve future performance. If there are no plans for future performances, summative assessment should not be used.

10. Assessment is effective only when the assessee uses the feedback. The assessee must have the opportunity and desire to improve in order for the feedback to be used. Not only must there be an opportunity to improve, but there must also be a willingness to implement the suggested improvements. Even if the assessment process might help in identifying needed improvements, there is little point in taking the time to assess if there will be no effort to improve.

Issues that Affect Assessment Quality

A variety of factors influence the quality of an assessment process. These include the skills of the parties involved as well as the resources available for conducting the assessment (Angelo & Cross, 1993).

Factors Related to the Skills of the Parties Involved

Content Expertise of the Assessor
An assessor who is a content expert in a field specific to the performance or outcome will typically give feedback that is more useful than that given by a novice in the content area, assuming that the assessment skills of the two people are equivalent. Understanding how knowledge is constructed within a discipline can help one determine which evidence to collect and how to analyze it. It also helps, as one collects evidence, to understand the content. This advantage does not mean, however, that an assessor must be a content expert in order to provide any useful assessment feedback. For example, it would be helpful for a novice in a content area to assess a performance if one of the criteria is to reduce the use of technical jargon.

Assessment Skills of the Assessor
It is as important that an assessor be knowledgeable and skilled in assessment as it is that he or she have expert knowledge about what is being assessed. Experts in the field are not automatically strong assessors. Highly effective assessors
- Display respect for the assessee
- Work closely with the assessee to set appropriate criteria for the assessment
- Assess only those aspects which meet the agreed-upon criteria
- Apply keen observation skills that put findings in context
- Employ strong recording skills
- Collect relevant and high quality evidence
- Analyze results to extract important patterns and gain understanding
- Generalize findings so they can be transferred to new situations
- Offer timely and constructive feedback
- Enjoy reflecting/introspecting
- Are comfortable in their role, which is solely focused on improving the assessee's performance

Usefulness of the Assessment Report
Once the assessment process is completed, the assessee is left with the report of the findings. Since the purpose of the assessment is improvement, it is important that the report outlines in a concise way what was done well and why it was done well (strengths), areas that could be improved, and some strategies for improvement (4.1.9 SII Method for Assessment Reporting). A quality assessment report
- Includes only non-judgmental statements
- Follows a concise, well-organized format
- Focuses on agreed-upon criteria
- Describes real strengths and why they are strengths
- Provides substantial supporting evidence for both strengths and areas to improve
- Offers specific suggestions about how to improve

Factors Related to Available Resources

Quality of the Tools Used
An assessor can rely on his or her memory or use elaborate tools to complete an assessment. As a general rule, the more structured the tool, the wider the audience of potential assessors and the more specific the resulting assessment report is likely to be. Many of the assessment instruments in the Faculty Guidebook have been implemented in dozens of faculty development workshops and in hundreds of college classrooms. However, there is no need to wait to assess something until the tools are in place to assess everything.

Development and Implementation Costs
The cost of assessment can vary from very little to quite a lot. Elaborate expense can be justified only for educational research questions that have programmatic implications. The Faculty Guidebook suggests many assessment instruments that offer a point of departure and minimize the cost of developing effective, special-purpose instruments for courses, projects, and institutions.

Time Required to Conduct an Assessment
An assessment may have a complex design or be carried out with little or no preparation. Instructors and administrators need to balance assessment activities with planning and facilitation activities. Often, spending 5% of in-class and out-of-class time on assessment is adequate to determine strengths and areas to improve.

Examples of Assessment

Peer Coaching
A great way to get feedback on facilitation skills is to have a second set of eyes in the classroom. An instructor can meet with a trusted colleague before class and outline two or three focus areas to be assessed in the class. The peer coach should avoid becoming a second instructor and should instead keep relevant notes and report them back to the instructor after the class. This method is beneficial both to the assessor and the assessee; the assessee gets valuable feedback, and the assessor can observe teaching strategies that he or she might find valuable. Assuming peer coaching is an ongoing process between the two parties, this would be an example of formative assessment of a performance.

Course Outcome Review
At the end of a course, an instructor can have students review the desired course outcomes listed in the syllabus and estimate how well they completed each outcome (outcome criteria). When students identify outcomes that have and have not been fully met and explain alternative actions that could be taken in future semesters to ensure achievement of each outcome, the instructor can use the information to assess the course instruction and curriculum. This is an example of summative assessment of an outcome. (A simple tallying sketch for this kind of review appears after the Concluding Thoughts below.)

Assessment of Student Learning
Students can be tested early in a course to determine how well they have learned and retained the skills and concepts they need to carry over from a previous course. The information can be used to revise the content, focus, and teaching of the previous course. This is an example of indirect assessment, in which one group is evaluated in order to improve something that affects the quality of the evaluated group's performance.

Concluding Thoughts
Learning to use assessment widely and frequently is likely to produce a positive, trusting learning environment. The creation of magical or teachable moments will stimulate student engagement in the teaching/learning process and promote productive risk-taking. Long-term use of classroom assessment techniques provides opportunities for raising the bar for learner performance and shifting responsibility for learning to students (Angelo & Cross, 1993). When instructors model assessment in their daily classroom and professional activities, both instructors and students can improve significantly over the term.
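For readers who want to tabulate a Course Outcome Review like the one described above, a few lines of code (or an equivalent spreadsheet) can summarize a class's self-ratings. The sketch below is illustrative only: the outcome names, the 1-5 rating scale, and the 3.5 threshold are assumptions made for the example, not part of the handbook's method.

```python
# Illustrative sketch: summarize end-of-course outcome self-ratings.
# Outcome names, the 1-5 scale, and the threshold are hypothetical examples.
from statistics import mean

ratings = {
    "Apply the writing process to produce a revised draft": [4, 5, 3, 4, 4],
    "Use discipline-appropriate citation conventions": [2, 3, 2, 3, 2],
    "Assess a peer's draft against agreed-upon criteria": [3, 4, 4, 3, 5],
}

THRESHOLD = 3.5  # below this average, treat the outcome as an area to improve

for outcome, scores in ratings.items():
    avg = mean(scores)
    status = "strength" if avg >= THRESHOLD else "area to improve"
    print(f"{outcome}: average {avg:.1f} -> {status}")
```

The instructor would still interpret these averages in light of the course design; the tally only points to where feedback for improvement is most needed.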

References

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers. San Francisco: Jossey-Bass.

Astin, A. W., Banta, T. W., Cross, K. P., El-Khawas, E., Ewell, P. T., Hutchings, P., et al. (2003). Nine principles of good practice for assessing student learning. Washington, DC: American Association for Higher Education.

Glaser, R., Linn, R., & Bohrnstedt, G. (1997). Assessment in transition: Monitoring the nation's educational progress. New York: National Academy of Education.

Pellegrino, J., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.

Stiggins, R. J. (1996). Student-centered classroom assessment (2nd ed.). Old Tappan, NJ: Prentice Hall.

4.1.2 Distinctions Between Assessment and Evaluation


Faculty Development Series
by Marie Baehr (Vice President for Academic Affairs, Coe College)

Educators use two distinct processes to help students build lifelong learning skills: assessment and evaluation. Assessment provides feedback on knowledge, skills, attitudes, and work products for the purpose of elevating future performances and learning outcomes. Evaluation determines the level of quality of a performance or outcome and enables decision-making based on the level of quality demonstrated. These two processes are complementary and necessary in education. This module draws important distinctions between assessment and evaluation, underscoring the need for both processes to occur at separate places and times, and ideally through different roles (4.1.4 Assessment Methodology and 1.4.7 Evaluation Methodology).

Inconsistent Use of the Terms

In the last fifteen years, much has been written about assessment and evaluation, but the terms have not always had distinct meanings. As accrediting agencies have become increasingly interested in improvement, it has become imperative to have a word that describes feedback for improvement that is distinct from one that describes the determination of quality. To add another layer of confusion from the literature, the word formative (used as an adjective with assessment or evaluation) has typically been used to describe an improvement process, while the word summative has been used to describe a decision-making process (Brown, Race, & Smith, 1996). However, the words formative and summative mean "as it is being created" and "addition of all things," respectively. A process to determine quality can be accomplished either as a performance is being created or after it is completed, so other words should be used to distinguish the two processes.

In the literature of the last several years, assessment has usually been used to indicate that at least some hint of improvement is expected in the assessment process (Borden & Zak Owens, 2001; Palomba & Banta, 1999). Similarly, evaluation is usually used to indicate that some sort of judgment of quality will be made. The Faculty Guidebook is consistent in its delineation of these two processes of improvement and judgment. Assessment is the term used to look at how the level of quality of a performance or outcome could be improved in the future; it includes strengths that should be sustained as well as high-priority areas for improvement. The assessment process is not concerned with the level of quality, only with how to improve the level of quality. Evaluation is the term used to describe the determination of the level of quality. The evaluation process focuses only on the actual level of quality, with no interest in why that level was attained.

Assessment and evaluation both have their purposes, and, when used correctly, both can add significant value to teaching/learning. However, there can be detrimental effects when the people involved have not agreed whether the process is evaluation or assessment, or when the Assessment Methodology gets confused with the Evaluation Methodology.

Key Attributes

Although assessment and evaluation are used for different reasons, they do have some similar steps. Both involve specifying criteria to observe in a performance or outcome. Both require the collection of data and other evidence by observing the performance or by looking at the outcome or product. Both require a performer and a person who collects information about the performance. Both processes also conclude with a report of the findings. Alongside these similarities, however, there are at least as many differences.
The relationship between the people involved is different in the assessment and evaluation processes. In both cases a person (either evaluator or assessor) observes or collects evidence about a performance or outcome; another person (either assessee or evaluatee) performs or develops an outcome. In both cases a person (either the assessee or client) requests the process (either evaluation or assessment). In assessment, the locus of control rests with the performer; in evaluation, it rests with the observer.

The report to the performer (assessee or evaluatee) is also vastly different. In the assessment process, the report includes information about why the performance was as strong as it was, and describes what could be done to improve future performances. In assessment, there is no mention of the actual quality of the performance, only how to make the next performance stronger. There is no language indicating the level of quality, such as "good," "terrible," "terrific," or "horrible." Conversely, in the evaluative report, only information regarding the actual quality of the performance is given. This might be in the form of a grade or a score or an evaluative comment, such as "good work." The purpose of the evaluative report is to report the level of quality and possibly any consequences based on the determined level of quality. It is not used to suggest improvements in future performances. Table 1 clarifies the similarities and differences between the two processes. The modules 4.1.1 Overview of Assessment, 1.4.6 Overview of Evaluation, 4.1.4 Assessment Methodology, and 1.4.7 Evaluation Methodology give supporting explanations.

Table 1
Differences Between the Processes of Assessment and Evaluation

What is the purpose?
  Assessment: to improve the quality of future performances
  Evaluation: to determine the quality of the present performance

Who requests it?
  Assessment: assessee
  Evaluation: client

Who performs?
  Assessment: assessee
  Evaluation: evaluatee

Who observes the performance?
  Assessment: assessor
  Evaluation: evaluator

Who sets criteria?
  Assessment: assessee and assessor
  Evaluation: client (with possible consultation with the evaluator)

Who uses the information?
  Assessment: assessee (in future performances)
  Evaluation: client (to make decisions)

When can feedback occur?
  Assessment: during or after a performance
  Evaluation: during or after a performance

On what is feedback based?
  Assessment: observations; strongest and weakest points
  Evaluation: level of quality based on a set standard

What is included in the report?
  Assessment: what made the quality of the performance strong, and how one might improve future performances
  Evaluation: the quality of the performance, often compared to set standards

Who receives the report?
  Assessment: assessee
  Evaluation: client

How is the report used?
  Assessment: to improve performance
  Evaluation: to make judgments

Figure 1
Comparison of Assessment and Evaluation
[Figure: a measured performance is shown on a scale from Low/Poor Performance to Good/High Performance. Using a higher set standard, an evaluator determines that the measured performance was a failure; using a lower set standard, the evaluator determines that it was a success. An assessor instead provides feedback on what made the performance as strong as it was and on how the performance can be improved. Evaluation determines whether a standard was met (success or failure); assessment provides feedback on the performance (strengths, areas for improvement, and insights).]

Case Studies

Examples of the use of the assessment process or evaluation process can be found in 4.1.1 Overview of Assessment or 1.4.6 Overview of Evaluation, respectively. This section addresses ways that evaluation and assessment can become confused.

Case 1: The person observing a performance believes he or she is assessing, but the performer perceives the feedback as evaluative because the performer has not worked with the observer to set up criteria and valuable feedback.

Dysfunctional Partners
One of the first steps in the Assessment Methodology is for the assessor and assessee to determine the performance or outcome criteria for which the assessee would like to gain feedback. If this step is skipped, no matter how well-meaning the person giving feedback may be, the feedback is likely to be perceived by the assessee as judgmental. Since the control in assessment rests with the assessee, feedback will be used for improvement only if the person receiving the feedback wants to use feedback from the assessor.

Parent-Child Relations
All parents want their children to improve. However, parents also want their children to perform at acceptable levels of quality. When a parent gives feedback for improvement using evaluative language to a child in an area in which the child has no desire to improve, the child will perceive this feedback as judgmental. For instance, there is a big difference in the message sent between saying, "Your room is a mess. Clean it up now or you will be punished," and "If you put your books away and make your bed, your room would look much nicer."

In-Class Assessment Exercises
Students are more used to feeling that they are evaluated by instructors, rather than assessed. Part of the reason for this perception is that instructors do evaluate students by giving grades. Part of the reason is that students are not often included in determining what should be fed back to them. In order for assessment of student learning to work effectively, students must participate in determining the criteria that will be used for their feedback. For example, after giving an assignment that requires a draft, you could ask students to tell you in what areas they would like feedback for improvement. In this way they would have to determine the areas where they feel improvement would make a difference, and it would help clarify that the purpose of the draft is not a free grading cycle.

Case 2: A person, observing a performance and using the same criteria, gives assessment feedback as well as evaluative judgments.

Interim Feedback on Work Products
Students are often dismayed when they make all the suggested improvements on a paper that was turned in for comment as a rough draft and they do not receive an A on the final product. In this case, the instructor has given feedback for improvement without determining the quality of the paper. The student perceives that if he or she improves in the areas noted, he or she will have an excellent paper. One way to avoid this problem while strengthening the assessment process is to ask the students to request feedback on the draft based on set criteria.

Supervisor as Mentor
Often chairs of departments are expected to mentor the non-tenured faculty in their department at the same time that they are expected to make decisions on continuing employment. Although the individuals might agree on criteria to use, it becomes difficult for the assessee to feel in control of using or not using the feedback as he or she sees fit, since, at some point, the assessor will become the evaluator. Although this is sometimes unavoidable, the problem can be reduced by choosing the criteria differently in the two cases. In the mentoring situations, the non-tenured faculty member should choose the criteria for focus, while in the evaluative situations, the chair should. In both cases the criteria need to be known by both parties.

Case 3: A person who is more comfortable with the evaluator role is put in the role of assessor.

Expert Assessing a Novice
Sometimes someone who is deeply ingrained in an area of expertise is unable to stop judging the quality of a novice performance. Though all criteria and scales are agreed upon, the expert as assessor can sometimes give the feedback in evaluative terms without realizing it. This sometimes happens when faculty start teaching right after they have earned their graduate degree. They are not prepared for the limited understanding and skills of the students who are taking their class. Rather than mentoring the students to help them build their knowledge and skills, the faculty members are sometimes apt to evaluate students as unmotivated and poorly prepared.

First-Time Assessor
Often, when one is used to giving feedback on the level of quality only, one can feel uncomfortable giving critical feedback to an assessee, feeling that pointing out areas to improve is the same as criticizing the performance. This can cause even more problems when the assessee also perceives the feedback as evaluative (Case 1). Practice and building trust help this situation the most, but it can also help if the assessor imagines what feedback he or she would have wanted if he or she had been the performer. It is important for the assessee to send the message that he or she would like to have the feedback from someone he or she trusts.

Concluding Thoughts
Discussion in this module is intended to strengthen outcomes from assessment and evaluation in teaching/learning situations. Assessment is a process used to improve a performance or outcome. Evaluation is a process used to determine the quality of a performance or outcome and to make decisions based on that quality. Both processes can be formative (undertaken while an educational process is ongoing) or summative (undertaken at the conclusion of an educational process). Before starting either assessment or evaluation, it is essential for instructors to clarify the purpose of the process. It is then critical to communicate this purpose to everyone involved and to establish whether the process will be conducted as assessment or evaluation. Finally, one should be cautious whenever an assessor will ultimately be an evaluator or when assessment is initiated without buy-in from the assessee.

References

Borden, V., & Zak Owens, J. L. (2001). Measuring quality: Choosing among surveys and other assessments of college quality. Washington, DC: American Council on Education and Florida State University, Association for Institutional Research.

Brown, S., Race, P., & Smith, B. (1996). 500 tips on assessment. London: Kogan Page.

Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.


1.5.2 Methodology for Designing a Program Assessment System


Faculty Development Series
by William Collins (Neurobiology & Behavior, Stony Brook University) and Daniel K. Apple (President and Founder, Pacific Crest)

This module provides an overview of the Methodology for Designing a Program Assessment System (PAS) and explains how to use it. Before one begins to design a PAS, one should examine all of the steps in the design methodology in order to gain an understanding of the entire design process. In addition to presenting the complete methodology, this module identifies and briefly discusses the critical steps faculty and administrators find particularly challenging. The modules that follow in this section further explain the stages of the methodology.

Designing a Quality Program Assessment System

The Methodology for Designing a Program Assessment System is given in Table 1 on the next page. While the steps of the methodology are listed in a sequential fashion, in most cases it is necessary to revisit and update previous steps while working through the methodology. The purpose of the steps can be broken down into five stages:
- Specifying and defining the program (Steps 1-6)
- Establishing program quality (Steps 7 and 8)
- Performing annual program assessment (Steps 9-11)
- Constructing a table of measures (Steps 12-15)
- Documenting program quality (Steps 16-20)

Stage 1: Specifying and Defining the Program

As a program continues to evolve, it is important to step back and truly understand what the program is about. Stage 1 of the methodology focuses on the key aspects of the program, defining and specifying components that include the essence, goals, limitations, assets, and important processes. A significant benefit of designing and implementing a program assessment system is that it gives the stakeholders of the program the opportunity to clarify the identity of the program clearly and publicly. Through this action the stakeholders both claim ownership of the program and limit others from imposing an identity on the program. This benefit is realized through the straightforward, yet challenging, act of stating the essence of the program (Step 1). The essence statement should be a one-sentence description of the program (as it presently exists), including the processes used and the products produced. Then, building on the essence statement, the program stakeholders are identified (Step 2) and the scope of the program is defined (Step 3). A key component of the program specification is the identification of the current and future goals of the program (Step 4). Once one has a clear understanding of the goals, it is a relatively straightforward process to identify the top products or assets of the program (Step 5) and to define the processes to be used to accomplish the goals (Step 6).

Stage 2: Establishing Program Quality

The primary goal of a PAS is to enhance the quality of the program. In order to measure the quality of any program, it is important to state performance criteria for that program (Step 7). A strong criterion statement is stated clearly and concisely and supports one or more of the desired qualities of the program while suggesting at least one context for measurement. The objective is to identify 3-8 areas of the program that account for most of the quality of the program. The performance criteria will serve as the basis of the program assessment system by providing the framework for identifying specific attributes to be measured. Writing performance criteria is one of the most challenging aspects of the PAS design process.
In particular, many individuals have trouble seeing the connection between a quality, the meaning (or analysis) of that quality, and how to express the meaning in the form of a written performance criterion. In addition, there is a common tendency to begin determining performance standards rather than focusing on identifying areas of quality in the program. It is important to identify key characteristics that determine quality for the products and processes. Using this list of characteristics, critical areas for measurement are identified and prioritized. Then the main areas of quality are clarified as statements (performance criteria), along with measurable attributes for each criterion (Step 8). To facilitate writing quality performance criteria, a detailed methodology has been developed (1.5.4 Writing Performance Criteria for a Program). Note that writing the performance criteria for a program parallels the process of writing the performance criteria for a course or a learning activity.


Table 1

Methodology for Designing a Program Assessment System

Specifying and Defining the Program
Step 1   Write a one-sentence description which captures the essence of the current program.
Step 2   Identify all program stakeholders and their interests.
Step 3   Define the appropriate scope (boundaries) of the program: what it is, and what it is not.
Step 4   Identify the top five current goals and five future goals for the program; use a three- to five-year time frame.
Step 5   Identify the top five products or assets of the current and future program.
Step 6   Provide a description of key processes, structures, and systems associated with the program which will help accomplish the current and future goals from Step 4.

Establishing Program Quality
Step 7   Write clear performance criteria that account for most of the quality of the program.
         Methodology for Writing Performance Criteria:
         1. Brainstorm a list of characteristics/qualities (and values) which determine program quality.
         2. Check with other programs/stakeholders to determine whether any key characteristics are missing.
         3. Rank the top ten qualities for the future design of the program.
         4. Select the critical areas for measuring; prioritize to just a few (7-10), consolidating highly related qualities.
         5. For each quality, identify a set of three to five important aspects.
         6. Write statements illustrating the performance expectations that produce these qualities by describing the important aspects of the performance.
Step 8   Identify up to three attributes (measurable characteristics) for each criterion.

Performing Annual Program Assessment
Step 9   Self-assess the program for the previous academic year.
Step 10  All stakeholders should provide feedback (strengths, areas for improvement, and insights) about the performance of the program.
Step 11  Produce an annual assessment report.

Constructing a Table of Measures
Step 12  Create the structure for a table of measures (Table 2). Fill in the first two columns (criteria and attributes) with information from Step 7 and Step 8.
Step 13  Prioritize the attributes by appropriately weighting each attribute.
Step 14  Identify a means for collecting data.
Step 15  Identify a key instrument associated with each chosen attribute to measure the performance reflected in the data collected.

Documenting Program Quality
Step 16  Determine current benchmarks and future targets for each attribute to document annual performance.
Step 17  Assign accountability (to an individual) for each attribute to assure that targets for performance are met.
Step 18  Create an index for measuring overall success.
Step 19  Obtain stakeholder buy-in of the program assessment system by asking them to assess the system.
Step 20  Annually assess the program assessment system.


Stage 3: Performing Annual Program Assessment

It is important to shift from thinking about doing assessment (planning) to actually implementing an assessment system. This is not an all-or-nothing process, and it is not necessary to wait until the PAS design process is completed before initiating assessment. Once the performance criteria have been identified, a pragmatic approach to implementation is to design an annual assessment report around the performance criteria. Begin by assessing the program for the previous academic year (Step 9). The SII Method (4.1.9 SII Method for Assessment Reporting) provides a useful format for this self-assessment. At this point in the process, it is important to include all of the stakeholders in the program assessment process (Step 10). Complete the assessment by generating an annual assessment report (Step 11). Once you apply the performance criteria to the performance of the program over the previous year, it becomes much more evident how you should progress with the next step (designing measures). Further, the annual assessment report will serve as a model for annual reports generated in the future.

Stage 4: Constructing a Table of Measures

The heart of the mechanism for measuring quality is the Table of Measures (Table 2); it is a template for completing the PAS design process. It focuses on what really matters in the program: the measurable characteristics (or attributes) that align with the performance criteria (from Steps 7, 8, and 12). An essential component of the process of building the table of measures is the act of prioritizing and weighting the attributes to identify the most important while eliminating the non-essential ones (Step 13). For each attribute, determine whether an instrument exists to measure performance (Steps 13-15). Examples of instruments include rubrics, alumni surveys, grants, publications, retention and graduation data, placement data, satisfaction surveys, and portfolios. If no instrument exists for a given attribute, then one must be built.

Stage 5: Documenting Program Quality

The final stage in the methodology focuses on the documentation of program quality through the tracking of the quality of the attributes. For each attribute, it is helpful to make comparisons with benchmarks of current performance and with targets established for future performance (Step 16). In order to share and distribute the responsibility for meeting the targeted performance levels, it is also important to assign the accountability for each attribute to a specific program member (Step 17) and to establish criteria for measuring overall success (Step 18). Before the program assessment system is fully implemented, all participants and stakeholders involved in the program should be given the opportunity to provide assessment feedback that includes strengths, areas for improvement, and insights (Step 19). In addition to improving the quality of the program assessment system, this helps to build the commitment and trust which are essential for the successful implementation of the system.

Assessment is a vital component of any methodology. Assessment provides the feedback mechanism which allows for building upon strengths and taking action to make improvements. It is important not to overlook the need to assess the program assessment system itself. Thus, a complete assessment system involves using various forms of assessment (formative, summative, and real-time) on all aspects of the program and on the program assessment system itself (Step 20).

Concluding Thoughts

The benefits to a program that are derived from a well-designed and successfully implemented program assessment system significantly outweigh the time and energy invested in the design of the system. Nevertheless, the design process can be an intimidating impediment to establishing an assessment-based program. The Methodology for Designing a Program Assessment System provides a clear progression of steps to assist even a novice in this endeavor. The result will be an efficient program assessment system focusing on the key attributes that determine quality performance.


Table 2

Table of Measures
Criterion | Attribute | Weight | Means | Instrument | Benchmark | Target | Accountability
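To make the Table of Measures concrete, here is a minimal sketch of one way the table and the overall success index (Steps 12-18) might be represented and computed. It is only an illustration under assumed data: the criteria, attributes, weights, benchmarks, targets, and the progress-toward-target scoring rule are all hypothetical, not values or formulas prescribed by the methodology.

```python
# Minimal sketch of a Table of Measures and a weighted success index.
# All names, weights, benchmarks, and targets below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Measure:
    criterion: str        # performance criterion (Step 7)
    attribute: str        # measurable characteristic (Step 8)
    weight: float         # relative importance (Step 13)
    means: str            # how the data are collected (Step 14)
    instrument: str       # instrument used to measure (Step 15)
    benchmark: float      # current performance (Step 16)
    target: float         # desired future performance (Step 16)
    accountability: str   # person responsible (Step 17)
    current: float = 0.0  # value observed this year

    def progress(self) -> float:
        """Fraction of the benchmark-to-target gap achieved (clipped to 0..1)."""
        span = self.target - self.benchmark
        if span == 0:
            return 1.0
        return max(0.0, min(1.0, (self.current - self.benchmark) / span))

table = [
    Measure("Graduates are prepared for professional practice",
            "placement rate within 6 months", 0.4,
            "alumni survey", "placement survey", 0.70, 0.85, "dept. chair", current=0.78),
    Measure("Students write effectively in the discipline",
            "capstone writing rubric score", 0.6,
            "capstone review", "writing rubric (1-5)", 3.2, 4.0, "writing coordinator", current=3.5),
]

# One possible index of overall success (Step 18), scaled 0-100.
total_weight = sum(m.weight for m in table)
index = 100 * sum(m.weight * m.progress() for m in table) / total_weight
print(f"Overall success index: {index:.0f}/100")
```

In practice, each program would choose its own attributes, weights, and index formula in Steps 13 and 18; the sketch only shows how the columns of Table 2 fit together.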


4.1.9 SII Method for Assessment Reporting


Faculty Development Series
by Jack Wasserman (Mechanical Engineering, University of Tennessee at Knoxville) and Steven W. Beyerlein (Mechanical Engineering, University of Idaho)

Assessment results are most likely to be put into action by an assessee when they are concisely stated, supported by evidence, and delivered in a positive manner. This module outlines a format for informal assessment reports that meets these needs. Known as the SII method, it includes a thoughtful description of assessee strengths, areas for improvement, and insights that can be transferred to other contexts. The SII method is assessee-centered in its language, specific in its use of data from a specific learning context, and enlightening in its recommendations for future action.

The Role of Self-Assessment

Psychological studies of highly successful people across all domains of intelligence (linguistic, musical, mathematical, scientific, interpersonal, kinesthetic, intrapersonal, and spiritual) reveal that these extraordinary individuals share three behaviors that are the source of sustained personal growth (Gardner, 1998).
- These individuals stand out in the extent to which they reflect, often explicitly, on the events of their lives
- These individuals stand out less by their impressive raw powers than by their ability to identify and then exploit their strengths
- These individuals fail often and sometimes dramatically, but they stand out in the extent to which they learn from their setbacks and convert defeats into opportunities

Extraordinary individuals, therefore, possess a strong internal process of thinking about their circumstances, their performance capabilities, and their opportunities for effecting change. The SII method strives to make these attributes explicit in the dialogue between assessor and assessee. It embodies several characteristics known to improve critical thinking, including positiveness, process orientation, a recognition of contextual details, and the role of emotion as well as reason in human behavior (Brookfield, 1987).

Organization of the SII Report

While the assessee is performing, the assessor must collect information consistent with the chosen criteria (4.1.4 Assessment Methodology). It is important for the assessor to note the strong points of the assessee's performance (things done well) and why they were considered strong; the areas in which the assessee's performance could be improved, along with suggestions for how the improvement could be made; and any insights that might help the assessee in other contexts. The SII format provides a succinct way to communicate these findings in a cooperative learning environment.

Strengths: Identify the ways in which a performance was of high quality and commendable. Each strength statement should address what was valuable in the performance, why this attribute is important, and how to reproduce this aspect of the performance.

Areas for Improvement: Identify the changes that can be made in the future, between this assessment and the next assessment, that are likely to improve performance. Improvements should recognize the issues that caused any problems and mention how changes could be implemented to resolve these difficulties.

Insights: Identify new and significant discoveries/understandings that were gained concerning the performance area; i.e., what did the assessor learn that others might benefit from hearing or knowing? Insights include why a discovery/new understanding is important or significant and how it can be applied to other situations.

These statements should be delivered in the order given above, first to affirm the assessee and then to apprise him or her of opportunities for additional growth. An assessor should take care to cast these statements in a succinct manner and avoid using judgmental language. As a matter of convenience in written SII reports, each statement can be identified with the appropriate letter (S or I).
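As a simple illustration of the report organization just described, the sketch below captures an SII report as a structured record and prints it in the (S)/(I)/(I) convention. The field names are assumptions made for the example rather than a prescribed tool, and the sample statements are borrowed from the Level II examples later in this module.

```python
# Illustrative sketch of an SII (Strengths, Improvements, Insights) report.
# Field names are hypothetical; the sample statements come from this module's Level II examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SIIReport:
    strengths: List[str] = field(default_factory=list)     # what was strong, why, and how to reproduce it
    improvements: List[str] = field(default_factory=list)  # what to change before the next assessment, and how
    insights: List[str] = field(default_factory=list)      # discoveries transferable to other contexts

    def formatted(self) -> str:
        """Render the report in delivery order: strengths, improvements, insights."""
        lines = []
        for label, items in (("S", self.strengths), ("I", self.improvements), ("I", self.insights)):
            lines += [f"({label}) {item}" for item in items]
        return "\n".join(lines)

report = SIIReport(
    strengths=["The enthusiasm of the presenter inspired the audience to ask many questions"],
    improvements=["Much of the material in the introduction was secondary to the purpose of the talk"],
    insights=["The team kept the problem statement in mind, not just the score"],
)
print(report.formatted())
```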

Rubric for Elevating SII Reports

The following rubric has been developed to help students visualize different levels of assessment quality and to rate the sophistication of their SII reports. As assessments move up the scale, there is a discernible shift from assessing effort to meaningfully assessing performance.

Level I: Observation
Strengths and areas for improvement are presented as simple statements. The following statements are typical of this level:
(S) The presenter was energetic
(I) The introduction was too long
(I) The score was not the only goal

Level II: Comprehension of Key Issues
Strengths and improvements are clearly stated, and reasons are given for the strengths and suggestions for improvement. Insights tend to be related to the specific context of the assessment. The following statements are typical of this level:
(S) The enthusiasm of the presenter inspired the audience to ask many questions
(I) Much of the material in the introduction was secondary to the purpose of the talk
(I) The team kept the problem statement in mind, not just the score

Level III: Application in a Related Context
This feedback builds on comprehension of key issues and gives specific ideas for improving performance in a related context. The following statements are typical of this level:
(S) Taking time to practice your presentation can help you deliver your message in a confident and convincing tone
(I) The introduction should highlight a single hypothesis and explain why it is justified
(I) By focusing on the goal of good technical communication, rather than focusing simply on the score, the team reminded everyone about the educational objective of the project

Level IV: Transfer to a New Context
This feedback illustrates generalized understanding and is instructive in applying this understanding across a broad range of contexts. The following statements are typical of this level:
(S) Researching the background of your audience can help you stimulate interest in and attention to your message
(I) Section divisions appear to be seamless in a carefully planned and practiced presentation
(I) By communicating your interpretation of the underlying purpose of an activity, you help everyone assess whether they could have learned more from the activity

Implementing SII Reports

SII reports represent a powerful formative assessment tool that can be used with a great deal of flexibility in the classroom. The following techniques have proven successful in elevating and adding variety to SII reports.

Prioritize findings: Students share only the greatest strength, the greatest area for improvement, and the best insight. This encourages participants to rank the significance of their observations and to defend their thinking.

Limit response time: This is especially valuable for sharing oral assessment reports from multiple teams. Challenge participants to limit SII reports (all three parts) to less than 30 seconds.

Build common understanding: Participants are asked to rephrase what they hear in others' SII reports. This process can help clarify muddy ideas as well as emphasize important discoveries.

Focus attention: The instructor directs attention to a narrow set of learning skills or performance criteria. Focusing the assessment helps to minimize motherhood-and-apple-pie statements and instead connects the commentary with specific behaviors.

Rate performance on a scale: As a reference for writing SII statements, the instructor provides several scales or rubrics for ranking performance in key areas. Assigning numerical scores can trigger recollection of supporting evidence that adds more specificity to a written SII report.

Collective feedback: At the end of a reporting session (oral or written), the instructor may use the SII format to comment on the entire spectrum of reports. This serves to reiterate key findings and to establish performance expectations for future reporting sessions.

Concluding Thoughts

One of the driving forces for change in higher education is the need to develop students who are lifelong learners who can adapt to the ever- and rapidly-changing world around us (Brookfield, 1987). Quality self-assessment provides a solid foundation for such self-growth (Gardner, 1998). By giving and receiving SII reports, learners at any level in the curriculum gain the practice and experience they need to become quality self-assessors and self-growers. SII reports support an assessment culture in which students are motivated to perform better and proactively seek to improve their own performance.

References

Brookfield, S. (1987). Developing critical thinkers: Challenging adults to explore alternative ways of thinking and acting. San Francisco: Jossey-Bass.

Gardner, H. (1998). Extraordinary minds. New York: Basic Books.


Section 2
Introductions
Tentative Agenda
Pacific Crest: A Brief Overview
Broad Learning Goals for a Program Assessment Institute
Overview of a Program Assessment Institute
Learning Outcomes for a Program Assessment Institute


Tentative Agenda for Program Assessment Institute


Day 1

Introductions and Why You Are Here
What is a Program Assessment System (PAS)? Overview
Develop a one-sentence description of your program
Break
Determine your program's five-year goals/objectives
Produce 5-8 sentences describing the processes of your program
Assess the quality of your current program
Lunch
Identify your program's key processes and products
Define the appropriate scope
Identify the essential qualities your program seeks to have by 2005
Conduct an external review of other strong programs
Break
Rank the top ten essential qualities
Select the appropriate number of qualities for your situation
Write clear statements as performance criteria
Schedule time for personal consulting and assessments

Day 2

Peer Assessments of design from Day One
Identify factors for each performance criterion
Identify instruments for measuring each factor
Break
Design the assessment component
Lunch
Set the standards
Define overall success
Identify accountability for each measure
Break
Create criteria for a quality program assessment system
Consulting Session addressing current issues
Assess the Institute
Good-byes

18

Pacific Crest: A Brief Overview


Pacific Crest is the favored strategic partner of many higher education institutions when it comes to building human and organizational capacity, whether it be in learning, teaching, mentoring, designing instruction, or assessing. Our work in these areas has resulted in the development and articulation of an educational philosophy called Process Education, which focuses on the development of broad, transferable learning skills.
Implementation of this philosophy means using processes and tools to create new types of environments in which students take center stage and discover how to improve their learning and self-assessment skills within a discipline. This philosophy also supports the current institutional reform movement that calls for a shift in emphasis from an agenda driven by teachers' desires and designs to one focused on student learning outcomes. It consistently seeks answers to the question, "How do students learn most effectively and enduringly?" and then works to translate the answer into teaching practice and, ultimately, institutional policy.

To these ends, Pacific Crest offers a variety of Faculty Development Institutes, Custom Publishing Services, and the centerpiece of our ongoing commitment to Process Education, the Faculty Guidebook.

Broad Learning Goals for a Program Assessment Institute


Pacific Crest's goals for this Institute are to help participants:
- Produce a program assessment system to help meet program goals and standards.
- Design a system which benefits your program (not solely for external purposes).
- Gain understanding of the relationship between measurement, assessment, and evaluation.
- Understand the role of self-study in building a program assessment system.
- Clarify your program's processes, systems, and structures, and identify how these help to produce the qualities of the program.
- Determine the why and how of measures with respect to their importance in a program assessment system (i.e., why quality measures are important and how you produce them).
- Understand the importance of a fair and equitable evaluation system.
- Effectively apply limited assessment resources systematically to obtain the greatest incremental growth in the program.

Program Assessment Institute


This institute helps participants clarify a program's goals, determine the quality desired in a program, and identify key processes which will help produce those qualities. Annual assessments require a program to measure, analyze, and document its quality and provide action plans for making future improvements. For this reason, it is important that participants be able to identify and define measures for both processes and outcomes. This institute is most effective when multiple teams from a given institution engage in the design of assessment systems for various programs that may be part of a more comprehensive system.

Learning Outcomes
Competencies

- Write an essence statement that clearly illustrates why your program is essential
- Write a set of performance criteria that are important and meaningful for the program
- Write an annual assessment report
- Describe the key processes used to produce the program quality

Movement

- Strengthen the design process of a program assessment system
- Improve an assessment mindset

Accomplishment

Produce a draft of a program assessment system that can start being implemented in the current academic year

Experience

Participants will work within a team associated with a program that is very important to them. The team will revisit the many reasons its members have dedicated years of their lives to the program and continually try to improve it. The team will find its program's essence, reduce its scope, and want to do more program redesign as its stakeholders clarify where the program's future goals are directing it. The team will distribute its responsibility for program quality based upon a set of clear performance criteria and measures. The use of assessment will help increase bonds and respect across programs.

Integrated Performance

Coach a program team in the design process to:
1) Clarify its essence
2) Inventory stakeholders
3) Reduce scope
4) State current and future goals
5) Describe program assets
6) Describe program processes
7) Write a set of performance criteria
8) Build a table of measures (8-12 measures) that defines when and where each measure is collected and stored, the instruments used to do the measuring, benchmarks of past performance, future targets, and the person responsible for each measure (one possible structure is sketched after this list)
9) Learn how to produce an annual assessment report

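The handbook does not prescribe any particular format for the table of measures described in item 8; teams keep it in whatever form suits them. Purely as an illustration, the Python sketch below shows one way the fields named above might be recorded. The class, its field names, and the sample entry are hypothetical and are not part of Pacific Crest's materials.

# Illustrative only: one possible record layout for a program's table of measures.
# The fields mirror the elements listed in item 8 above; the sample entry is invented.
from dataclasses import dataclass

@dataclass
class Measure:
    criterion: str           # performance criterion this measure supports
    what_is_measured: str    # the factor or attribute being measured
    instrument: str          # survey, rubric, exam, audit, etc.
    when_collected: str      # point(s) in the academic year when data are gathered
    where_stored: str        # repository for the raw data
    benchmark: float         # past performance on this measure
    target: float            # future goal for this measure
    responsible_person: str  # champion accountable for collecting and reporting it

# A hypothetical example; a full table would contain 8-12 such entries.
capstone_quality = Measure(
    criterion="Graduates communicate technical work effectively",
    what_is_measured="Average capstone presentation rubric score",
    instrument="Department presentation rubric (4-point scale)",
    when_collected="End of spring semester",
    where_stored="Program assessment shared drive",
    benchmark=2.8,
    target=3.2,
    responsible_person="Capstone course coordinator",
)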

Section 3
The Transformation of Education
The Transformation of Education (introduction)
The Transformation of Education (table)
Overview of Process Education
Principles of Process Education
Compass of Higher Education
Group Exploration Activity: Exploring Educational Transformation
Key Performance Areas and Performance Criteria
Recommended Sequencing of Pacific Crest Institutes

We invite you to view the Learning Object for the Transformation of Education at:

www.pcrest2.com/transformation/lo

The Transformation of Education


Over the past 25 years or so, there have been tremendous sociocultural (economic, political, etc.) forces pushing, arguing, and pleading for change across the entire continuum of education. What has and continues to emerge at a seemingly ever-increasing pace is not an overarching model or even a singular and coherent description of what education should be. There is, however, much common ground where values and ideals are shared across historically disparate disciplines and interests. These shared values have been articulated and advocated by thinkers and practitioners such as Paulo Freire, Lev Vygotsky, Maria Montessori, Carl Rogers, Howard Gardner, Daniel Goleman, Jerome Bruner, John Dewey, and Thomas Friedman, among many others. These shared values appear, sometimes only implicitly, in current initiatives such as No Child Left Behind and 21st Century Skills.

What these different perspectives all share is a belief in the potential growth in performance of learners if new roles are assumed by teachers and learners, with each placing emphasis on processes which differ from those commonly and traditionally used in the past. The implication of these new roles and directions yields a much-transformed view of educational practices and attitudes. The table which follows captures the major dimensions of education and shows both current or traditional practices and attitudes as well as transformed practices and attitudes. Because transformation is more than just a simple change, and change is typically something that human beings resist, there are some common affective or emotional responses to those changes. These are likewise available in the table.

The final aspects of the table are seen in the right-most column: Cultural Assumptions and Tensions. This column provides a list of thought-provoking questions designed to poke and prod buried or unconscious assumptions. None of these questions have right or wrong answers; the point is that whatever your answer, when you respond, you're doing so on the basis of a pre-conceived notion or value. Those pre-conceived ideas or values can form a kind of paradigm that effectively limits your ability to identify alternative ways of seeing and acting. Assumptions are like habits; breaking them or changing them requires first becoming aware of them!

A final word: society and culture not only dictate who we are (how we see and define ourselves and our value) to a very great degree, but underlie some of our most basic assumptions about the world around us. We have learned to see and conceive of ourselves, others, the world, values, morals, practices, etc., on the basis of our society and culture. Getting to the point where we become aware of some of those assumptions, and thus able either to leverage them for greater success (however you choose to define it) or to begin to step beyond them to achieve that success, is a tremendous challenge, and likely to prove nearly as uncomfortable as making the shift from what is to what can be.

The Transformation of Education


Dimension: Delivery (mode)
Current Tendency: Presentation (learning by listening)
Common Affective Responses: "Engagement requires energy and risk-taking! Just tell me what I need to know; don't make me struggle to learn."
Future Direction: Activity (learning by doing; discovery as education; Montessori method; Jerome Bruner; Confluent Education)
Cultural Assumptions and Tensions: Can knowledge be given to someone? When? How? Do you still have the knowledge others gave to you? How do you know? Can you learn everything you need to know through activity? Do you ever learn things that are meaningful by listening? Why aren't sermons interactive?

Dimension: Context of Performance
Current Tendency: Private (shame-based culture, where failure is private and success is public; fear of judgment)
Common Affective Responses: "What if they think I'm stupid? What if they laugh at me? I'm too embarrassed to share my work until I know it's perfect."
Future Direction: Public (nobody is perfect, we all make mistakes; support of a team or friends)
Cultural Assumptions and Tensions: Do you enjoy seeing people fail? If not, why encourage them to perform publicly? Why don't art museums display just anyone's art, if we all make mistakes? Are you willing to appear on a reality show? If not, why not? How do you handle it when you make a mistake in front of your students? Are you willing to even put yourself in that position?

Dimension: Ownership
Current Tendency: Directed (tabula rasa; students as blank slates)
Common Affective Responses: "It should be up to my teacher to decide what I'm going to learn; how could I possibly know?"
Future Direction: Self-Directed (Freire and Dewey; learner experience is the foundation of any meaning constructed)
Cultural Assumptions and Tensions: Do people really want to improve, if it's hard work and hurts? Or is ignorance truly bliss? Would people seek education if it weren't mandatory/required/strongly urged? Do you? If you can read, study, and learn on your own, why go to school?

Dimension: Control
Current Tendency: Faculty-centered (authority, tyranny)
Common Affective Responses: "I'm afraid of that level of responsibility; please just give me what you think I need."
Future Direction: Learner-centered (democracy; servant leadership; Piaget)
Cultural Assumptions and Tensions: Who is worthy of being in charge? Are all voices worth listening to? Do all opinions carry the same weight? Is the choice of control one between authority and anarchy? If you met someone who you honestly believed was a better teacher than yourself, would you be willing to give him or her your job? Why not?

Dimension: Social
Current Tendency: Individual (education as an individual enterprise; competition of individuals creating great individuals; survival of the fittest)
Common Affective Responses: "What if I drag down my team? How can I hide in such a small group?" or "I know I'm smart enough to earn my A; don't interfere by making me responsible for others."
Future Direction: Collaborative (Dewey's education in society; Sesame Street: "Cooperation!"; it takes a village)
Cultural Assumptions and Tensions: Can you learn from someone else's experience? Are you more important than a stranger you've never met? Does the good of the many truly outweigh the good of the individual? Always? Is grading on a curve fair? Is it ok to punish several people for the crimes/sins of one?

Dimension: Expectations
Current Tendency: Low (sufficiency); education is wasted on some people; the "C" student
Common Affective Responses: "This is hard! I can't do it! It's too much! Don't you know I also have a family/job/other courses? Don't ask for so much from me!"
Future Direction: High (unlimited); positive thinking
Cultural Assumptions and Tensions: Are all people worthy of the same amount of time and effort? When is the last time you had a conversation with your janitor/mechanic about anything other than janitorial issues/vehicle issues? How often are you surprised by your students? If you are, does this say anything about your expectations?

Dimension: Goal
Current Tendency: Learning (content mastery; rote learning; memorization; fill-in-the-blank and multiple-choice questions)
Common Affective Responses: "Just tell me what I need to know for the test/to get an A/to graduate. Give me specific information; don't make me solve problems."
Future Direction: Learning to Learn (creativity; extracting from context to universals (Platonic levels of abstraction); learning skills; an essay/project)
Cultural Assumptions and Tensions: How did you learn your times tables? Your ABCs? Did you learn number theory or study phonetics first? Do you use what you learned in high school biology in your everyday life? If not, why don't we stop teaching biology and teach more reading or even social skills? Ditto for calculus, trigonometry, and most other courses you don't use. Is there a difference between who should engage in each kind of learning? What about a teacher? An electrician? A surgeon?

Dimension: Efficacy of Learner
Current Tendency: Level projected by educator (Aristotle's natural masters and natural slaves; determinism)
Common Affective Responses: "Let me be comfortable with what I think I can do; I'm ok with sufficient performance. I get by."
Future Direction: Potential not presumed to be limited (free will; Albert Bandura)
Cultural Assumptions and Tensions: Do you believe that everyone is actually born with unlimited potential? What about individuals who score low on an IQ test? Do they have the same potential as someone who scores as a genius? If potential is unlimited, why require exam scores before letting someone into college? Why not let everyone in who wants to learn?

Dimension: Efficacy of Educator
Current Tendency: Success is up to the student (nature)
Common Affective Responses: "I can't possibly be held accountable if a student chooses not to learn. I'll teach, but it's up to my students to succeed or fail."
Future Direction: Student success is up to ME (nurture)
Cultural Assumptions and Tensions: Do people have the right to choose failure? Do YOU have the right to decide what someone else should or should not do? Can you force someone to succeed?

Dimension: Modeling
Current Tendency: Telling (hypocrisy; "do as I say, not as I do")
Common Affective Responses: "I have to actively observe rather than just read or listen; haven't I already got enough to do?"
Future Direction: Showing ("Be the change you want to see")
Cultural Assumptions and Tensions: Are your actions in accordance with your beliefs? Is it OK to say one thing and do something else? Ever?

Dimension: Relationship
Current Tendency: Emotionally distant (science versus social; the mind/heart gap; thinking versus feeling; objectivity)
Common Affective Responses: "If he/she cares whether I learn and grow, I risk disappointing him/her. I want the safety of being anonymous!"
Future Direction: Emotionally invested, empathic (Carl Rogers, person-centered; educating the whole child)
Cultural Assumptions and Tensions: Do emotions ever get in the way of other considerations? Do you respect someone who cries when they talk, or are tears a sign of weakness? Does a leader ever have to make decisions that hurt people? Are you willing to do that? Is your classroom a place where personal problems are left at the door? Are students capable of doing that? Are you? Is that a good thing?

Dimension: Challenge
Current Tendency: Enabling (easy success improves self-image; struggle and pain are always bad; unconditional approval)
Common Affective Responses: "Stop pushing me! You're mean... why do you have to make it so hard?!"
Future Direction: Empowering (raising the bar; teach a man to fish instead of giving him a fish; n + 1; struggle is good; tough love)
Cultural Assumptions and Tensions: If you see a certain change as good for someone, how can it be good for them, if it causes them pain? What gives you the right to decide it's ok or even good for them to suffer? Isn't happiness better than suffering? Is it better to BE good or to FEEL good? Are they even related?

Dimension: Design
Current Tendency: A script or canned design; rigid/nonresponsive/static (tradition)
Common Affective Responses: "I don't like change; I need to rely on things staying the same. Aren't we supposed to stick to a script?"
Future Direction: A design that maximizes opportunity, is responsive, evolvable (jazz; success in evolutionary terms)
Cultural Assumptions and Tensions: Is it ok to take a risk when others, beside yourself, may suffer the consequences? Is predictability more important than a 50/50 chance of improvement? When and why? Why do we adhere to outdated laws and beliefs (i.e., when is the last time you referred to the sun coming up or going down)? Presuming you believe that the Earth rotates and moves around the sun, is it because we've failed to evolve our perspective? What are we clinging to?

Dimension: Feedback/Reporting
Current Tendency: Evaluation (judgment; pass/fail)
Common Affective Responses: "Assessment is great, but I NEED evaluation to know if I measure up (the point of my learning is up to my teacher, not me). My learning is all about the grades I get and how others see me."
Future Direction: Assessment (learning from mistakes; improvement)
Cultural Assumptions and Tensions: Do you have the right to judge another person? What is the fundamental difference (if any) between testing a product and testing a person? Can you afford to have a doctor who learns from his mistakes? Or do you want a doctor who is a doctor because he didn't make any mistakes (passed, in some sense)?

The future direction of education is the goal of Process Education. The term "Process Education" was first used in the early 1970s and referred to the process of educating students rather than the end product of that education. In 1994, Pacific Crest used the idea of Process Education as a springboard to develop a philosophy that encompassed and impacted each of the key dimensions of education, as shown in the Transformation of Education table.

Process Education
Process Education is a performance-based philosophy of education which integrates many different educational theories, processes, and tools in emphasizing the continuous development of learning skills through the use of assessment principles in order to produce learner self-development. This philosophy can be expressed through eight guiding principles that address the dimensions of education:

FACULTY PERFORMANCE PRINCIPLES
- A Process Educator fully accepts responsibility for facilitating student success.
- In a quality learning environment, facilitators of learning (teachers) focus on improving specific learning skills through timely, appropriate, and constructive interventions.
- Mentors use specific methodologies that model the steps or activities they expect students to use in achieving their own learning goals.
- A Process Educator continuously improves upon existing theories, processes, and tools using active classroom observation and research.

STUDENT PERFORMANCE PRINCIPLES
- Every learner can learn to learn better, regardless of current level of achievement; one's potential is not limited by current ability.
- Although everyone requires help with learning at times, the goal is to become a capable, self-sufficient, life-long learner.
- An empowered learner is one who uses learning processes and self-assessment to improve future performance.
- To develop expertise in a discipline, a learner must not only develop a specific knowledge base in that field, but must also acquire generic, life-long learning skills that relate to all disciplines.

The Compass of Education provides a concept map which describes the post-transformation state of education: Process Education in practice. A detailed breakdown of the Compass can be found on the Pacific Crest Learning Objects page: www.pcrest.com/LO

Note that there are more fundamental processes, such as communicating or processing information, that underlie many, if not all, of the processes in the outer ring. While these fundamental processes are indeed critical to the larger and higher-level process clusters, to keep this concept map from being overwhelmingly complex, only the process clusters are shown. If you are interested in a deeper examination of the fundamental processes, the Classification of Learning Skills (Faculty Guidebook modules 2.3.3, 2.3.4, 2.3.5, and 2.3.6) provides an excellent place to begin.

Exploring Educational Transformation


identifying assumptions, checking perceptions, being open

Over the last 20 years, there have been many efforts to transform and improve learning, teaching, instructional design, assessment, and other educational processes, across the educational spectrum and at all instructional levels. The Transformation of Education table provides a perspective from which current or traditional practices (including some of the sociocultural and even personal assumptions that encourage those practices) as well as potential future direction of practices may be viewed.

1. Understand the dimensions of education as portrayed in the table and apply these to your educational experiences. 2. Appreciate the relationship between current tendencies and the need/desire for movement towards a future direction (shifting education from the red to the green). 3. Appreciate the common affective responses that accompany the shift from current practices. 4. Begin to uncover and appreciate the assumptions (personal, social, and cultural) that underlie current tendencies and work against change.

1. Ability to effectively explain the Transformation of Education to a peer.
   Attribute 1: Components are identified and articulated
   Attribute 2: Relationships are understood and articulated

2. Embrace the Transformation of Education table as a framework for analyzing student, educator, and/or organizational performance, change, and pushback.
   Attribute 1: Contextual and practical identification of one dimension of education (including the assumptions behind the practice) that will engage others
   Attribute 2: Compelling question about the Transformation of Education for further discussion throughout the institute
   Attribute 3: Growth and/or transformation potential is outlined

1. Working within your assigned team, analyze the Transformation of Education table, available both online (www.pcrest2.com/transformation) and on the preceding pages of your handbook.

2. Answer the Critical Thinking Questions. 3. Produce a discovery, based upon your personal and collective educational experience, that demonstrates some aspect of the table. Teams should be ready to share this with the group in general. 4. Develop an inquiry question you would like other teams or your facilitator to answer.

1. Select three dimensions and give examples of the current tendency in practice (i.e., what does that practice LOOK like) as well as the future direction.

2. In which three dimensions is change most critical in order to empower students? Why?

3. In which three dimensions is change most critical in order to empower educators? Why?

4. Which five dimensions are currently most important for your school? Why? For each of those dimensions, identify at least one assumption which is either encouraging or discouraging transformation in that dimension.

Activity End


Criteria for Key Performance Areas


Pacific Crest has designed its Faculty Development Program to grow performance not only in faculty but also in staff and administration; our goal is to produce quality performers in higher education across 16 key performance areas.
The Program Assessment Institute is particularly focused on development in the performance areas of Designer, Assessor, and Measurer.
Self-Grower: Consistently self-assesses in order to self-mentor one's own performance and growth while increasingly challenging oneself and mentoring others.

Servant Leader: Cultivates a clear vision of a desired future and ably shares it through understandable stories; develops plans others can follow and models behavior for others while conveying belief in their ability and helping them succeed in realizing this vision.

Change Agent: Proactively convinces others that a particular project/effort is worthwhile and will be successful; persists and takes risks when facing difficulties that would deter most people.

Professional Developer: Views the development and empowerment of people as the engine for change, both individually and on the organizational level; realizes goals in the strategic plan; develops and facilitates effective programs to achieve these ends.

Problem Solver: Ably identifies and defines problems frequently not seen by others; identifies issues and clarifies assumptions necessary to solve the problem; and effectively closes the gap between expectations and reality by using previous solutions to build upon past successes.

Technologist: Constantly monitors state-of-the-art technologies; learns quickly; selects appropriate tools; increases performance by creatively applying technology in innovative ways.

Collaborator: Values the synergy of relationships and teams; plays a variety of roles effectively while helping others perform their roles effectively; compromises self for the betterment of all.

Teacher: Uses a learner-centered approach to help learners prepare learning plans; cultivates productive learning communities; bonds with learners; helps learners meet their intended outcomes through the use of embedded assessment.

Facilitator: Inventories and monitors collective needs; helps synthesize a clear set of outcomes; focuses on process rather than content; shares ownership in making decisions; and constantly strives for improved quality by strengthening the process.

Life-long Learner: Constantly seeks additional knowledge by systematically using professional development plans; leverages experts and resources; assesses own learning performance; and validates own learning.

Researcher: Identifies and states quality research questions by operating from a consistent inquiry mindset; uses appropriate methods; effectively articulates findings to a community of scholars.

Measurer: Identifies critical qualities; creates performance criteria; identifies the best items to measure; effectively times when and how to measure with appropriate accuracy and precision.

Assessor: Focuses on the assessee's needs; collaboratively designs an assessment process; stays focused on the chosen design through careful observation; analyzes the data for meaning; uses interactive feedback to solidify strengths; offers clear action plans; shares insights to produce significant understanding without being judgmental.

Evaluator: Knows where value is essential; designs the appropriate times for determining whether or not value is being produced by setting clear expectations and standards; uses unbiased judgments to reward performance.

Designer: Clearly defines desired results; creates precise dimensional learning outcomes; defines the activities and processes used to produce the results; identifies ways to embed assessment in order to increase quality; produces an evaluation system to assure desired results.

Mentor: Enters into a defined relationship with respect for the potential of the mentee; plays the role of coach and advisor by helping establish the mentee's personal goals; identifies activities and means to grow performance to achieve the desired results within a specific time period.

Mapping between Key Performance Areas and Pacific Crest Institutes


Each of Pacific Crest's Faculty Development Institutes focuses upon at least three key performance areas:
Institute | Area 1 | Area 2 | Area 3
Teaching Institute | Teacher | Learner | Collaborator
Learning to Learn Camp | Mentor | Assessor | Facilitator
Advanced Teaching Institute | Researcher | Facilitator | Self-grower
Research on SoTL Institute | Researcher | Measurer | Collaborator
Chairpersons Institute | Servant Leader | Change Agent | Professional Developer
Program Assessment Institute | Designer | Assessor | Measurer
Assessment Institute | Assessor | Self-grower | Evaluator
Performance Measures Institute | Measurer | Researcher | Evaluator
Program Design Institute | Designer | Collaborator | Problem Solver
Course Design Institute | Designer | Teacher | Evaluator
Activity Design Institute | Designer | Facilitator | Problem Solver
Interactive Learning Systems Institute | Technologist | Learner | Teacher
Designing Online Classes Institute | Designer | Technologist | Facilitator
Facilitating Online Learning Institute | Facilitator | Technologist | Teacher
Designing Learning Objects Institute | Designer | Technologist | Learner
Leadership Institute | Servant Leader | Change Agent | Problem Solver
Faculty Development Institute | Professional Developer | Servant Leader | Change Agent
Strategic Planning Institute | Measurer | Collaborator | Designer
Student Success Institute | Mentor | Self-grower | Teacher
Facilitators Institute | Facilitator | Professional Developer | Mentor

Recommended Developmental Sequencing


Pacific Crest has designed its Faculty Development Program to grow the performance of faculty, staff, and administration in higher education to produce quality performers. Below is our suggested sequencing of Faculty Development Institutes, in order to maximize this growth.

Assessment Institute → Program Assessment Institute → Performance Measures Institute → Strategic Planning Institute → Leadership Institute → Chairpersons Institute → Program Design Institute

Teaching Institute → Faculty Development Institute → Advanced Teaching Institute → Course Design Institute → Designing Online Classes Institute → Facilitators Institute → Facilitating Online Learning Institute

Teaching Institute → Course Design Institute → Activity Design Institute → Interactive Learning Systems Institute → Designing Learning Objects Institute → Research on the Scholarship of Teaching and Learning Institute

Student Success Institute → Assessment Institute → Mentoring Institute → Activity Design Institute → Learning to Learn Camp

Teaching Institute → Assessment Institute → Student Success Institute → Mentoring Institute → Learning to Learn Camp

Section 4
Course Design for the Program Assessment Institute


Course Design: Program Assessment Institute


Step 1: Long-term Behaviors
1. Annually produces an annual assessment report that helps to produce action plans for program improvement as part of annual operational planning, along with documentation of results that exceed stakeholders' expectations in critical areas of performance
2. Consistently measures key attributes of the program so that all stakeholders are aware of ongoing quality and how to improve it
3. Teams with other members of the program to improve operations, results, and student learning so that all program stakeholders feel better about the program each year
4. Chooses to assess more often and be less evaluative when the purpose is to improve quality
5. Is consistently able to say "no" to pressures to expand program responsibilities when expanding the program without additional resources would result in lower-quality performance

Step 2: Course Context


Course Intentions

1. Get an organization to embrace a standard and effective way to design and implement a consistent approach to program assessment
2. Get faculty and staff to change attitudes about assessment from a compliance attitude to a self-directed approach for improving quality
3. Get programs to buy in to producing an annual assessment report for integrating program assessment annually
Broad Learning Goals

1. Differentiate between assessment and evaluation
2. Analyze a program's essence, scope, goals, stakeholders, results, and processes
3. Write performance criteria for a program
4. Identify what really matters to measure
5. Build a table of measures
6. Write an annual assessment report

Step 3: Learning Outcomes


Competencies

- Write an essence statement that clearly illustrates why your program is essential
- Write a set of performance criteria that are important and meaningful for the program
- Write an annual assessment report
- Describe the key processes used to produce the program quality

Movement

- Strengthen the design process of a program assessment system
- Improve an assessment mindset

Accomplishment

Produce a draft of a program assessment system that can start being implemented in the current academic year

Experience

Participants will work within a team associated with a program that is very important to them. The team will revisit the many reasons its members have dedicated years of their lives to the program and continually try to improve it. The team will find its program's essence, reduce its scope, and want to do more program redesign as its stakeholders clarify where the program's future goals are directing it. The team will distribute its responsibility for program quality based upon a set of clear performance criteria and measures. The use of assessment will help increase bonds and respect across programs.

Integrated Performance

Coach a program team in the design process to:
1) Clarify its essence
2) Inventory stakeholders
3) Reduce scope
4) State current and future goals
5) Describe program assets
6) Describe program processes
7) Write a set of performance criteria
8) Build a table of measures (8-12 measures) that defines when and where each measure is collected and stored, the instruments used to do the measuring, benchmarks of past performance, future targets, and the person responsible for each measure
9) Learn how to produce an annual assessment report

Step 4: Knowledge Table


Concepts: Essence, Performance Criteria, Scope, Attributes, Processes, Assessment vs. Evaluation, Stakeholders, Learning Outcomes (all five types), Means

Processes: Program Design, Assessment, Teaming

Tools: Annual Assessment Report, Table of Measures, SII Method, Exemplars

Contexts: Administrative programs, Academic programs, Strategic planning, Operational planning

Ways of Being: Assessment Mindset, Process-oriented, Team-oriented

Step 5: Themes for the Course


Assessment, Planning, Design, Teaming

Step 6: Methodologies
Program Assessment System Design Methodology
Assessment Methodology
Teamwork Methodology
Writing an Annual Assessment Report

Step 8: Learning Skills


Cognitive: identifying assumptions, ensuring compatibility, validating completeness, strategizing, envisioning, subdividing
Social: goal setting, planning, achieving consensus, checking perceptions
Affective: being playful, seeking assessment, challenging personal standards, being open

1. Planning 2. Recruiting 3. Achieving consensus 4. Compromising 5. Accepting responsibility 6. Being non-judgmental 7. Documenting 8. Sharing vision 9. Checking perceptions 10. Identifying values 11. Challenging standards 12. Seeking assessment 13. Inquiring 14. Quantifying 15. Benchmarking


Step 17: Performance Criteria


Designing (client focused; blends process and content; systems thinker; cleverly integrates solutions to sub-problems; documenter):
Build a team approach to the design process that is inclusive, collaborative, distributes responsibilities, and collectively works towards improving program quality.

Assessing (sets criteria; observes with detail; analyzes against criteria; creates relevant action plans; provides interactive feedback; has an assessment mindset):
Produce a draft of a program assessment system that can be assessed by other stakeholders and is ready for implementation in the current year.

Measuring (identifies qualities; creates performance criteria; collects performance data; creates effective measurement instruments; measures what matters):
Write effective, inspiring, clear performance criteria that set expectations and capture most of the variation in the performance. Produce a table of measures that really matter, with means and instruments clearly defined, benchmarks and targeted goals set, and a champion for each measure clearly identified.

Step 18: Performance Measures


Ownership of the assessment process
Collaboration
Design document
Future plan for implementation

Step 19: Assessment / Evaluation System


Two peer assessments of other program assessment systems
Assessment of each program assessment system during the evening of Day 1
Plan of action at the end of the event for implementing the program assessment system

Step 20: Course Activities


Theme Activity Type Learning Skills In or Out of Class Purpose

Schedule

Activity Name Knowledge Table Item Step 9

Step 7 Step 11

Step 12

Steps 9 & 10 Steps 8 & 14


Time

Step 7

Day 1 Morning Design Case Study

Overview of Program Assessment System Design Assessment Interactive Lecture

Processes - Program Assessment System Design

Day 1 Morning

Assessment versus Evaluation Planning Planning Goal setting, planning

Day 1 Morning Teaming Teaming Design Planning Design Assessment Design Assessment Brainstorming Collaborative Brainstorming Project Demonstration Project Review Project

Team Outcomes


Ways of Being - Designer - Assessor - Measurer

Day 1 Morning

Essence

Day 1 Morning

Stakeholders

Day 1 Morning

Scope

Day 1 Morning

Goal Setting

Day 1 Afternoon

Assets and Processes

Day 1 Afternoon

Performance Criteria

Day 1 Afternoon

Table of Measures

Day 1 Afternoon

What We Have Learned

Schedule

Activity Name Theme Learning Skills Knowledge Table Item Step 9

Activity Type

Step 7 Step 11
Assessment Planning Case Study Peer Assessment

Step 12

Steps 9 & 10 Steps 8 & 14


Time

In or Out of Class Purpose

Step 7

Day 2 Morning

Peer Assessment

Day 2 Morning

Annual Assessment Report Teaming Planning Teaming Assessment Assessment Closure Peer Assessment Assessment Planning Project

Day 2 Morning

Adding the Evaluation Component

Day 2 Afternoon

Planning for the Future

Day 2 Afternoon

Preparing Your Action Plan


Day 2 Afternoon

Peer Assessment

Day 2 Afternoon

Lessons Learned

Section 5
Specifying and Defining the Program
Faculty Guidebook: 1.5.3 Defining a Program

1.5.3 Defining a Program


by
Faculty Development Series

Daniel K. Apple (President and Founder, Pacific Crest), Steven W. Beyerlein (Mechanical Engineering, University of Idaho), and Kelli Parmley (Director of OBIA, University of Northern Colorado)

Defining a program is the preparatory stage in designing a program assessment system. This module describes the necessary steps for defining a program and offers guidance for implementing those steps. The first step in defining a program is to assemble the team that will define it. The team drafts a statement that describes the program's essence and scope, and then participants explore current and future goals. Finally, they identify key assets and products (results) along with the processes that contribute to the quality of the program. In order to obtain a clear picture of where a program is and where it should go, it is critical that the definition of the program be undertaken collaboratively with key stakeholders in mind. For an overview of a complete program assessment system, see 1.5.2 Methodology for Designing a Program Assessment System.

Setting the Stage

In a fast-paced environment, the temptation may be to give these first steps little attention or to avoid collaborating with colleagues. However, defining a program sets the stage for designing the rest of the assessment system. Greater clarity and consensus about the program definition lend strength to the remaining steps in the process for designing an assessment system, and they establish a solid foundation for developing a complete program assessment system. A clear program definition is an important bridge between the present (what you are) and the future (what you want to become). Defining a program helps link assessment to other important campus processes, including strategic and operational planning.

Assembling the Team

All members responsible for a program should be involved in defining it. While the available resources and team size may affect the assembly and management of a team, getting many different perspectives serves to strengthen the process of defining a program. In situations where assembling a team is not feasible, another strategy is to assemble a smaller core group of individuals to produce an initial draft and then solicit assessment feedback from all program members. In smaller programs with only a few individuals involved, other strategies to get additional perspectives may be necessary, such as collaboration with a related campus program or a similar program at a different institution.

Adequate time should be set aside and allocated for the single task of defining the program. Merely adding time to department meetings will not provide the necessary time or focus required by the team. The location chosen for team members to assemble should be free from interruptions and distractions, away from immediate work, preferably in a retreat environment that will enhance the ability of participants to work through and develop consensus. Other things to consider include logistical items such as food, availability of laptop computers and printers, and the comfort level of the physical surroundings. To help facilitate the process, materials should be given to participants in advance. Items to provide include existing descriptive program materials, materials from other programs, background on program assessment (including 1.5.2 Methodology for Designing a Program Assessment System), and guiding questions that will prompt thinking prior to getting together.

Step 1: Writing an Essence Statement

The essence statement is a succinct, single sentence that articulates the core values of a program. The statement should be comprehensive, representing the whole program, not a subset. This statement should also give consideration both to where the program currently is and to where the assembled team feels it should be. Those who draft the statement should ask themselves what it is that makes their program distinct or unique. What is it that they value in their program?

Step 2: Identifying Program Stakeholders

Stakeholders are those individuals and groups who have a vested interest in a program. To obtain a perspective beyond that of those directly involved in a program, develop a list of stakeholders along with a description of their interest in the program. The following questions help to identify possible stakeholders.
- Who employs our students?
- What graduate schools do our students attend?
- Who funds our program?
- Who are our students and where do they come from (e.g., predominantly native freshmen or transfers)?
- Who has linkages to our program (e.g., student affairs, education programs)?

Step 3: Specifying Program Scope

Specifying the scope of a program, both in terms of what a program is and what it is not, establishes the boundaries of a program. While there is typically agreement about the core of a program, there are often gray areas where participants and stakeholders differ about the boundaries of the program. The more gray areas, the more difficult it is for a program to produce the quality it desires. Minimize the gray areas by clearly specifying, in writing, the scope of a program. Focus on and include those items for which there is agreement. Items outside the scope can be explored as part of strategic planning.

Step 4: Listing the Top Five Current and Future Goals

Developing a limited list of programmatic goals as a collaborative activity not only provides the necessary foundation for assessment but is also part of the review processes for most accrediting bodies (Middle States, 2002; Banta & Palomba, 2001). When identifying goals, consider a time frame of three to five years and identify the top five program goals to accomplish both in the upcoming year and in the future. For static programs, use a longer time frame (five years). For relatively new and rapidly changing programs, use a shorter time frame (three years). As program designers begin brainstorming goals and then refining and narrowing the list, the goals will often fall into the appropriate categories, such as current (e.g., the upcoming year) and future (e.g., three years from now). The list of goals should include both student learning goals and broader program goals. Ideally, the goals should be specific, to minimize multiple interpretations, and quantifiable, in order to enhance clarity and focus.

Step 5: Identifying the Top Five Products or Assets

All programs yield important products. These products may relate to students or they may relate to other aspects of the program such as advising or curriculum. Programs may have assets that are distinctive, unique, and a core feature of the program. Give consideration to assets as well as products, as they are likely to require some form of regular assessment to ensure their quality. The list of products and assets should be important to the program and should be explicitly described. Identifying and prioritizing a program's products and assets will clarify, in later steps, the most appropriate measures and instruments for assessment.

Step 6: Describing Key Processes

The prior step identifies the key products or assets of the program. However, there are also key processes that contribute to a program. Consider and describe the processes that will be needed to accomplish the goals established in Step 4. Begin by listing the processes associated with the program. Explain the processes as mechanisms (how they transpire), describing them from multiple perspectives to enhance their meaning. Be sure to provide an overview of the entire process. Identify three to five components for each process and write the description as a sequence that connects the components. Finally, include descriptions that explain how the process connects to outcomes, and identify ownership and responsibility for the process.

Concluding Thoughts

Do not move on to the next steps of the assessment system design until you are satisfied that you have adequately defined the program. Periodically revisit the program definition to ensure that the later steps are consistent with it. Prior to each meeting or retreat, ensure that designated individuals are responsible for taking detailed notes on the work that is done. As soon as possible after each meeting, share the work products with the team for peer assessment. An environment such as Blackboard or WebCT can efficiently facilitate this process and serve as a central location for maintaining documents. While it is important to achieve closure and move on to the next steps, realize that the process for designing an assessment system is iterative, not linear.

References

Banta, T. W., & Palomba, C. A. (2001). Assessing student competence in accredited disciplines: Pioneering approaches to assessment in higher education. Sterling, VA: Stylus.

Middle States Commission on Higher Education. (2002). Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Philadelphia: Author.

Section 6
Establishing Program Quality
Background on Performance Criteria
Faculty Guidebook: 1.5.4 Writing Performance Criteria for a Program

Performance Criteria
Areas of performance, clearly and explicitly defined, which allow all involved (performer, assessor, evaluator, stakeholder, etc.) to have a mutually understood set of expectations by which performance may be measured, assessed, and/or evaluated. Performance criteria provide simple-to-understand, realistic, and measurable values of excellence.

For institutions, faculty, and students, performance criteria serve as a contract, translating implicit expectations into explicit statements about what levels of effort and achievement are expected and what types of performance will be valued and recognized. It is important to recognize that performance criteria are applicable for all developmental roles, from the program level (Institutional Development) to the course level (Professional Development) to the activity level (Learner and Intellectual Development). It is critical, therefore, that course- and activity-level performance criteria support the broader program-level performance criteria.

Some important characteristics:
- Performance criteria are meant to provide a mental image of what best practices look like in a specific role or group charter.
- Not all performance criteria are applicable to every situation, individual performer, or group of performers, and they should be modified to be applicable to individual performances.
- Performance criteria are works-in-progress and should be recognized as such. Although sufficient time should be allocated to creating the criteria, it is reasonable to expect these criteria to evolve through regular use.
- Performance criteria support an assessment philosophy.
- All performance criteria should have two to three measures (also called attributes); measures are the smaller elements used to measure performance (see the sketch below).

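Because each criterion is expected to carry two to three measurable attributes, it can help to see that relationship laid out concretely. The short Python sketch below is only an illustration of the structure described above; the class names, the example criterion, and its attributes are hypothetical and not drawn from the handbook.

# Illustrative sketch of the criterion-to-attributes relationship described above.
# Class names and the example content are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str          # the smaller element used to measure the criterion
    how_measured: str  # instrument or evidence used for this attribute

@dataclass
class PerformanceCriterion:
    statement: str  # clear, explicit statement of the expected performance
    attributes: List[Attribute] = field(default_factory=list)  # two to three per criterion

# Hypothetical example of one criterion with three attributes.
advising = PerformanceCriterion(
    statement="Every student receives timely, accurate advising that keeps them on track to graduate",
    attributes=[
        Attribute("Advising accessibility", "Average days to get an advising appointment"),
        Attribute("Advising accuracy", "Audit of degree plans against catalog requirements"),
        Attribute("Student satisfaction", "Annual advising survey item, 5-point scale"),
    ],
)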

1.5.4 Writing Performance Criteria for a Program


by
Faculty Development Series

Marta Nibert (Occupational Therapist and Educational Consultant)

In the process of creating a quality program assessment system, all stakeholders need to collaborate in the creation of clear, concise performance criteria that can be used to guide assessment of the program. This module offers a series of steps that stakeholder teams can follow to generate high quality performance criteria for a program. These steps involve brainstorming current and future program qualities, identifying qualities that will have the largest bearing on the future design of the program, and selecting critical areas for measurement. Key findings are ultimately distilled into a finite set of readable statements that express the essential nature of the program being assessed, along with key indicators of how its success will be measured. These statements about performance of a program are the performance criteria for the program.

Role of Stakeholders

Accrediting bodies expect programs to involve their constituents (students, faculty, administration, alumni, and industry supporters) in the establishment and maintenance of the program objectives (Accreditation Board for Engineering and Technology, 2002). Beyond the accreditation visit, these statements can be used to share program intentions with other faculty, campus administrators, student applicants, and potential donors. These statements have the greatest meaning when they are used to align administrative and instructional decisions with program intentions. Before a program can be implemented, stakeholders need to come to consensus about what the program is attempting to achieve and how that achievement can be defined, accomplished, and measured in specific performance (things that are done). These well-articulated descriptions become the measuring sticks for program effectiveness. They provide essential reference points to which all participants can return, time and again throughout the review process, to check on the clarity of their thinking and to ensure consistency in analyzing a program.

Criteria for a Program

The writing of performance criteria for a program parallels the process used to write performance criteria for a course or activity. What is important in any of these cases is determining which qualities or characteristics are absolutely essential to the program, course, or activity in question. The work of identifying these features enables team members to then define the performance criteria that will determine how those program qualities will be achieved. In other words, if a key quality of the program is "commercially talented artists," the performance criteria statement should spell out specifically how that program quality can become evident. The following methodology will help assessors identify, verify, clarify, prioritize, and analyze these qualities. These preliminary steps will then be used to develop statements of performance criteria that incorporate the most important ideals that have emerged from this collaborative thinking-sharing-writing process.

Determining Qualities to Select for a Program

Many designers of continuous quality improvement emphasize the need for team effort to fully understand and appreciate a program or system (Scholtes, 1993; Productivity-Quality Systems, 1992) and stress the importance of buy-in from all key players (Badiru & Ayeni, 1993). Deming advocates the need for the inclusion of all classes of stakeholders in all these steps (Deming, 1982) and emphasizes the need for the entire team to brainstorm all the "knowables." In so doing, they can create a comprehensive or profound system of knowledge about the program, though there will always be unknowables which create system variance (e.g., the next year's enrollments, budget, political developments). Still, writing key performance objectives effectively demands that participants begin by reflecting on what is most essential to their program, bringing to the endeavor as much information and insight as they can.

Methodology for Writing Performance Criteria

Table 1: Methodology for Writing Performance Criteria
1. Review your stated program's essence, goals, scope, processes, systems, assets, products, results, and history.
2. Brainstorm a list of your program's current qualities: characteristics and descriptors that reflect what the program is all about, especially those that represent quality.
3. Brainstorm a list of your program's future qualities: characteristics and descriptors that reflect what the program will be about, especially those that represent quality.
4. Determine whether any key qualities are missing by visiting with stakeholders and by researching comparable programs.
5. Rank the top ten integrated current and future qualities for the program.
6. Analyze these qualities to pull out redundancy and overlap by renaming or removing duplicates.
7. Select and rank the most important (critical) areas for measuring performance; select the top six to eight.
8. Analyze the qualities by finding three to five aspects of each quality that characterize what that quality really represents.
9. Clarify what each of these quality areas looks like by writing a clear statement of performance; this is called the program's performance criteria.

Step 1: Review previous design work.

In performing a program assessment, you will be creating your own design document that captures your work as you progress. After writing a one-sentence statement that captures the essence of your program, you will identify its goals and processes, as well as its scope and stakeholders. With these documents in hand, and with the collaborative experience of producing them behind the team, the participants will be ready to proceed to the steps outlined in this module.

Step 2: Brainstorm a list of current qualities.

Next, the team's task is to brainstorm a list of characteristics that account for significant aspects of program quality. These qualities appear across products, processes, and other components of the program. Overall quality results from a set of specific program qualities, i.e., those things that characterize the program in a positive sense. The team should consider those aspects that make the program unique and give it an identity, are critical to the program's success, match characteristics found in other quality programs, and are attributes that stakeholders find special. These descriptions can best be captured in the form of adjectives or adverbs connected with nouns (e.g., dedicated faculty, research-based, empowering). Additionally, assessors should explore stakeholders' perspectives, program resources, graduates, and program-related events to determine which features are most valuable to the program. It might be helpful to ask, for instance, "When recruiting students or faculty, how would you describe your program to them? How would you describe your program to someone at a conference?" Examine written materials about your program (e.g., marketing materials) to see what they say or imply about your program. The flow of information and insights from this array of resources will provide an excellent pool from which to select key ideas for writing performance criteria statements.

Examples: innovative, scholarly, rigorous, applied, success-oriented, open access, responsive, value-added, community-based, challenging, highly desired graduates, friendly, world-class faculty, technical, adaptable, efficient

Step 3Brainstorm a list of future program qualities. The next question to consider concerns the direction in which you would like your program to move. What key qualities would you like to see as outcomes of your ideal program in the future? What capabilities do you nd in superior graduates or expert practitioners that you would like those in your program to emulate? (Mattingly & Fleming, 1992) How would you like to enhance your current program? Are there characteristics lacking in your graduates that reveal defects that are somehow embedded in the very design of your program? (Newble & Hejka, 1991) What attributes would you like to build into your program for the future? By determining the difference between the current and future status of your program, you can identify the areas that need attention. This type of analytically derived information will be invaluable to program leadership as they begin to map out future priorities of the program and of the institution it serves. The future program qualities that your team articulates, therefore, should reect anticipated or perceived shifts; those changes should be reected in planning. Brainstorm these ideas with your design team, remembering to include material from the products and processes your group has completed in the earlier phase of the program assessment system. Examples: state-of-the-art assessment-oriented evidence-based fully inclusive life-changing diverse environment heavily endowed trend-setting student-centered empowering problem-based 24/7 access transformational resource rich well-funded cutting edge


Step 4: Determine key qualities that are missing.

Combine the lists from Steps 2 and 3 to aggregate the collective qualities. Check the new listing to determine if there are any important characteristics or qualities missing or if any gaps exist. Investigate programs similar to yours (e.g., those of competitors, peers, and exemplars) and consider why they are viewed as being strong (or of high quality). Determine which of their characteristics you desire, and decide whether they are applicable to your program. It is extremely important to facilitate the participation of all stakeholders, such as board members, students, and representatives of the community, in this process. It is also important that the resulting set of qualities identified represents all critical areas of the program and captures its essence. Contact collaborative partners outside your program (e.g., funding agencies, peers who produce significant contributions to their professional organizations) and get their opinions and feedback. Ask them to feed back to you their perceptions about what is special about your program; ask them to express in their own words their perception of who you are and how you contribute to their efforts or serve their needs.

Examples: highly selective, heavily endowed, learner-centered, high technology, community visible, highly employable, resource rich, job-ready graduates, high retention
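Gap-checking of this kind is essentially a set comparison. The sketch below is a minimal Python illustration using made-up quality lists (none of these lists come from the handbook); it simply shows which qualities a comparable program claims that your combined list does not yet contain.

```python
# Hypothetical quality lists; illustrative only, not drawn from any real program.
current_and_future = {
    "student-centered", "innovative", "community-based",
    "assessment-oriented", "success-oriented",
}
peer_program = {
    "student-centered", "innovative", "research-based",
    "world-class faculty", "success-oriented",
}

# Qualities a comparable program claims that this program's lists do not yet include.
candidate_gaps = peer_program - current_and_future
print("Qualities to discuss with stakeholders:", sorted(candidate_gaps))
```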

Step 5: Rank the top ten integrated current and future qualities.

The next step involves ranking the characteristics you have just identified in the previous steps. Begin by selecting the qualities that are most important. This is an excellent stage in which to enlist the assistance of community and alumni advisory groups for validation. This is also a good point at which to cross-reference selected qualities with additional requirements, such as accrediting bodies, state regulatory boards, certification examination criteria, community needs, and college initiatives and priorities (James A. Rhodes State College, 2002). Make a first pass at ranking the list by labeling criteria from low to high (on a scale from 1 to 5). Then, sort the scores. Next, starting at the bottom of the list, see if you can justify moving a characteristic higher up in the list. Move to the next highest ranked item and determine if it can be moved up. Two to three passes through the list will help ensure that no truly critical item is overlooked.

Step 6: Analyze these qualities to pull out redundancy and overlap.

In this step, you need to double-check for possible redundancy in your final list of qualities. Are all your program strengths represented? Additionally, check this listing against characteristics of other programs in your institution. Have you included anything that is actually covered by other programs or college departments (e.g., advising, marketing)? Are you still operating within your stated scope or boundaries? Do a perception check and ask whether, collectively, the qualities (the program characteristics you have identified) cover every aspect of your program. In other words, do they fully describe the unique traits that make it what it is, that give it a special identity?

Examples: heavily endowed, resource rich, cutting edge, well-funded, innovative, state-of-the-art

Step 7: Prioritize qualities; select the top six to eight.

You now need to examine the special characteristics of your program in terms of your overall institutional strategic priorities and initiatives. All aspects of your program (both academic and non-academic) should ultimately feed into student learning outcomes and be kept in alignment with the institutional mission (Higher Learning Commission, 2003). Are you still targeting the most significant areas? A program should select six to eight criteria. The number of criteria chosen depends on the length of time the program has existed and on its magnitude or complexity. In general, the longer-established or more complex the program, the greater the number of criteria it will need. One of the tools commonly used in continuous quality improvement systems is the Pareto diagram, which arranges data into categories for easy visualization. Charting selected qualities with this tool can help create a holistic view of your chosen qualities (Productivity-Quality Systems, 1992). McNamara (2002) reminds program designers of the 20-80 rule, which claims that 20 percent of effort generates 80 percent of the results. Deming says 85 percent of quality problems are due to system design; clearly identifying quality areas will enable all stakeholders to gain more systematic control of the program (Deming, 1982).
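If it helps to visualize Step 7, ranked qualities can be charted in a Pareto-style bar chart. The sketch below uses hypothetical rankings and assumes the matplotlib library is available; it is one possible way to chart the data, not part of the methodology itself.

```python
import matplotlib.pyplot as plt

# Hypothetical rankings on the 1-5 scale from Step 5; not data from any real program.
rankings = {
    "student-centered": 5, "success-oriented": 5, "rich curriculum": 4,
    "innovative": 3, "community-based": 3, "world-class faculty": 2,
    "open access": 2, "24/7 access": 1,
}

# Sort from highest to lowest rank and compute the cumulative share of the total.
items = sorted(rankings.items(), key=lambda kv: kv[1], reverse=True)
labels = [name for name, _ in items]
scores = [score for _, score in items]
total = sum(scores)
cumulative, running = [], 0
for s in scores:
    running += s
    cumulative.append(100 * running / total)

x = range(len(labels))
fig, ax1 = plt.subplots(figsize=(8, 4))
ax1.bar(x, scores)                                   # ranked qualities as bars
ax1.set_xticks(list(x))
ax1.set_xticklabels(labels, rotation=45, ha="right")
ax1.set_ylabel("Rank (1 = low, 5 = high)")

ax2 = ax1.twinx()                                    # cumulative percentage line
ax2.plot(list(x), cumulative, marker="o", color="gray")
ax2.set_ylabel("Cumulative % of total rank")

fig.tight_layout()
fig.savefig("quality_pareto.png")
```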


Step 8: Analyze the qualities to find three to five aspects of each quality.

What makes each of your qualities unique? Why are they important in defining your program? Analyzing each of the qualities, describe in different phrases what each one means. Ask what is meaningful or significant in a given area of performance or if other possible meanings need to be considered. What important things must exist for this program characteristic to be true? For instance, if you claim a quality of computer literacy, is the institutional infrastructure in place to support it?

Examples:

Quality: student-centered
Aspects: students define their own learning objectives, faculty identify student learning needs, students are engaged in active learning, and faculty and students assess student performance.

Quality: success-oriented
Aspects: needs are being met, outcomes are produced, external affirmation, rewarding, minimal failures.

Step 9: Write the performance criteria as statements.

The performance criteria are thoughtfully expressed performance expectations that are mutually understood by all stakeholders. They demonstrate the importance of key performance areas to the overall effectiveness of your program. They delineate the specific aspects of a performance and describe how they are tied to a larger integrated performance. They also provide direction about what programs need to do specifically to satisfy the goals that have been previously set out in much more global terms. Performance criteria and qualities have a critical two-way relationship. The performance criteria you write must deliver the specific qualities that have been selected. For example, if you have specified that your program needs to recruit more students, the performance criteria need to spell out how that will be achieved. Try to visualize the integrated performance that you are seeking. Now put together a sequence of steps or actions to get the job done, checking to see that the plan is coherent and fluent. Describe it and then imagine putting it into a real context. For instance, is it reasonable to expect that you can increase student enrollment by 10 percent in

the next year or by 15 percent in the next two years? Will your plan achieve the qualities you had identified earlier as being descriptive of your program's unique character?

Concluding Thoughts

The writing of performance criteria is facilitated by strong writing prompts that identify the qualities that matter for program effectiveness. Once these qualities are visualized and captured, the task of writing the performance criteria statements that flow from them becomes easier. Key processes and products can then be highlighted and made apparent to all stakeholders. A road map for the design specifications for your program will emerge from this process. A systematic approach for measuring program progress will be presented in the next module.

References

Accreditation Board for Engineering and Technology. (2000). Engineering criteria 2000. <http://www.abet.org>

Badiru, A., & Ayeni, B. (1993). Practitioner's guide to quality and process improvement. London: Chapman & Hall.

Deming, W. E. (1982). Quality, productivity, and competitive position. Cambridge: Massachusetts Institute of Technology Press.

Higher Learning Commission. (2003, February). The criteria for accreditation and the operational indicators. Retrieved June 1, 2004, from <http://www.ncahigherlearningcommission.org/restructuring/newcriteria> [No longer active.]

Mattingly, C., & Fleming, M. F. (1992). Clinical reasoning: Forms of inquiry in therapeutic practice. Philadelphia: F. A. Davis.

McNamara, C. (2002). Basic guide to program evaluation. The Management Assistance Program for Nonprofits. Retrieved June 1, 2004, from <http://www.mapnp.org/library/evaluatn/fnl_eval.htm>

Newble, D. I., & Hejka, E. J. (1991). Approaches to learning of medical students and practicing physicians: Some empirical evidence and its implications for medical education. Educational Psychology, 11, 3-4.

Productivity-Quality Systems Inc. (1992). Improvement tools for educators. Miamisburg, OH: Author.

Scholtes, P. (1993). The team handbook. Madison, WI: Joiner.


Section 7
Performing Annual Program Assessment
(Your facilitator will provide the content for this section)



Section 8
Constructing a Table of Measures
Faculty Guidebook: 1.4.1 Overview of Measurement
Faculty Guidebook: 1.5.5 Identifying Performance Measures for a Program
Faculty Guidebook: 1.5.6 Constructing a Table of Measures



1.4.1 Overview of Measurement


Faculty Development Series

by Kathleen Burke (Economics, SUNY Cortland) and Sandy Bargainnier (Kinesiology, The Pennsylvania State University)

Measurement is the process of determining the level of performance. This module presents basic ideas for obtaining valid, reliable, and efficient measurements, and illustrates how these are central to proper assessment, evaluation, and research.

Seminal Concepts

Measurement is important because people care about quality. Quality describes how good something is in the context of meeting human needs. Quality is a holistic combination of the inherent or distinctive attributes of a person, product, process, organization, etc. Some examples of quality in higher-education contexts include:

Quality of knowledge: In a specific knowledge area (e.g., hydrology, statistics, western history), quality involves an individual's depth, breadth, and connections in the context of ideas and facts that comprise the knowledge area.

Quality of performance: In a specific performance area (e.g., teamwork, running a project, playing an oboe), quality describes how good the performance is.

Quality of a product: For a given product (e.g., a technical report or journal paper, an original song), quality describes how good the product is.

Quality of an organization: For an academic unit (department, math tutoring center, etc.), quality describes how effectively this unit meets the needs of key stakeholders. For a university, quality describes how effectively it meets the needs of the students.

Quality occurs on a scale that spans from low to medium to high to exceptional. Measurement is the process of assigning a number or qualitative scale to indicate level of quality. Tools for making measurements have varied forms and names. Some common labels are scoring guides, rubrics, and measures. Here, we use the label measure to mean any tool that is used for the purposes of making a measurement. Validity refers to how well the measurement process actually measures what it claims to measure. For example, a measurement of student writing should indicate the quality of the writing, and should not be influenced by things such as how much writing the student has done or whether or not the student has done things the way the teacher wanted them to be done.

Table 1  Principles of Measurement

- Measure what is most important; that is, measure what you care about.
- Base measurement on observable data that are analyzed appropriately.
- Focus the measurement on a well-defined area of quality, neither too large nor too small.
- Select the appropriate measurement tool for the task.
- When possible, obtain data from multiple sources and triangulate.
- Recognize that outliers in a data set often provide clues about the integrity of the measurements.
- Establish the reliability of a measurement system by testing it before using it.
- Increase the validity of a measurement system by comparing and contrasting outcomes under varying conditions.
- Assess the overall usefulness of a measurement system by comparing the cost of the data and the levels of reliability and validity obtained.
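Some of these principles, such as establishing reliability before use, can be checked with simple calculations. As a minimal sketch using hypothetical rubric scores (the raters, papers, and numbers are invented for illustration), the following computes percent agreement and Cohen's kappa for two raters applying the same four-point rubric:

```python
# Hypothetical rubric scores from two raters on the same ten student papers (scale 1-4);
# illustrative only -- not data from the handbook.
from collections import Counter

rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # simple percent agreement

# Expected agreement by chance, based on each rater's score distribution.
counts_a = Counter(rater_a)
counts_b = Counter(rater_b)
categories = set(counts_a) | set(counts_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa: chance-corrected agreement
print(f"Percent agreement: {observed:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
```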

Reliability refers to the repeatability of a measurement. That is, the more reliable a measurement, the more likely it is that the measurer will arrive at the same number or qualitative score if the measurement is repeated. In general, before the validity of a measurement process can be established, its reliability must be established. When multiple people use a measurement process, the level of consistency in their judgments is termed inter-rater reliability.

Quality in learning, assessment, evaluation, and research is enhanced by quality in measurement. To attain quality in measurement in multiple contexts, we suggest adherence to the principles summarized in Table 1.

Rationale for Measurement

Assessment, evaluation, and research are three important processes in higher education. Although each is different, all three of them involve measurement. Table 2 shows the significant differences between assessment, evaluation, and research.


Table 2  Comparison of Measurement, Assessment, Evaluation, and Research

Measurement
  Purpose: To assign a number or qualitative level to indicate level of quality
  Nature: Objective/unbiased
  Performer: Measurer
  Beneficiary: Stakeholders in use of measurement
  Results: A number or grade
  Important characteristics: Calibrated; reliable; scaled appropriately (with range and units)

Assessment
  Purpose: To improve quality
  Nature: Non-judgmental (collaborative)
  Performer: Assessor
  Beneficiary: Assessee
  Results: Action plan
  Important characteristics: Criteria based; assessee-centered

Evaluation
  Purpose: To determine whether standards for quality are met
  Nature: Judgmental (not collaborative)
  Performer: Evaluator
  Beneficiary: External decision-makers
  Results: Documented level of final performance; part of a permanent record; brings closure
  Important characteristics: Unbiased; criteria based

Research
  Purpose: To produce new knowledge that builds on existing knowledge
  Nature: Inquiry-based (collaborative)
  Performer: Researcher
  Beneficiary: Community of scholars and practitioners
  Results: Contribution to existing knowledge
  Important characteristics: Theory driven; designed to control bias; can be tested; involves a high level of expertise; uses accepted methods

All three of these processes benefit from conscious attention to the principles of measurement articulated in Table 1. Measurement targets should be meaningful to three different audiences: students, practitioners in the field, and researchers. Students respond best to explicit learning targets that involve authentic challenges connected with knowledge mastery, reasoning proficiency, product realization, and professional expectations (Stiggins, 1996). Practitioners expect to see course outcomes that support the diverse roles within the discipline or profession and in the workplace. Researchers depend on a clearly conceptualized cognitive model that reflects the latest understanding of how learners represent knowledge and develop expertise in the domain (Pellegrino, Chudowsky, & Glaser, 2001). Researchers also expect alignment among the cognitive model, the methods used to observe performance, and the protocol for interpreting results. Educators vary both in their motivation for collecting data and in their skill in interpreting and reporting it. It is important to address the challenge of serving all three audiences with learning and growth that can be validly measured. The following sections explore the varying uses of measurement.

Role in Assessment

Assessment is a process of measuring and analyzing a performance, a work product, or a learning skill to provide high-quality, timely feedback that gives assessees clear and meaningful directives and insights to help them improve

their future performance (4.1.1 Overview of Assessment and 4.1.4 Assessment Methodology). Before a performance can be measured for assessment purposes, the criteria must be clearly defined and expectations or measures of each criterion must be set. The measurer will find it easier to provide specific feedback that will be effective for strengthening future performance if he or she narrows the focus to three to five performance expectations. If the goal is to grow a performance, a work product, or a learning skill, assessment must occur early (and often) to allow students ample time to refine and improve. For example, if a central course outcome is to improve student writing, it will be important for instructors to conduct multiple formative measurements of performance on steps in the process of preparing a research paper. In this case, an instructor might use an analytic writing rubric for a research paper as the assessment tool to measure and collect data that provides feedback to the student. At intermediate times throughout the semester, instructors can measure specific performance expectations, providing both student and instructor with assessment data that can strengthen writing quality.

Role in Evaluation

Evaluation is the process of measuring the quality of a performance (e.g., a work product or the use of a process) to make a judgment or to determine whether, or to what level, standards have been met (1.4.6 Overview of Evaluation and 1.4.7 Evaluation Methodology). Evaluation is used in many academic arenas, such as graded assignments and exams, grade point average (GPA), promotion and tenure, or grant acquisition. Measurements that are used


to make judgments are often based on external standards (e.g., accrediting standards, agency policies, accountability for funding). Before any performance can be measured for evaluation purposes, the performance expectations (standards based on the measure) must be clear for each criterion of quality. Furthermore, the evaluation should be unbiased and be documented in a permanent record (e.g., transcript, personnel file, grant record). In the case of a research paper, the final grade may be assigned using information from a score sheet associated with a writing rubric. The more a measurement tool requires an evaluator to explain his or her judgments about whether standards have been met, the less effective that measurement tool is for evaluation.

Role in Research

The purpose of measurement in research is to validate new knowledge within or across disciplines (2.5.2 Research Methodology). Researchers begin with questions about a void in the existing body of knowledge; they then form hypotheses regarding relationships of measurable variables. Theory should be used to frame research questions and to guide methods for collecting reliable and valid data. In research, measurement falls into two categories: descriptive and experimental. If the researcher is attempting to answer a question descriptively, the appropriate tools include surveys, interviews or focus groups, conversational analysis, observation, ethnographies, or meta-analysis (Olds, Moskal & Miller, 2005). If the researcher's study is experimental in nature, the proper methods include randomized controlled trials, matched groups, baseline data, post-testing, and longitudinal designs. Each of these research designs or techniques requires certain kinds of measures that will result in data that can be appropriately analyzed to provide a basis for interpretation (National Research Council, 2002). Inferences drawn from the measurement should directly relate the evidence obtained to the hypothesis being investigated. The quality of a measure is very important because limitations, biases, and alternative interpretations will affect validity. Researchers want to know whether their findings can be generalized to a broader population or to multiple settings. The consistency of the measurement and the validity of the data are evidenced by the ability of other researchers to replicate the results. Peer review and publication of research are essential for disseminating new knowledge to other practitioners as well as to the public.

Performance Measurement

Many educators are reluctant to apply measurement instruments and techniques to complex and integrated

performances. Tasks like these are commonly referred to as constructed-response outcomes; they include learning portfolios, reflective journals, self-growth papers, capstone reports, project reports, and experiential narratives. Learning portfolios can include multiple performance artifacts, such as a sequence of art works produced during a course and accompanied by reflective journals and interpretive analyses. It is much easier to design constructed-response outcomes like portfolios than it is to create reliable and valid measures for assessing or evaluating their quality. To assess and/or to evaluate these complex outcomes, instructors often use custom-designed rubrics. Educators' historic reluctance to adopt complex integrated performance outcomes stems in part from their assumptions about reliability and validity in measuring them. For many, selected-response instruments, such as multiple-choice and matching, are perceived to be more reliable and valid as well as easier to use. Instructors cannot measure performances that involve critical thinking, quality teaching, or service-learning projects by counting correct answers (Wiggins, 1998). These require qualitative judgments. As a result, some instructors opt to take advantage of the comfort that comes from using traditional selected-response measurement instruments, and so spend most of their in-class time covering the content to align with the test. But selected-response tests are often not authentic measures of intended outcomes. For example, when one applies for a driver's license, the simple indicators of the driving test and written test do not represent, and are not intended to represent, all key driving performances. Table 3 is a guide for selecting measurement tools for the five types of learning outcomes described in the Learning Outcomes (2.4.5) module: competency, movement, accomplishment, experience, and integrated performance. A competency is a collection of knowledge, skills, and attitudes needed to perform a specific task effectively and efficiently at a defined level. A common question about a competency outcome is: What can the learner do, at what level, in a specific situation? Movement is documented growth in a transferable process or learning skill. A common question about a movement outcome is: What does increased performance look like? Accomplishments are significant work products or performances that are externally valued or affirmed by an outside expert. A common question about an accomplishment outcome is: How well does student work compare with work products of practitioners in the field? Experiences are interactions, emotions, responsibilities, and shared memories that clarify one's position in relation to oneself, a community, or a discipline.


Table 3  Alignment of Learning Outcomes and Measurement Instruments

Outcome Type: Competency
  Example: Applying knowledge in a specific context at a specific level
  Task/Instrument: Checklist or selected-response exam with answer key

Outcome Type: Movement
  Example: Exercising transferable skills in a continuum with no upper bound (e.g., problem solving, communication, teamwork)
  Task/Instrument: Reflective essay with analytic rubric

Outcome Type: Accomplishments
  Example: Creating something with external value (project work, community service, artistic creation, thesis)
  Task/Instrument: Portfolio with scorecard or peer review form

Outcome Type: Experiences
  Example: Responding to and internalizing a situation
  Task/Instrument: Personal journal with holistic rubric

Outcome Type: Integrative performances
  Example: Deploying working expertise in response to an authentic challenge (e.g., internship interview, student teaching observation, final presentation, leadership situation)
  Task/Instrument: Performance appraisal with rating form

A common question about an experience outcome is: How has this experience changed the learner? Integrated performance is the synthesis of prior knowledge, skills, processes, and attitudes with current learning needs to address a difficult challenge within a strict time frame and set of performance expectations. A common question about integrated performance is: How prepared are students to respond to a real-world challenge? Over the last decade, rubrics have received considerable attention in education as tools for performance measurement (Arter & McTighe, 2001). Rubrics provide explicit statements that describe different levels of performance and are worded in a way that covers the essence of what to look for when conducting qualitative measurements. Rubrics should reflect the best thinking about what constitutes a good performance, work product, or learning skill. As discussed in Fundamentals of Rubrics (1.4.2), rubrics can be analytic (with an extensive set of factors and multiple scales) or holistic (with just a single scale). However, rubrics are only as robust as the clarity of purpose for measurement.

Concluding Thoughts

Measurement is foundational to classroom assessment, grading, program evaluation, and educational research. In the physical sciences, quality measurement is a central event; in education, measurement involves a series of linked decisions that are more qualitative in nature. In both, the goal is to align outcomes, performance tasks, measurement methods, and data analysis. Educators in all disciplines must learn to apply their measurement skills to the multiple uses of measurement in education. Regardless of the discipline or profession, best practices include clear communication of purpose, well-selected targets for measurement, sound methods for data collection,

and sampling to reduce bias and distortion. Faculty will become better teachers and researchers if they learn to seek consensus with their colleagues about what processes matter most in teaching and learning, and what tools measure learner growth most efficiently and effectively.

References

Arter, J., & McTighe, J. (2001). Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Thousand Oaks, CA: Corwin Press.

Olds, B. M., Moskal, B. M., & Miller, R. L. (2005). Assessment in engineering education: Evolution, approaches and future collaborations. Journal of Engineering Education, 94, 13-26.

Pellegrino, J., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.

Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific research in education. National Research Council. Washington, DC: National Academy Press.

Stiggins, R. J. (1996). Student-centered classroom assessment (2nd ed.). Old Tappan, NJ: Prentice Hall.

Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.


1.5.5 Identifying Performance Measures for a Program


Faculty Development Series

by Kelli Parmley (Director of OBIA, University of Northern Colorado) and Daniel K. Apple (President and Founder, Pacific Crest)

The previous module, 1.5.4 Writing Performance Criteria for a Program, identifies how to write performance criteria. Once those performance criteria have been revised and edited, it is then possible to look at each of them to determine which are really important to measure. Using the examples of performance criteria from academic and non-academic functions found in Table 1, this module provides important guidance for identifying key performance measures for a program. The module prompts one to identify the measures, distinguish direct and indirect measures, describe why independence in measures is essential, and highlight some of the common pitfalls that are encountered when identifying key measures.

Generating Potential Measures

The heart of a good program assessment system distinguishes between what is important to measure and what is available or easy to measure (Nichols, 1991). In looking at performance criteria statements, one might ask, "What aspects of this performance are most important to measure?" For example, in the student-centered performance criterion from Table 1, it is important that the learner is truly taking ownership of learning. One might survey students about their academic career plans. Another method is to consider what questions one might ask someone else (e.g., students, faculty, alumni) in order to determine whether a particular performance was met. A third way could be to interview stakeholders affected by the criteria, such as parents of past and present students. However, as the focus of data collection shifts away from the primary participants cited in a criterion statement, it is vital to conduct direct questioning so that data are not diluted with non-essential information. In selecting measures, it is also important not to let historical measurements and existing data collection tools dictate what is most important to the program. Finally, performance measures should be independent, thereby ensuring that measures are not correlated. Another rationale for independence is that the cost of a measure is normally non-trivial, and it is expensive to take multiple measurements of the same variation.

Using the performance criteria developed in the previous step, the following three key prompts can assist in identifying what is most important to measure. Although it may be tempting to identify what is readily available or easy to measure, these prompts are intended to assist in identifying what is most important to measure.

What aspects of this performance are most important to measure? One should look at the performance criteria and find those aspects that are most important to measure. For example, look at the second performance criterion, student-centered. In the case of this criterion, self-directed learning is a critical aspect of performance for a college or university that is student-centered. The proportion of accepted learning plans could be good evidence for measuring this criterion. For example, you might divide the number accepted by the total number expected.

What does the performance look like when you see it? Literally step back and visualize the performance happening and again ask yourself, "Which aspects are most critical to the performance?" Think of yourself with a camera and focus the lens on the aspect of the performance that is most critical. For example, consider rich curriculum as the performance criterion (fourth row of Table 1). What do you visualize when you consider a program's curriculum that is rich? In this example, the English faculty envisioned one that is challenging, diverse, and growth-oriented, and they identified diversity as the most important aspect to measure. While the challenge will be to create an instrument that will measure diversity, it is important at this point not to worry about the difficulty of measuring (it's too complex) and to focus instead on what is most important to measure.

What do you need to measure to convince others that the performance expectation was met? Another way to identify key measures is to consider what evidence you would need to present to stakeholders to convince them that the performance expectation was being met. The service-oriented performance criterion (seventh row of Table 1) is typically an important performance for administrative offices. Critical evidence for this would be a strong measure of collective satisfaction of clients (percent satisfied versus individual satisfaction). Another method is to


consider how you might ask someone else (e.g., student, faculty, alumnus) how they would determine whether the performance expectation was being met. There are any number of questions you might ask these stakeholders. However, an important aspect of service orientation is customer satisfaction, so if you had a strong measure of your clients' satisfaction, you could convince others that you were service oriented. When one asks these questions, there is a tendency to start thinking about how that aspect of the performance can be measured. It is critical that measurement concerns are put aside until it is decided what is most important to the program. As you look at the measures you have identified, consider whether or not they measure the performance directly or indirectly.

Table 1  Performance Criteria and Examples of Measurable Attributes

Academically sound graduates: Scientifically literate graduates who are able to analyze, synthesize, and evaluate information in the areas of basic human communication processes, communication differences and disorders, and other skills, and who are prepared for doctoral study and/or professional careers.
  Measurable attributes: research skills; mastery of professional practice

Student-centered: An enriched active learning environment in which faculty, staff, and students focus on students' development through updated learning plans with personalized learning objectives, services to meet special needs, and continual assessment of student learning.
  Measurable attribute: percent of currently accepted learning plans

Success-oriented: A supportive community that responds to students' special needs and refuses to accept failures by having the language, values, and expectations of success; significant accomplishments and high performances drive all to seek more from themselves to produce the outcomes the community desires.
  Measurable attribute: graduation rate

Rich curriculum: A rich and diverse range of courses that span genres, historical periods, major authors, and ethnic backgrounds across many Anglophone cultures, and explore diversity issues in race, sex, and economic backgrounds.
  Measurable attribute: diversity

Qualified graduates: Fully qualified graduates who are consistently accepted in graduate schools of choice because of their documented abilities to carry out independent laboratory work, undergraduate research, and effective problem solving.
  Measurable attribute: placement success

Research (Office of Research): Extensive, widespread research effort with collaboration among students and external researchers resulting in highly funded research projects and significant numbers of peer-reviewed journal articles and presentations.
  Measurable attributes: number of faculty with qualified publications; number of annual research dollars

Service-oriented (Administrative Office): Consistently puts interests of others first, clarifies clients' needs, aligns work with institutional needs, provides effective consulting, and values prompt, effective, and conclusive responses to clients' perceived needs.
  Measurable attribute: satisfaction of clients

Distinguishing Indirect from Direct Measures

Frequently, an aspect chosen for measurement will act as an unintentional proxy for what you truly want to measure. In other words, there may be a high correlation between the measure and the actual performance (Middle States Commission on Higher Education, 2002). However, that correlation does not mean that you are actually measuring the performance. For example, see the fifth performance criterion listed in Table 1, qualified graduates. One could choose the more readily available measure of placement rate (the proportion of graduates who were accepted into graduate schools) versus placement success. Placement success more directly captures the aspects that are important, such


as placement of choice and because of documented abilities. Placement rates, on the other hand, assume that placement success is attributable to the sheer numbers graduating; the rate lacks a quality measure. Another common example is to ask stakeholders (e.g., students) to report their perceptions or satisfaction in achieving learning outcomes. For example, in the first performance criterion, academically sound graduates, one measure might be students' perceptions of their own abilities to analyze, synthesize, etc. However, student perception is an indirect measure, often easier to obtain but less important for the purposes of program improvement than direct measures of student research skills. It is imperative to realize that regional and professional accrediting bodies do not accept indirect measures of student learning and associated learning outcomes. This is especially true when grades are used as measures of student learning.

Maximizing the Independence in a Set of Measures

By selecting measures that are sufficiently independent of one another, you ensure that resources are used efficiently in data collection, and you present a more complete and holistic picture of program performance in any given area. This can be illustrated with the qualified graduates performance criterion. In this situation, two measures that would be more independent than "consistently accepted to graduate schools of choice" would be GRE scores for graduate school and job placement. These measures really address different sets of competencies and skills. Two measures that would be less independent would be the job placement rate and the average first-year salary of all graduates; as more graduates get placed, the average salary usually increases, thus illustrating the interdependence of these two measures. Because the cost of a measure is normally non-trivial, the measure should be independent and should not replicate other measures. This is true both within and across the performance criteria. A way of determining independence is to assure that what is measured is something significant that is not correlated with other measures.

Common Pitfalls

Low-Cost Data
Avoid being trapped by what is currently available or what is being measured because often this data is not

relevant to what really matters. Most people think that any data has value. Even when data is essentially free, there is still the cost of analyzing it and bringing meaning to it so that future performance will be enhanced. The process of program assessment is as useful for helping make decisions about what you will stop measuring as for what you will start measuring.

Method Bias
There is a tendency to choose particular instruments before one has identified what to measure, chosen the best venue for collecting this data, and determined the number of items required to characterize performance. Performance criteria should prescribe the measurement methods; the methods should not prescribe the performance criteria. Being tactical and precise in instrument selection simplifies data collection and ensures the chosen measures are the most cost-effective.

Dilution
Often one does not do an effective job of prioritizing the potential measures. When this happens, essential attributes may not be measured well, while less important attributes may be measured very well. Some attributes of the performance criteria are more important than others. One criterion may actually capture as much as 60% of the performance. More criteria are not necessarily better; multiple criteria can serve to discount the value of each. When there are two potential attributes for measurement, ask whether they are similar enough to each other that you can consolidate them or choose just one.

Relevance of Historical Data
There is a tendency to continue to measure what has always been measured, even when this is no longer appropriate. As programs change, the attributes of the performance that should be measured need to be modified. Completeness and continuity of the historical record is meaningless if these data cannot be used to guide improvement efforts that flow from current performance criteria for a program.

Grandstanding
The purpose of program assessment is to foster continuous improvement. However, it can be tempting to pick only those measurable attributes that showcase current high-performance levels, and avoid areas in which performance needs to be improved. Such a one-dimensional approach ensures that the status quo is maintained, ultimately giving up long-term competitive advantages that result from regular attention to ongoing improvement.
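One rough way to test the independence discussed above, and to avoid paying twice for the same information, is to correlate historical values of two candidate measures; a strong correlation hints that one of them may be redundant. The sketch below uses invented cohort data and assumes Python 3.10 or later for statistics.correlation; it is a sanity check, not a substitute for judgment about what matters.

```python
# Hypothetical paired values of candidate measures for eight graduating cohorts;
# the names and numbers are illustrative, not institutional data.
from statistics import correlation  # available in Python 3.10+

placement_rate = [0.72, 0.75, 0.80, 0.78, 0.83, 0.85, 0.88, 0.90]
avg_salary_k   = [48.0, 49.5, 52.0, 51.0, 54.5, 55.0, 57.5, 59.0]
gre_mean       = [151, 153, 150, 155, 152, 154, 151, 156]

r_salary = correlation(placement_rate, avg_salary_k)
r_gre    = correlation(placement_rate, gre_mean)

# A high |r| suggests the two measures largely track the same variation,
# so paying to collect both adds little new information.
print(f"placement rate vs. average salary: r = {r_salary:.2f}")
print(f"placement rate vs. mean GRE score: r = {r_gre:.2f}")
```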


Concluding Thoughts

At the heart of a quality program assessment system is the minimum set of measures that provide feedback on what really matters. Make sure that these measures are derived directly from the performance criteria, that it is feasible to collect them, that the results can be analyzed meaningfully, and that the results and methods will be respected by the campus community. Selecting measures, like writing performance criteria, is best done as a collaborative activity involving stakeholders at all levels in the institution. By putting the spotlight on a minimum set of independent measures, involving multiple constituencies in interpreting measurement findings, and promoting broad-based dialogue about how to respond to these findings, an institution will be taking positive steps toward realizing its mission.

References

Banta, T. W., & Palomba, C. A. (2001). Assessing student competence in accredited disciplines: Pioneering approaches to assessment in higher education. Sterling, VA: Stylus.

Middle States Commission on Higher Education. (2002). Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Philadelphia: Author.

Nichols, J. O. (1991). A practitioner's handbook for institutional effectiveness and student outcomes assessment implementation. New York: Agathon Press.


1.5.6 Constructing a Table of Measures


Faculty Development Series

by Marie M. B. Racine (French & Linguistics, University of the District of Columbia)

In constructing a table of measures for a program, one must attend thoughtfully to the program assessment activities outlined in 1.5.3 Defining a Program, 1.5.4 Writing Performance Criteria for a Program, and 1.5.5 Identifying Performance Measures for a Program. Tables of measures link performance criteria for a program, important attributes to be measured, measurement systems for acquiring data, and the identification of those who are responsible for producing specific program outcomes. The table is formatted so that a wide variety of program stakeholders can use it as a quick reference. This module describes the steps involved in constructing a table of measures, explaining each step using the example of an academic affairs program that is focused on student success.

Role for a Table of Measures

The table of measures is a template that summarizes multiple steps in the process of the 1.5.2 Methodology for Designing a Program Assessment System. It is intended to be used as a quick reference for those who are intimately involved in designing the program, and for stakeholders whose actions are critical to program success but who may have been only indirectly involved in its design: for example, faculty, parents, students, advisory boards, and accrediting organizations (Burke, 2004). To assess, one must observe performance and rate the quality of the performance based on specified criteria; one must also collect and analyze data and other evidence (Hollowell, 2006). To provide high-quality feedback that can be used to improve future performance, an assessor should also measure and analyze a particular outcome (Walvoord, 2004). As such, the table of measures should capture the essential indicators of program quality, identify what needs to be measured, specify how and when measurements should be taken, and identify the persons responsible for assuring quality in each area (Middle States Commission on Higher Education, 2005 and 2006).

Table Structure

A table of measures consists of six vertical columns; they are labeled criterion, attribute, weight, means, instrument, and accountability. The criterion column lists performance criteria. The attribute column describes what is going to be measured; in other words, the measurable characteristics that underlie each performance criterion. The weight column reflects the relative importance and rank assigned to each attribute. The means column identifies the appropriate vehicle or method that will be used for capturing the performance data. The instrument column identifies a specific tool or gauge that is selected to measure the performance. The accountability column identifies the individual responsible for delivering a quality result for each attribute.

Steps in Building a Table of Measures

To illustrate the process of building a table of measures, we will use as an example the Office of Academic Affairs (OAA) at a comprehensive, public, urban, land-grant university. This school has an open admissions policy, offers a variety of academic programs, and prepares students for certificates as well as associate, baccalaureate, graduate, and professional degrees. This university functions as a higher education state system and is charged with identifying and meeting the needs of local residents, institutions, and communities. Before assembling the table of measures, the OAA wrote an essence statement to describe the core values of the program, its purpose, and what makes it unique. They consulted stakeholders across and outside of campus, defined the scope of the program, ranked the top ten goals for the program, and analyzed the top five processes and products of the program. Performance criteria were then crafted, and up to three measurable attributes were identified for each criterion. Table 1 shows the OAA's table of measures.

Step 1: Organize Performance Criteria and Supporting Attributes.
Align performance criteria and attributes by entering each selected attribute into a separate row. It is very important to select the areas of quality you really want to measure. The list of desired attributes is often long and therefore impractical to measure, so it is essential to weight and prioritize them. In this example, the chosen criteria were student-centered, oriented toward student success, aligns with the institution's vision and mission, supports professional development, and values the contributions of faculty and staff. Nine attributes are named to support these criteria.

Step 2: Weight Attributes.
Assign a relative percentage weight for each attribute so that all of the percentages in this column add up to


100%. Consider dropping entries with low percentages or combining these entries to produce a new item that is sufficiently important. Continue to choose and iterate, adjusting weights for each attribute so that the table of measures accurately represents the priorities of the program. Reweight the column and then resort. Consider removing any attribute that receives a weight of less than 5%. This will usually produce 8 to 12 measures, each significant enough so that if efforts are made to improve them, the quality of the program will definitely be elevated. In the example of the OAA, most of the attributes are weighted at the 10% level or higher. (A small consistency check of this kind is sketched after Step 5 below.)

Step 3: Determine the Means of Measurement.
For each attribute, identify the most accurate and reliable means with which to collect the data you need in order to monitor progress or success. This step helps clarify what needs to be set up in order to collect data, when it will occur, and how it should be structured. This step is an important part of the planning process because it is often impossible to reconstruct past performance data. The only way we can measure the critical areas of performance is to be aware ahead of time when this information can be obtained during an academic year. It is important not to confuse the means for collecting data with the instrument that measures the data that is collected. The means is a vehicle or technique used to collect the data about a performance; the instrument is a particular tool or gauge used to measure the performance reflected in the data collected. For example, two means for collecting data are portfolios and surveys. Evidence of a student's problem-solving skill development may be collected using a portfolio. A rubric is a useful instrument for measuring a student's problem-solving performance. One might collect data about customer satisfaction using a survey. One might measure customer satisfaction using a satisfaction index. The following examples illustrate attributes, means, and instruments for different scenarios.

attribute: level of knowledge attained
means: standardized exams, College Board Tests
instrument: test score

attribute: monetary per-capita expenditure
means: budget
instrument: discretionary expenditures/FTE

attribute: student knowledge of tools for solving engineering problems
means: Professional Engineering exam, survey of employers one year after graduation
instrument: test score, satisfaction index

Step 4: Select a Key Instrument.
Select a key instrument, tool, or gauge that is suitable for measuring performance in each attribute. For each means, determine if an instrument exists to measure each specific attribute. If no instrument exists, then one must be built. Test the instruments to determine their accuracy, precision, reliability, appropriateness or feasibility, and comprehensiveness with respect to their associated attributes. In the example of the OAA, many existing data collection instruments are invoked, but in several cases the data is post-processed to more directly answer questions about the program attribute.

Step 5: Designate Owners for Each Measured Attribute.
In order for the program to improve from its baseline performance to its target performance, it is important for the program to have a champion for each important selected attribute. Assign the responsibility for each attribute to a campus leader. These champions should be distributed across the program, but should have sufficient authority to remediate quality issues by redirecting budgets, manpower, and policies. If an attribute doesn't have a logical champion, that attribute should probably be dropped from the table of measures. In the example of the Office of Academic Affairs, academic leaders in a diverse set of units are responsible for initiating data collection, analyzing findings, overseeing continued success, and implementing necessary changes to improve program quality.
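Returning to Step 2, the arithmetic of weighting is easy to automate. The following minimal sketch uses hypothetical attribute names and weights (not the OAA's actual table) to confirm that weights total 100% and to flag attributes below the suggested 5% floor:

```python
# Hypothetical attribute weights for a draft table of measures; illustrative only.
weights = {
    "Documented student learning outcomes": 10,
    "Student satisfaction with support services": 10,
    "Retention rate": 15,
    "Graduation / program completion rate": 20,
    "Importance of OAA in the strategic plan": 10,
    "Added value to faculty and staff": 10,
    "Visibility of student success stories": 3,   # below the suggested 5% floor
    "Role in annual performance appraisals": 10,
}

total = sum(weights.values())
print(f"Total weight: {total}%  (target is 100%)")

# Flag attributes that fall below the 5% threshold suggested in Step 2,
# as candidates to drop or to combine with a related attribute.
for name, w in sorted(weights.items(), key=lambda kv: kv[1]):
    if w < 5:
        print(f"  consider dropping or merging: {name} ({w}%)")
```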


Table 1  Academic Affairs Program Focused on Student Success

Criterion: Student Centered
  Attribute: Documented student learning outcomes for each program | Weight: 10% | Means: Annual assessment reports | Instrument: Statistics on programs reporting and using outcomes | Accountability: Department chairs
  Attribute: Students' satisfaction with climate and support services | Weight: 10% | Means: National survey of student engagement | Instrument: Weighting of responses to key questions | Accountability: Assessment coordinator, Directors in Academic Affairs
  Attribute: Students' satisfaction with their college experience | Weight: 10% | Means: Student course evaluations | Instrument: Weighting of responses to key questions | Accountability: Department chairs

Criterion: Oriented Toward Student Success
  Attribute: Retention rate | Weight: 15% | Means: Institutional research report | Instrument: Spreadsheet with demographic and academic data on students who do and don't return for the next academic year | Accountability: Vice provost, OAA direct reports
  Attribute: Graduation rate and program completion rate | Weight: 20% | Means: Institutional research report | Instrument: Spreadsheet with statistics on number and percentage of students who obtain different degrees in 2, 3, 4, 5, and 6 years | Accountability: Department chairs

Criterion: Aligns with the Institution's Vision and Mission
  Attribute: Importance of OAA in working the strategic plan | Weight: 10% | Means: Time allocated to student success in provost council meetings | Instrument: Web page with meeting topics and action items devoted to student success | Accountability: Provost

Criterion: Supports Professional Development
  Attribute: Added value to faculty and staff as facilitators of learning | Weight: 10% | Means: Workshop feedback | Instrument: Workshop assessment forms | Accountability: Vice provost, Assessment coordinator
  Attribute: Visibility of OAA student success stories | Weight: 5% | Means: Faculty and staff annual activity reports | Instrument: Student success articles in annual newsletters and publications | Accountability: Vice provost, VP for advancement

Criterion: Values the Contributions of Faculty and Staff
  Attribute: Role in annual performance appraisals | Weight: 10% | Means: Weighting of contribution to student success in faculty and staff salary determinations | Instrument: Scoring rubric | Accountability: Department chairs, Direct reports, Provost
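The spreadsheet instruments named in Table 1 imply some simple post-processing of student records. The sketch below illustrates that kind of calculation with a tiny, invented cohort file; a real institutional research report would draw on the student information system and disaggregate by demographic group.

```python
# Minimal sketch of the post-processing behind retention and completion instruments.
# Records are hypothetical and exist only to show the calculation.
import csv
from io import StringIO

records = StringIO("""student_id,entered,returned_year2,completed_within_6yrs
1001,2015,yes,yes
1002,2015,yes,no
1003,2015,no,no
1004,2015,yes,yes
1005,2015,yes,yes
""")

rows = list(csv.DictReader(records))
cohort = len(rows)
retained = sum(r["returned_year2"] == "yes" for r in rows)
graduated = sum(r["completed_within_6yrs"] == "yes" for r in rows)

print(f"First-year retention rate: {100 * retained / cohort:.0f}%")
print(f"Six-year completion rate:  {100 * graduated / cohort:.0f}%")
```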


Interpreting a Table of Measures

If we examine Table 1 and look at the criterion oriented toward student success, we see that the OAA wants to give this area primary emphasis. That quality is parsed into the two most important attributes to be measured, namely, first-year retention rate and graduation rate/program completion rate. The means for collecting the supporting data is an institutional research report. To measure the OAA's performance in increasing the first-year retention rate, the instrument used is a chart of the number and percent of students returning for the following academic year, broken down by demographic background. The instrument used to measure the graduation rate/program completion rate is a chart of the number and percent of full-time and part-time students graduating or completing programs in two, three, four, five, or six years. Department chairs, OAA directors, and the Provost's office share responsibility for promoting and assessing student success on this campus. These responsibilities constitute a sizable portion of their job descriptions.

Concluding Thoughts

Producing a table of measures is deceptively simple. However, if it is to have long-term value, its creators need to invest time uncovering what is distinctive about a program, how this is manifested in a small set of key attributes, when and under what conditions each of these attributes can be measured, and who should take responsibility for sustaining quality in each attribute. This can only occur by thoughtfully navigating each of the steps in the 1.5.2 Methodology for Designing a Program Assessment System. When we make the investment to faithfully follow the methodology and to organize the results in a table of measures, we can assure many program stakeholders that the things we choose to measure are the things that really matter most.

References

Burke, J. C. (Ed.). (2004). Achieving accountability in higher education: Balancing public, academic, and market demands. San Francisco: Jossey-Bass.

Hollowell, D., Middaugh, M. F., & Sibolski, E. (2006). Integrating higher education planning and assessment: A practical guide. Ann Arbor, MI: Society for College and University Planning.

Middle States Commission on Higher Education. (2002). Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Philadelphia: Author.

Middle States Commission on Higher Education. (2005). Assessing student learning and institutional effectiveness: Understanding Middle States expectations. Philadelphia: Author.

Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass.


Section 9
Documenting Program Quality
Faculty Guidebook: 1.5.7 Writing an Annual Assessment Report
Faculty Guidebook: 1.5.8 Assessing Program Assessment Systems



1.5.7 Writing an Annual Assessment Report


Faculty Development Series

by Kelli Parmley (Director of OBIA, University of Northern Colorado) and Daniel K. Apple (President and Founder, Pacific Crest)

An annual assessment report is a mechanism that can sustain attention to continuous quality improvement and demonstrate ones accountability for external audiences. Once a quality assessment system is designed, the challenge for program participants is getting started. This module claries the purposes and uses of an annual assessment report and identies a template for an annual assessment report. Purposes and Uses for an Annual Assessment Report Assessment systems have two chief purposes, improvement and evaluation. While the use of information for these two purposes requires a distinctly different mindset (4.1.2 Distinctions Between Assessment and Evaluation), the process of measuring, recording, and reporting of information is common to both assessment and evaluation. An annual assessment report is a framework for reporting information for both improvement and evaluation. In the case of assessment, reported information is used to provide constructive feedback for purposes of improvement. In the case of evaluation, performance is compared against a standard and the results are permanently and publicly recorded. An annual assessment report serves two additional and important purposes. Since the intended audiences are key program stakeholders, the report can be used to communicate with them about other processes such as budgeting, planning, recruiting, and fund raising. Note that in the case of an academic program, there might be multiple stakeholders such as students, faculty, alumni, and administrators. Secondly, since an assessment system should be regularly assessed to ensure continuous improvement (1.5.2 Methodology for Designing a Program Assessment System), the annual assessment report provides the evidence necessary for assessing the assessment system by providing direction for improvement. Components of an Annual Assessment Report The report needs to be written with the programs multiple audiences (stakeholders) in mind. Therefore, the physical appearance of the report should be of publication quality (e.g., color, glossy print) similar to other professional annual reports. Table 1 presents a template for an annual assessment report. The following provides greater descriptive detail of components found in the template. Packaging the Report Front Cover The front cover of the report should clearly identify the program title and the year, and it should list the names of the key program contributors (e.g., faculty). Visual imagery that captures the essence of the program and its key processes or products (e.g., pictures, graphics, colors) should compliment this information. Back Cover The inside back cover of the annual report should be used to list important activities and events that are planned for the subsequent year such as lecture series, brown bag lunches, and alumni events. The back of the cover should include important contact information for the program (e.g., address, email, phone numbers) and list additional program participants (e.g., student workers, part-time employees, volunteers). Providing Program Background Inside Front Cover The inside cover of the annual report provides important context for the report, and is an important tool for communicating with stakeholders. It should specify the information from the rst ve steps of the assessment system. 
Illustrating Continuous Quality Improvement (Interior Pages)

The interior pages should be printed front to back, with each side devoted to an area of quality (performance criteria) identified in the Table of Measures of an assessment system (1.5.2 Methodology for Designing a Program Assessment System). These should be prioritized such that the first and second most important performance areas are captured on the front and back of the first interior page. The subsequent pages should address each of the remaining performance areas. One suggestion is to devote a page to assessment itself; this provides an opportunity to identify overall progress in implementing and assessing an assessment system.


Design for a Page

The page for each performance area should be divided to include five elements, listed in the section Design for a Page of Table 1. By including top accomplishments, additional accomplishments, and efforts, those who are responsible for the program are able to convey a great deal of information about quality in that performance area, yet do so in a prioritized fashion. The top accomplishments should be significant contributions to performance in that area of the program. Each of these significant accomplishments should be thoroughly described based on the evidence gathered (e.g., the attributes, means for collecting the data, and instruments in the assessment system). For a program with a newly designed assessment system, strong evidence may not yet be available, but this does not prevent a program from writing the annual report; use whatever evidence is available to describe the strengths of the accomplishment and its contribution to the quality area.

For any particular performance area there are also additional accomplishments and efforts. These are less significant than the top accomplishments of the program, but they indicate contributions to quality in that area. Descriptions of additional accomplishments should be single sentences, while additional efforts should be stated briefly in phrases.

Lastly, each page should identify short-term and long-term activities and plans for improving the program. In this section, care should be taken to avoid evaluative language and to emphasize opportunities for improvement. Identifying these activities and plans on an annual basis provides a clear linkage from one year to the next. The short-term activities and plans should inform campus operational processes (e.g., budgeting and planning) and should be assessed in the subsequent year's annual report. The long-term activities and plans should inform strategic and program planning processes.

Concluding Thoughts

Producing a quality assessment report can be one of the most valuable activities of the year. It helps the program recount all of its accomplishments and provides the basis for direction in the next year. It provides clear documentation for obtaining resources internally through the budget process and helps to obtain resources externally through grants and development. An annual assessment report supports the strategic and program planning processes by clearly identifying long-term activities and plans. Lastly, accreditation and other evaluation processes become much easier because the work is done on an annual basis.

Table 1  Template for an Annual Assessment Report

Packaging the Report
  Front cover
    - Program title
    - Slogan/phrase with special meaning
    - Year
    - Key program contributors
    - Images that capture the essence of the program, key processes, and products
  Inside back cover
    - Schedule of activities and events of interest for the coming year
    - Ways to get involved (e.g., open houses, presentations, social activities, celebrations, symposiums)
  Back cover
    - Contact information (who to contact for informal requests)
    - List of program members

Providing Program Background
  Inside front cover (carefully crafted for the audience)
    - Program essence and scope
    - Key features (processes/systems)
    - Key assets (products)
    - Goals (current and future)

Illustrating Continuous Quality Improvement
  Interior pages (front to back)
    - First page (immediately following the front cover): most important performance area
    - Second page (back of first page): second most important performance area
    - Subsequent pages: one page for each of the remaining performance areas
    - Last page (before back cover): continuous quality improvement (assessment)

Design for a Page
  Performance area (specified at the top of the page)
    1. Top two or three accomplishments: paragraph with strong evidence and value articulated
    2. Top 10 additional accomplishments: single sentences
    3. Additional efforts
    4. Planned activities and improvements for next year
    5. Strategic plans for the next five years


1.5.8 Assessing Program Assessment Systems


Faculty Development Series

by Kelli Parmley (Director of OBIA, University of Northern Colorado) and Daniel K. Apple (President and Founder, Pacific Crest)

The practice of continuous improvement applies not only to program performance, but also to the assessment systems that are used to assess programs. Assessment systems that are efficient and current are less time consuming to employ and are more likely to yield reliable data. As strategic planning processes shape institutional vision, mission, and priorities, the assessment systems by which an institution's programs gauge performance and direct improvement should stay aligned with these goals. It is therefore crucial to review assessment systems annually with the goal of continuously improving the process. This module identifies the characteristics of a quality assessment system, provides a tool that assessors can use in assessing a program assessment system, and describes how to use feedback effectively to develop an action plan for improving the assessment system.

What Makes a Program Assessment System a Quality Assessment System?

The process of assessing an assessment system begins with the same approach used to assess at other organizational levels or in other contexts: it involves a mindset that is not focused on the actual level of quality, only on how to improve it (4.1.2 Distinctions Between Assessment and Evaluation). Regular assessment of a program's assessment system, to ensure currency and alignment with institutional goals, is also consistent with the assessment standards of regional (Middle States Commission on Higher Education, 2002; Higher Learning Commission, 2001) and many professional accrediting bodies (Accreditation Board for Engineering and Technology, 2002).

Table 1 presents a tool for assessing a program assessment system. The columns in the table are structured for the assessor to provide strengths, areas for improvement, and insights for each element of the system. The rows in the table identify key criteria to use for assessing quality (4.1.9 SII Method for Assessment Reporting).

Essence Statement  In contrast to an inspirational mission or vision statement, which is often lengthy, an essence statement should provide an immediate sense of the core values of the current program. It should be concise, yet comprehensive, and be stated in a complete sentence.

Scope  The scope of a program is anchored in present performance and should clearly articulate the core of the program and its boundaries. Statements about what the program is not identify what is outside the scope of the program and why gray areas occur.

Current and Future Goals  The top five current and future goals should represent a three-year to five-year timeframe for the program and be specific enough to indicate direction and magnitude.
A teaching and learning center might identify as a goal collaborative initiatives among faculty within and across disciplines and schools. However, feedback might suggest that the magnitude must be clarified: will the teaching and learning center be maintaining its level of offerings or increasing them?

Processes and Systems  The top five processes and systems of a program (e.g., curriculum design) should contribute to the accomplishment of the current and future goals and should produce the program's products, assets, and results. The distinction between the two is important for determining the performance criteria, which may be process-oriented or product-oriented.

Performance Criteria  It should be clear that the performance criteria, when present in a program, will produce quality. Consider this example: "Faculty provide proactive and developmentally based advising that is centered on students' needs within a systematic framework." This performance criterion provides a concise statement in a specified context that is understood and valued by multiple stakeholders in that program.

Attributes  Attributes (measurable characteristics) provide the means for differentiating levels of performance for each criterion. There should be no more than three for each criterion, and they should be measurable and significant. Examples of attributes for the advising performance criterion include timely interaction with students, students graduate within four or five years, and students can effectively create a semester course schedule.

Weights  The attributes should be prioritized by the weights assigned to each. The weights should indicate the significance of each attribute to the overall program performance. In this regard, they should add up to one hundred percent, but no single factor should account for less than five percent (an illustrative check of these two rules appears below).

Means for Collecting Data  The means for collecting the data (e.g., portfolio, survey) should be a reasonable, cost-effective venue for collecting the data in a timely manner.
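For programs that keep their attribute weights in a spreadsheet or script, the two weighting rules above are easy to check automatically. The following minimal sketch is illustrative only and is not part of the Faculty Guidebook module; the attribute names and weights are hypothetical examples.

```python
# Illustrative check of the weighting rules described above:
# weights should sum to 100% and no single attribute should fall below 5%.
# Attribute names and values are hypothetical, not drawn from any actual program.

def check_weights(weights):
    """Return a list of problems found in an attribute-weight table."""
    problems = []
    total = sum(weights.values())
    if abs(total - 100.0) > 0.01:
        problems.append(f"Weights sum to {total}%, not 100%.")
    for attribute, weight in weights.items():
        if weight < 5.0:
            problems.append(f"'{attribute}' carries only {weight}%; fold it into another attribute.")
    return problems

advising_weights = {
    "Timely interaction with students": 40.0,
    "Students graduate within four or five years": 35.0,
    "Students can effectively create a semester course schedule": 25.0,
}

issues = check_weights(advising_weights)
print(issues or "Weights check out: they sum to 100% and none is below 5%.")
```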


Instrument  An instrument is used to measure the data that is collected. For example, a survey may be the means for collecting the data, but a satisfaction index would be the instrument for measuring satisfaction. Similarly, a standardized test may be the vehicle for gathering data on student learning, but test scores are the instrument used to measure knowledge. Instruments should be appropriate, valid, reliable, and accurate.

Benchmarks  Benchmarks identify the current level of performance of a program, while future targets identify the level of performance the program is striving for. Future targets should be clearly related to the performance criteria. They should be based on performance (as compared to effort) and be attainable, yet challenging.

Accountability  A specific individual (as opposed to a job title) is assigned responsibility for each factor. Responsibility should be distributed among program participants.

Using the Tool Effectively

The tool for assessing an assessment system is the means by which an assessor and an assessee increase the quality of an assessment system. The tool provides a framework for structuring the feedback, for both the assessor and the assessee. The following tips suggest ways to use the tool more effectively.

1. An interdisciplinary assessment review committee of five to fifteen people (from various disciplines across campus) should be established to assess a program assessment system.
2. Set up a time schedule in which assessment systems are reviewed on a monthly basis, with the full cycle of assessing program assessment systems occurring over a three-year period.
3. The program should identify the areas in which it would most like feedback.
4. Based on the feedback priorities identified by the program, the committee should use smaller review teams of two or three people to assess. While providing additional (perhaps contrasting) feedback, be careful not to send mixed messages to the assessee.
5. Before beginning the review, the team should read and analyze the criteria for assessing a program assessment system.
6. The review team should strive for feedback that is of high quality, not high quantity.
7. Be careful not to use evaluative statements; there are no standards (good or bad), because the emphasis is on how to improve.
8. Provide an opportunity to complement written feedback with a face-to-face report.

9. The form should be used as an electronic template in which feedback is recorded directly into the chart (versus hand-written notes on a paper document).
10. The feedback should be very explicit and directive about how to improve; it should give direction and assistance, not platitudes.

Turning Feedback into an Annual Plan of Action

Using the feedback provided by the review committee, the program participants can establish a course of action. Two to three percent of the program's resources should be explicitly set aside for purposes of improvement. Within those parameters, the program participants need to scope the changes and address the basic question: What is a reasonable amount of change to make based on what was learned from the feedback?

1. Prioritize and choose the changes to be made to the assessment system based on which changes will leverage the most improvement.
2. Specify a detailed list of activities that must take place in order for a proposed change to occur.
3. The activities should be accompanied by dates for completion, and an individual should be assigned responsibility for carrying out each activity.
4. Program participants should be updated on a regular basis, perhaps with a standing agenda item on the regular program meeting schedule.
5. The changes made to an assessment system should be included in the program's annual reporting process as evidence of improvement.

Concluding Thoughts

An assessment system must be healthy, dynamic, and continually advanced in order to help keep the program's strategic plan aligned with the institution's strategic plan. Therefore, the program should assess its program assessment system once a year and invest two to three percent of its annual program resources in implementing program assessment and improvements. A long-term result of assessing the assessment system is greater buy-in from program participants.

References

Accreditation Board for Engineering and Technology. (2002). <http://www.abet.org>

Higher Learning Commission. (2001). <http://www.ncahigherlearningcommission.org>

Middle States Commission on Higher Education. (2002). Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Philadelphia: Author.


Table 1  Tool for Assessing an Assessment System
Columns: Criteria | Strengths | Improvements | Insights

1. Essence statement
   - Represents all of its stakeholders
   - Is comprehensive
   - Is concise
   - Values are identified and appropriate
2. Scope
   - Clarifies what is outside the program's core
   - Clarifies what the program does do
   - Differentiates core aspects (current) from future aspects
   - Clarifies why misconceptions can occur (gray areas)
3. Top five current and future goals of the program
   - Are specific
   - Are measurable
   - Are clear in direction and magnitude
   - Represent a three-to-five-year time frame
4. Processes and systems
   - Are descriptive
   - Identify key processes
   - Provide intent, direction, and connections
5. Products, assets, and results
   - Are explicit
   - Are descriptive
   - Are important
   - Are obvious
6. Assessment report
   - Documents accomplishments
   - Provides strong evidence
   - Provides clear action plans
   - Documents the past year's improvements made to the assessment system
   - Presents a professional image


Table 1, continued  Tool for Assessing an Assessment System
Columns: Criteria | Strengths | Improvements | Insights

7. Performance criteria
   - Are concise
   - Are free from jargon; understandable by multiple audiences
   - Provide context
   - Produce quality
   - Are valued by multiple stakeholders
8. Attributes (measurable characteristics)
   - Are not too small
   - Are not too large
   - Are single dimensional
   - Are measurable
   - Contain appropriate units
9. Weights
   - Sum to one hundred
   - Factors less than five are not included
   - Are assigned an appropriate value
   - Are aligned to a factor
10. Means for collecting data
   - Are cost-effective
   - Are timely
   - Are obvious
   - Are reasonable
   - Capture performance data
11. Instruments
   - Are reliable
   - Are appropriate
   - Are valid
   - Are accurate
12. Benchmarks and future standards
   - Are related to criteria and factors
   - Are based on performance as compared to effort
   - Define the level of success used for evaluation
   - Are benchmarked
   - Are challenging
   - Are attainable
13. Accountability
   - Is assigned to a specific individual (not just a title)
   - Is assigned to all internal stakeholders
   - Is distributed appropriately


Appendix A
Models of Program Assessment
University of the District of Columbia Institutional Measures of Effectiveness, January 2005 ....................................... (Page 77)
Office of the Provost and Vice President for Academic Affairs Program Assessment Institute Work .................................. (Page 101)
Program Assessment System for a Comprehensive Developmental Education Program ............................................... (Page 109)



University of the District of Columbia Institutional Measures of Effectiveness January 2005


Essence

The University of the District of Columbia (the University) is the sole public source for accessible, inclusive, affordable, and comprehensive public higher education in the District of Columbia and provides additional life-long learning opportunities. The University delivers quality instruction and uses student-centered approaches to empower and benefit both individuals and its local communities. The University, an urban land-grant institution, is a very diverse community, a gateway to the world, and a significant investment engine for the District of Columbia.

Stakeholders

Internal Stakeholders: Students; Faculty; Alumni; Board of Trustees; Staff

Governmental Stakeholders: Mayor and Executive Branches; Government Employees of the District Agencies; City Council; Congress; Federal Cabinet Offices such as the Department of Education, Department of Agriculture, Department of State, Department of Commerce, and Department of Health and Human Services

Educational Stakeholders: District of Columbia Public Schools; District of Columbia Private Schools; Surrounding K-12 Districts and Private Schools; Post Graduate Programs; Regional and Professional Accreditation Agencies; Consortium of Universities of the Washington Metropolitan Area; External Empowerment Programs

Economic Stakeholders: Business and Industry; Hospitality Industry; International Communities and Governments; Granting Agencies; Donors

Health and Human Services Stakeholders: Hospitals; Clinics; Long-term Care Facilities

Community Stakeholders: DC Residents; Surrounding Local Neighborhood Community; Civic and Faith-Based Organizations; Metropolitan Area; International Community; Media

Scope

What we are

Unique Characteristics
- Urban land-grant institution identifying and meeting the needs of D.C. residents, institutions, and communities*
- Urban university / commuter school with Metro access
- Tenant organization; land owned by the Federal Government
- Designated by an Act of Congress as a Historically Black College and University (HBCU)**
- Very affordable, with high educational value
- Predominantly Black, multicultural, and international
- Accessible and in a prime location

Academic Characteristics
- Classified as a Carnegie comprehensive university offering graduate and professional programs
- Open admissions
- Post-secondary vocational opportunities
- Professional programs such as Law, Nursing and Allied Health, Engineering, Social Work, and Architecture
- Graduate programs such as Education, Business, Speech Language Pathology, Counseling, and Public Administration
- More than a traditional four-year college

Philosophical Characteristics
- Empowering students who are underprepared (educational leveling)
- Primarily a teaching institution
- Life-long learning institution
- Responsive to the stakeholders' needs

Organizational Characteristics
- Funded by the District of Columbia, tuition, and other sources
- The State University System for the District of Columbia
- Product of the consolidation of three predecessor institutions
- Member of the Middle States Commission on Higher Education and other national and regional professional accreditation bodies
- Provides accessible and affordable public facilities consistent with our land-grant mission

* Public Law 89-791, signed by President Lyndon B. Johnson in 1966, established Federal City College (one of the predecessor institutions) as an urban land-grant college.

** So designated by the 106th Congress in 1999 by H.R. 485, which stated: "Under existing law, UDC is, by definition, a Historically Black University that qualifies for Historically Black Colleges and Universities (HBCU) funds because it meets the three salient requirements: (1) UDC was created from colleges established before 1964; (2) it served primarily black people; and (3) it is an accredited institution."

UDC's Ongoing Goals for Each of the Next Three Years (2004-2007)

Academics
- Expand relevant and state-of-the-art learning and instructional environments that support student-centered learning
- Assess, improve, document, and promote the quality of academic programs
- Grow student enrollment by increasing admissions, retention, and completion rates for the university's wide range of programs
- Recruit, enroll, and retain more of the best and brightest of the District's college students as students at the University

Student Development
- Improve student life and support services for a more holistic collegiate experience
- Help new students to become college ready
- Create opportunities for students to discover their potential, nurture its development, and realize it fully
- Keep talented District of Columbia college students at home
- Improve and increase student leadership opportunities to further develop civic engagement, community service, and ethical character
- Strengthen the career development needs of students and increase job placement opportunities for students
- Develop a Lyceum series focusing on public policy and service to stimulate and enhance intellectual and cultural development and support civic engagement

Service to the Community
- Strengthen the mission of empowering K-12 students to go to college
- Expand, across all units, efforts to address the needs of the community to effectively fulfill the importance of the urban land-grant status and obligations
- Strengthen the service orientation and mindset through more service learning and institutional volunteerism
- Acquire District and other funds to research and design solutions for citizen services that increase effectiveness and efficient use of government resources

Administration and Institutional Effectiveness
- Build and sustain a solid leadership and management team
- Conduct an annually updated needs analysis of the stakeholders so the university can respond effectively
- Develop non-DC-government revenue sources and improve DC-funded resources to advance and maintain University infrastructure and critical programs, advance quality, and address new needs
- Advance the resources for operations such as telecommunications, information technology, and infrastructure
- Improve the image of the university with all its stakeholders through improved public relations, website, information systems, and relationships with its stakeholders, so as to increase visibility and respect
- Develop a program to identify, recruit, and train trustee board members and assess board performance to achieve continuous improvement
- Develop ongoing processes for cultivating and strengthening university-governmental relations


Goals to Be Accomplished by 2007

Teaching and Learning
- Achieve greater recognition for our empowerment of under-prepared students, leading to successful completion of their academic plans
- Advance the quality of teaching/learning through an evolving set of cultural values and practices that embrace stronger pedagogical approaches, especially active learning, student-centered learning environments, innovative instructional design, effective use of technology, and applied internships and disciplinary practice
- Increase the standards for performance throughout the university, especially the expectations for student achievement, and have in place more rewards for outstanding performance
- Become the major source for delivery of professional and workforce development for the District's employees and federal employees
- Develop a reward system for faculty based on teaching and mentoring that measures, documents, and rewards stellar performance based upon student success as well as exceptional performance in obtaining grants and producing quality scholarship
- Establish and maintain additional facilities and support services (including a Student Center) which strengthen and support holistic student development and teaching and learning

Scholarship
- Improve significantly the national reputation of the University through its academic programs, especially through its faculty's accomplishments in teaching and research

Administration
- Strengthen management processes, systems, and structures to leverage what is available in the marketplace, including what is available from the District government's systems and processes, for the specific needs of the university environment
- Advance the planning, budgeting, decision making, tracking, measurement, assessment, and evaluation of the university's critical resources to increase the productivity, effectiveness, and efficiency of the University's administrative units and services
- Develop a culture of continuous improvement and quality performance

Institutional Advancement
- Fund the University Master Plan for capital improvements
- Set annual targets to increase external funding
- Create a pool of donors that enables the University to enhance its discretionary funds and allows the President, Vice Presidents, and Deans to meet specially identified needs that fulfill the mission of the University
- Create a cohort of advocates, coordinated by a key person, that produces effective resource development by the many agencies that support higher education

Developing and Promoting Quality
- Increase the understanding that the University is the District of Columbia's university system for addressing the wide range of public missions that each state expects and funds as its obligation
- Advance new relationships and partnerships by providing meaningful services and assistance to key stakeholders
- Produce a community where all stakeholders enjoy performing on a daily basis with professional pride, have a sense of valued contribution, and have an urge for continuous quality improvement
- Provide improved and responsive customer service to internal and external stakeholders
- Maintain program accreditations and seek accreditation for other academic programs as appropriate

Institutional Research, Assessment and Planning
- Establish and staff the Office of Institutional Research, Assessment and Planning
- Develop an assessment system and reporting cycle for all University units and programs
- Establish appropriate databases for academic and institutional planning

The University's Key Products, Assets, Accomplishments, and Cultural Values
- Full regional, state, and professional accreditations
- Engaged, competent, and intellectually active faculty and staff
- Visionary and strategic leadership team
- Location
- Capital plant
- Graduates who are competitive for graduate school and employment
- Diversity of the students, faculty, and staff
- Richness of the diverse experiences of the administration, faculty, and staff
- Commitment to the University's mission
- Faculty, staff, and students actively engaged in the District's growth and development through a variety of partnerships
- Expertise in working with students with a wide range of abilities, including helping academically underprepared students develop college readiness

The University's Key Processes, Systems, Structures, and Policies

Faculty tenure process: A well-defined set of criteria for faculty performance, a strong mentoring system for the first three years, an assessment system to provide annual feedback, an effective mid-tenure review, and fair evaluation of performance against the standards.

Merit and performance reward system: An annual process that has all professionals produce a professional plan for the year, collects performance data, and produces ongoing self-assessments and an annual assessment report that is reviewed and used to mentor and to provide evidence for performance-based rewards.

Robust financial support system: A strong planning process that monitors progress and has the capability to analyze and model ("what if") and thus improve the financial operations of the university.

On-line registration: A standard component of the University's integrated student information management system, providing students, faculty, and staff with readily available processes for registering students according to their chosen or probable degree program.

An Integrated Assessment System: A robust system for periodic assessment of programs, systems, and processes.

Scheduling of Facilities and Rooms: A process for scheduling events, reserving space, and providing appropriate support and equipment.

Professional Development: A process for developing professionalism in administrators, faculty, and staff, promoting academic and administrative leadership, and increasing cultural sensitivity.


Promoting the University's Image: An aggressive, proactive plan that communicates the strengths and contributions of the University and enhances its image regionally and nationally.

The planning and decision-making process: A process for allocating and reallocating limited resources that produces immediate results and also positions the University for strategic future internal and external investments.

Enrollment Management: A program for the recruitment, enrollment, and retention of students with specific benchmarks and targets.

Key administrative processes such as procurement, personnel, and payroll: Key administrative processes, such as contracting and procurement, and personnel and payroll, are supported using information systems provided by the District of Columbia. The University has worked with the District leadership to alter a few operational sub-processes to support the unique needs of the academy. An example is the establishment of a University-only Office of Contracting and Procurement (OCP) commodity cluster that solely supports the University; other areas of the District are supported by multiple OCP commodity clusters focused on the product or service being purchased. Personnel and payroll sub-processes have also been adjusted to be performed by University-based human resources personnel who are familiar with the requirements involved in hiring adjunct faculty.

Strategic academic planning: An overall academic planning process which indicates academic priorities, sets the future direction of academic programs and units, and develops and implements an Assessment Plan and Process for the Office of Academic Affairs.

Student information management system: An integrated, robust software system which supports admissions, student records, degree program management, classroom utilization planning, registration, financial aid, student accounts, alumni, and a broad range of other student services which are common to any university. The system meets U.S. Department of Education requirements regarding record keeping and student progress tracking to assure financial aid program compliance.


Top Ten Performance Criteria for the University

Student centered: The whole university community collaborates and focuses on meeting the specific needs of each student through a unique developmental process which produces self-directed learners capable of charting their own future development.

Success oriented: All individuals clearly know what they and others expect, plan effectively, are driven to exceed expectations, and desire the feelings that come from accomplishing these challenging outcomes.

Customer focused: An organization that engages its clients through effective communication, analysis of their needs, and the provision of effective services that exceed their expectations as agreed upon with the clients.

High performing: Self-reflective individuals who set high standards, exceed expectations, are committed to successful and innovative outcomes, and have superior performance through a well-developed skill set and dedication.

Service oriented: The University community members consistently reach out and help those in need by volunteering, mentoring, and fostering civic responsibility for enhancing the quality of life in our urban community.

Economically efficient: The institution builds the future upon strong economic modeling using return on investment, makes the tough decisions, has strong strategic and operational planning, and adapts to the economic needs of the extended regional community.

Innovative: Professionals within the University community constantly seek out the best practices within their disciplines, faculty research their teaching and learning practices, and investments are made in professional and faculty development, leading to effective and appropriate selection and implementation of state-of-the-art technology and best practices.

Comprehensive: The University fulfills the mission of a university system by providing a foundational program for the under-prepared learner, vocational opportunities, a range of liberal arts programs, an extensive set of professional programs, teacher preparation, and quality key graduate programs for the DC professional community.

Community oriented: The University is a citizen of the local neighborhood as well as the metro region and a leader for advancing the individual and strengthening community; it seeks opportunities for strengthening educational experiences across the District and offers a full range of programs on campus advancing cultural experiences, community efforts, and professional development in fulfillment of its land-grant mission.

Renowned: The University is considered one of the strongest Historically Black Colleges and Universities (HBCUs), with expertise in the empowerment of the individual and research in regionally critical areas; it is viewed as the first choice for the DC resident, has international prestige, and is valued as the place to invest developmental dollars.

Assessing the Last Year's Performance: 2003-04


Performance Criteria | Strengths: why & how
Establishment and engagement of the core student leadership in collaborative teaming for advancing the quality of the University: established a student leadership institute, student ambassadors, town hall meetings, and a school pride week program; students participate in some of the advocacy efforts; student leadership retreat. The number of professional development programs and special projects to shift the university culture to a more student-centered model. Engagement with Pacific Crest in the areas of Process Education (PE): teaching institute, course design, program assessment, and Learning to Learn Camps. Student participation in the Self-Study, University Scholars Series (USS), and other special presentations. Advance the number of students receiving certificates, baccalaureate, graduate, and professional degrees and certification, and other life-long learning opportunities. Retains students who may not initially be college ready so they become successful college students. Some faculty members have received significant federal grants and collaboration opportunities with a major research university (Georgetown University Medical Center) and federal agencies (NASA).

Improvements: clarification of the issue and the action plan (how) | Insights: meaningful learning
Advance the best practices of a student-centered learning environment: involve more faculty in the planning of professional development and direction, and make that a special charge for the associate provost to foster and stimulate a culture and practice of engaging students inside and outside of formal courses. Engage students more in shared governance; add them to major committees and the senate. Hold systematic focus groups and open sharing sessions. Create a more interactive means for collecting effective feedback from the students. Students are a very powerful resource; when informed and engaged, they become a critical resource for advancing the individual as well as the university. They are resourceful and have the energy, time, skills, and capacity to do projects, lead efforts, and contribute to ongoing efforts. A community of empowered learners provides amazing peer models for raising the level of expectations throughout the student population.

Student Centered

Success Oriented

Increase effective mentoring, advising, and counseling support and resources. Institute strong Foundations of Learning courses in the first year; integrate process learning throughout first-year core courses; hold a summer Learning to Learn Camp. Institute an incentive system which allows faculty, their school/college, the Office of Sponsored Research, and the Office of Academic Affairs to receive a percentage of indirect costs. Offer opportunities for research presentations and publicize research efforts. Establish funding to offer faculty research grants to increase publications.

Commitment to targets requires strategic planning of interim measures to achieve intended results.

Retention is a by-product of successful experiences, and the probability of success is increased with appropriate support.


Performance Criteria

Strengths why & how


Several community forums which provide customer access to executive level staff through the town hall meeting format.

Improvements: clarification of the issue and the action plan (how) | Insights: meaningful learning
Conduct a formal standardized customer satisfaction survey. Improve mechanisms to provide a better flow of information. Establish a customer service committee to brainstorm new strategies to increase customer satisfaction while incorporating best practices in customer service. Include improving customer service as part of staff development activities institution-wide. A large majority of customers appear very satisfied with their experience at UDC.

Customer Focused

Students and alumni leadership serve on the Board of Trustees. Growing emphasis on the promotion of a student-centered environment and the delivery of quality services to the entire community.

High Performing

Substantial increase in the number of accredited programs (e.g., Electrical, Mechanical, and Civil Engineering; Nursing; Dietetics) and continued efforts to meet accreditation standards for other professional programs (e.g., successful site visits of the Education and Law programs). Pending applications for University re-accreditation by the Middle States Commission on Higher Education and re-accreditation of the Speech Language Pathology, Social Work, and Mortuary Science programs. Recipient of highly competitive NIH, NASA, and other federal grant awards, and co-grantee and collaborator with Georgetown University on scientific research programs.

Increase the level of scholarship among the university community helping mentor the research opportunities, strengthening research skills, and advancing the proposal writing and publishing skills of the faculty in both disciplinary as well teaching and learning areas. Advance the role of faculty and staff in carrying out the landgrant mission by developing structured opportunities involving collaborative projects with internal and external faculty, researchers, and citizens of the District of Columbia.

Leverage scarce resources with creativity, innovation and synergistic decision-making There are substantial reasons (quality, affordability, and mission) for external agencies and colleges to collaborate with the University to address urban issues, and efforts should be made to expand such relationships.


Performance Criteria

Strengths why & how


The University participated in a wide range of activities over the last year e.g., community fairs, economic and workforce development conferences, Adult Literacy programs, Learning to Learn Camp, Saturday Academy, academic and sports camps, TRIO, Institute of Gerontology, Master Gardening, pro bono legal services, Speech and Hearing Clinic, Youth Development Program.

Improvements: clarification of the issue and the action plan (how) | Insights: meaningful learning
Need for better knowledge of what we are doing and why. Need to systematize the various efforts so as to capture both opportunities and results, so that allocation of resources can be better accomplished in the future. Design and develop a service-oriented program initiative by providing opportunities for becoming a place for major service activities and programs. Some service-oriented activities showcase the University and its attributes in the best light and can lead to a positive impact on its future; understanding this allows for selectively leveraging opportunities and results back into other efforts. We are doing more than we think and know we are doing. As we better know and promote what we are doing, and the quality of what we are doing, many of the factors causing poorer images shift to produce positive images. The University needs to expand its resources to supplement the budget provided by the District through increased public and private grant revenues and diversified revenue streams; this is necessary in order to reach the level of resources both needed and desired by the University to fulfill its vision and mission.

Service Oriented

Economically Efficient

Continued to meet the city's management requirements and had a clean audit. A set of management practices such as effective accounting processes and controls. The closely developed working relationship with the District's governmental leadership, as well as fiscal responsibility, has produced confidence and professional relationships that have resulted in increased funding.

Increase the effectiveness of financial management reporting to the internal departments within the university. Find a set of opportunities to shift from paper reporting to electronic resources with shared access areas. Increase the user-friendly nature of the administrative systems; the city will be installing new systems, and we will provide training and coaching on their benefits and use to solve current access issues. Maximize the District of Columbia's resources through the provision of cost-effective training and professional development programs and the development of research targeted to address and resolve the District's economic and social problems and challenges.


Performance Criteria

Strengths why & how


Engagement of the Office of the Chief Technology Officer (OCTO) to better understand the University, and for the University to understand the range of services provided by OCTO, led to new capital dollars and a rebate of dollars from the Office of Property Management (OPM). Use of best practices in engaging Faculty Union leaders in a mutually respectful relationship by the administration led to resolution of a number of historical grievances and a new contract, ten years overdue. On-going efforts to cultivate and maintain positive management-labor relations.

Improvements: clarification of the issue and the action plan (how) | Insights: meaningful learning
Continue to build our internal IT capacity and analysis of total cost, and then benchmark against other peer universities to determine the best approach. For faculty union issues, restructure the administration's decision-making partnership to be led by Academic Affairs management, with the University General Counsel providing legal guidance and advice. For staff union issues, restructure accordingly, with staff management leading the decision-making on union issues and the University General Counsel providing legal guidance and advice. Continue to work the legal backlog to ensure that decisions are made with a view to cost/benefit. If you don't understand the market cost of the services you receive, it is easy to believe that you are getting a good deal, which may not be so good for the University. Ill feelings can be overcome if acknowledged and then honestly addressed. Capitalize on our location by tying our resources to resources in the Capital area.

Innovative

Engagement of the District Chief Financial Officer (CFO) by the President/Administration to gain special funding to facilitate faculty renewal through an enhanced retirement plan. Engagement of OPM and the Office of Contracting and Procurement (OCP) on a $90,000+ bathroom capital project to bid the project in compliance with DC Supply Schedule (DCSS) requirements without having to manage six different contracts, by having only DCSS contractors bid the total job. Increase the quality and quantity of students matriculating at the University by developing a pipeline from Associate degree (AA) programs and innovative high school programs such as Early and Middle College, HISKIP, and TRIO.


Performance Criteria

Strengths why & how


Broad range of undergraduate, graduate, and professional programs. Advanced the number of professional development and training opportunities offered by the university. Advanced processes and systems in sponsored research programs to meet external and granting agencies' requirements and expectations. Established required committees, designed web sites and informational programs, increased sponsored research, and improved understanding of rules and requirements.

Improvements: clarification of the issue and the action plan (how)


More professional development programs: grant writing, scholarship, and designing research initiatives; look for ways to solidify the staffing that supports research efforts. Develop a system of rewards that promotes, through incentives, faculty accomplishments in research.

Insights: meaningful learning


A robust sponsored research program requires a change in the expectations in the existing faculty as well as the recruitment of a new breed of researchers with credentials.

Comprehensive

Programs are developed to meet individuals' developmental, academic, and career needs.

Community Oriented

Publicize community offerings more

Provides experiences that build on the community's diverse ethnic, geographic, intergenerational, and multicultural character. Stable, seasoned faculty and staff with a long-standing commitment to the mission of the University. Outstanding programs in such areas as the Public Interest Law Program, Cancer Biology Research and Outreach, Nursing and Allied Health, Jazz Studies, and the Institute of Gerontology. Admits at-risk students and transforms them into successful college graduates. Solicit ideas and needs from the community. Offer community programs that enhance day-to-day life for the community. Obtain full accreditation for selected academic programs, e.g., Law and Education. Provide greater social, psychological, and academic support for students. Provide greater incentives and support for faculty research. Strengthen the number of faculty and improve facilities.

Effective community oriented programs enhance community perception and drive UDC to be the education provider of choice

Renowned

Have more positives than thought and have to do a better job of highlighting our strengths and accomplishments. Need to enhance the public image by articulating and promoting our successes to the external public.


The Ranked Set of Institutional Measures
1. Graduation rate
2. Student retention rate
3. Assessment of student needs and learning goals
4. Job placement
5. Teaching/learning practices
6. Number of accredited professional programs
7. Servicing the needs of students, faculty, staff, and stakeholders
8. Number of sponsored research programs
9. Graduate school placement
10. Effectiveness of initial placement of students
11. Number of opportunities for certification and degrees
12. Extent to which programs meet the set of community needs

The quality of the University is based on the performance of the various units of the University in alignment with the University's mission and vision. Their quality will be measured by a set of institutional measures of effectiveness within their domains (e.g., public image, communication with the public, alumni and citizen participation in institutional advancement, customer service improvements, student services, planning, and fiscal responsibility).


PC (Performance Criteria) vs. Measures

This matrix cross-references the ten performance criteria against the twelve institutional measures, marking with an X each measure that provides evidence for a given criterion.

Performance criteria (rows): Student Centered; Success Oriented; Customer Focused; High Performing; Service Oriented; Economically Efficient; Innovative; Comprehensive; Community Oriented; Renowned

Institutional measures (columns): Graduation rate; Student retention rate; Assessment of student needs and learning goals; Job placement; Teaching and learning practices; Number of accredited professional programs; Servicing the needs of students, faculty, staff, and stakeholders; Number of sponsored research programs; Graduate school placement; Effectiveness of initial placement of students; Number of opportunities for certification and degrees; Extent to which programs meet the set of community needs
Table of Measures

Columns in the original table: Measures | Means | Instruments | Baseline Performance 2003/2004 | Goal Performance 2006/2007 | Accountability. The baseline and goal figures were non-researched, plugged-in estimates at the time the table was drafted.

Measures, means, and instruments:
- Graduation rate; means: Registrar; see Instrument 1 (4-year regular, 4-year developmental, 2-year, 4-year transfer)
- Student retention rate; means: Registrar; see Instrument 2 (1st year, ongoing yearly completion)
- Assessment of student needs and learning goals; means: learning plans through a new Foundations of Learning course (percentage of learning plans meeting specifications); see Instrument 3
- Job placement; means: career planning and survey; see Instrument 4
- Teaching and learning practices; means: teacher portfolios; see Instrument 5
- Number of accredited professional programs; means: Provost's annual assessment report; see Instrument 6
- Servicing the needs of students, faculty, staff, and stakeholders; means: FSSE, NSSE, and an external survey of community satisfaction; see Instrument 7
- Number of sponsored research programs; means: Sponsored Research Program assessment report; see Instrument 8
- Graduate school placement; means: alumni survey; see Instrument 9
- Effectiveness of initial placement of students; means: annual analysis of 1st-year retention; see Instrument 10
- Number of opportunities for certification and degrees; means: Provost's annual assessment report; see Instrument 11
- Extent to which programs meet the set of community needs; means: external survey of community satisfaction; see Instrument 12

Accountability for these measures is distributed among the Deans, the Associate Provost, the Associate Provost for Enrollment Management, the Outreach Dean, the Dean of Math and Engineering, an as-yet-unnamed director of the first-year experience, each department, the Provost, the Vice President for Student Affairs, the Vice President for Research, and the President.

List and Definitions of Instruments


Instrument 1: Graduation Rate (four different measures of graduation rates for different UDC missions: first-time, full-time undergraduate students; community college students; the developmental education program; and the transfer program)

1. 4-year programs without remediation: the percentage of students in each year's cohort (entering fall or spring terms) who matriculate full-time in an undergraduate degree program, who entered without remediation requirements, and who graduate within a 6-year period (non-transfer students, i.e., fewer than 15 credit hours of transfer).
2. 4-year programs with remediation requirements: the percentage of students in each year's cohort (entering fall or spring terms) who matriculate full-time in an undergraduate degree program, who entered with remediation requirements, and who graduate within a 7-year period (non-transfer students, i.e., fewer than 15 credit hours of transfer).
3. 2-year programs: the percentage of students in each year's cohort (entering fall or spring terms) who matriculate full-time or part-time in a 2-year degree program, with or without remediation requirements, and who graduate within a 4-year period.
4. 4-year programs, transfer students: success of transfer students (students entering with at least 15 credit hours); the percentage of students in each year's cohort (entering fall or spring terms) who matriculate full-time in an undergraduate degree program and who graduate within a 6-year period.

Instrument 2: Retention Rate (1st-year retention and completion rate of courses)

1. 1st-year retention: the denominator includes full-time and first-time students who have fewer than 15 college credits coming in (whether enrolled fall or spring). The numerator counts every student who comes back during the next academic year, even if he/she skips either the first or second term (a minimal sketch of this calculation appears below).
2. Continual retention: students enrolled last year who did not graduate or transfer are in the denominator, and the numerator counts those students who enrolled in at least one term the following year.
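To make the cohort logic of the 1st-year retention measure concrete, the sketch below applies the definition above: the denominator is full-time, first-time students with fewer than 15 incoming credits, and the numerator is those who enroll in any term of the next academic year. The record format and field names are assumptions for illustration only, not the University's actual data layout.

```python
# Minimal sketch of the first-year retention calculation defined in Instrument 2.
# Student records and field names are hypothetical; real data would come from the Registrar.

def first_year_retention(students):
    """Return the 1st-year retention rate for a list of student records (dicts)."""
    cohort = [s for s in students
              if s["full_time"] and s["first_time"] and s["transfer_credits"] < 15]
    if not cohort:
        return 0.0
    returned = sum(1 for s in cohort if s["enrolled_next_year"])  # any term counts
    return returned / len(cohort)

sample = [
    {"full_time": True,  "first_time": True, "transfer_credits": 0, "enrolled_next_year": True},
    {"full_time": True,  "first_time": True, "transfer_credits": 6, "enrolled_next_year": False},
    {"full_time": False, "first_time": True, "transfer_credits": 0, "enrolled_next_year": True},
]

print(f"First-year retention: {first_year_retention(sample):.0%}")  # 50% for this sample
```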

Instrument 3: Learning Plans

Using a designed rubric, the strength of a learning plan will be reviewed to see whether it meets the defined specifications for learning plan development. This applies to all full-time undergraduates, whether in 2-year or 4-year programs. The measure will be the percentage of students matriculated within a degree program who have a documented learning plan by the end of their first year. Part-time and graduate students will not be included, but transfer students will be included.


Instrument 4: Job Placement

Graduating seniors are inventoried through a survey conducted by the alumni office to determine current professional activity: whether they have accepted a professional job in their field, obtained relevant employment, obtained employment not utilizing the major field of study, entered graduate school, taken a break from school or work, or are unemployed. The measure will be the percentage of students employed in their profession, a related profession, or a professional entry job (not including part-time or low-paid entry jobs in the service area, like McDonald's), excluding the students who go to graduate school.

Instrument 5: Teaching and Learning Practices

Faculty will submit, through their deans, a teaching portfolio every three years, with an annual assessment report documenting accomplishments and action plans for professional development and professional activities. The Provost's office will use a rubric to determine the level of best practices being exercised by each faculty member, and the annual measure will be the average across all full-time faculty using this rubric.

Instrument 6: Accredited Programs

At the end of each academic year, each professionally accredited program will be documented in a list and an official count will be recorded.

Instrument 7: Satisfaction of Key Stakeholders

There are established instruments that are very effective in measuring two of the key stakeholder groups. NSSE is the instrument used for measuring student satisfaction. It will be given to graduating seniors and to incoming students at the end of their first year, to measure satisfaction with the first-year experience and with the whole experience. There will also be a sampling of students who withdrew from the institution to gain perspective on their experience. An equivalent tool, FSSE, is available for faculty; it will be given on a three-year cycle for all faculty, and, in addition, first-year instructors' satisfaction will be captured annually after they complete their first year. Once every five years, a survey will be designed and a cross-sectional sampling of the community will be surveyed to determine the degree of satisfaction; an index will be developed using the survey to track changes over time.

Instrument 8: Sponsored Research Programs

The annual report of institutional research will identify all outside sponsored research. The measure weights the following: 40% total dollars newly acquired this year; 30% number of publications in acceptable journals and peer-reviewed conferences; 20% number of different research programs being funded; 10% dollars of proposals sent out during the year.

Index structure: (current year dollars brought in) / (baseline, the average of 2000-2004) x 40 + (current year publication total) / (baseline average of 2000-2004) x 30 + (currently funded programs) / (average of 2000-2004 funded programs) x 20 + (current year total dollars of proposals submitted) / (average of 2000-2004 proposals sent out, estimated) x 10.

Example, with a baseline of $5 million acquired, 15 publications, 40 funded programs, and $10 million in proposals: 6/5 x 40 + 20/15 x 30 + 35/40 x 20 + 15/10 x 10 = 120.5. A short calculation of this index is sketched below.
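The index arithmetic above translates directly into a short script. The sketch below simply reproduces the worked example (6/5 x 40 + 20/15 x 30 + 35/40 x 20 + 15/10 x 10 = 120.5); the baseline values are the 2000-2004 averages assumed in that example, and the current-year figures are illustrative.

```python
# Sketch of the sponsored-research index defined in Instrument 8.
# Weights and baselines follow the worked example above; current-year figures are illustrative.

def research_index(current, baseline):
    """Weighted index: 40% new dollars, 30% publications, 20% funded programs, 10% proposal dollars."""
    weights = {"dollars": 40, "publications": 30, "programs": 20, "proposals": 10}
    return sum(current[k] / baseline[k] * w for k, w in weights.items())

baseline = {"dollars": 5_000_000, "publications": 15, "programs": 40, "proposals": 10_000_000}
current  = {"dollars": 6_000_000, "publications": 20, "programs": 35, "proposals": 15_000_000}

print(round(research_index(current, baseline), 1))  # 120.5
```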


Instrument 9: Graduate School Placement

The measure here is the proportion of graduates who, 12 months out, have enrolled in a graduate program.

Instrument 10: Initial Orientation, Assessment, and Placement

1. Completion rate of courses: the denominator is fixed at week three of the term, after add/drop is finalized. The numerator counts grades of A, B, and C. This instrument focuses on a targeted set of gatekeeper courses: Math 101 General College Math 1, Math 102 General College Math 2, Math 105 Intermediate Algebra, Math 115 PreCalculus-Intensive, Math 151 Calculus I, Eng 111 English Composition 1, Eng 112 English Composition 2, and Eng 113 Technical Writing. The rate is calculated each term.
2. First-year satisfaction: relevant questions will be selected from the NSSE survey given to first-year students, and an index measuring satisfaction in these areas will be created.
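A minimal sketch of the completion-rate calculation in item 1 above. The course names come from the list; the data layout (a list of course/grade pairs captured at week three, after add/drop) is an assumption about how a registrar's extract might look.

```python
GATEKEEPER_COURSES = {
    "Math 101", "Math 102", "Math 105", "Math 115", "Math 151",
    "Eng 111", "Eng 112", "Eng 113",
}

def completion_rate(enrollments):
    """Term completion rate for gatekeeper courses.
    `enrollments` is a list of (course, grade) pairs fixed at week three;
    grades of A, B, or C count as completions."""
    cohort = [grade for course, grade in enrollments if course in GATEKEEPER_COURSES]
    if not cohort:
        return 0.0
    completed = sum(1 for grade in cohort if grade in {"A", "B", "C"})
    return 100.0 * completed / len(cohort)
```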

Instrument 11: Official Count of Distinct Credit-Bearing Programs

Active program: the enumeration of each official program that offers a certificate or degree and has graduated, or will graduate, at least one person per year.

Instrument 12: Satisfaction of the Community

This measure will draw upon the survey of key stakeholders and an index developed around a set of key questions.


Appendix A: Signature of the UDC Graduate

The UDC graduate:
thinks critically and holistically to analyze issues
accesses critical information, verifies its validity, and maps alternatives
converges quickly to common solutions and diverges creatively to solve complex problems
analyzes data and builds models to help others understand situations better
effectively applies disciplinary knowledge and skills to a wide range of challenges
effectively accesses, learns, and uses state-of-the-art technology professionally
continues to learn by aggressively pursuing opportunities for professional development
uses principles, tools, and approaches from many disciplines to perform in interdisciplinary situations
communicates accurately, diplomatically, graciously, and respectfully, with patience and empathy
teams with others effectively in producing greater results and consensus decisions
accepts challenges, navigates effectively within organizations, and takes action decisively
gives back to the community time, effort, resources, and critical leadership
uses other peoples' languages and honors their ways of living, values, beliefs, customs, and traditions
knows his/her cultural roots, personal history, and the uniqueness of being an HBCU graduate, and has developed a strong sense of self and vision
models integrity by aligning actions with high morals, values, and principles in times of dilemmas
works hard and perseveres through difficult times to live up to commitments
consistently self-assesses his/her performance to improve future performance
sets high expectations for self and others and consistently delivers performances of the highest quality


Appendix B: Profile of Quality Educators

Connecting to the learner
1. Respect and strongly believe in their learners' potential for success.
2. Effectively identify learner needs, wants, and capabilities early in the process.
3. Value and effectively tap into learners' prior knowledge and experiences to help meet new learning challenges.
4. Value adapting a process to meet individual needs without compromising standards.

Facilitation
5. Produce quality learning environments that induce risk-taking.
6. Use a variety of effective active-learning tools to shift the primary responsibility to the learners.
7. Consistently model the discipline skills and share their excitement.
8. Manage time effectively, with appropriate pacing and variation in its use.

Mentoring
9. Challenge learners to define their own learning objectives, plans, and expectations so they can realize their desired learning and growth outcomes.
10. Employ timely, effective interventions on process versus content.
11. Demonstrate strong mentoring skills, especially in raising the bar while also effectively providing affective management.

Curriculum Design
12. Clearly articulate in writing the course's competencies (learning outcomes) and the performance criteria related to the evaluation process.
13. Produce effective curriculum, including activities, methodologies, rubrics, and connective processes that align with course performance criteria and learning outcomes.
14. Seamlessly integrate learning opportunities and resources in and out of the classroom to produce holistic development of learners.

Assessment
15. Continuously assess learners' cognitive, social, affective, and psychomotor needs through observation, reading, active listening, and questioning.
16. Embed self-assessment, peer-assessment, and other forms of assessment as valuable learning and growth processes for both learners and themselves.


Appendix C: Profile of a Professional in Higher Education

1. Effective at developing strong project plans; builds productive teams that collectively buy into the goals and philosophy, consistently identifies their needs, facilitates each person's role and performance, and constantly monitors the plan and assesses it in order to make the critical decisions that improve performance.
2. Is constantly aware of self in the environment, and thus able to assimilate culturally without losing personal values; presents in a very professional manner, is organized in both operations and use of time, and is considered trustworthy and honest.
3. Technically competent: able to quickly grasp and use all forms of technology, aware of where technology is headed, and able to access expertise and resources to advance its effective use.
4. Known for the ability to deliver on time and exceed expectations by not over-committing, taking on challenges they know they can accomplish, knowing what the expectations are, and not letting barriers become excuses.
5. Is purposeful in thought; takes in ideas and models from a variety of people and sources, makes solid connections, and synthesizes them into a coherent and well-developed framework.
6. An effective organizational community member who clearly understands their job performance criteria and the organizational structure, processes, and systems, so that they can move flexibly between roles yet remain within role functions when necessary; comes across as very supportive in helping others perform their roles, and has the teamwork skills to make all team endeavors enjoyable and successful.
7. Effective in communication: an active listener who uses other disciplines' languages when working with clients and can easily reposition a message to connect with an audience, resulting in strong presentations and effective reports.
8. Has developed and uses strong learning skills; annually lays out professional development activities and efforts that align with a long-term learning plan, and leverages daily situations by balancing current productivity against opportunities for learning and development.
9. Values and practices both self-assessment and reflection to improve performance and quality of life, personally and professionally, based on personal and professional values, and uses these skills to help others improve their performance through quality peer-assessment and mentoring.
10. Understands, and lives the belief, that life's true value lies in how much one assists others in increasing the quality of their lives rather than one's own, by proactively reaching out and providing means of empowerment to assist in the development and growth of individuals and organizations that need help.
11. Locates and identifies key problems that are defined by consensus, with clearly articulated issues and working assumptions, and then systematically partitions and integrates known workable solutions into a validated and documented solution that has been generalized across additional opportunities.
12. Effective in supporting engineering design: able to determine a client's needs, take care of the client during the process, quickly produce a prototype so that everyone can see and assess the design's ability to meet the client's needs and expectations, and finally test its quality thoroughly so that it evolves to meet specifications.
13. Can pose quality questions of inquiry, develop a thorough literature search of what is known, pose a hypothesis that answers the key research question, and then develop the experimental design and process the results in a way that addresses and answers the research question.
14. Evolves an international network of professionals within and outside the discipline by building personal relationships with key individuals through professional and community collaborations and through efforts to strengthen both professional and community organizations.

15. Able to grasp larger viewpoints than most by taking on a variety of perspectives to build a strong framework of the cultural, social, organizational, economic, technological, and other key influences on the current status, while seeing what will happen in the future, thus allowing larger perceived risks to be taken on.

Appendix D: Profile for a Quality Graduate Student

Cognitive Skills
Has mastered a body of theory and methods appropriate to the discipline. (expert in field)
Constantly identifies important problems and defines appropriate research methods. (researcher)
Efficiently inventories, interprets, identifies criteria for, evaluates, and links resources relevant to a particular project in teaching or research. (information processor)

Social Skills
Accepts responsibility for building and facilitating teamwork to complete a project within explicit time constraints. (manager)
Meets and interacts with colleagues outside the institution who are working on similar teaching and research problems. (collaborator)
Communicates research in peer-reviewed literature, often in collaboration with faculty and graduate-student peers. (published writer)

Personal Development Skills
Leverages others' teaching, writings, and research programs to integrate a broad range of disciplines and alternate/contradictory views. (holistic thinker)
Uses continuous assessment to improve work products and processes. (real-time assessor)
Has built the emotional fortitude required in successful grant making, including proposal preparation, budgeting, and responding to critique. (entrepreneur)


Appendix E: Business Manager

1. Strategic Planning: ability to develop a strategic plan and evolve it annually, constantly winning buy-in to the strategic direction from each and every new stakeholder.
2. Business Plan: ability to put together an annual business plan that documents for all members of the organization what the projects, activities, outcomes, responsibilities, and resources are and how they are organized.
3. Marketing: performs needs analyses to prioritize opportunities and knows the best means to connect with clients to build a community presence.
4. Sales: builds and manages prospect lists to strengthen organizational ties, resulting in an increased closure rate, and leverages referrals into completed sales.
5. Budgeting: identifies existing and potential resources in order to allocate resources effectively, support key activities, and reward past performance.
6. Recruiting, hiring and retaining, and determination.
7. Mentoring: valuing others and their needs, facilitating their growth and development, and advising and challenging others to reach their potential.

Appendix F: Mission Statement for the Urban Land-Grant Component of the University

Proactively seeks out the needs of the District of Columbia community to enhance the quality of life in critical areas such as literacy, education, health, entrepreneurial partnerships, environmental issues, transportation, infrastructure efforts, violence, and workforce development, and through research, service, and educational means meets the community's needs efficiently and effectively.


Office of the Provost and Vice President for Academic Affairs: Program Assessment Institute Work
Essence
The Office of the Provost and Vice President for Academic Affairs provides visionary and inspirational leadership, allocating resources to support academic excellence and innovation, including the advancement of administrative effectiveness for all Academic Affairs units.

Stakeholders
Internal stakeholders: Board of Trustees; President; Faculty Senate
Internal community stakeholders: Academic programs; Faculty; Vice President of Student Affairs; Students; Staff
Extended communities: Business and industry; International communities; DC residents
Educational institutions: Professional accreditation agencies; Consortium of Regional Colleges; District of Columbia Public Schools; Professional associations; Higher education associations
Governmental: Mayor; City Council; Congress; Funding agencies

Current and Future Goals

Current goals:
Increase the quality and productivity of all degree and non-degree programs.
Improve administrative effectiveness and efficiency.
Revise the existing Academic Policies Manual.
Develop recommendations for tuition and fee increases.
Implement a faculty early retirement incentive plan.
Fill senior-level administrative vacancies.
Develop a student recruitment plan and formal advising process.
Develop a strong student retention plan.
Revise the faculty handbook.

Goals to be accomplished by spring 2007 (must prioritize and reduce some):
Increase student enrollment for all degree and non-degree programs.
Improve student retention.
Establish an effective faculty and staff development program.
Increase the effectiveness and efficiency of student support services.
Increase faculty research and scholarship productivity.
Establish an Honors Program to support a greater share of top-performing students.
Strengthen and increase the partnerships with DCPS.
Fulfill the community college mission.
Advance the land-grant mission through partnerships, curricular innovations, and interdisciplinary research efforts.
Students appropriately placed in courses and academic support services.
Programs that are accredited, of high quality and currency, and produce competitive graduates.
A faculty and staff that is more current and professionally active, resulting in a more effective learning environment for students.
Increased faculty research and scholarship productivity.


Scope
The Provost's office is accountable for the defined academic mission of the university, including the oversight of each academic area. It also supervises, directly and indirectly, the land-grant units that provide specific research, public service, and educational experiences. The office facilitates the setting up of outreach efforts. It is responsible for enrollment management and for assisting in obtaining external funding from grants, contract services, and packaged educational programs. The office provides leadership on campus as well as within the academic communities in the region. The Provost's office also oversees advancements in management, budgeting, and systematic administration of academic functions, and leads the traditions that foster an academic community.

The Provost's office does not have the power or responsibility to manage faculty, departments, or the tenure process, but is responsible for their oversight. The endowment and external gifts are not part of its role. Student activities and campus enrichment efforts are also outside the Provost's office.

Think about boundaries when considering the following: interface issues (above, below, aside, external), ownership, boundaries, authority, accountability, oversight.

Key Processes of the Provost's Office
The Office of the Provost and Vice President for Academic Affairs identifies the population of potential UDC students and develops strategies for attracting them to the University. We advise, counsel, and place students based on their educational histories and assessments of their skills, and create a learning environment which supports self-directed learning. We conduct ongoing student services satisfaction surveys, using the results to improve service delivery effectiveness. We establish a cyclical schedule for reviewing and revising all academic programs for continuing improvement. We focus on scholarship in learning, teaching, and curriculum design through quality educational research to continually increase effectiveness in producing the intended learning outcomes on a daily, weekly, and annual basis. The faculty evaluation and promotion process involves the establishment of criteria and a methodology for documenting that faculty meet these criteria. Faculty and staff development is based on a continuing assessment of their needs and the strategic needs of the University. The Office of the Provost and Vice President for Academic Affairs encourages faculty research leading to publication, other public presentation, and community service. There is an ongoing system of collecting environmental scan data used to establish University outreach priorities.

Someone needs to expand on these processes:
Student recruitment process
Advising and counseling
Skills assessment and placement
Program review process
Faculty evaluation and promotion process
Self-study and accreditation process
Faculty and staff development process
Student services assessment, benchmarking, and improvement
Encouraging and supporting faculty research
Determining appropriate faculty research and scholarship
Continual environmental scanning and revision of programs accordingly

Products/Resources/Assets
A diverse faculty with extensive years of experience
A diverse student body with strong aspirations
Programs that are accredited, of high quality and currency, and produce competitive graduates
Project reports for the land-grant research efforts
Need to add a few more assets/resources/results


Performance Criteria

The Provost's Office is:

1. Visionary: researches best practices through networking with other institutions, looks at what future challenges hold, imports state-of-the-art technology, and helps others to see trends, make connections, and understand implications before they become commonplace.
2. Student success oriented: works continuously to increase enrollment, retention, and mentoring; enriches learning environments; recruits faculty who align with the values of student success; and institutes new developmental and empowering approaches to produce competitive graduates who have experienced a series of important successes during their academic careers.
3. Student centered: constantly advances the quality and quantity of programs to meet student needs by increasing flexibility in delivery, advancing faculty skills, aligning programs with market needs, enhancing academic support and administrative services, adding important student support services, and listening to input from students.
4. Faculty and staff centered: respects faculty and staff, honors their experience and individual expertise, is inclusive in decision making and policy setting, works toward strong community, and supports the professional development of faculty and staff.
5. Scholarship oriented: supports research and other scholarly activities, advances disciplinary practices and innovations in teaching and learning, and recognizes and rewards exemplary performance.
6. Community oriented: committed to UDC's land-grant mission, responds to communities' ideas, constantly conducts needs analyses, and supports the needs of the greater communities through evolving programs.
7. Leadership driven: advocates for the academic mission, serves as a catalyst for change by fostering risk taking, innovation, and teaming, makes both decisive and hard decisions, expects everyone to perform with quality against measurable objectives, and holds people accountable for results.
8. Quality oriented: values and uses a wide range of assessment practices, such as peer assessment, external reviews, and self-assessment, with a continuous quality improvement mindset, and establishes guidelines and processes for producing quality outcomes.
9. Performance based: sets goals and objectives to meet stakeholders' needs, clarifies performance criteria and standards, assesses performance, dynamically responds to challenges, and rewards strong performers.
10. Well organized administratively: thoughtfully plans strategically and operationally, works to clarify policies, advances systems, coordinates efforts, develops better procedures, makes sure that efforts align with the institutional mission, responds quickly, and assesses performance to constantly improve efforts.
11. Communicative: values informational exchange; listens to concerns from students, faculty, staff, the community, and other administrative units; provides systematic exchanges; holds open discussion forums; stimulates dialog; and maintains transparency.
12. Productive: efficient with resources, focused, prioritizes decision making based upon mission, needs, and values, adheres to deadlines, gets things accomplished, and uses effective analysis.


Prioritized List of Performance Measures


1st year retention
Teaching/Learning Practices
Level of faculty/staff engagement
Level of student engagement
Graduation rates
Amount of Innovations
Publications/External Presentations
Grant dollars
Course completion rates
Support of the land grant mission
The strength and impact of the Program Assessment Systems


Table of Measures

Criterion | Measure | Instrument
Student Success | 1st year retention | 1
Student Centered | Teaching/Learning Practices | 2
Faculty/Staff Centered | Level of faculty/staff engagement (Annual Faculty Assessment Reports, AFAR) | 3
Student Centered | Level of student engagement | 4
Student Success | Graduation rates | 5
Visionary | Innovations | 6
Scholarship Oriented | Publications/External Presentations | 7
Scholarship Oriented | Grant dollars | 8
Student Success | Course completion rates | 9
Community Oriented | Support of the land grant mission | 10
Quality Oriented | The strength and impact of the Program Assessment Systems (Annual Assessment Reports) | 11

Means recorded for these rows include the Registrar, the peer coaching system and teacher's portfolio, NSSE, the Annual Faculty Assessment Reports (AFAR), FAR, PAS, and the Research Center. Persons listed as accountable include Bertha, the Provost, the Dir of 1st Year Studies, the Deans, the Dir of FD, the Director of Research, Terry, and the Dir of Assessment & IR. The Baseline and Goal columns were left blank.

Definitions of the Instruments

Instrument 1: 1st Year Retention
The denominator includes full-time, first-time students who have fewer than 15 college credits coming in (whether enrolled fall or spring). The numerator counts every student who comes back during the next academic year, even if he/she skips either the first or second term.

Instrument 2: Teaching and Learning Practices
Faculty will submit, through their deans, a teacher's portfolio every three years, along with an annual assessment report documenting accomplishments and action plans for professional development and professional activities. The Provost's office will use a rubric to determine the level of best practices being exercised by each faculty member; the annual measure will be the average rubric score across all full-time faculty.

Instrument 3: Level of Faculty/Staff Engagement
An established instrument, FSSE, is effective in measuring faculty satisfaction and engagement. It will be given on a three-year cycle for all faculty; in addition, first-year instructors will report their satisfaction after completing their first year (this will be done annually). This will account for 50% of the measure and will be augmented with an engagement rubric (score sheet) measuring the level of effort a faculty member produces in a year.

Instrument 4: Level of Student Engagement
NSSE is the instrument used for measuring student satisfaction and engagement. It will be given to graduating seniors, and to incoming students at the end of their first year, to measure satisfaction with the first-year experience and with the whole experience. A sampling of students who withdrew from the institution will also be surveyed to gain perspective on their experience. An index will be prepared, identifying which questions are used and the weighting given to each question, to arrive at a final score.

Instrument 5: Graduation Rate (four different measures of graduation rates for different UDC missions: first-time, full-time undergraduate students; community college students; the developmental education program; the transfer program)
1. 4-year programs without remediation: the percentage of students in each year's cohort (entering fall or spring terms) who matriculate full-time in an undergraduate degree program, enter without remediation requirements, and graduate within a 6-year period (non-transfer students, i.e., fewer than 15 credit hours of transfer).
2. 4-year programs with remediation requirements: the percentage of students in each year's cohort (entering fall or spring terms) who matriculate full-time in an undergraduate degree program, enter with remediation requirements, and graduate within a 7-year period (non-transfer students, i.e., fewer than 15 credit hours of transfer).
3. 2-year programs: the percentage of students in each year's cohort (entering fall or spring terms) who matriculate full-time or part-time in a 2-year degree program, with or without remediation requirements, and graduate within a 4-year period.
4. 4-year programs, transfer students: success of transfer students (students entering with at least 15 credit hours); the percentage of students in each year's cohort (entering fall or spring terms) who matriculate full-time in an undergraduate degree program and graduate within a 6-year period.

Amount of Innovations
Through a center of innovations, new innovations that meet threshold requirements for being substantial will be inventoried annually.
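For Instrument 1 above, the retention calculation reduces to filtering the entering cohort and counting who reappears the following year. The sketch below assumes a simple list of dictionaries as input; the field names are placeholders, not actual registrar fields.

```python
def first_year_retention(cohort):
    """Instrument 1: first-year retention.
    `cohort` holds records for full-time, first-time students who entered with
    fewer than 15 college credits (fall or spring entrants). A student counts
    as retained if he or she enrolls at any point during the next academic
    year, even after skipping a term."""
    denominator = [s for s in cohort
                   if s["full_time"] and s["first_time"] and s["transfer_credits"] < 15]
    if not denominator:
        return 0.0
    retained = sum(1 for s in denominator if s["enrolled_next_year"])
    return 100.0 * retained / len(denominator)
```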


Publications/External Presentations
The number of peer-reviewed publications in journals and at conferences; the number of non-peer-reviewed publications in journals and at conferences; and the number of presentations at conferences, outside venues, professional settings, colleges, school systems, etc.

External funding, including grant dollars
The dollar amount of new dollars brought in during the year, not the amount of dollars expended. The dollars include fully committed dollars (not conditional dollars) and consulting dollars that go through the university system.

Course completion rates
1. Completion rate of courses: the denominator is fixed at week three of the term, after add/drop is finalized. The numerator counts grades of A, B, and C. This instrument focuses on a targeted set of gatekeeper courses: Math 101 General College Math 1, Math 102 General College Math 2, Math 105 Intermediate Algebra, Math 115 PreCalculus-Intensive, Math 151 Calculus I, Eng 111 English Composition 1, Eng 112 English Composition 2, and Eng 113 Technical Writing. The rate is calculated each term.

Support of the land grant mission
This measure will draw upon the survey of key stakeholders and an index developed around a set of key questions on the land grant mission, connecting the key relevant questions on the land grant mission.

The strength and impact of the Program Assessment Systems
The number of documented changes in institutional effectiveness due to an implemented Program Assessment System, as documented in the annual assessment reports.
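An index built from selected survey questions, such as the land-grant measure above, can be expressed as a weighted average over the chosen items. The question identifiers, weights, and scores below are placeholders for illustration only; the actual items and weights would be chosen when the index is designed.

```python
def survey_index(responses, weights):
    """Weighted index over selected survey questions (e.g., the land-grant items).
    `responses` maps question id -> mean score for the year;
    `weights` maps question id -> weight (weights should sum to 1)."""
    return sum(weights[q] * responses[q] for q in weights)

# Illustrative only: hypothetical question ids, weights, and yearly mean scores.
weights = {"q12": 0.4, "q17": 0.35, "q23": 0.25}
year_scores = {"q12": 3.8, "q17": 3.2, "q23": 4.1}
print(round(survey_index(year_scores, weights), 3))  # 3.665
```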

Instrument 7: Satisfaction of Some Key Stakeholders
Once every five years, a survey will be designed and administered to a cross-sectional sampling of the community to determine its degree of satisfaction. An index will be developed using the survey to track changes over time.

Instrument 10: Initial Orientation, Assessment, and Placement
1. First-year satisfaction: relevant questions will be selected from the NSSE survey given to first-year students, and an index measuring satisfaction in these areas will be created.

Instrument 11: Official Count of Distinct Credit-Bearing Programs
The enumeration of each official program that offers a certificate or degree and has graduated, or will graduate, at least one person per year; that is, each active program.

Instrument 12: Satisfaction of the Community



Program Assessment System for a Comprehensive Developmental Education Program


Step 1: It is the only means for a significant segment of the community population to be successful at the college.

Step 2:
1. All entry level courses
2. Community business
3. Students
4. Parents
5. Instructors
6. Spouses
7. Local government
8. High schools
9. Academic Programs

Step 3: Current/Ongoing Goals and Goals for 2009

Ongoing Goals
Course completion rate improvement
Increasing student learning
Increasing term-to-term retention
Getting students ready for future success
Shifting ownership of learning

Goals for 2009
An active Program Assessment System that drives CQI
An integrated team effort for supporting student success
A program that has been designed for student success
Obtain a 70% throughput for students succeeding in the program
Graduation rate of program-successful students exceeds campus-wide graduation rates

Step 4: Scope of the program

What we are:
Readiness program for academic success
Learning to Learn Process
Building self-esteem and self-confidence through accomplishment
Changing expectations and practices to support a student-centered learning environment
A set of courses designed for empowerment of students in key foundational areas
Interdisciplinary team working collectively for student success
Integrative design across disciplines to leverage the growth of the students' skills across courses
A model of a quality education process that exhibits many of the best research-based practices
Efficient and effective integration of academic and student services, resulting in much more learner development

What we aren't:
Skill center for non-developmental courses
Learning community for ongoing support of students outside of class
Not a GED program
Not designed for individuals who need special services, e.g., mental health counseling
Courses are not transferable to other colleges
Not isolated departments working independently on their own specific problems

Step 5: Products/Assets (things in place that are valuable)
1. Mentoring manual for faculty teaching developmental courses
2. Peer tutors who have succeeded in the program
3. Dedicated and experienced instructors in developmental education
4. Administrative support for the program and its success
5.

Step 6: Processes/Systems (things in place that produce value)
1. Mentoring: the initial assessment of learner needs by the program in order to assign an experienced and trained mentor to each student; the mentor engages weekly with the mentee to find out the happenings of the previous week, the plans for this week, and the issues that need to be addressed, increasing student success through prompt and timely interventions.
2. Placement: prior to enrolling or accessing program resources, students are assessed in critical areas of performance to determine which courses/resources will give them the strongest opportunity for success in both the short term and the long term. The assessment is communicated to appropriate stakeholders of the college. Students' options are limited based upon the assessment data so that the program/resources can be effective in meeting learner needs. Since placement is not a perfect science, the placement process will be reviewed and assessed annually in order to improve it.
3. Tutoring: the process of recruiting talented and motivated student mentors, providing training and support for understanding their performance expectations, providing the environment and effective scheduling of the tutors/mentors for the targeted students, and assessing effectiveness from the perspective of the learner at the end of every major activity.
4. Instructional design
5. Facilitation of learning and growth
6. Assessment

Step 7: Performance Criteria for the Program
a. Identify the qualities with which you want your program to be labeled:
Success oriented
Growth oriented
Collaborative
Interdisciplinary
Well-designed
Efficient
Innovative
CQI oriented
Diverse


Performance Criteria

The Program and its participants:
Set high standards where students and faculty reach meaningful accomplishments that prepare program participants emotionally, spiritually, socially, and cognitively to be successful in any future endeavor.
Set clear and challenging goals because they desire to improve their performance skills through constant self-assessment of each major performance.
Often team in ways that synergize each other, their efforts, and outcomes that meet collective and individual needs in an enjoyable community environment.
Come from many different disciplines with a willingness to expand into other disciplines and take on new perspectives and ways of knowing, so that perceived boundaries can be taken down to produce more holistic and effective approaches.
Collectively develop a shared vision, with agreed-upon outcomes, identify incoming conditions, and construct logical building blocks, in a developmental process that minimizes redundancy, produces clean interfaces, and makes for efficient operation in use.
Consistently come up with fresh ideas, triggered by many factors, coming from a culture of risk-taking, exploration, and trials where individuals synthesize across situations and experiences, producing alternative approaches to meeting needs and solving problems.

Success oriented: meeting desired outcomes; producing accomplishments; emotionally more developed; prepared for future challenges; high standards
Growth oriented: improvement in learning skills; self-growth attitude; uses assessment in reflection; goal oriented; high performance
Collaborative: teaming; shared vision; meeting each individual's needs; synergizing efforts and outcomes; socially enjoyable
Interdisciplinary: multiple disciplines; different perspectives; transference of ways of knowing; more holistic approaches; limitation of perceived boundaries
Well-designed: clear outcomes; sequenced; clean interface; developmental buildup; non-redundancy
Efficient: no longer needed
Innovative: transferring ideas from afar; synthesizing across disciplines or contexts; finding unique approaches while solving problems; taking risks outside the cultural norm; creative perspective in ways of doing
CQI oriented
Diverse

PC | Attributes/Measures | Instrument | Benchmark | Target '09 | Accountable Person
Success oriented | Completion of learning plan | Registrar | | |
Success oriented | Throughput of the program (% of verified completions within 3 years, over the students who succeeded during the academic year three years ago) | Registrar | 40% | 65% | Nikie
Success oriented | Campus leadership | VP leadership survey | | |
Growth oriented | Self-assessment practice | Assessment portfolio | | |
Growth oriented | Growth in students | Self-growth paper | | |
Growth oriented | Growth in faculty | Annual assessment report | | |

Rows remain to be completed for the following criteria (three measures each): Collaborative, Interdisciplinary, Well-designed, Innovative, CQI oriented, Diverse. The Weight column and most method-of-collecting-data, benchmark, and target cells were left blank.
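The throughput row above compares this year's verified completions against the cohort that succeeded in the program three years earlier. A minimal sketch follows; the function, data layout, and example figures are hypothetical and only intended to show the lagged-cohort arithmetic.

```python
def three_year_throughput(program_completers_by_year, verified_completions_by_year, year):
    """Percent of students who succeeded in the developmental program three years
    ago whose subsequent completion has since been verified."""
    cohort = program_completers_by_year.get(year - 3, 0)
    if cohort == 0:
        return 0.0
    verified = verified_completions_by_year.get(year, 0)
    return 100.0 * verified / cohort

# Illustrative figures only (benchmark 40%, target 65% from the table above):
completers = {2006: 120}   # students who succeeded in the program in 2006
verified = {2009: 54}      # of those, completions verified by 2009
print(three_year_throughput(completers, verified, 2009))  # 45.0
```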

Appendix B
Selected Glossary



Selected Glossary
Accelerator Model: A model for teaching/facilitation based on the idea that learners perform optimally when learning challenges create an appropriate level of stress; that is, enough challenge to engage students, yet not overwhelm them

Active Learning: A mode of learning which puts learners in situations where they are asked to take responsibility for their own learning, thus becoming highly engaged in the construction of knowledge

Activity: The core unit of instructional design which organizes a unit of time, in or out of class, to address a subset of course learning outcomes

Assessment/Assessing: A process of determining the quality of a performance, work product, or skill and giving feedback that documents progress (strengths) and suggests ways to improve future performance (areas for improvement) in ways that will help the performer improve his or her future performance

Assessor: The person who is giving the assessment feedback

Assessee: The person whose performance, work product, or learning skill(s) is being assessed

Formative Assessment: Assessment given during the course of a performance or course to help the assessee to prepare better for a final or summative evaluation

Summative Assessment: Assessment given at the completion of a performance or work product. Feedback is given to help the performer improve in future efforts, but this also includes a final determination of the quality of the performance or work product.

Assessment Culture: A set of predominating group or organization attitudes and behaviors leading to the habit of continuous improvement

Assumption: Presuming that a notion, fact, statement, axiom, or postulate is true

Attribute: A descriptor that helps to define what is meant by a particular performance criterion by identifying a distinguishing characteristic associated with the performance

Behavioral Outcomes: Habits that a learner ought to have internalized as a result of successfully completing a program or a course of study

Best Practices: A management idea which asserts that there is a technique, method, process, activity, incentive or reward that is more effective at delivering a particular outcome than any other technique, method, process, etc.

Bloom's Taxonomy (of Educational Objectives): A pedagogical framework for classifying educational objectives based on their cognitive complexity (knowledge, comprehension, application, analysis, synthesis, and evaluation)

Classification of Learning Skills (for Educational Enrichment and Assessment): An organizational scheme for instructional design and facilitation that helps educators and learners isolate transferable learning skills that apply to multiple disciplines and which are needed for successful performance in work and in life

Collaboration: To work jointly with others or together, especially in an intellectual endeavor

Community of Practice (COP): A group of people who share a concern or a passion for something they do and who interact regularly to learn how to do it better

Construction of Knowledge: A process by which a learner makes sense of new information by integrating it with what he or she already knows so that all of the information fits into a usable framework. Sometimes this also involves bringing old information together in new ways, or modifying what is already known to more easily accommodate the new information. Knowledge construction can also mean creating new ideas that lead to new knowledge.

Constructive Intervention: A timely interruption of the learning or work process by a facilitator with questions or actions intended to improve students' learning skills. The purpose is to help students build skills, not to provide answers

Course Design: The planning process and the product resulting from determining course learning outcomes, content, methodologies, and activities that will be included in the course as well as plans for assessment and evaluation

Critical Thinking: A process for actively exploring situations by asking relevant questions that elevate understanding in order to better decide what to believe or what to do

Critical Thinking Questions: A tool used in designing guided-inquiry learning activities that guides students to explore and observe, then to invent or develop an understanding of relevant concepts, and finally to apply this new understanding.

Domain: A sphere of functioning performance (cognitive, social, or affective) within the Classification of Learning Skills

Empowerment: Capability resulting from one's ability, willingness, and confidence to act. Availability of support and resources also plays a role. An empowered person is in control of a situation, not controlled by the situation.

Enriched Learning Environment: An alignment of physical space, learning tools and other resources, curricula, cultural and social processes, facilitation, and assessment practices that, combined, motivate, sustain, and enrich the learning process to produce both high quality learning and personal growth


Evaluation/Evaluating: A process for determining the quality of a performance that takes a retrospective look at a given process, program, or individual, and based upon pre-established standards, decides its utility, its value, or its applicability

Evaluatee: The person whose performance or work product is judged against a set of standards established outside of the person's control

Evaluator: The person who renders or reports a judgment (conclusion) concerning the performance or work product of an evaluatee against a set of prescribed standards

Evidence: Collected information that supports a conclusion

Exemplar: A model worthy of imitation

Facilitation/Facilitating: Actions taken to help others learn or perform. In a learner-centered paradigm, facilitation takes the place of teaching, stressing the centrality of the learner's work in the learning process.

Forms of Knowledge: Knowledge identified and classified under five types (see below) to help those who design curricula by clarifying all of the content that learners need to master

Concepts: A generalized idea about something or a classification label

Processes: Sequences of steps, events, or activities that, over time, result in changes or products

Tools: Any devices, instruments, or resources that aid in accomplishing a task

Contexts: Understanding relevant concepts, processes, and tools for a particular situation that contribute to successful performance

Way of Being: The thoughts, attitudes, behaviors, and use of language characteristic within a culture, discipline, or knowledge area

General Skills: Skills (core abilities) that institutions want all of their students to have mastered by the time they graduate; these are at the heart of general education courses

Goal: The end state or benchmark to which one aspires

Growth/Development: Positive developmental change resulting from identifying, developing, and internalizing skills and strategies that allow learners to reach higher levels of performance in one or more domains

Guided-Inquiry Learning: Learning through exploration and discovery facilitated by an instructor who provides students with resources and a guide to follow (contains objectives, performance criteria, and a plan that often includes carefully designed critical thinking questions)

Information Processing: The most basic level of learning skills in the cognitive domain. Information processing includes the skill clusters of collecting, generating, organizing, and retrieving data and validating information.

Insight: The identification of new and significant discoveries/understandings that were gained by studying a performance. Insights include an explanation of why a discovery/new understanding is important or significant and how it might be applied to other situations.

Institutional Effectiveness: The degree to which an institution (esp. an educational institution) meets its claims as stated in its vision, mission, values, and strategic plan

Instrument: A specific tool used to obtain evidence for a measurement

Knowledge: The lowest level in Bloom's taxonomy of educational objectives; that is, the awareness or possession of information about facts, ideas, skills, truths, and principles

Knowledge Skills: Specialized skills anchored in a discipline or a specific context

Knowledge Table: A tool for analyzing specific cognitive schemes or frameworks (concepts, processes, tools, contexts, and ways of being) within any particular area of knowledge, often for the purpose of course design

Language: Shared vocabulary with common understanding of meaning

Learner-Centered: The idea in education (formerly known as student-centered) that instructors and institutions should focus on what learners want and need

Learner Development: The continuous growth of learning skills and processes. Development never ends, but learners who are most fully developed take responsibility for their own learning, direct their own learning, and set their own goals in response to assessment and the desire to improve performance.

Learner Ownership: The degree to which learners take responsibility for their own learning. This includes not only commitment to getting high grades and recognition, but also demonstrating buy-in and shared commitment by being intrinsically motivated to do one's best to learn.

Learning: The process of developing skills, acquiring existing knowledge, or discovering new knowledge through instruction or study

Learning Assessment Journal (LAJ): A journal used to document learning and thoughts, designed to increase the learner's awareness, intentional use, and continuing development of his or her own learning processes and skills

Learning-Centered: A mode of instruction that views learning as a process that is central and worthy of explicit, conscious development. Due to the rapidly changing nature of our world, it is no longer enough for students to absorb a prescribed body of content knowledge and call themselves educated; they must be able to continue learning throughout life and to improve their skills in learning on a continuous basis.

Learning Outcomes: Clear and precise articulations of what learners are expected to be able to do or achieve by the end of a learning experience. Types of learning outcomes include:

Competencies: The collection of knowledge, skills, and attitudes needed to perform a specific task effectively and efficiently at a defined level of performance

Movements: Documented growth in a transferable process or learning skill

Experiences: Interactions, emotions, responsibilities, and shared memories that clarify one's position in relation to oneself, a community, or discipline

Accomplishments: Significant work products or performances that transcend normal class requirements and are externally valued or affirmed by an outside expert or client

Integrated Performance: The synthesis of prior knowledge, skills, processes, and attitudes with current learning needs to address a difficult challenge within a strict time frame and set of performance expectations

Learning Paradigm: An orientation towards education that gives primacy to the process of learning over instruction, setting goals which include the conscious development of learning skills as well as content knowledge, and judging success by measuring how well students learn rather than by how expertly material is taught.

Learning Process Methodology (LPM): A sequence of steps for learners that makes explicit the working habits of expert learners

Learning Skills: Skills employed in the process of learning, embedded in a learner's behavioral repertoire, and transferable across disciplines and contexts, which enable him or her to improve mastery of subject matter. They are essential for constructing knowledge because they modulate or influence what learners can achieve at any level. These skills, once identified, can be consciously improved and refined, increasing the rate and effectiveness of learning.

Learning Styles: Automatic, habitual patterns of learning or processing preferences which are based on habituation of routines and which are acquired over a learner's entire lifetime.

Learning-to-Learn Camp: An intensive 5-day introductory learning experience designed to equip students with the learning skills they will need to succeed in a higher-education setting. Students learn how to learn by developing cognitive, social, affective, and academic skills. Through both success and failure in a mentored community setting they gain confidence in their ability to perform in college as well as accept responsibility for their own learning.

Levels of Learner Knowledge: Categorization of educational objectives to represent the increasing complexity in the way learners formulate, connect, and present their thoughts (information, conceptual understanding, application, working expertise, and research)

Life Vision: A mental image of what one would like one's future self and situation to be

Lifelong Learner: One who applies learning skills to new situations throughout life

Long-Term Behavior: Habitual behaviors and qualities educators want students to exhibit on their own two or more years after a course or program is completed

Measurement/Measuring: The process of monitoring and documenting a performance or a product against a scale

Mentoring: Guiding another person in efforts to improve

Metacognition: Mindfulness of one's own thinking and learning processes, leading to increased self-awareness and self-control.

Methodology: An explicitly defined set of multi-step instructions for performing a complex process, designed to enable those who are novices in a skill area to work smarter without having to learn the steps through trial and error

Mission: A specific task with which a person, group, or institution is charged; a pre-established and often self-imposed objective or purpose

Model: n. An example for imitation or emulation; a description or analogy used to help visualize something (as an atom) that cannot be directly observed. v. To serve as an example or to demonstrate the way a process is done so that others may learn by emulating

Objective: Goal for an activity. At the course and program level, these goals coalesce in learning outcomes.

Paradigm: A philosophical or theoretical framework

Paradigm Shift: A change of thinking, letting go of one philosophical or theoretical framework or perspective, and adopting a new one

Peer Assessment: Assessment of a performance or work product done by a colleague or peer

Peer Coaching: Invitation to a colleague to observe and provide feedback about a performance in a classroom using specific criteria that are agreed upon in advance

Performance: The means by which one produces valued results

Performance Criteria: Clear and explicit description of a performance which allows all involved (performer, assessor, evaluator, etc.) to have a mutually understood set of expectations by which performance may be measured, assessed, and/or evaluated

Portfolio: A collection of work samples one has done that can serve both as a personal record of growth and accomplishment, and as a demonstration to others of the quality of work one is capable of producing

Preassessment: A non-judgmental assessment of one's abilities, strengths, and areas in need of improvement, prior to the start of a course, program, or activity. Such an assessment might be used to determine an individual's fitness for a unit of study, or it may be used as feedback to the student to give him or her an idea of the amount of preparation he or she will need to do prior to the start of the unit of study.

Prerequisite Knowledge: Background knowledge or understanding required before new learning can occur

Problem: A question, matter, situation, issue, or person that is perplexing, thought provoking, or difficult to deal with

Problem solving: A process whereby a best outcome is determined for some situation, subject to certain constraints, by finding, creating, or developing solutions to a question, matter, situation, issue, or person that is perplexing or difficult to deal with

Process: A series of actions that add value to a final result

Process Education: A performance-based philosophy of education which integrates many different educational theories, processes, and tools in emphasizing the continuous development of learning skills through the use of assessment principles in order to produce learner self-development

Product: In a learning context this refers to any tangible evidence of performance that can be measured, assessed, evaluated, or be used to demonstrate accomplishment

Professional Development: Growing and systematically acquiring knowledge and skills in a given discipline. The goal is usually to demonstrate mastery and expertise in fields of knowledge and areas of expertise expected by others within that profession.

Profile: An exemplar providing a detailed description of the qualities and habits of a star performer within a particular area.

Quality: n. An attribute or facet of something that can be used to describe it. adj. (High quality) excellent, good, or exemplary

Real-Time Assessment: Assessment that takes place during a performance or immediately afterwards so that the performer gets immediate feedback.

Reflection: A thought or opinion resulting from careful, unfocused consideration

Reflective Thinking: Thinking in a way that enlarges the understanding of ideas, issues, and values, and which improves the quality of student thought from unclear to clear, from unreasoned to reasoned, from unexamined to examined

Research: Disciplined discovery and public dissemination of new knowledge that is not currently known by a community.

Resource: Anything that can be drawn upon or used in the service of a goal. Examples include money, tools, facilities, support, materials, time, skills, knowledge, and information.

Role: The functional description of a person's prescribed orientation, duties, jobs, or obligations

Rubric: A scoring tool for measuring the level of performance achieved which describes in words what performance looks like at various levels

Scale: A means in measurement for determining the quality or quantity of evidence

Self-Assessment: Assessing one's own progress and performance by thinking critically about it for the purpose of growth

Self Grower: Having developed strong performance/learning skills, self-growers continually use strong self-assessment skills to improve future performance.

Servant Leadership: Style of leadership that stems from an attitude of personal service with the goal of empowering others within the organization.

SII Method (of Assessment Reporting): A method of recording and reporting assessment findings which includes a description of the strengths of the performance (including why these are strengths), the areas in which the performance may be improved (including how to implement improvement), and insights for application in other settings.

Student Success: An ultimate performance goal of higher education institutions and individual educators is to promote the development of students who are able to function effectively as students, and eventually as workers, citizens, and people in all aspects of life

Study Plan: One of the steps in the Learning Process Methodology in which the learner identifies his or her available time, energy, and material resources; inventories the concepts to study, models and examples to apply, and questions that must be answered; and then develops a study schedule and a plan to self-assess his or her learning process

Team Building: The process of getting a group to work together productively

Team Reflection: The process of retrospectively examining a shared group experience to learn from that experience

Theme: An implicit or recurring idea; a specific distinctive quality, characteristic, or concern. When designing a course, it is helpful to identify themes that permeate the course to guide the organization of learning outcomes and course content in a meaningful way.

Training: Specialized instruction and practice to become proficient in doing something

Transfer: To apply learning from one situation or context to another

Underprepared Student: Any student who needs to develop his or her cognitive, social, or affective abilities in order to succeed in a postsecondary educational experience

Value System: The base from which one works after one's core values have been clearly defined, prioritized, and integrated into one's plans and actions; the system represents the fundamental beliefs and ideas that serve as personal criteria for choosing among alternatives.



Appendix C
Useful Forms



Table 2

Table of Measures

Criterion | Attribute | Weight | Means | Instrument | Benchmark | Target | Accountability


Table 2

Table of Measures

Criterion | Attribute | Weight | Means | Instrument | Benchmark | Target | Accountability


Name Performance Date

Performance Criteria

Performance Assessment

SII

Performance criteria are standards of performance, clearly and explicitly defined, which allow both the performer and assessor to have a mutually understood set of expectations by which performance may be measured and assessed. Performance criteria provide simple-to-understand, realistic, and measurable values of excellence.

1.

2.

3.

Notes
In order to complete a high-quality assessment, it is critical that you closely and carefully observe aspects of the performance with special attention to how the performance meets the established performance criteria.

Copyright 2009 Pacific Crest

continued on other side


Strengths
Identify the ways in which a performance was of high quality and commendable. Each strength statement should address what was valuable in the performance, why this attribute is important, and how to reproduce this aspect of the performance.

1.

2.

3.

Areas for Improvement


Identify the changes that can be made in the future, between this assessment and the next assessment, that are likely to improve performance. Improvements should recognize the issues that caused any problems and mention how changes could be implemented to resolve these difficulties.

1.

2.

3.

Insights
Identify new and significant discoveries/understandings that were gained concerning the performance area; i.e., What did the assessor learn that others might benefit from hearing or knowing? Insights include why a discovery/new understanding is important or significant and how it can be applied to other situations.

Instructor Feedback

Strengths:

Areas for Improvement:

Insights:
