
Clinical Simulation in Nursing (2013) 9, e229-e233

www.elsevier.com/locate/ecsn

Featured Article

Improving Consistency of Assessment of Student Performance during Simulated Experiences


Julie A. Manz, MS, RN, Maribeth Hercinger, PhD, RN, Martha Todd, MS, APRN, Kimberly S. Hawkins, MS, APRN, Mary E. Parsons, PhD, RN*
Creighton University School of Nursing, Omaha, Nebraska 68178, USA

Keywords: evaluation; instrument development; nurse; nursing student; simulation; student assessment

Abstract: Assessment of student practicum performance is a complex process, fraught with difficulties. The challenges associated with assessment of performance are multifactorial. The authors contend that these challenges have three main foci: the difficulty in determining which behaviors are necessary and sufficient to demonstrate competence, the potential subjectivity of evaluators, and the possible alteration of student performance based on the expectations of individual faculty members. An innovative strategy was implemented to improve consistency among nurse educators during simulated clinical experiences. This article describes the development of an orientation process for users of a simulation evaluation instrument, the C-SEI. Furthermore, this orientation process may provide a framework for nurse educators to improve consistency of evaluation in any realm of student performance.

Cite this article: Manz, J. A., Hercinger, M., Todd, M., Hawkins, K. S., & Parsons, M. E. (2013, July). Improving consistency of assessment of student performance during simulated experiences. Clinical Simulation in Nursing, 9(7), e229-e233. doi:10.1016/j.ecns.2012.02.007.

© 2013 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved.

High-fidelity simulation revolutionized the way nurse educators approach clinical education. In many nursing programs, simulation began to augment or replace traditional methods of clinical instruction. While the use of simulated clinical experiences (SCEs) increased, educators encountered a dearth of objective methods for evaluation of student performance during simulation. Evaluation of students in clinical settings, whether simulated or traditional, resulted in concern about the consistency of evaluation of student performance. To address this gap, five nurse educators who were involved in simulation developed a quantitative evaluation instrument for use in SCEs, the Creighton Simulation Evaluation Instrument (C-SEI).
* Corresponding author: parsonsm@creighton.edu (M. E. Parsons).

The American Association of Colleges of Nursing's (AACN; 2008) Essentials for Baccalaureate Nursing Education provides a framework for baccalaureate-level curricula, including such concepts as "patient-centered care, interprofessional teams, evidence-based practice, quality improvement, patient safety, informatics, clinical reasoning/critical thinking, genetics and genomics, cultural sensitivity, professionalism, and practice across the lifespan" (p. 3). The AACN framework compels nurse educators to evaluate students' abilities to appropriately assess patients, communicate effectively, and apply critical thinking and technical skills. These components served as the foundation for the development of the C-SEI, a quantitative evaluation instrument designed to assess student performance in SCEs.

1876-1399/$ - see front matter © 2013 International Nursing Association for Clinical Simulation and Learning. Published by Elsevier Inc. All rights reserved.

doi:10.1016/j.ecns.2012.02.007

In 2008, a pilot study was conducted to evaluate the reliability and validity of the C-SEI (Todd, Manz, Hawkins, Parsons, & Hercinger, 2008). Pilot study results suggested that there were certain items for which the interrater reliability was lower than expected. The results prompted investigators to develop a strategy for educating future users of the instrument to improve consistency. The purpose of this article is twofold: (a) to explain the development of a process for users of the C-SEI and (b) to provide a template for nurse educators to use to improve consistency of evaluation in any realm of student performance.

Key Points
- Assessment of student performance is challenging due to potential subjectivity among evaluators.
- Strategies are needed to encourage dialogue and consensus among nurse educators.
- An innovative orientation process was developed to provide a framework for achieving consistency in assessment of student performance.

Instrument

The C-SEI is specifically designed to provide a quantitative evaluation of nursing students in SCEs. The instrument was designed to be comprehensive in nature, in that it evaluates not only students' ability to perform a technical skill (such as inserting a Foley catheter) but also their ability to interpret assessment findings appropriately; communicate effectively with the patient, family, and interdisciplinary health care team; respond to abnormal findings; develop an appropriate plan of care for each situation; and reflect on the simulation as a whole. Second, the instrument is designed to evaluate students' performance as a group or team, rather than individual performance. Third, the instrument is designed for use with (a) a variety of scenarios, from chest pain to diabetic ketoacidosis; (b) a variety of patients, from children to adults; and (c) a spectrum of complexity, from beginning-level SCEs focused on patient identification and skill acquisition to complex advanced scenarios.
The C-SEI is organized according to four components that are essential to nursing practice: assessment, communication, critical thinking, and technical skills (AACN, 1998) (see Figure 1). The C-SEI is a 22-item scale scored dichotomously: Either a 1 is assigned to the group for demonstrating competency in a behavior, or a 0 is assigned if the group does not demonstrate competency in the behavior. If an item is not applicable to an SCE, then the item is eliminated and not included in the scoring. The C-SEI is scored by summation of the ratings for each of the items used. A passing score is calculated by multiplying 0.75 by the number of applicable items. A passing score of 75% is consistent with grading criteria within the facility in which the instrument was developed; however,

the passing score can be altered according to each program's criteria. Content validity was based on the AACN (1998) Essentials for Baccalaureate Education and established through an expert review panel consisting of seven nurse educators experienced in the use of simulation. Members of the panel represented nurse educators from diverse clinical specialties and were actively involved in simulation. Several of the members were involved in the design of the simulation laboratory and had contributed to the body of simulation research through various national and international conferences. Content validity was confirmed as members of the panel identified that each item was necessary, fitting, and easy to understand. Interrater reliability from the pilot study ranged from 84.4% to 89.1% in the four main components. However, the interrater reliability for individual items ranged from 62.5% to 100%; interrater reliability for three of the items was <70% (Todd et al., 2008). This variation generated discussion of the inconsistencies in evaluations of student performance.
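The scoring and pass/fail rules described above can be sketched as a short function. This is a hypothetical illustration only; the data structure and item names are assumptions for the sketch, not part of the published instrument:

```python
def score_csei(ratings):
    """Score a C-SEI evaluation for a student group.

    `ratings` maps each applicable item name to 1 (competency
    demonstrated) or 0 (not demonstrated); items judged not
    applicable to the scenario are simply omitted, as the
    instrument's rules direct.

    Returns the raw score, the passing threshold (75% of the
    number of applicable items), and whether the group passed.
    """
    applicable = len(ratings)
    raw_score = sum(ratings.values())
    passing_score = 0.75 * applicable
    return raw_score, passing_score, raw_score >= passing_score

# Example: 20 of the 22 items applied; the group met 16 of them.
ratings = {f"item_{i}": 1 for i in range(16)}
ratings.update({f"item_{i}": 0 for i in range(16, 20)})
raw, threshold, passed = score_csei(ratings)
print(raw, threshold, passed)  # 16 15.0 True
```

Note that, because not-applicable items are dropped before the 75% threshold is computed, the passing score scales with the scenario rather than being a fixed point count.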

Challenges of Student Assessment


For the past three decades, nurse educators have struggled with the subjectivity and inconsistency associated with assessment of student performance (Bondy, 1983; Lenburg & Mitchell, 1991; Mahara, 1998; Wood, 1982, 1986; Woolley, 1977). Some of the challenges that have been identified in the literature are presented here in the form of three scenarios.

Scenario 1
Some students are participating in an SCE that includes the insertion of a Foley catheter. For the item on the C-SEI titled "Performs Procedures Correctly," would nurse educators find that students demonstrate competency if the students forgot to wash their hands? Forgot to check patient identifiers? Broke sterile technique? Carried out the procedure with perfect technique but did not adequately explain the procedure to the patient? Did not respect the patient's privacy or interact with the patient throughout the procedure? The challenge for evaluators is the determination of competency among all of these criteria for proper performance of this skill: how does one determine if the competency is demonstrated (score 1) or is not demonstrated (score 0)? Nursing is holistic and multidimensional; insertion of a Foley catheter includes much more than just the psychomotor skill of insertion of the catheter. There are additional components to catheter insertion, such as establishing rapport, protecting the dignity and privacy of the patient, and communicating therapeutically throughout the procedure, all while maintaining a sterile and safe technique when inserting the catheter. This phenomenon illustrates two of the difficulties in objective assessment of student performance

pp e229-e233  Clinical Simulation in Nursing  Volume 9  Issue 7


Figure 1 A snapshot of the C-SEI.

in nursing education. The first of these difficulties is deciding which factors, or combination of factors, are necessary and sufficient to demonstrate competency. Second, the nature of nursing practice is holistic and requires a type of assessment of performance that is broader in scope than a competency checklist such as the objective structured clinical examination (Walsh, Bailey, & Koren, 2009).
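One way to picture the "necessary and sufficient" problem in Scenario 1 is to treat a dichotomous item as met only when every behavior the faculty has agreed is critical was observed. The sketch below is purely illustrative; the behavior names and the all-or-nothing rule are assumptions for the example, not the published C-SEI criteria:

```python
# Hypothetical consensus for one item ("Performs Procedures Correctly"):
# the educators agree these behaviors are critical, so the item is
# scored 1 only when all of them were observed during the SCE.
CRITICAL_BEHAVIORS = {
    "hand_hygiene",
    "patient_identifiers_checked",
    "sterile_technique_maintained",
}

def rate_item(observed_behaviors):
    """Return 1 if every critical behavior was observed, else 0."""
    return 1 if CRITICAL_BEHAVIORS <= set(observed_behaviors) else 0

# Perfect insertion technique alone does not earn the point if the
# patient's identifiers were never checked:
print(rate_item({"hand_hygiene", "sterile_technique_maintained"}))  # 0
```

The value of the approach is not the code itself but the step it forces beforehand: the faculty must make the implicit criteria explicit before anyone scores.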

Scenario 2
Two nurse educators observe the exact same SCE. One nurse educator may assign the grade of 1 (demonstrates competency) while the other may assign a 0 (does not demonstrate competency). Because of the complexity involved, not only is it difficult to determine which elements are essential for competency evaluation, but it is also difficult to ensure consistency among faculty evaluators. Nurse educators bring their own value systems to the evaluation experience and vary in philosophies of teaching and learning. Some educators place more emphasis on certain aspects of nursing than on others, often without being conscious of this emphasis. For example, one nurse educator may consider sterile technique to be the most critical component of Foley insertion, while another nurse educator may value the use of therapeutic communication throughout the procedure. Guba and Lincoln (as cited in Mahara, 1998) state that a nurse educator "draw[s] on her or his personal experience and knowledge of nursing, education and evaluation to apprehend what is salient, determine what needs to be examined further, and . . . judge the soundness of the student's practice rationales in each particular instance" (p. 1343). These variations influence decision making in evaluation of student performance.
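Scenario 2 is exactly what the pilot study's item-level interrater reliability figures quantify: how often two raters assign the same 0/1 score to the same performance. A minimal sketch of percent agreement follows; the computation shown is the standard simple-agreement formula, and the data are made up for illustration, not the pilot data:

```python
def percent_agreement(rater_a, rater_b):
    """Percent agreement between two raters' dichotomous (0/1) scores.

    Each list holds one rater's score for the same sequence of
    observed performances on a single C-SEI item.
    """
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must score the same performances")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Two educators scoring the same eight student groups on one item:
a = [1, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 1, 1, 1, 1]
print(percent_agreement(a, b))  # 75.0
```

An item whose agreement falls below an agreed floor (the pilot flagged items under 70%) is a natural trigger for the kind of faculty discussion the orientation process described later is built around.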

Scenario 3
During orientation, a nurse educator is explaining expectations in the simulation environment. A student says, "It seems that faculty members emphasize different requirements. What are your expectations?" This scenario illustrates two key issues associated with assessment of student performance. First, there is a long-standing discussion among nurse educators about the inconsistency and subjectivity of student assessment (Bondy, 1983; Lenburg & Mitchell, 1991; Mahara, 1998; Wood, 1982, 1986; Woolley, 1977). Because of the prevalence of these issues within nursing programs, it is reasonable to assume that students are also aware of inconsistencies in evaluation practices among nurse educators. Second, assessment of student performance is directly influenced by the context of the learning environment, specifically the teacher-student relationship (Benner, 1982; Mahara, 1998). Therefore, student performance itself may also be influenced by these relationships. Not only do varying philosophies about teaching and learning affect consistency among nurse educators; they may also influence how students perform. As these scenarios clearly illustrate, evaluation of student performance can be an inconsistent and subjective process. In recognizing these difficulties, nurse educators can work to improve the objectivity and consistency of evaluation processes. We discussed the implications of these inconsistencies for the psychometric properties and the usefulness of the C-SEI. The C-SEI was designed for universal applicability at any level with any simulation scenario, which is of great benefit to nursing education (Todd et al., 2008). To improve consistency in evaluation with the C-SEI, we created an innovative strategy whose focus is to stimulate discussion among nurse educators about what is essential in student performance.

Strategy for Improving Consistency

A process was developed to facilitate accurate and appropriate evaluation of student performance through use of the C-SEI. The researchers determined that the most efficacious method to improve consistency in the evaluation of student performance was through an experiential process. This process provides an opportunity for nurse educators to observe scenarios in which nurse educators model discussion and decision making about the evaluation process and the determination of demonstrated competency. Through observing this decision-making process, in which consistency is reached, nurse educators may be able to facilitate similar negotiations in their nursing programs. The orientation process for using the C-SEI is in the format of a CD, which has a Web-based appearance (see Figure 2). The process begins with viewing an



Figure 2 Snapshot of assessment component of C-SEI. A description of the component is provided, along with links to the discussion worksheet example, discussion worksheet template, assessment video, and transcript of video. The assessment video demonstrates faculty discussion regarding expected minimum behaviors to demonstrate competence in each of the items under assessment with the C-SEI.

introductory video, which highlights key points about the C-SEI, and introduces a hypothetical case scenario featuring an older man with an acute exacerbation of chronic obstructive pulmonary disease (COPD). The process involves viewing videos of nurse educator discussions while referring to the printed documents. Users are directed to a link to print the C-SEI Discussion Worksheets. Two forms of the Discussion Worksheets are available. One form is a completed example of the Discussion Worksheet

using the COPD scenario, and the other is a blank template for future use (Figure 3). These documents provide guidance to the nurse educators as they complete the process. Nurse educators are instructed to view video clips related to the four components of the C-SEI and view examples of completed Discussion Worksheets that have resulted from the video discussion of the COPD scenario. These videos provide a visual example of experienced

Figure 3 Sample blank and completed discussion worksheets (Assessment Component of C-SEI).


evaluators conversing and negotiating essential elements of satisfactory student performance for each of the competencies. Last, the CD contains links to a copy of the C-SEI, the publication describing the initial psychometric properties of the C-SEI, assumptions of evaluation of students in simulation, and guidelines for SCEs.

Discussion

The orientation process designed to accompany the use of the C-SEI addresses the key complexities in assessment of student performance. First, the robust discussion among evaluators establishes the cluster of expected minimum behaviors for the scenario that are necessary for an adequate performance. Without this formal process, nurse educators may not engage in purposeful dialogue about assessment of student performance. For instance, to demonstrate competency of a technical skill such as Foley insertion, nurse educators may decide that the students need to correctly identify the patient, perform hand hygiene, and maintain a sterile technique. Second, dialogue provides a forum for nurse educators to voice opinions and communicate their values while providing the opportunity to gain a broader perspective from the ideas of others, to compromise, and to come to a consensus. Finally, this process results in clear, direct, and consistent expectations for students, thus decreasing nurse educator subjectivity. This process is one example of how nurse educators can address some of the difficulties associated with evaluation. Many of the challenges with evaluation in simulation described here are also concerns in the evaluation of student performance in the clinical setting (Bondy, 1983; Lenburg & Mitchell, 1991; Mahara, 1998; Oermann, Yarbrough, Saewert, Ard, & Charasika, 2009; Wood, 1982, 1986; Woolley, 1977). Many clinical evaluation forms are dichotomous, assigning either an S (satisfactory) or a U (unsatisfactory) to student performance, much like the rating for the C-SEI. This process could be duplicated for use in evaluation of students in the clinical setting, generating critical dialogue among nurse educators to produce a set of minimum expected behaviors that apply to the clinical setting. The process described here contributes to the knowledge of student assessment and evaluation in nursing education. Recommendations for future research include further psychometric testing of the C-SEI, formal evaluation of the orientation process, and pilot testing the use of the C-SEI in the clinical setting.

Acknowledgments

The authors acknowledge funding from the Health Future Foundation, Creighton University. The authors acknowledge Dr. Joan Norris for her persistence with this project and review of the manuscript.

References
American Association of Colleges of Nursing. (1998). The essentials of baccalaureate education for professional nursing practice. Washington, DC: American Association of Colleges of Nursing.

American Association of Colleges of Nursing. (2008). The essentials of baccalaureate education for professional nursing practice. Washington, DC: American Association of Colleges of Nursing.

Benner, P. (1982). Issues in competency-based testing. Nursing Outlook, 30, 303-309.

Bondy, K. N. (1983). Criterion-referenced definitions for rating scales in clinical evaluation. Journal of Nursing Education, 22, 376-382.

Lenburg, C. B., & Mitchell, C. A. (1991). Assessment of outcomes: The design and use of real and simulation nursing performance examinations. Nursing and Health Care, 12(2), 68-74.

Mahara, M. S. (1998). A perspective on clinical evaluation in nursing education. Journal of Advanced Nursing, 28(6), 1339-1346. Retrieved March 8, 2012, from http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2648.1998.00837.x/full

Oermann, M. H., Yarbrough, S. S., Saewert, K. J., Ard, N., & Charasika, M. (2009). Clinical evaluation and grading practices in schools of nursing: National survey findings Part II. Nursing Education Perspectives, 30(6), 352-357. Retrieved March 8, 2012, from http://www.nlnjournal.org/doi/full/10.1043/1536-5026-30.6.352

Todd, M., Manz, J., Hawkins, K., Parsons, M., & Hercinger, M. (2008). The development of a quantitative evaluation for simulations in nursing education. International Journal of Nursing Education Scholarship, 5, e1-e17.

Walsh, M., Bailey, P. H., & Koren, I. (2009). Objective structured clinical evaluation of clinical competence: An integrative review. Journal of Advanced Nursing, 65(8), 1584-1595. doi:10.1111/j.1365-2648.2009.05054.x

Wood, V. (1982). Evaluation of student nurse clinical performance: A continuing problem. International Nursing Review, 29(1), 11-18.

Wood, V. (1986). Clinical evaluation of student nurses: Syllabus needs for nursing instructors. Nurse Education Today, 6, 208-217. doi:10.1016/0260-6917(86)90116-4

Woolley, A. S. (1977). The long and tortured history of clinical evaluation. Nursing Outlook, 25, 308-315.

