was emphasized by the British Royal Colleges of Surgery in the reply to the General Medical Council's determination on the Bristol case, wherein they state: "there should be no learning curve as far as patient safety is concerned" [3]. The malpractice crisis has also spread to include some suits alleging residency program "educational malpractice" and responsibility of program directors for purported resident graduate negligence [4]. Reprisal litigation from residents dissatisfied with or terminated from their training programs also underscores the need for validated objective assessments during training.
This focus on outcomes has also spread into the way residents are taught, evaluated, and certified. In 2001, the Accreditation Council for Graduate Medical Education (ACGME) initiated its Outcome Project [5]. This long-term initiative focuses on the educational outcomes of residency training programs rather than the previous emphasis on the "potential" for a program to educate its residents through an organized curriculum and compliance with specified program requirements. The ACGME accreditation process has shifted from verifying program components to verifying the program's educational product. At a minimum, programs are mandated to use assessments of their educational outcomes to continuously improve their educational product: a resident graduate competent in all six of the ACGME general competencies. This programmatic feedback process involves many levels of assessment beyond measuring just resident knowledge, skills, and attitudes; it also may require evaluating graduate, faculty, patient, departmental, and institutional outcomes. Residency programs are expected to consider not only aggregate learner performance data (eg, percentile ranking on in-training exams, first-attempt certification exam pass rate), but also external program performance measures. These "external indicators" are not yet defined for OTOHNS programs, but can include metrics like clinical quality measures, patient survey results, and complication rates. Although such changes to residency program evaluation are expected to be a dynamic, evolving process, documentation of the feedback loop will be necessary for both program and institutional accreditation. Finally, similar information will likely be required in the future as a component of the maintenance of certification process developed by the American Board of Otolaryngology (ABOto). The thrust toward board maintenance of certification requirements is congruent with the sentiments for continued measurement of physician competency. Although the ACGME has placed the focus on educational outcomes rather than clinical outcomes, there is obvious significant overlap.
All of these interrelated forces, both public and within the medical profession itself, have highlighted the need for valid assessments of trainees' competency as surgical specialists. Although the thorough evaluation of competency in all areas of a physician's practice by a feasible, reliable, and valid assessment process is important, at the core of an OTOHNS practice is surgical competency. Surgical competency obviously involves more than just doing the operation. Darzi and Mackay [6] describe four essential components of surgical care in which a surgeon must be competent: diagnostic ability, treatment plan formulation, technical skill performance, and postoperative care. All of these components involve competency in cognitive and personality skills such as decision making/judgment, knowledge, communication, teamwork, and leadership [7]. Thus, surgical competency requires competency in all six of the ACGME general competencies and not just patient care. Of all areas of surgical care, technical skill performance has been the most challenging in terms of objective assessment. Within OTOHNS itself, let alone any other surgical field, these skills remain variable in their nature and complexity. The current and potential future solutions to the challenge of evaluating this component of surgical competency are the focus of this article.
board. The root-cause analysis of this deadly error identified the crew's increased stress and fatigue as contributors to poor decision making, communication, and teamwork [10]. To measure competency in the integration of nontechnical skills during a technical procedure, behavioral marker systems have been developed and are used widely in these industries. These assessments allow a qualified trainer to identify and rate the behavior of the trainee during a simulation. For almost 25 years, the aviation industry has used a behavioral marker system called crew resource management, which has been shown to significantly improve aviation safety.
In the last few years, two behavioral marker systems have been developed for training residents in anesthesiology and surgery: Anaesthetists' Non-Technical Skills (ANTS) and Surgeons' Non-Technical Skills (NOTSS) [11,12]. These behavioral marker systems identify elements of behavior such as communication, teamwork, situational awareness, and decision making. Currently, the use of simulations and behavioral marker systems, though certainly of demonstrated value in the aviation and nuclear power industries, presents considerable cost and time challenges for inclusion in residency training.
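To make the behavioral marker approach concrete, the sketch below records a single simulated-scenario observation. The category names follow the NOTSS system [12]; the four-point scale and the mean-rating summary are illustrative assumptions, not the published scoring rules.

```python
# Minimal sketch of a behavioral marker observation, assuming NOTSS-style
# categories and a hypothetical 1 (poor) to 4 (good) rating scale.
from dataclasses import dataclass, field
from statistics import mean

NOTSS_CATEGORIES = (
    "situation awareness",
    "decision making",
    "communication and teamwork",
    "leadership",
)

@dataclass
class MarkerRating:
    trainee: str
    rater: str
    scenario: str
    ratings: dict = field(default_factory=dict)  # category -> 1..4

    def rate(self, category: str, score: int) -> None:
        if category not in NOTSS_CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        if not 1 <= score <= 4:
            raise ValueError("score must be between 1 and 4")
        self.ratings[category] = score

    def summary(self) -> float:
        """Mean rating across the categories scored so far."""
        return mean(self.ratings.values())

obs = MarkerRating("PGY-2 resident", "faculty observer", "airway crisis simulation")
obs.rate("communication and teamwork", 3)
obs.rate("decision making", 2)
print(f"mean marker rating: {obs.summary():.1f}")
```

In practice a trained rater would also record behavioral exemplars for debriefing; a structured record simply makes those observations comparable across scenarios and raters.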
In reality, there is no "ideal" assessment that fulfills all of the above requirements. No single assessment evaluates all of the objectives or outcomes that need to be measured. Thus, difficult choices must be made about what can realistically be assessed. Progression through an OTOHNS residency has classically been based on the apprenticeship model, relying on the traditional graded-responsibility, experience-based model. The main feature of this model is a teacher-centered approach based on loosely structured, one-on-one supervised situations where principles are taught and the learner is assessed on the basis of the teacher's interpretation of current standards of practice [15]. This traditional approach has helped to exacerbate the current "reality" of the limitations of today's surgical competency assessment techniques. Progress and eventual graduation rely on subjective evaluations by faculty. This requires accurate evaluator recall of past intermittent and widely varied events and generally stems from an overall "gestalt" rather than any objective measures. Anonymity remains difficult for OTOHNS programs because of their smaller size, making concerns about the threat of retaliation a reality. The number of faculty evaluators on a given rotation is even smaller, and each has a potentially different definition of competency. Additionally, the influence of resident duty-hour limitations, decreased clinical reimbursements, a continuing trend toward superspecialization, and a focus on increasing health care resource efficiency has also hampered progress toward an "ideal" assessment system. These influences decrease the amount of educational resources available, namely money and faculty and student time. The comparably rapid expansion of knowledge, technology, and techniques within OTOHNS not only tends to use these already limited resources at a faster rate, but also provides a moving target in terms of what needs to be evaluated.
Making it feasible
Incorporating a feasible assessment system, even given the above-described constraints and challenges, is a realistic and necessary goal. Box 1 summarizes some general steps that can help with the implementation of an efficient evaluation process. Incorporating these steps within an otolaryngology residency program is discussed below.
First, delineate the minimum requirements needed. The ACGME Common Program Requirements delineate the necessary minimum assessment methods for residency programs, but these minimums may be influenced by the JCAHO and local credentialing requirements as well. Current recommendations include the use of an end-of-rotation global assessment tool and at least one other method. Because most of the technical skill component of surgical competency falls under the patient care competency, recommendations suggest a focused assessment method such as direct observation and concurrent evaluation. Use of multiple assessment methods to measure technical skill reduces the subjectivity of the process and helps overcome the different limitations inherent in each particular method.
Second, it is important to identify what resources are available for the assessment system. Limitations on learner and evaluator time, as well as the available personnel, equipment, facilities, and funds that can be dedicated to the activity, need to be determined. Coordination across surgical subspecialties is an excellent way to facilitate availability of more resource-intensive assessment methods. Even mobility and sharing of techniques between OTOHNS programs are possible and would certainly add to a particular method's attractiveness.
Third, use or adapt assessment methods already in place. This not only reduces the sense of "change" for evaluator and learner, but also saves significant implementation time and effort. Additionally, there may be proven external performance measures currently used by the university or department that can be easily adapted for learner assessment (eg, quality control measures, Press Ganey patient surveys). Proven assessment methodologies from other fields that also require competency evaluation of high-stakes skills are further potential resources for adoption (see below).
Fourth, use multiple evaluators and perform the assessment at multiple performance milestones throughout the training program. Engaging different evaluators spreads out the responsibility and should not influence the outcome of a reliable assessment method. Focusing on specific program milestones, and spreading the assessments out over the 5-year training period, should improve the usefulness of the evaluation outcomes by matching a specific skill with its assessment. If assessments can be combined with the learning activity, the efficiency of the process should be even higher. This can even extend to involving learners in the development and application of the assessment process. Learners who have achieved a level of competency can assess those still progressing toward that goal. This interaction should educationally benefit both parties. In contrast, routine assessment that is temporally based, such as after each rotation or academic year, and not related to a specific milestone level, can dilute this feedback efficacy.
Fifth, educate both the evaluator and the learner about the assessment tools and processes. Providing specific objective definitions of assessment levels, such as what "competent" or "satisfactory" means, should improve the usefulness and applicability of the tool across learners. Learners are then measured against a known scale rather than against each other. This can also allow more self-assessment by the resident, as the objectives are well known and defined, potentially guiding more independent study and practice.
Sixth, use the latest technology available to administer the assessment as well as to collect and analyze evaluation results. Electronically administered tools are easier to use, especially for the technologically adept, and can be accessed from nearly anywhere whenever an evaluator has time to complete the process. Requiring less completion time should increase compliance while simultaneously allowing faster analysis and a shorter time to feedback.
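As a rough illustration of what electronic administration buys, the sketch below models an evaluation record and derives two feedback-loop numbers, on-time compliance and a mean rating. The record fields and metrics are hypothetical, not any particular commercial system.

```python
# Illustrative electronic evaluation records with quick aggregation.
# Field names and the 1-9 global rating scale are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Evaluation:
    resident: str
    evaluator: str
    rotation: str
    due: date
    submitted: date
    score: int  # assumed global rating on a 1-9 scale

def compliance_rate(evals: list[Evaluation]) -> float:
    """Fraction of evaluations submitted on or before their due date."""
    return sum(e.submitted <= e.due for e in evals) / len(evals)

evals = [
    Evaluation("resident A", "faculty 1", "otology", date(2007, 7, 1), date(2007, 6, 30), 7),
    Evaluation("resident A", "faculty 2", "otology", date(2007, 7, 1), date(2007, 7, 9), 6),
]
print(f"on-time compliance: {compliance_rate(evals):.0%}")
print(f"mean global rating: {sum(e.score for e in evals) / len(evals):.1f}")
```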
Direct observation
This method involves a senior colleague, usually a faculty member, observing a learner during a surgical task. The observer then documents an

structured criteria can also be used, possibly providing a more favorable environment for trainee feedback. Such a system allows multiple step-by-step reviews with many learners and focused identification of specific errors. This method does have a higher cost in terms of materials and editing time, and does not necessarily improve on reliability or validity [20,33,34]. In contrast, by condensing the edited video, evaluator time should be decreased, and videotaping procedures allows for better learner anonymity, eliminating gender, racial, or seniority biases [35].
Hand-motion analysis
Efficiency and accuracy of hand movements are a trademark of an expe-
rienced surgeon’s dexterity. Hand-motion analysis during a standardized
surgical task is possible using the commercially available Imperial College
Surgical Assessment Device. Through the use of passive trackers on the dor-
sum of each hand while performing a task through a magnetic field, currents
are induced in the trackers that allow hand position to be determined using
Cartesian coordinates. Number of movements, path length, speed of mo-
tion, and time on task can be measured and compared as a valid assessment
of skill during a standardized procedure. Streaming video allows segmental
focus into specific key steps of the observed procedure. These objective mea-
surements have been shown to be an effective index of technical skill in both
endoscopic and open procedures [40–43]. They have also been shown to
have a good concordance with OSATS results [44].
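Once hand positions are sampled, the metrics themselves are straightforward to compute. The sketch below derives path length, movement count, mean speed, and time on task from tracked coordinates; the sampling rate and the velocity-threshold heuristic for segmenting discrete movements are assumptions for illustration, not the device's published algorithm.

```python
# Simplified dexterity metrics from sampled 3D hand positions.
# Assumes positions in cm sampled at a fixed rate; the movement-count
# heuristic (speed threshold crossings) is illustrative only.
import math

def motion_metrics(samples, hz=20.0, move_threshold=5.0):
    """samples: sequence of (x, y, z) positions in cm at `hz` samples/s."""
    dt = 1.0 / hz
    path_length, movements, moving = 0.0, 0, False
    for p, q in zip(samples, samples[1:]):
        step = math.dist(p, q)              # distance between consecutive samples
        path_length += step
        now_moving = (step / dt) > move_threshold
        if now_moving and not moving:
            movements += 1                  # onset of a distinct movement
        moving = now_moving
    time_on_task = (len(samples) - 1) * dt
    return {
        "path length (cm)": path_length,
        "number of movements": movements,
        "mean speed (cm/s)": path_length / time_on_task,
        "time on task (s)": time_on_task,
    }

# Synthetic trace: for a fixed task, fewer and more direct movements
# (shorter path, fewer onsets) read as more expert-like performance.
trace = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.5, 0.2, 0.0), (1.5, 0.2, 0.0)]
print(motion_metrics(trace))
```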
Simulation
Simulation methods attempt to imitate or resembledbut not duplicated
real-life clinical situations. Like real cases, simulation can provide a number
of options to the learner but in a safe, standardized, and reproducible testing
environment that removes the worry of compromising patient safety or out-
come. Without the inhibiting fear of an irreversible change from an error,
feedback can be immediate, focused, and efficient. A controlled environment
can allow a ‘‘cleaner’’ and more subtle assessment of performance that may
not be possible in real-life situations. Simulation can simultaneously provide
improved learning and assessment, and it affords the learner the opportunity
of repeated practice of a noncompetent area, measuring that progress with an
objective metric. Simulator metrics can provide motivation for the trainee,
and eventually set standards for certification, allowing objective comparison
of trainees both to each other and to a normative value. Simulation must al-
ways be considered an adjunct to competency judgments determined by ex-
pert assessment of observed performance in the OR and by measured
outcome variables from real procedures. Many studies need to be done to
fully validate each simulator, especially in the realm of predictive validity.
Simulation involves a wide range of growing techniques as technology
progresses. Most current simulators are able to distinguish between novice
and competent trainees, but are not yet sophisticated enough to distinguish
between the competent and the expert. Thus, simulators may be more appli-
cable to assessing the early phases of technical learning and skills [46]. Low-
fidelity simulators tend to be mechanical representations of a procedure’s
smallest fundamental components. These are generally organized into timed
stations and require faculty evaluators to observe the learner at each station.
This method forms the core of the above-described OSATS method. Such
inanimate devices (eg, sewing a Penrose drain laceration) are relatively inexpensive and made from readily available products, but still require a significant time commitment by evaluating faculty. Body part models, which can further improve the semblance to real life, are expensive. As stated above, the OSATS method using bench-top models has been shown to correlate with OR performance, but direct translation to a broader range of surgical procedures still needs to be proved [30].
Live animal models or human cadavers can further improve the simulation. Live animal models can simulate the "feel" of real surgery, as they are living tissue, but generally do not reflect the exact anatomic correlate as human cadaver models can. Cadaver models, in turn, lose the feel of real tissue handling; the temporal bone laboratory is an example of this. The OSATS using bench-top models shows good correlation with both animal and cadaver models, but at a significantly higher overall cost [24,47]. Higher-fidelity simulators include mannequins that incorporate electronics to simulate normal and pathologic conditions and have the ability to respond realistically to interventions by the trainee. Human models with high-performance simulator technology that go well beyond "resuscitation Annie" are now available. These are frequently used by anesthesiologists for critical-incident and team training, but have obvious direct applications to airway situations in OTOHNS as well [25].
Computer-based simulators are becoming increasingly available. Such "virtual reality" simulators also have varying degrees of fidelity. They range from abstract graphics that measure partial task skills to full-OR simulators. Users are able to interact in real time with a three-dimensional computer database through the use of their own senses and skills. The main challenges of creating more advanced simulators include simulating realistic surgical interfaces (coupling of instrument to tissue); geometric modeling of objects and their interactions; and an accurate operative field with advanced signal processing to simulate such phenomena as texture, light, smoke, and body fluids [48]. The first virtual reality system used in surgical skills assessment was the Minimally Invasive Surgical Trainer-Virtual Reality, a lower-fidelity system that focused on simulating basic laparoscopic skills rather than the appearance of the surgical field [49]. It was developed as a collaboration between surgeons and psychologists who performed a skills analysis of the laparoscopic cholecystectomy. The Advanced Dundee Endoscopic Psychomotor Tester is another example; it is essentially a computerized system connected to standardized endoscopic equipment [50].
Computers are now better able to replicate not only realistic organ surface image and topography, but also the instrument "feel" a surgeon would expect from a real patient (realistic haptic fidelity). Rapid advances in technology, and successful use in certification in many other high-stakes fields (see above), have made the availability of simulators in measuring surgical competency a reality. The major thrust of development has been in minimally invasive procedures, especially laparoscopic, because of the more

different aspects of temporal bone anatomy. Zirkle and colleagues [40] studied the use of the VR TB as an assessment tool for OTOHNS trainees. Cadaveric temporal bone and VR TB drilling were assessed by both expert observers and hand-motion analysis. Experts reviewed videotaped sessions and were able to distinguish novice from experienced surgeons (construct validity) on the cadaver models, but showed only a trend toward doing so on the VR TB. Experienced trainees outperformed novices on all hand-motion analysis metrics on the VR TB, and only on the time-on-task metric for the cadaveric models. This limited study of 19 trainees concluded that the VR TB is an appropriate assessment of trainees for the transition from laboratory-based to operative-based learning. More research needs to be performed to confirm temporal bone simulator validity and reliability as a competency assessment tool [61].
In otolaryngology, just as in general surgery, simulation technology focuses on endoscopic approaches, most notably endoscopic sinus surgery. For example, a low-fidelity simulator using a force-torque sensor during gauze packing in a human nasal model was able to differentiate experienced and intermediate endoscopic sinus surgeons [62]. More experience has been gained in OTOHNS with an endoscopic sinus surgery simulator (ES3) developed by Lockheed Martin (Akron, Ohio). The ES3 comprises four principal hardware components: a simulation host platform (high-powered Silicon Graphics workstation); a haptic controller that provides coordination between the universal instrument handler and the virtual surgical instruments; a voice-recognition instructor that operates the simulator; and an electromechanical platform that holds the endoscope replica, universal surgical instrument handle, and rubber human head model. Simulated surgical tasks range from vasoconstrictor injection to total ethmoidectomy and agger nasi dissection. The ES3 has a novice mode, thought to be a good tool to assess skill competency, whereas the intermediate mode seems best suited for surgical training. The advanced mode has potential as a practice and rehearsal tool for trained learners. Fried and colleagues [63] have performed extensive construct validation studies of the ES3 to demonstrate its discriminative capabilities. It appears to be a viable assessment tool for various endoscopic skills, especially if used in the novice mode, and correlates strongly with other validated measures of perceptual, visuospatial, and psychomotor performance [64,65]. Their extensive experience observing expert performance on the ES3 has allowed benchmark criteria to be developed that will be useful in the future to establish objective levels of proficiency. Its usefulness in predicting endoscopic sinus surgery skills in the OR (predictive validity) remains to be shown.
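One plausible way such expert benchmarks could anchor an objective proficiency criterion is sketched below: a trainee is called proficient only if every metric falls near the expert distribution. The two-standard-deviation rule and the metric names are assumptions for illustration, not the published ES3 benchmarks.

```python
# Hypothetical proficiency criterion anchored to expert benchmark trials.
from statistics import mean, stdev

def benchmarks(expert_trials: dict[str, list[float]]) -> dict[str, tuple[float, float]]:
    """Per-metric (mean, sd) computed from a panel of expert trials."""
    return {m: (mean(v), stdev(v)) for m, v in expert_trials.items()}

def proficient(trainee: dict[str, float], bench, k: float = 2.0) -> bool:
    """Proficient only if every metric is within k SDs of the expert mean."""
    return all(abs(trainee[m] - mu) <= k * sd for m, (mu, sd) in bench.items())

experts = {
    "task time (s)": [310.0, 295.0, 330.0, 305.0],
    "mucosal injuries": [1.0, 0.0, 2.0, 1.0],
}
bench = benchmarks(experts)
# False here: the task time falls just outside the expert envelope.
print(proficient({"task time (s)": 340.0, "mucosal injuries": 2.0}, bench))
```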
The future
All assessment efforts should be focused on the goal of producing the most outstanding graduating OTOHNS residents possible. No single assessment will be the panacea in the struggle to prove surgical competency in trainees; instead, a mixture of assessment tools will be required. The resident must pass each assessment in a specified longitudinal fashion, rather than achieving a passing average for a group of assessments. Advancement of residents through their training should depend on these well-defined milestones of competency rather than mostly on time and experience. This may make some training periods longer for some and shorter for others. For example, technical surgical progress through the early years of residency could be assessed every 6 months on bench models of core fundamental surgical techniques. These techniques would be made up of core components of both basic and advanced OTOHNS procedures. As competency is progressively obtained and documented, the trainee is allowed to progress to a more senior status, and regular assessments with higher-fidelity bench models and, ultimately, virtual reality simulators could be integrated. Each resident could participate in an annual competency fair, testing more in-depth skills using different methods with the entire resident complement (junior and senior trainees). This could all take place in parallel with objective structured observations during live or videotaped level-appropriate procedures throughout the year. Objective testing of every procedure may not be possible, but competency in defined seminal procedures that form the basis of an OTOHNS practice must be demonstrated at each level of competency-based advancement. The trainees would be required to maintain a portfolio of this stepwise structured progress in surgical technical competency, and advancement would depend on successful completion of each objective assessment. If this were standardized nationally, it could be adopted as part of the ABOto certification process. Objective documentation of the progress toward surgical competency, especially technical skill competency, could then be monitored during training rather than from an "after graduation" certification examination, when feedback is less timely. This approach would make the certification of residents' progress to technical competency more formative than summative and thus help to further their progress toward surgical competency.
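A minimal sketch of this pass-every-gate advancement logic, in contrast to averaging across a group of assessments, might look like the following; the milestone names and pass marks are hypothetical.

```python
# Longitudinal, pass-every-gate advancement: a strong average cannot
# compensate for a single failed milestone. Names and marks are made up.
from dataclasses import dataclass

@dataclass
class MilestoneResult:
    milestone: str
    score: float
    pass_mark: float

    @property
    def passed(self) -> bool:
        return self.score >= self.pass_mark

def may_advance(results: list[MilestoneResult]) -> bool:
    """Advance only if every milestone assessment was passed outright."""
    return all(r.passed for r in results)

results = [
    MilestoneResult("bench suturing, month 6", 82.0, 70.0),
    MilestoneResult("temporal bone drilling, month 12", 64.0, 70.0),
]
print(may_advance(results))  # False: the drilling gate was not passed
```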
Summary
Classic surgical training and assessment have been based on the apprenticeship model. The vast majority of residents are trained well, so radical changes in the methodology must be approached with caution. Technical skill remains only one component of overall surgical competency, but has been one of the most difficult to measure. Current assessment methods are subjective and unreliable and include techniques such as operative logs, end-of-rotation global assessments, and direct observation without criteria. Newer objective methods for assessing technical skill are being developed and undergoing rigorous validation; these include direct observation with criteria, final product analysis, and hand-motion analysis. Following the example set in fields in which high-stakes assessment is paramount, such as aviation, virtual reality simulators have been introduced into surgical competency assessment and training. Significant work remains to integrate these assessments into both training programs and practice and to demonstrate a resultant improvement in surgical outcomes. Continuous assessment and subsequent real-time feedback provided by these methods are important in the structured learning of surgical skills and will prove increasingly important in the documentation of trainees' surgical competency.
References
[1] The Joint Commission. Available at: www.jointcommission.org. Accessed July 9, 2007.
[2] The Bristol Royal Infirmary Inquiry. The inquiry into the management of care of children receiving complex heart surgery at the Bristol Royal Infirmary. Available at: www.Bristol-inquiry.org.uk. Accessed July 9, 2007.
[3] Giddings T, Gray G, Maran A, et al. Response to the General Medical Council determination on the Bristol case. London: The Senate of Surgery of Great Britain and Ireland; 1998.
[4] Dibb CB. Medical residency: when are program administrators liable? Journal of Legal Education 2007;281:1–8.
[5] Outcome Project: enhancing residency education through outcomes assessment. Available at: www.acgme.org/Outcome. Accessed July 9, 2007.
[6] Darzi A, Mackay S. Assessment of surgical competence. Qual Health Care 2001;10(Suppl II):ii64–9.
[7] Yule S, Flin R, Paterson-Brown S, et al. Non-technical skills for surgeons in the operating room: a review of the literature. Surgery 2006;139(2):140–9.
[8] Wright M, Turner D, Harburg C. Competence assessment for the hazardous industries. Sudbury (Great Britain): Greenstreet Berman Ltd for the Health and Safety Executive; 2003.
[9] Hamman WR. The complexity of team training: what we have learned from aviation and its application to medicine. Qual Saf Health Care 2004;13:72–9.
[10] Collyer SC, Malecki GS. Tactical decision making under stress: history and overview. In: Cannon-Bowers JA, Salas E, editors. Making decisions under stress: implications for individual and team training. Washington, DC: American Psychological Association; 1999. p. 3–15.
[11] Fletcher G, Flin R, McGeorge P, et al. Anaesthetists' Non-Technical Skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth 2003;90:580–8.
[12] Flin R, Yule S. The Non-Technical Skills for Surgeons (NOTSS) system handbook v1.2. 2006. Available at: http://www.abdn.ac.uk/iprc/notss. Accessed July 9, 2007.
[13] Airasian PW. Classroom assessment. 3rd edition. New York: McGraw-Hill; 1997.
[14] Kern DE, Thomas PA, Howard DM, et al. Curriculum development for medical education: a six-step approach. Baltimore (MD): Johns Hopkins University Press; 1998.
[15] Cosman PH, Cregan PC, Martin CJ, et al. Virtual reality simulators: current status in acquisition and assessment of surgical skills. ANZ J Surg 2002;72:30–4.
[16] Scott DJ, Valentine RJ, Bergen PC, et al. Evaluating surgical competency with the American Board of Surgery In-Training Examination, skill testing, and intraoperative assessment. Surgery 2000;128(4):613–22.
[17] Adrales GL, Donnelly MB, Chu UB, et al. Determinants of competency judgments by experienced laparoscopic surgeons. Surg Endosc 2004;18(2):323–7.
[18] Reznick RK. Teaching and testing technical skills. Am J Surg 1993;165:358–61.
[19] Carr MM. Program directors' opinions about surgical competency in otolaryngology residents. Laryngoscope 2005;115:1208–11.
[20] Grantcharov TP, Bardram L, Funch-Jensen P, et al. Assessment of technical surgical skills. Eur J Surg 2002;168:139–44.
[21] Moorthy K, Munz Y, Adams S, et al. Self-assessment of performance among surgical trainees during simulated procedures in a simulated operating theater. Am J Surg 2006;192(1):114–8.
[22] Moorthy K, Munz Y, Sarker SK, et al. Objective assessment of technical skills in surgery. BMJ 2003;327:1032–7.
[23] Available at: http://www.acgme.org/acWebsite/RRC_280/280_resEval.asp. Accessed July 9, 2007.
[24] Martin JA, Regehr G, Reznick R, et al. Objective structured assessment of technical skill (OSATS) for surgical residents. Br J Surg 1997;84(2):273–8.
[25] Siker ES. Assessment of clinical competence. Curr Opin Anaesthesiol 1999;12(6):677–84.
[26] Pandey VA, Wolfe JH, Liapis CD, et al. The examination assessment of technical competence in vascular surgery. Br J Surg 2006;93(9):1132–8.
[27] Dailey SH, Kobler JB, Zeitels SM. A laryngeal dissection station: educational paradigms in phonosurgery. Laryngoscope 2004;114(5):878–82.
[28] Gosman GG, Simhan HN, Guido RS, et al. Focused assessment of surgical performance: difficulty with faculty compliance. Am J Obstet Gynecol 2005;193(5):1811–6.
[29] Darzi A, Datta V, Mackay S. The challenge of objective assessment of surgical skill. Am J Surg 2001;181:484–6.
[30] Datta V, Bann S, Beard J, et al. Comparison of bench test evaluations of surgical skill with live operating performance assessments. J Am Coll Surg 2004;199(4):603–6.
[31] Beard JD, Jolly BC, Newble DI, et al. Assessing the technical skills of surgical trainees. Br J Surg 2005;92:778–82.
[32] Roberson DW, Kentala E, Forbes P. Development and validation of an objective instrument to measure surgical performance at tonsillectomy. Laryngoscope 2005;115(12):2127–37.
[33] Naik VN, Perlas A, Chandra DB, et al. An assessment tool for brachial plexus regional anesthesia performance: establishing construct validity and reliability. Reg Anesth Pain Med 2007;32(1):41–5.
[34] Hance J, Aggarwal R, Stanbridge R, et al. Objective assessment of technical skills in cardiac surgery. Eur J Cardiothorac Surg 2005;28(1):157–62.
[35] Saleh GM, Gauba V, Mitra A, et al. Objective structured assessment of cataract surgical skill. Arch Ophthalmol 2007;125:363–6.
[36] Datta V, Bann S, Mandalia M, et al. The surgical efficiency score: a feasible, reliable, and valid method of skills assessment. Am J Surg 2006;192(3):372–8.
[37] Szalay D, MacRae H, Regehr G, et al. Using operative outcome to assess technical skill. Am J Surg 2000;180:234–7.
[38] Datta V, Mandalia M, Mackay S, et al. Relationship between skill and outcome in the laboratory-based model. Surgery 2002;131:318–23.
[39] Bann S, Khan M, Datta V, et al. Surgical skill is predicted by the ability to detect errors. Am J Surg 2005;189(4):412–5.
[40] Zirkle M, Roberson DW, Leuwer R, et al. Using a virtual reality temporal bone simulator to assess otolaryngology trainees. Laryngoscope 2007;117(2):258–63.
[41] Taffinder N, Smith SG, Huber J, et al. The effect of a second-generation 3D endoscope on the laparoscopic precision of novices and experienced surgeons. Surg Endosc 1999;13:1087–92.
[42] Datta V, Mandalia M, Mackay S, et al. Relationship between skill and outcome in the laboratory-based model. Surgery 2002;131(3):318–23.
[43] Datta V, Mackay S, Mandalia M, et al. The use of electromagnetic motion tracking analysis to objectively measure open surgical skill in the laboratory-based model. J Am Coll Surg 2001;193:479–85.
[44] Datta V, Chang A, Mackay S, et al. The relationship between motion analysis and surgical technical assessments. Am J Surg 2002;184(1):70–3.
[45] Porte MC, Xeroulis G, Reznick RK, et al. Verbal feedback from an expert is more effective than self-accessed feedback about motion efficiency in learning new surgical skills. Am J Surg 2007;193(1):105–10.
[46] Reznick RK. Surgical simulation: a vital part of our future. Ann Surg 2005;242(5):640–1.
[47] Anastakis D, Regehr G, Reznick RK, et al. Assessment of technical skills transfer from the bench training model to the human model. Am J Surg 1999;177(2):167–70.
[48] Seymour NE, Rotnes JS. Challenges to the development of complex virtual reality simulations. Surg Endosc 2006;20:1774–7.
[49] Wilson MS, Middlebrook A, Sutton C. MIST-VR: a virtual reality trainer for laparoscopic surgery assesses performance. Ann R Coll Surg Engl 1997;79:403–4.
[50] Hanna GB, Drew T, Clinch P, et al. Computer-controlled endoscopic performance assessment system. Surg Endosc 1998;12:997–1000.
[51] Moorthy K, Smith S, Brown T, et al. Evaluation of virtual reality bronchoscopy as a learning and assessment tool. Respiration 2003;70(2):195–9.
[52] Henderson BA, Ali R. Teaching and assessing competence in cataract surgery. Curr Opin Ophthalmol 2007;18(1):27–31.
[53] Schendel S, Montgomery K, Sorokin A, et al. A surgical simulator for planning and performing repair of cleft lips. J Craniomaxillofac Surg 2005;33(4):223–8.
[54] Moorthy K, Munz Y, Adams S, et al. A human factor analysis of technical and team skills among surgical trainees during procedural simulations in a simulated operating theatre. Ann Surg 2005;242(5):631–9.
[55] Moorthy K, Munz Y, Forrest D, et al. Surgical crisis management training and assessment: a simulation-based approach to enhancing operating room performance. Ann Surg 2006;244(1):139–47.
[56] Reznick RK, MacRae H. Medical education: teaching surgical skills–changes in the wind. N Engl J Med 2006;355(25):2664–70.
[57] Baer S, Williams H, McCombe A. A model for instruction in myringotomy and grommet insertion. Clin Otolaryngol 1990;15:383–4.
[58] Holt GR, Parel SM, Shuler SL. A model training ear for teaching paracentesis, myringotomy, and insertion of tympanostomy tubes. Otolaryngol Head Neck Surg 1983;91:333–5.
[59] Hantman I. An ear manikin. Teaching and training device. Arch Otolaryngol 1968;88:407–12.
[60] Neal SL, Harris JP, Davidson TM. Artificial eardrum for instruction in myringotomy and PET tube insertion. Laryngoscope 1985;95:1008–9.
[61] Sewell C, Morris D, Blevins NH, et al. Validating metrics of a mastoidectomy simulator. Stud Health Technol Inform 2007;125:421–6.
[62] Kumagai T, Yamashita J, Morikawa O, et al. A new force-based objective assessment of technical skills in endoscopic sinus surgery. Stud Health Technol Inform 2007;125:235–7.
[63] Fried MP, Sadoughi B, Weghorst SJ, et al. Construct validity of the endoscopic sinus surgery simulator: II. Assessment of discriminant validity and expert benchmarking. Arch Otolaryngol Head Neck Surg 2007;133:350–7.
[64] Arora H, Uribe J, Ralph W, et al. Assessment of construct validity of the endoscopic sinus surgery simulator. Arch Otolaryngol Head Neck Surg 2005;131(3):217–21.
[65] Available at: http://www.acgme.org/acWebsite/resEvalSystem/reval_otolaryngology.asp. Accessed July 9, 2007.