
Evaluating the Written Opinions of Appellate Judges: Toward a Qualitative Measure of Judicial Productivity

MALIA REDDICK

INTRODUCTION

Court administrators and scholars have proposed a variety of quantitative measures of judicial productivity, from case processing rates, to the number of cases decided, to the length of judicial opinions. We contend that there is a qualitative component of judicial output as well, and we describe here a process for qualitatively assessing the written opinions of appellate judges.
The written opinion is an appellate judge's primary work product. How a case is decided determines the fate of the parties, but the explanation of that decision often establishes precedent, laying out guidelines for deciding future cases dealing with similar issues. The soundness of the legal reasoning and the clarity with which it is communicated determine the impact of the decision. As such, these factors may be viewed as qualitative indicators of a judge's productivity.
The Institute for the Advancement of the American Legal System at the
University of Denver (IAALS) has spent the last two years developing
recommendations for written opinion review. These recommendations are
based on insights gained from a national conference and follow-up task
force of judges, attorneys, and scholars, as well as focus groups with
appellate judges and attorneys. Here we describe the recommended model that emerged from these efforts, and contrast it with existing approaches to evaluating appellate judges' productivity.
I. Assessing Appellate Judicial Performance

There is no shortage of approaches to assessing the performance of appellate courts as a whole and of judges individually. Such assessments

Malia Reddick is the former Director of the Quality Judges Initiative at the Institute for the
Advancement of the American Legal System (IAALS) at the University of Denver; she
currently serves as a Consultant to IAALS.

New England Law Review, vol. 48, p. 547 (2014)

offer a number of benefits to the appellate judiciary, from fostering judicial


self-improvement, to promoting greater efficiency and accountability, to
enhancing the administration of justice. We discuss some of these
assessment approaches below.
A. Court Performance Measures
The National Center for State Courts (NCSC) has developed an
approach to measuring the performance of appellate courts, known as
Appellate CourTools.1 The performance indicators that comprise Appellate
CourTools include: time from case filing to disposition, clearance rates of
cases, age of active pending caseload, employee satisfaction, constituent
satisfaction, and reliability and integrity of case files. These measures allow
court administrators and judges to gauge how well appellate courts handle
cases, treat participants in the legal process, and engage employees. They
have been implemented in whole or in part in jurisdictions across the
country.
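Several of the Appellate CourTools caseflow indicators reduce to simple arithmetic over docket data. The sketch below is our own illustration of that arithmetic, not the NCSC's published formulas; the docket records and cutoff date are invented:

```python
from datetime import date

# Hypothetical docket: (filing date, disposition date); None means still pending.
docket = [
    (date(2013, 1, 15), date(2013, 11, 2)),
    (date(2013, 6, 1), date(2014, 3, 20)),
    (date(2014, 2, 10), None),
    (date(2014, 5, 5), None),
]

def clearance_rate(docket):
    """Outgoing (disposed) cases as a share of incoming (filed) cases."""
    outgoing = sum(1 for _, disposed in docket if disposed is not None)
    return outgoing / len(docket)

def time_to_disposition(docket):
    """Days from filing to disposition for each decided case."""
    return [(disposed - filed).days for filed, disposed in docket if disposed is not None]

def age_of_active_pending(docket, as_of):
    """Age in days, as of a given date, of each case still awaiting decision."""
    return [(as_of - filed).days for filed, disposed in docket if disposed is None]

print(clearance_rate(docket))                           # 0.5
print(time_to_disposition(docket))                      # [291, 292]
print(age_of_active_pending(docket, date(2014, 9, 22)))
```

A clearance rate below 1.0 signals a growing backlog; a rising age of active pending caseload shows where that backlog sits.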
B. Empirical Indicators
Some legal scholars assert that the productivity and overall quality of
appellate judges may be measured by objective indicators such as the
number of opinions they write and the number of citations to their
opinions. In a comparison of elected and appointed state supreme court justices, one study conceptualized productivity as a measure of effort and used the total number of opinions written as an indicator of productivity.2 The same study also used the number of out-of-state citations to a judge's opinions as an indicator of opinion quality, reasoning that "[b]etter opinions are cited more frequently than worse opinions."3 Using these measures, the authors found elected judges to be more productive (based on the number of opinions they write), while appointed judges wrote opinions of higher quality (based on the number of citations to their opinions by other state supreme courts).

Following this lead, another study used opinion citations as an indicator of the quality of federal appellate court judges, but took both positive and negative citations into account.4 Distinguishing citation

1 NAT'L CTR. FOR STATE COURTS, Appellate Court Performance Measures, COURTOOLS, http://www.courtools.org/Appellate-Court-Performance-Measures.aspx (last visited Sept. 22, 2014).
2 Stephen J. Choi et al., Professionals or Politicians: The Uncertain Empirical Case for an Elected Rather than Appointed Judiciary, 26 J.L. ECON. & ORG. 290, 290–336 (2008).
3 Id. at 296.
4 Robert Anderson IV, Distinguishing Judges: An Empirical Ranking of Judicial Quality in the United States Courts of Appeals, 76 MO. L. REV. 315, 315–20 (2011).


count from citation treatment, the author used Shepard's Citation codes to define positive and negative treatments.5 The difference in the number of positive and negative opinion citations was then used as an objective performance measure for each judge. This approach is not without its detractors, who argue that such studies "ignore[] aspects of judicial behavior that are arguably more important than the ones proxied, such as integrity, fairness, open-mindedness, thoroughness, and temperament."6
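Once each citation has been coded as a positive or negative treatment, computing this net measure is mechanical. The sketch below is a minimal illustration, not the study's actual implementation; the sample data are invented, and the treatment labels paraphrase the Shepard's codes listed in note 5:

```python
from collections import Counter

# Treatment labels adapted from the codes in note 5: "followed" is positive;
# "distinguished," "criticized," "limited," "questioned," and "overruled" are negative.
POSITIVE = {"followed"}
NEGATIVE = {"distinguished", "criticized", "limited", "questioned", "overruled"}

# Hypothetical coded citations to one judge's body of opinions.
citations = ["followed", "followed", "distinguished", "followed", "criticized"]

def net_citation_score(treatments):
    """Positive citations minus negative citations; neutral treatments are ignored."""
    counts = Counter(treatments)
    positive = sum(counts[code] for code in POSITIVE)
    negative = sum(counts[code] for code in NEGATIVE)
    return positive - negative

print(net_citation_score(citations))  # 3 positive - 2 negative = 1
```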
C. Bar Association Evaluations/Ratings
Bar associations in a number of states have programs where attorneys evaluate or rate appellate judges standing for retention or reelection. For example, the Wyoming State Bar conducts a Judicial Advisory Poll every two years, in which attorneys who have appeared before supreme court justices in the past twenty-four months may evaluate them on ten aspects of their performance.7 Survey results for each justice are published on the bar's website. Similarly, prior to each election, a committee of the Ohio State Bar Association evaluates supreme court candidates, including incumbent justices, their opponents, and candidates for open seats.8 The twenty-five-member Commission on Judicial Candidates assesses candidates on eight criteria and designates them as "Superior," "Highly Recommended," "Recommended," or "Not Recommended." Likewise, prior to judicial retention elections in Florida, the Florida Bar polls attorneys as to whether they believe each appellate judge standing for retention should remain on the bench.9
D. Special Interest Group Evaluations
Some special interest groups offer assessments of appellate judicial performance based on how judges have ruled in cases relating to that group's interests. These assessments typically turn on whether a group agrees or disagrees with a judge's decisions in cases involving hot-button issues, such as same-sex marriage, abortion rights, tort reform, capital
5 "Following" is a positive treatment code, while "distinguishing," "criticizing," "limiting," "questioning," and "overruling" are negative treatment codes. Id. at 334.
6 Scott Baker et al., The Continuing Search for a Meaningful Model of Judicial Rankings and Why It (Unfortunately) Matters, 56 DUKE L.J. 1645, 1657 (2009).
7 Judicial Advisory Poll Results, WYO. BAR, http://www.wyomingbar.org/2012_advisory_poll.html (last visited Sept. 22, 2014).
8 Candidate Ratings for the 2012 Supreme Court of Ohio Election, OHIO BAR, https://www.ohiobar.org/ForPublic/PressRoom/Pages/Candidate-ratings-for-the-2012election.aspx (last visited Sept. 22, 2014).
9 The Votes in Your Court – Judicial Merit Retention, FLA. BAR, http://www.floridabar.org/thevotesinyourcourt (last visited Sept. 22, 2014).


punishment, or taxation. Some of these evaluation efforts are ongoing, while some have been tied to a particular election cycle. For example, since 2009, a group known as Clear the Bench Colorado has "[e]valuate[d] judicial performance on the basis of the Colorado Constitution, established statutory law, legal precedent, [and] rule of law principles . . . ."10 The Oklahoma Civil Justice Council, an arm of state and local chambers of commerce in Oklahoma, began in 2012 to assess appellate judges relative to one another in "cases that . . . involve civil liability creation and its expansion or restraint."11
On the other hand, some interest group evaluations are offered in conjunction with organized challenges to one or more justices' retention. As part of a wide-ranging and well-funded effort to unseat three Iowa Supreme Court Justices in 2010, an organization known as Iowa Judicial Watch issued evaluations of the justices in which ideology "ma[de] up a substantial portion of the grade."12 Similarly, when three Florida justices were targeted in 2012, Florida Judicial Review "provide[d] common sense, citizen analysis of judges [sic] decisions and promote[d] an independent, originalist judiciary."13
E. Judicial Performance Evaluation
Judicial performance evaluation (JPE) is a tool for assessing judges' job performance using objective benchmarks focusing on process rather than
outcomes. In several states with retention elections, JPE programs provide
broad-based, apolitical information to voters about judges standing for
retention. However, JPE is not confined to states with judicial retention
elections; it serves a similar purpose in some states in which the legislature
or a commission makes the retention decision. Further, in the handful of
states where judges are chosen in contested elections or have life tenure,
JPE programs are used to encourage and inform judicial self-improvement.
Additionally, JPE programs enhance public trust and confidence in the
judiciary by demonstrating that individual judges and the judiciary as a
whole are accountable for their performance. Preserving public trust and
confidence is equally as important for appellate courts as for trial courts.

10 About CTBC, CLEAR THE BENCH COLO., http://www.clearthebenchcolorado.org/about/ (last visited Sept. 22, 2014).
11 Okla. Civil Justice Council Releases Judicial Evaluation of the Okla. Court of Civil Appeals, OKLA. CIVIL JUSTICE, http://okciviljustice.com/press-release (last visited Sept. 22, 2014).
12 INST. FOR THE ADVANCEMENT OF THE AM. LEGAL SYS., NATIONAL CONFERENCE ON EVALUATING APPELLATE JUDGES: PRESERVING INTEGRITY, MAINTAINING ACCOUNTABILITY 3 (2011), available at http://iaals.du.edu/images/wygwam/documents/publications/PostConf_Report_-_Final.pdf.
13 Id.


Appellate courts decide cases involving some of the most controversial legal, political, and social issues, establishing precedents to be applied in future cases. But having no influence over either "the sword" or "the purse," appellate courts cannot enforce their own decisions. Instead, the extent to which the public trusts appellate court rulings depends upon the legitimacy of the courts themselves. A well-structured, objective, and transparent performance evaluation program can enhance judicial legitimacy.
Eleven states have an official process in place for evaluating the performance of appellate judges, whether to foster judicial self-improvement, to provide objective assessments to those responsible for reselecting judges, or simply to promote greater public confidence in the courts. Figure 1 provides a general overview of these evaluation programs, including their stated purpose(s), evaluation criteria, bases for evaluation, survey respondents, evaluation product, and dissemination of results.
As Figure 1 shows, surveys are a widely used tool in these programs.
Respondents typically include: attorneys who have appeared before the
evaluated judge or members of the appellate bar more broadly; other
judges, whether peer or trial judges who apply appellate rulings; and court
staff. Other frequently used evaluative tools include self-evaluations,
interviews, and public comments. Some state programs consider reversal
and recusal rates and case management statistics, with a handful of state
programs incorporating courtroom observation. Finally, a few state
programs have a written opinion review process.
In all but one of these states, the JPE program is overseen and
administered by a commission created for this purpose.14 These
commissions typically include attorneys, members of the public, and often,
sitting or retired judges. The responsibilities of each commission and its
staff vary, but generally include: drafting rules of procedure, usually
subject to supreme court approval; developing and distributing surveys to
appropriate respondents; scheduling and conducting interviews with
evaluated judges; compiling and disseminating evaluation reports; and in
retention election states, deciding on the public recommendation or assessment for each judge (e.g., "Retain"; "Does not meet performance standards"). In developing and administering surveys, and tabulating and

14 See INST. FOR THE ADVANCEMENT OF THE AM. LEGAL SYS., UNIV. OF DENVER, TRANSPARENT COURTHOUSE: A BLUEPRINT FOR JUDICIAL PERFORMANCE EVALUATION 4 (2006), available at http://iaals.du.edu/images/wygwam/documents/publications/TCQ_Blueprint_JPE2006.pdf. Alaska is an exception, where administering the judicial performance evaluation program is only one of the Alaska Judicial Council's responsibilities. See Summary of Council Member Duties, ALASKA JUDICIAL COUNCIL, http://www.ajc.state.ak.us/about/memduties.html (last visited Sept. 22, 2014).


summarizing results, JPE commissions often contract with survey research


entities.
II. IAALS's Recommendations for Appellate JPE
To a large extent, JPE programs for appellate judges have been patterned after programs for trial judges. But there are fundamental differences in the work of trial judges and appellate judges, differences that must be taken into account in designing programs for evaluating their performance. The most obvious difference is that appellate judges engage in collegial decision-making, deciding cases in three-judge panels or as an entire court, while in a trial court the judge is the sole decider. Appellate judges also have far less interaction with the parties in their cases than do trial judges, only coming face-to-face with attorneys during an oral argument (if held) that is likely to last no longer than an hour. Perhaps the most significant difference between appellate and trial judges is their work product. While trial judges hold conferences and make rulings throughout the course of a trial, an appellate judge's primary output is the written opinion, and even then, individual judges do not write an opinion in every case. All of these factors affect the "who," "what," and "how" of a judicial performance evaluation program.
Recognizing that performance evaluation programs can and should be more closely tailored to the roles and responsibilities of appellate judges, IAALS undertook a two-year effort to develop recommended tools for evaluating appellate judges. Without any preconceptions about what these tools would entail, IAALS revisited the key questions that shape a JPE process: for what should appellate judges be held accountable, who is in the best position to assess appellate judges' job performance, and how should the evaluation process be structured?
IAALS's effort to develop recommended tools for evaluating the
performance of appellate judges began in August 2011 with a National
Conference on Evaluating Appellate Judges: Preserving Integrity, Maintaining
Accountability.15 The conference brought together more than seventy
appellate judges, attorneys, scholars, and JPE program coordinators, with
eighteen states represented. The conference featured panels that discussed
the role and responsibilities of appellate judges, appropriate indicators and
tools for evaluating their performance, challenges to establishing and
implementing an appellate JPE program, strategies for improving existing

15 INST. FOR THE ADVANCEMENT OF THE AM. LEGAL SYS., NATIONAL CONFERENCE ON EVALUATING APPELLATE JUDGES: PRESERVING INTEGRITY, MAINTAINING ACCOUNTABILITY, available at iaals.du.edu/images/wygwam/documents/publications/Post-Conf_Report_2011_Final.pdf.


programs, and using appellate JPE to defuse political and special interest
attacks in judicial elections.
One of the key points of agreement among conferees was the importance of a written opinion review component in any appellate JPE process. Conference panelists and attendees engaged in a broad discussion of the criteria to be used in an opinion evaluation and the types of individuals best suited to conduct the evaluation. IAALS formed a post-conference task force to consider these questions in greater detail and depth. The task force included two appellate judges, two representatives of state JPE commissions, and a law professor. An Opinion on Opinions: Report of the IAALS Task Force on State Appellate Court Opinion Review was the outgrowth of that effort and offers recommendations and guidelines regarding how to identify the opinions to be reviewed, who should perform the review, and the criteria on which the review should be based.16
To ensure that the recommended tools for evaluating appellate judges
were inclusive, fair, and workable, IAALS contracted with the Butler
Institute for Families at the University of Denver to conduct focus groups
of Colorado appellate judges and appellate attorneys in September and
October 2012. These focus groups considered the responsibilities of
appellate judges that should be included in a performance evaluation
process and the characteristics of a high-quality appellate opinion. The
feedback received during these focus group discussions was invaluable in
helping to define the parameters of the recommendations and further
refine the guidelines for opinion review. The Focus Group Report provides
more information about the process and outcomes of the focus groups.17
The final step in developing recommendations for evaluating appellate judges was assuring that one of the primary evaluative tools, the survey, was comprehensive and clear. Based on input from the focus groups, IAALS developed draft surveys to be completed by three types of respondents who come into professional contact with appellate judges: appellate attorneys, trial judges, and court staff. Working again with the Butler Institute, IAALS conducted cognitive interviews with representatives of each respondent group to field test the surveys, and with appellate judges regarding a self-evaluation tool.

16 INST. FOR THE ADVANCEMENT OF THE AM. LEGAL SYS., AN OPINION ON OPINIONS: REPORT OF THE IAALS TASK FORCE ON APPELLATE OPINION REVIEW 1, available at http://iaals.du.edu/images/wygwam/documents/publications/OpiniononOpinionsReport.pdf.
17 INST. FOR THE ADVANCEMENT OF THE AM. LEGAL SYS., IAALS QUALITY JUDGES INITIATIVE: APPELLATE JUDICIAL PERFORMANCE EVALUATION 2–3 (2013), available at http://iaals.du.edu/images/wygwam/documents/publications/IAALS_Appellate_JPE_Focus_Group_Summary.pdf.


This two-year process of harnessing the input and expertise of social


scientists, judges, attorneys, and others resulted in the following
recommended components of a comprehensive program for evaluating the
performance of appellate judges: a written opinion review process; surveys
of attorneys, trial judges, and court staff; and a self-evaluation tool.18
III. Reviewing Appellate Opinions as a Qualitative Measure of Judicial
Productivity
In light of the fact that an appellate judge's primary output is the written opinion, an opinion review process is an essential component of any appellate JPE program, regardless of its purpose. It is also a necessary counterpart to quantitative indicators of judicial performance, and of judicial productivity in particular. However, as Figure 1 shows, only four states include an opinion review process in their appellate JPE programs. The review process tends to be informal and to lack clear guidelines for what constitutes a high-quality opinion.
We describe here a process for reviewing appellate opinions, including
guidelines regarding the makeup of the evaluation teams that should carry
out the review, the identification of opinions for review, the review criteria
and process, and training for opinion reviewers. We also provide opinion
review templates for attorney and non-attorney evaluators to use in
conjunction with these recommendations. Finally, we identify questions to
include in surveying attorneys and trial judges that relate to opinion
writing.
A. Guidelines for Reviewing Written Opinions
An official performance evaluation commission should administer a comprehensive opinion review process, and our guidelines apply to such a process. A two- or three-person team (depending on the size of the evaluation commission and the number of judges to be evaluated) should conduct the opinion review for each judge. This will ensure a manageable workload for commission members and allow a more careful and tailored review. Each team should be composed of one attorney and one non-attorney, with an additional attorney or non-attorney as needed for a three-person team. Where the membership of the evaluation commission includes former or retired judges, these individuals should not be assigned to an evaluation team, but rather should be available to all teams to consult on matters that would benefit from a judicial perspective.

18 INST. FOR THE ADVANCEMENT OF THE AM. LEGAL SYS., RECOMMENDED TOOLS FOR EVALUATING APPELLATE JUDGES (2013), available at http://iaals.du.edu/images/wygwam/documents/publications/Recommended_Tools_for_Evaluating_Appellate_Judges.pdf.


Each justice/judge subject to evaluation should select five opinions for review. One of these opinions should be a dissenting or concurring opinion, and for intermediate appellate court judges one should be an unpublished opinion. The opinions should be chosen from the judge's entire term (or term since the last evaluation), representing a variety of case types and levels of issue complexity.
The criteria for opinion review must focus on the quality and clarity of
the opinion, rather than the outcome reached in the case. Criteria should
include legal analysis and reasoning, fairness, and clarity. Such criteria
should be discussed with evaluators prior to the evaluation cycle, to ensure
consistency across the evaluation teams in their understanding and
application of the criteria. Figure 2 provides opinion review templates for
attorney and non-attorney reviewers.
The opinion review itself should take place in two stages. In the first stage, each member of the evaluation team should read and assess the submitted opinions individually. In the second stage, the team should meet and discuss the individual assessments of each opinion, and of the judge's opinions as a whole, and prepare a report to the commission summarizing their assessments. The report should highlight particular strengths and/or weaknesses, as applicable, and make specific reference to areas of disagreement between the attorney and non-attorney evaluators. Each evaluation team should then share its assessment with the full commission and answer any questions the commission members may have.
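The first-stage ratings collected on the review templates lend themselves to a simple tally before the team meets, including the areas of attorney/non-attorney disagreement the report must flag. The sketch below is our own illustration of that tally, not part of the recommendations; the item labels and sample ratings are invented:

```python
# Three-point scale used on the opinion review templates in Figure 2.
SCALE = ("Agree", "Partly Agree/Partly Disagree", "Disagree")

# Hypothetical first-stage ratings for one opinion, keyed by template item.
attorney_ratings = {
    "Adequately explains basis of decision": "Agree",
    "Opinion is clear": "Partly Agree/Partly Disagree",
    "Opinion is concise": "Disagree",
}
non_attorney_ratings = {
    "Adequately explains basis of decision": "Agree",
    "Opinion is clear": "Disagree",
    "Opinion is concise": "Disagree",
}

def tally(ratings):
    """Count how many template items received each rating on the scale."""
    return {level: sum(1 for r in ratings.values() if r == level) for level in SCALE}

def disagreements(a, b):
    """Items rated differently by the attorney and non-attorney reviewers,
    for the report's required note on areas of disagreement."""
    return sorted(item for item in a if a[item] != b.get(item))

print(tally(attorney_ratings))
print(disagreements(attorney_ratings, non_attorney_ratings))  # ['Opinion is clear']
```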
The performance evaluation commission should develop and conduct a training program for commission members on direct opinion review. Training should emphasize the broad purposes of appellate judicial performance evaluation, focusing on the importance of process-based and objective assessments as opposed to assessments of the outcomes of specific cases. During the training, commission members should review the criteria referenced in the opinion review templates, discussing what each criterion means. The commission should give special consideration to each type of evaluator (non-attorney, attorney, retired judge, etc.). For non-attorney evaluators, the commission should provide an overview of the role and function of appellate courts and of the opinion writing process. The commission might also consider providing a glossary of legal terms used in the opinion review templates and of terms the non-attorney evaluators may commonly encounter in appellate opinions. Attorney evaluators (including former or retired judges) should be reminded to focus on the criteria employed in the evaluation process, rather than on the substantive issues raised by the opinion or the outcome.


B. Survey Questions on Opinion Writing


Based on input from our focus groups of appellate judges and attorneys, and on current practice in some appellate JPE states, we determined that surveys were also an appropriate tool for assessing the legal analysis and reasoning, fairness, and clarity of appellate judges' opinions. Surveys also provide a wider range of viewpoints and perspectives on these opinions. While we identified three groups as appropriate survey respondents regarding appellate judicial performance in general (attorneys, trial judges, and court staff), we determined that it was appropriate to ask questions related to opinion writing only of attorneys and judges. The pool of attorneys surveyed may be broad enough to include all attorneys who use appellate decisions extensively in their legal practice, in addition to attorneys who appeared before the evaluated judge in oral argument or in whose case the judge wrote an opinion.
Below are examples of questions regarding each judge's written opinions that may be asked of both attorneys and trial judges:

- Writes opinions that adequately explain the basis of the court's decision.
- Writes opinions that follow an applicable standard of review for the case.
- Writes opinions that clearly set forth any rules of law to be used in future cases.
- Writes opinions that decide only those issues that need to be decided in the case before the court.
- Writes opinions that address the issues raised by both parties fairly.
- Writes separate opinions that are appropriate in tone and substance.
- Writes opinions that are clear.
- Writes opinions that are concise.
- Writes opinions that adequately summarize the relevant facts in the case.
- Writes opinions in which the legal reasoning is easy to follow.

As we discussed, most of the appellate JPE programs currently in place are administered by a performance evaluation commission as part of an official program. But in the absence of an official program, state bar associations are well positioned to implement a stand-alone survey component of an evaluation process, particularly attorney surveys. Bar associations may also

be able to conduct a streamlined appellate JPE program more cost-efficiently. A comprehensive judicial performance evaluation program can require a significant budget allocation, including survey dissemination and processing costs, commission expenses, staff salaries, and publicity costs. Bar associations may minimize these costs, while still providing valuable information to the public and to appellate judges, by surveying attorneys (and perhaps trial judges and court staff) and conducting such surveys electronically using online survey software.

CONCLUSION
One aspect of appellate judicial performance that must be assessed is productivity, not just quantitatively but qualitatively. Here we offer recommendations for a qualitative assessment of an appellate judge's primary work product, the written opinion. Our recommendations encompass direct opinion review and surveys of attorneys and trial judges. These tools may be incorporated into any appellate JPE program, whether it is designed to provide information to voters and others responsible for reselecting judges, enhance public trust in the judiciary, or simply encourage judicial self-improvement.
When it comes to reelecting or retaining judges, the need for broad-based and unbiased assessments of judicial performance has never been greater than it is in today's political climate. Attacks on judges motivated by unpopular rulings have become more and more commonplace in the context of judicial elections, particularly for state supreme court justices. One of the advantages of the recommendations we offer is that they convey a clear message to voters, and to others who reselect judges: the quality of a judge's performance does not turn on the outcome of a particular case or group of cases. Rather, it turns on whether the judge provides a process that is fair, impartial, and transparent.


FIGURE 1: OFFICIAL APPELLATE JPE PROGRAMS

AK
Stated purposes: Voter information
Criteria: Legal ability; impartiality; integrity; temperament; diligence
Bases for evaluation: Surveys; self-evaluation; judge statistics; interview; public hearing
Respondents: Attorneys; court staff
Product/report: Retention recommendation; judge survey results
Dissemination: Posted online; published in voter guide

AZ
Stated purposes: Voter information; self-improvement; judicial assignments; judicial education; independence/accountability
Criteria: Legal ability; integrity; communication skills; judicial temperament; administrative performance
Bases for evaluation: Surveys; public hearing; public input; self-evaluation
Respondents: Attorneys; other judges
Product/report: Meets/does not meet performance standards; judge survey results
Dissemination: Posted online; published in voter guide

CO
Stated purposes: Voter information; self-improvement
Criteria: Integrity; legal knowledge; communication skills; judicial temperament; administrative performance; service to legal profession/public
Bases for evaluation: Surveys; observation; self-evaluation; opinion review; judge statistics; interview
Respondents: Attorneys; court staff; other judges
Product/report: Retention recommendation; judge survey results
Dissemination: Posted online; published in voter guide

DC
Stated purposes: Reappointment
Criteria: Work product; legal scholarship; dedication; efficiency; demeanor
Bases for evaluation: Written statements from candidates; public input; candidate conference
Respondents: N/A
Product/report: Determination of well qualified, qualified, or unqualified
Dissemination: President; candidate; public

FL
Stated purposes: Self-improvement (voluntary)
Criteria: Judges' questioning; professional conduct; knowledge of the case; opinions
Bases for evaluation: Feedback forms
Respondents: Attorneys
Product/report: Judge feedback
Dissemination: Shared with evaluated judge

HI
Stated purposes: Self-improvement; judicial assignment; appointment and retention; judicial education; judicial administration
Criteria: Fairness/impartiality
Bases for evaluation: Written opinions; oral argument; surveys
Respondents: Attorneys
Product/report: Court survey results
Dissemination: Posted online (judge results provided to Judicial Selection Commission on request)

MO
Stated purposes: Voter information; self-improvement; judicial education
Criteria: Decisions based on facts/law; clarity of decisions; demeanor; promptness in decisions
Bases for evaluation: Surveys; opinion review; observation and/or interview
Respondents: Attorneys; other judges
Product/report: Retention recommendation; judge survey results
Dissemination: Posted online

NH
Stated purposes: Court improvement
Criteria: Overall performance
Bases for evaluation: Surveys; public input
Respondents: Attorneys; law professors; trial judges; pro se litigants
Product/report: Court survey results
Dissemination: Sent to governor/legislators; posted online

NM
Stated purposes: Voter information; self-improvement
Criteria: Legal ability; fairness; communication skills; preparation/attentiveness/temperament/control over proceedings
Bases for evaluation: Surveys; opinion review
Respondents: Attorneys; court staff; other judges
Product/report: Retention recommendation; summary of judge survey results
Dissemination: Posted online; paid radio/print ads

TN
Stated purposes: Voter information; self-improvement
Criteria: Integrity; knowledge/understanding of law; ability to communicate; preparation/attentiveness; effectiveness in working with other judges/staff
Bases for evaluation: Surveys; opinion review; self-evaluation; caseload/workload statistics; public input; interview
Respondents: Attorneys; court staff; other judges
Product/report: Summary of judge survey results; impressions from interview; retention recommendation
Dissemination: Posted online

UT
Stated purposes: Voter information; self-improvement; judicial education
Criteria: Legal ability; integrity/judicial temperament; administrative ability
Bases for evaluation: Surveys; compliance with performance standards; public comment
Respondents: Attorneys; court staff
Product/report: Compliance with performance standards; judge survey results
Dissemination: Posted online; published in voter guide


FIGURE 2: OPINION REVIEW TEMPLATES

Attorney Reviewer

Reviewer's Name:
Justice's/Judge's Name:
Case Name:

I. Legal Analysis and Reasoning (skip for a concurring or dissenting opinion)

a. The opinion adequately explains the basis of the court's decision.
   Agree    Partly Agree/Partly Disagree    Disagree

b. The opinion follows an applicable standard of review for the case.
   Agree    Partly Agree/Partly Disagree    Disagree

c. The opinion clearly sets forth rules of law, if any, to be used in future cases.
   Agree    Partly Agree/Partly Disagree    Disagree

d. The opinion provides clear direction to the trial court.
   Agree    Partly Agree/Partly Disagree    Disagree

e. The opinion decides only those issues that need to be decided in the case before the court.
   Agree    Partly Agree/Partly Disagree    Disagree

Additional comments on Legal Analysis and Reasoning:
_________________________________________________________________

II. Fairness

a. The opinion addresses the issues raised by both parties fairly.
   Agree    Partly Agree/Partly Disagree    Disagree

b. (For a concurring or dissenting opinion) The opinion is appropriate in tone and substance.
   Agree    Partly Agree/Partly Disagree    Disagree

Additional comments on Fairness:
_________________________________________________________________

III. Clarity

a. The opinion is clear.
   Agree    Partly Agree/Partly Disagree    Disagree

b. The opinion is concise.
   Agree    Partly Agree/Partly Disagree    Disagree

c. The opinion adequately summarizes the relevant facts in the case.
   Agree    Partly Agree/Partly Disagree    Disagree

d. The opinion's legal reasoning is easy to follow.
   Agree    Partly Agree/Partly Disagree    Disagree

Additional comments on Clarity:
_________________________________________________________________

Non-Attorney Reviewer

Reviewer's Name:
Justice's/Judge's Name:
Case Name:

I. Fairness

a. The opinion addresses the issues raised by both parties fairly.
   Agree    Partly Agree/Partly Disagree    Disagree

b. (For a concurring or dissenting opinion) The opinion is appropriate in tone and substance.
   Agree    Partly Agree/Partly Disagree    Disagree

Additional comments on Fairness:
_________________________________________________________________

II. Clarity

a. The opinion is clear.
   Agree    Partly Agree/Partly Disagree    Disagree

b. The opinion is concise.
   Agree    Partly Agree/Partly Disagree    Disagree

c. The opinion adequately summarizes the relevant facts in the case.
   Agree    Partly Agree/Partly Disagree    Disagree

d. The opinion's legal reasoning is easy to follow.
   Agree    Partly Agree/Partly Disagree    Disagree

Additional comments on Clarity:
_________________________________________________________________
