

HOW TO RANK LAW SCHOOL RANKINGS


ANTHONY CIOLLI*

JOSHUA GARBER*

This article discusses the various systems used to rank law schools and methods for
evaluating their validity. The authors conclude that outcome-based rankings are likely
to offer the most utility to prospective students.

I. THE MOST COMMON RANKINGS SYSTEMS ..............................................134


A. Reputation Rankings.....................................................................134
1. Single Individual Perception .................................................135
2. Faculty Perception..................................................................135
3. Attorney/Judge Perception.....................................................136
4. Student Perception or Revealed Preference..........................137
B. Institutional Characteristics Rankings .........................................137
C. Outcomes Rankings: Employment Placement and Beyond .......138
D. Comprehensive Rankings: U.S. News and its Progeny..............139
II. HOW DOES ONE EVALUATE A RANKING SYSTEM? ...............................141
A. The Reactionary Response to Law School Rankings .................141
B. The Case for Evaluation: Why the Reactionaries Miss the
Point ............................................................................................145
C. Developing an Evaluation Paradigm ...........................................146
1. Purpose....................................................................................146
2. Audience .................................................................................147
3. Methodology...........................................................................148
i. Are measurements used meaningful or arbitrary? ..........148
ii. How much error is due to experimenter failings,
rather than inherent error?.............................................148
iii. Can others duplicate the findings?.................................151
III. COMMON FAULTS WITH EXISTING LAW SCHOOL RANKINGS ..............152
A. Disconnect Between Purpose and Methodology ........................152
1. U.S. News and other comprehensive rankings.....................152
2. Single Individual Perception Reputation Rankings .............155

* Anthony Ciolli is Appellate Law Clerk to Chief Justice Rhys S. Hodge, Supreme Court of the Virgin Islands. The opinions in this article are the author’s alone and do not reflect the views of Chief Justice Hodge, the Supreme Court of the Virgin Islands, or the Virgin Islands judiciary.
* Joshua R. Garber is a Policy Fellow at the American Civil Liberties Union of Northern California’s Death Penalty Program. The opinions in this article are the author’s alone and do not reflect the views of the American Civil Liberties Union, its members, donors, or supporters.


B. Disconnect Between Audience and Purpose ...............................157


1. Faculty Perception: Evaluating the Leiter Report ................157
2. Why Revealed Preference Reveals Little .............................161
C. Poor Methodology & Avoidable Error ........................................162
IV. BEYOND LAW SCHOOL RANKINGS: A PROPOSAL FOR CHANGE .........169

Both academic journals and the popular media have published a multitude of articles related to the impact of higher education rankings. In
fact, the Indiana Law Journal recently devoted an entire symposium issue
to law school rankings.1 The overwhelming majority of these writings have
painted a very unflattering picture of all higher education rankings,
especially those published by U.S. News & World Report. The Law School
Admission Council has stated that law school rankings have resulted in
many law schools misusing the LSAT in the admissions process.2 Several
authors, most prominently Daria Roithmayr, argue that this misuse of the
LSAT results in discrimination against minorities, particularly blacks and
Hispanics.3 These criticisms intensified in April 2005, when U.S. News
altered its rankings methodology by averaging a law school’s 25th and 75th
percentile LSAT and GPA figures rather than using self-reported medians.4
Shortly after U.S. News announced this change, the magazine was criticized
for hampering affirmative action efforts,5 and within a year reverted to the
previous system.6 Rankings have also been blamed for a host of other
phenomena, from causing law schools to raise tuition7 to “reinforcing an

1 Symposium, The Next Generation of Law School Rankings, 80 IND. L.J. (2006).
2 See LAW SCH. ADMISSION COUNCIL, NEW MODELS TO ASSURE DIVERSITY, FAIRNESS,
AND APPROPRIATE TEST USE IN LAW SCH. ADMISSIONS 4, 7, 10 (1999).
3 See Daria Roithmayr, Barriers to Entry: A Market Lock-In Model of Discrimination, 86
VA. L. REV. 727, 731-36 (2000) (“[L]aw schools have had to adopt the industry standard that
favors whites…. schools that want to maintain their national ranking or place graduates in
lucrative positions must admit students based on their [LSAT] scores…”)
4 See U.S. News, Law Methodology (2005), available at
http://www.usnews.com/usnews/edu/grad/rankings/about/06law_meth_brief.php.
5 If only a school’s median LSAT is used for rankings purposes, the 25th and 75th percentile
LSAT scores are irrelevant. By definition, half a law school’s incoming class will always have an
LSAT score at or below the median; thus, when only the median is used, it does not matter for
rankings purposes if a school with a 170 LSAT median has a 168 or a 162 as its 25th percentile.
Since blacks and Hispanics as a group have lower LSAT scores than whites and Asians, using
medians as opposed to interquartiles gave law schools the flexibility to admit minorities with
significantly below median LSAT scores without impacting their U.S. News rank. With this
change, some argue that law schools will feel significant pressure to raise their 25th percentile
figures, and thus would have to scale back their affirmative action programs. See
http://www.collegejournal.com/successwork/workplacediversity/20050421-
bialik.html?refresh=on.
6 Theodore P. Seto, Understanding the U.S. News Law School Rankings, LOYOLA LAW
SCHOOL LEGAL STUDIES PAPER No. 2007-25, 7 (2007).
7 Alvin J. Esau, Competition, Cooperation, or Cartel: A National Law School Accreditation Process for Canada?, 23 DALHOUSIE L.J. 183, 185 (2000) (“Partly as a result of the recent surveys and rankings of law schools in Canada, there appears to be increased competition between law schools to stay on top or move up some kind of ladder of prestige. There is increased competition between schools to recruit students, charge them higher fees, get more money from alumni, institute new programs, chairs, and institutes, and brag about job placements for graduates.”)

educational caste system.”8


Unfortunately, many of these articles fail to acknowledge that the
genie is already out of the bottle. Regardless of one’s attitude towards U.S.
News and other rankings, there is little doubt that commercial and non-
commercial law school rankings systems are here to stay. This Article will
identify the four primary ranking archetypes, advocate for a standardized
method of evaluating a ranking system’s validity, and evaluate the most
popular existing law school ranking schemes using this standardized
method. We conclude by proposing outcome-based rankings as a way of evaluating law schools that offers the greatest utility to prospective students and, provided the rankings’ creators avoid avoidable error, will not suffer from the same deficiencies as U.S. News and its competitors.

I. THE MOST COMMON RANKINGS SYSTEMS

Law school rankings come in a wide variety of forms, but one can
easily classify most ranking schemes into one of four distinct categories:
“reputation” rankings, “institutional characteristics” rankings, “outcomes”
rankings, and “comprehensive” rankings. In this section, we will briefly
describe each of these rankings archetypes as well as their common sub-
categories.

A. Reputation Rankings

Reputation rankings are likely the most popular type of rankings system in terms of sheer numbers. This is not very surprising, since
creating a ranking system based solely on reputation is often relatively
easy. After all, reputation is a proxy for how a law school is perceived by
others – any individual can create a reputation ranking scheme simply by
surveying himself or others. Because reputation rankings are so prevalent,
one can further break them down into four sub-categories: single individual
perception, faculty perception, attorney/judge perception, and student
perception.

8 David C. Yamada, Same Old, Same Old: Law School Rankings and the Affirmation of
Hierarchy, 31 SUFFOLK U. L. REV. 249, 261-62 (1997).

1. Single Individual Perception

Single individual perception rankings are the most common of the four reputation ranking schemes, if only because they require less time and
effort to create than any other ranking scheme. As the name implies, single
individual perception rankings represent the viewpoints of one individual.
This individual simply creates an ordinal list of law schools arranged by the
individual’s perception of each school’s reputation or quality. While the
individual might list some factors that played a role in shaping his rankings,
a precise methodology is never disclosed.
Most single individual perception rankings are not widely
disseminated. Because of their questionable methodologies, these rankings
are often relegated to threads on internet message-boards and blogs.9
Perhaps the most well known single individual perception ranking is the
Gourman Report, first self-published by Jack Gourman in 1967 and then
backed by the Princeton Review in 1997. While early editions of the
Gourman Report focused on overall university quality, Gourman began to
rank law schools in the 1980 edition.10 Gourman claims that his rankings
are based on multiple measures of quality, such as administration policies
and facilities.11 However, neither Gourman nor the Princeton Review has
been willing to reveal a methodology,12 leading some to speculate that there
is no methodology other than Gourman’s personal opinion.13

2. Faculty Perception

Faculty perception rankings are a type of peer-based ranking system in which an individual attempts to measure the quality or reputation of a
school’s faculty by surveying faculty members at various institutions. This
individual then aggregates the survey responses in order to generate a
ranking.

9 See, e.g., http://www.leiterrankings.com/jobs/2008job_biglaw.shtml


10 Scott van Alstyne, Ranking the Law Schools: The Reality of Illusion?, AMERICAN BAR
FOUNDATION RESEARCH JOURNAL, VOL. 7, NO. 3 at 656.
11 Other qualities Gourman claims to take into account are “the relationship between
professors and administrators, support of faculty members, cooperation among professors,
methods of communication between professors and the administration, the openness of the
administration, the use of consultants and committees to solve problems, attitudes about scholarly
research, and the overall cooperation of the administration.”
http://chronicle.com/free/v44/i11/11a00101.htm
12 For decades, Gourman has refused to elaborate on his criteria or how he was able to
quantify an institution’s quality. Id. Evan Schnittman, editor-in-chief of the Princeton Review,
stated that he only knows Gourman’s criteria “in general terms.” Id.
13 Alexander W. Astin, director of the Higher Education Research Institute at the University
of California at Los Angeles, stated that “An intelligent U.S. senior in high school could sit down
and make up rankings that are similar to Gourman.” Id.

Faculty perception rankings are rather common in other disciplines. Perhaps the most well known faculty perception ranking is Brian Leiter’s Philosophical Gourmet Report, a biennial ranking of philosophy departments.14 However, law school faculty perception rankings have been
relatively rare. The best-known attempt at ranking law schools based on the
perceptions of law school faculty is Leiter’s educational quality
rankings.15 Modeled after his philosophy department rankings, Leiter’s law
school educational quality rankings seek to rank law schools based on the
strength of their faculty.16
Unlike Gourman, Leiter discloses the methodology used to generate
his rankings. Leiter asked more than 150 law professors to complete his survey, in which respondents evaluated faculty lists on a scale of one to five.17 After gathering these data, Leiter aggregated the results and ranked the faculties by their mean scores.18
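As a rough illustration of this aggregation step, the short Python sketch below computes each school's mean rating from a handful of invented 1-to-5 survey scores and orders the schools accordingly; the school names and numbers are hypothetical and are not drawn from Leiter's actual data.

# Hypothetical survey responses: each faculty list rated on a 1-5 scale.
# These values are invented for illustration; they are not Leiter's data.
survey_responses = {
    "School A": [4.5, 4.0, 5.0, 4.5],
    "School B": [3.5, 4.0, 3.0, 3.5],
    "School C": [4.0, 4.5, 4.0, 5.0],
}

def mean(scores):
    return sum(scores) / len(scores)

# Aggregate by mean score and rank from highest to lowest, as the text describes.
ranked = sorted(survey_responses.items(), key=lambda item: mean(item[1]), reverse=True)
for rank, (school, scores) in enumerate(ranked, start=1):
    print(f"{rank}. {school}: mean score {mean(scores):.2f}")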

3. Attorney/Judge Perception

Rankings by attorneys and judges are similar to faculty perception rankings in that an individual is not merely stating his or her own personal
opinion of an institution’s quality, but aggregating the opinions of many
other individuals – in this case, lawyers or judges. Unlike a faculty
perception ranking, an attorney/judge perception ranking is not a peer-
based ranking system.19
Although practitioners frequently serve as adjuncts or legal writing
instructors at many law schools, the overwhelming majority of attorneys
and judges cease to have a formal affiliation with a law school after
obtaining their law degrees. Therefore, unlike faculty perception rankings,
which almost always involve law faculty rating the reputation of other law
faculties, attorney/judge perception rankings can differ in what they
measure. While some attorney/judge perception rankings may measure
academic reputation, others may ask respondents to measure student
employability, or perceptions of student quality, or how graduates perform
on the job.
One set of attorney perception rankings was conducted by Angela

14 Rankings available at http://www.philosophicalgourmet.com/overall.asp


15 Leiter’s EQR rankings available at http://www.leiterrankings.com/faculty/
2003faculty_reputation.shtml
16 Id.
17 Id.
18 Id.
19 Law firm rankings, such as the annual Vault reputation rankings, are an example of a peer-
based reputation ranking system by attorneys.

Cheng and published in the National Law Journal in November 2004.20 The National Law Journal sent a survey to 250 hiring partners, which,
among other things, asked these attorneys to list schools from which they
hired first year associates in the previous year.21 As Leiter did for his
faculty perception rankings, Cheng aggregated the results and reported
them in her article.

4. Student Perception or Revealed Preference

Student perception rankings – also commonly referred to as revealed preference rankings – are a recent but growing trend in higher education
rankings. This trend began in October 2002, when Christopher Avery,
Mark Glickman, Caroline Hoxby, and Andrew Metrick released the first
draft of their paper “A Revealed Preference Ranking of U.S. Colleges and
Universities.”22 This paper, later revised in September 2004 using a new
methodology, sought to rank undergraduate programs based on student
preferences or desirability.23 In November 2004, Aaron J. Chalfin released
a similar paper, which ranked law schools based on student desirability.24
While there have been significant methodological differences between
the major revealed preference studies, both have involved surveying
current or prospective students, with the researchers aggregating this
survey data to create an ordinal ranking.

B. Institutional Characteristics Rankings

Institutional characteristics rankings are, for some, an attractive alternative to reputation rankings. While reputation rankings focus on how
institutions are perceived by others, institutional characteristics rankings
examine the characteristics inherent to the institutions. There are countless
ways individuals could rank law schools based on their characteristics.
Perhaps the most common method involves ranking schools based on a
tangible measure of student quality, such as LSAT medians or interquartile
ranges. Other possibilities include ranking law schools based on
student:faculty ratio, the number of volumes in the law library, the

20 Angela Cheng, Georgetown, Virginia Among Most Mentioned, THE NATIONAL LAW JOURNAL (2004), available at http://www.law.georgetown.edu/news/releases/documents/nlj_000.pdf.
21 Id.
22 Christopher Avery, Mark E. Glickman, Caroline M. Hoxby & Andrew Metrick, A Revealed Preference Ranking of U.S. Colleges and Universities (Dec. 2005), NBER Working Paper No. W10803, available at SSRN: http://ssrn.com/abstract=601105
23 Id.
24 http://www.autoadmit.com/studies/chalfin/chalfin.pdf

percentage of faculty with non-law doctorates, or the average number of law review articles published per faculty member in a given
period.
All institutional characteristics rankings share two things in common
that make them distinct from reputation rankings: the institutional
characteristic measured is both easily verifiable and static. One can easily
verify the LSAT and undergraduate GPA interquartile ranges for every
ABA-accredited law school’s most recently admitted class by reading the
latest edition of the annual ABA/LSAC guide to law schools.25 While
slightly more work is involved, one can also verify what percentage of
every ABA-accredited law school’s tenured or tenure-track faculty
members had non-law doctorates at the start of the 2004-2005 academic
year – even long after that academic year has already passed.
Reputation rankings do not share these qualities because one cannot
verify reputation with absolute certainty. For example, if two individuals
were to conduct the same Chalfin-esque student perception ranking survey
using parallel samples during the same time period, it is very likely that the results would differ, if only slightly, due to the inherent error involved in survey research and in trying to quantify an inherently subjective concept. In contrast, if two individuals were to calculate the percentage of faculty with non-law doctorates at the start of the 2004-2005 academic year at every ABA-accredited law school, both individuals would come up with the same numbers, except in the case of avoidable human error.
Of course, this is not to say that reputation rankings, or rankings
heavily dependent on survey research, are inherently inferior or flawed
relative to institutional characteristics rankings. As discussed later in this
article, there are instances where reputation surveys are a preferred type of
rankings mechanism. However, it cannot be denied that a significant
number of individuals prefer static measures with no unavoidable
error – it is precisely these individuals who create the market for
institutional characteristics rankings.

C. Outcomes Rankings: Employment Placement and Beyond

Outcomes rankings are similar to institutional characteristics rankings in that they focus on rather concrete concepts; however, rather than
measuring characteristics of an institution, these rankings measure what
happens to individuals after they leave that institution. In most cases, such
rankings will focus on employment-related measures, such as employment
placement or starting salaries. As with institutional characteristic rankings,
outcome rankings provide the reader with a sense of objectivity not found in

25 Edition available at http://officialguide.lsac.org/



reputation rankings. However, unlike institutional characteristics, it is difficult to verify outcomes with any sort of absolute certainty. There is
greater ambiguity involved in outcomes rankings, since one must make
value judgments that are not present when trying to calculate LSAT
medians or determine the number of books in a library. For instance, where
does one draw the line when calculating judicial clerkship placement
success? Does one count state family court clerks the same as U.S.
Supreme Court clerks? While two different individuals calculating the
same institutional characteristics will obtain the same numbers, individuals
ranking law schools by student outcomes will likely obtain different results
depending on how they deal with these questions.
While most outcomes rankings focus on employment placement, they
are not limited to that area. Other outcomes rankings can include student
debt after graduation, the percentage of students who go on to obtain
LL.Ms or other graduate degrees after graduation, and a host of other
possibilities.

D. Comprehensive Rankings: U.S. News and its Progeny

One cannot discuss law school rankings without acknowledging the annual U.S. News & World Report rankings. The U.S. News rankings,
although heavily maligned by some academics,26 are without a doubt
today’s most influential law school rankings. In fact, the U.S. News
rankings are so influential that there is little doubt many law schools shape
admissions policies with these rankings in mind.27
How did U.S. News, a biweekly magazine, often considered a tier
below Time and Newsweek, become the dominant force in higher education
rankings? Some have argued that U.S. News provides a service to
prospective law students by providing a host of information about
individual law schools that is not available anywhere else.28 While this is
one of the strongest arguments in support of the U.S. News rankings, it does
not fully explain how the U.S. News rankings were able to rise to
prominence. After all, U.S. News not only did not create the first set of law
school rankings, but it was also not the first magazine to rank law schools.29

26 See, e.g., The Law School Admissions Council, Deans Speak Out (2006), available at,
http://www.lsac.org/Choosing/deans-speak-out-rankings.asp
27 Leigh Jones, Law School Deans Feel Heat From Rankings, NATIONAL LAW JOURNAL
(2006), available at http://www.law.com/jsp/article.jsp?id=1146560723820
28 Russell Korobkin, Harnessing the Positive Power of Rankings: A Response to Posner and
Sunstein, 81 IND. L.J. 35, 37 (2006).
29 This honor goes to Learning & the Law, a magazine that published the first set of law school rankings in 1975 in an article called “Adding up the Law Schools: A Tabulation and Rating of Their Resources” by Charles D. Kelso. See http://taxprof.typepad.com/taxprof_blog/2006/10/the_first_law_s.html; http://www.elsblog.org/the_empirical_legal_studi/2006/10/the_first_ranki.html.

U.S. News established itself as the dominant ranking system through innovation. While other rankings providers limited themselves to ranking law schools based on a single measure of quality, U.S. News moved beyond any one criterion. While the first edition of the
U.S. News law school rankings in 1987 did consist only of a faculty
perception reputation ranking, the second edition, released in 1990,
involved a drastic methodology change. These 1990 rankings, and all
subsequent versions, created an “overall score” by combining a faculty
perception ranking, an attorney/judge perception ranking, a selectivity
ranking, a placement success ranking, a graduation rate ranking, and an
instructional resources ranking.30 While the methodology has changed
several times since then, since 1990 all versions of the U.S. News rankings
have involved some aggregation of multiple reputation and institutional
characteristics rankings.
Thus, U.S. News can receive credit for popularizing “comprehensive”
rankings. Rather than merely ranking law schools based on reputation or
one institutional characteristic, law schools are ranked through a weighted
average of multiple reputation and institutional characteristic rankings.
Comprehensive rankings like U.S. News generally have a key advantage
over other rankings systems when it comes to obtaining the acceptance of
the general public. U.S. News and other comprehensive rankings tend to
convey a sense of scientific rigor. By weighting factor X at 40%, factor Y at
25%, factor Z at 1%, and so on, a comprehensive rankings provider
conveys the impression of statistical certainty, even if the factors were
arbitrarily chosen and the weights “lack any defensible empirical or
theoretical basis.”31
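A minimal Python sketch of how such a composite “overall score” is assembled appears below; the factors, the weights, and the school scores are all invented for illustration and are not the actual U.S. News methodology.

# Hypothetical factor scores (already normalized to a 0-100 scale) and arbitrary weights.
# Neither the factors nor the weights reflect the actual U.S. News formula.
weights = {"peer_reputation": 0.40, "selectivity": 0.25, "placement": 0.34, "library": 0.01}

schools = {
    "School A": {"peer_reputation": 90, "selectivity": 85, "placement": 95, "library": 60},
    "School B": {"peer_reputation": 80, "selectivity": 92, "placement": 88, "library": 99},
}

def composite(scores):
    # Weighted average: each factor is multiplied by its weight, then summed.
    return sum(weights[factor] * scores[factor] for factor in weights)

for school, scores in sorted(schools.items(), key=lambda kv: composite(kv[1]), reverse=True):
    print(f"{school}: overall score {composite(scores):.1f}")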
Such weighting also results in fewer dramatic rises and falls relative to
other rankings. For example, since U.S. News began using a comprehensive
ranking system in 1990, the same fourteen schools – Harvard, Yale,
Stanford, Chicago, Columbia, NYU, Penn, Berkeley, Michigan, Duke,
Northwestern, Virginia, Cornell, and Georgetown – have held the top
fourteen positions in the rankings.32 In addition, since 1990, each of these
schools, in what is commonly referred to as the “Top 14,”33 has been
ranked in the U.S. News top ten at least once.34 While there are fluctuations
within the Top 14 from year to year, it is a remarkably stable group. For
instance, the U.S. News top five has always been some combination of

30 U.S. News & World Report, Best Graduate Schools (March 9, 1990).
31 http://www.washingtonmonthly.com/features/2000/norc.html
32 Brian Leiter, From the Bowels of Cyberspace: The Myth of “the Top 14” (Mar. 19, 2006), available at http://leiterlawschool.typepad.com/leiter/2006/03/from_the_bowels.html.
33 Id.
34 Id.

Harvard, Yale, Stanford, Chicago, Columbia, and NYU (commonly called the “HYSCCN” group), and 1991, 1992, 2007, and 2008 were the only
years when this group did not make up the entire top six.35 Other ranking
systems have not displayed this type of consistency. Thus, comprehensive
rankings like U.S. News are, in general, able to provide year-to-year
fluctuations while at the same time providing enough general stability to avoid the appearance that the rankings lack efficacy.
Furthermore, comprehensive rankings inherently split the difference
between other ranking systems. U.S. News, by combining reputation scores
with LSAT medians and employment figures, is able to establish itself as a
“consensus” ranking system. Even though few individuals are truly
satisfied with the U.S. News methodology or results, so many individuals
provide tepid or lukewarm support for U.S. News that its rankings become
the standard.
Other comprehensive rankings have arisen in the wake of U.S. News,
but have failed to achieve nearly the same amount of attention or
acceptance. Perhaps the most notable of these ranking schemes is “Judging
the Law Schools,”36 an annual comprehensive ranking by Thomas Brennan,
founder and former President of the Thomas M. Cooley Law School, and
Don LeDuc, the current President and Dean of Cooley.37 This ranking system, which heavily weights factors such as law school square-footage
and library seating capacity,38 has often been dubbed the “Cooley Law
School Rankings,”39 and the weighting scheme has come under heavy
criticism for blatantly favoring Cooley40 – for instance, the 2008 edition
ranks Cooley higher than Stanford, Penn, Berkeley, Chicago, Cornell and
Duke.41

II. HOW DOES ONE EVALUATE A RANKING SYSTEM?

A. The Reactionary Response to Law School Rankings

While much has been written about how higher education rankings
impact educational institutions by shaping admissions policies and

35 In 1991 and 1992, Michigan displaced NYU for the sixth position. In 2007, Penn tied
Chicago for the sixth position, and in 2008 Berkeley was ranked as the sixth in the country.
36 Rankings available at http://www.cooley.edu/rankings/overall2008.htm
37 http://www.cooley.edu/rankings/intro_10th_general.htm
38 Id.
39
Leiter,Brian.TheCooleyLawSchoolRankings.(Oct.2005).http://leiterlawschool.typepad.com/leiter/
2005/10/the_cooley_law_.html
40 http://taxprof.typepad.com/taxprof_blog/2009/01/size-matters-.html
41 Rankings available at http://www.cooley.edu/rankings/overall2008.htm

affecting diversity, even more has been written about the utility of law
school rankings. U.S. News and other ranking systems have been heavily
maligned by both academic and non-academic writers. Although some
writers have defended law school rankings,42 the overwhelming majority of
articles about law school rankings have been negative. Individuals ranging
from university presidents43 to law professors44 to high school seniors45
have extensively criticized rankings for being, among other things,
arbitrary or otherwise useless. Most of these individuals could be best
described as reactionaries, because they not only attack U.S. News and its
methodology, but argue that law school rankings are inherently flawed and
maintain that society would be better off giving these rankings little
credence.
The most publicized reactionary attack against law school rankings
was organized by the Association of American Law Schools in 1998. On
February 18, 1998, a week before the expected release of the 1998 U.S.
News law school rankings, the AALS held a press conference where ten
deans and educators, led by John Sexton, then-dean of the New York
University School of Law, urged U.S. News to stop publishing its annual
law school rankings.46 At the press conference and the accompanying press
release, Sexton stated that the rankings were “misleading and dangerous,”
and argued that “a ranking system inherently assumes that every applicant
has identical needs and desires.”47 The AALS press conference coincided
with the release of an AALS-commissioned U.S. News validity study48 and
a letter denouncing law school rankings sent to 93,000 law school
applicants by the Law School Admission Council.49 The letter, signed by
164 law school deans, urged applicants to gather their own information
about law schools:

42 See generally Russell Korobkin, In Praise of Law School Rankings: Solutions to Coordination and Collective Action Problems, 77 TEX. L. REV. 403 (1998).
43 http://www.stanford.edu/dept/pres-provost/president/speeches/961206gcfallow.html
44 See, e.g., David A. Thomas, The Law School Rankings are Harmful Deceptions: A
Response to Those Who Praise the Rankings and Suggestions for a Better Approach, 40 HOUS.
L.REV. 419, 426-427 (2003) (“The inference a reader would draw from the U.S. News rankings
is that the items listed in the rankings are the accepted criteria one should use in determining the
quality of a law school. Not only is that deliberately created impression false and misleading,
but… almost none of the U.S. News criteria can possibly be relevant to or probative of any
reasonable or common sense notion of law school quality.”).
45 See Robert Samuels, Students Call College Rankings into Question, Fox News, August 27,
2003. http://www.foxnews.com/story/0,2933,95772,00.html
46 Press release available at http://www.aals.org/reports/validity.html
47 Id.
48 See Stephen P. Klein & Laura Hamilton, The Validity of The U.S. News and World Report
Ranking of ABA Law Schools (1998), at http://www.aals.org/validity.html.
49 See The Law School Admission Council, Deans Speak Out (2006), at
http://www.lsac.org/LSAC.asp?url=lsac/deans-speak-out-rankings.asp.

The idea that all law schools can be measured by the same yardstick
ignores the qualities that make you and law schools unique, and is
unworthy of being an important influence on the choice you are about to
make. As the deans of law schools that range across the spectrum of several
rating systems, we strongly urge you to minimize the influence of rankings
on your own judgment. In choosing the best school for you, we urge you to
get information about all the schools in which you might have some
interest.50
Although it is difficult to disagree with the letter’s underlying
principles, the deans’ argument rings hollow. While the deans urge
prospective students to “get information about all the schools in which you
might have some interest,” both law schools and the AALS have
historically sought to minimize the dissemination of such information. Paul
Caron and Rafael Gely summarize the AALS impact on 20th century legal
education quite well:
The AALS. . . removed competitive elements from the legal education
market and allowed law schools to behave in a monopolistic manner. Law
schools are different from the typical profit-maximizing firm (which seeks
monopoly power by controlling price and excluding rivals) and the typical
nonprofit firm (which seeks output maximization by distributing “its
bounty as widely and as equitably as possible”). Instead, law schools seek
to maximize “elitist preferences.” These preferences include. . . “freedom
of law schools to teach ‘the best’ law students,” “freedom from faculty
accountability to non-peer groups (such as students),” and “freedom to
operate in a non-commercial atmosphere.”
By eliminating. . . competition among law schools, the AALS
eliminated incentives to measure the success of a law school. To the extent
that all law schools offered the same courses, taught by faculty with similar
credentials, using the same methods, it became more difficult to identify
“winners” and “losers.” This elimination of measures of organizational
success was consistent with the elitist preferences of AALS members.
More established schools had little to fear from allowing lesser schools to
exist - as long as they were not too different (i.e., they operated within rigid
AALS standards). Other law schools not only enjoyed the protection
afforded by the cartel-like activities of the AALS, but also were in a
position to claim equality with their more established counterparts.
Standardization permitted every law school to claim to be as good as
whatever conventional wisdom suggested were the leading law schools in
the country.
The absence of measures of organizational success also eliminated
incentives to measure individual contributions. The ability of law schools

50 Id.

to pursue elitist preferences, and hence enjoy freedom from accountability, allowed them to pursue various objectives that they would have been
unable to seek in a world in which contributions were measured. For
example, law faculty could more easily make hiring decisions based not on
teaching effectiveness and scholarly productivity but instead on other
grounds such as race, gender, or educational pedigree.
In short, the AALS presided over a law school world without
objective standards and accountability. By standardizing the industry
product, the AALS eliminated any measures of organizational success and
any incentives to measure individual contributions.51
This AALS stranglehold did not break until U.S. News and other law
school rankings provided institutions with a very strong incentive to cease
operating in a non-competitive environment. U.S. News rankings, by
creating a “rat race” among law schools that otherwise would not exist,
forced those law schools to reveal the sort of information that they
otherwise would want to hide from applicants.52 Law school rankings, in
their function as information forcing devices, are socially beneficial in that
they result in more efficient matching outcomes between applicants and
schools (and employers).53
David A. Thomas provides a strong counter-argument. Thomas points
out that even those who defend law school rankings “implicitly [concede]
that the contents of the rankings have little to do with actually revealing the
quality of a law school, either absolutely or comparatively.”54 While some
previously hidden information may become available to the general public,
ordinal rankings that include dubious qualities such as “reputation” may
result in net social harm rather than social benefit. Thomas argues that “if
rankings readers are actually deceived into believing they are learning
about the quality of a law school, such deception may ultimately deter
interested persons from more meaningful investigations.”55 Furthermore,
since the American Bar Association’s Official Guide to Law Schools is
available for free on the internet56 and contains much of the same statistical
information as U.S. News, one can argue that law school rankings have
already served their purpose as an information forcing device and that eliminating them would not result in any ill effects.

51 Paul L. Caron & Rafael Gely, What Law Schools Can Learn from Billy Beane and the
Oakland Athletics, 82 TEX. L. REV. 1483, 1507-08 (2004).
52 See Scott Baker, Stephen J. Choi & G. Mitu Gulati, The Rat Race as an Information
Forcing Device, 80 IND. L.J. (forthcoming 2005).
53 See id.; see also Korobkin, supra note 53.
54 Thomas, supra note 55, at 425.
55 Id.
56 http://officialguide.lsac.org/

B. The Case for Evaluation: Why the Reactionaries Miss the Point

While Thomas makes valid points, his analysis fails because he does
not properly distinguish U.S. News rankings from other law school
rankings.57 Though his criticisms of both U.S. News and reputation
rankings are justified to an extent, it is inappropriate to attack law school
rankings as a whole due to the perceived failings of one (albeit popular) set
of rankings. Thomas provides little support for this strongly-worded
generalization: “Without question, the world would be better off without
those rankings. They benefit no person or entity other than their
publisher.”58
Thomas’s position is akin to ‘no one should rank law schools because
no ranking can possibly be perfect.’ Because GPA and LSAT are not
perfect predictors of intelligence or law school performance, Thomas
believes they should not be used in a rankings scheme.59
Thomas is correct that LSAT, GPA, and student:faculty ratio are not
completely free of error. However, this does not make them useless
measures. After all, what in the world is completely free of any error
whatsoever? Let us apply Thomas’s high standard beyond just law school
rankings. If one were to take Thomas’s position to its natural conclusion,
one would have to advocate for the elimination of any field of study whose
research does not result in absolutely error-free results, including
economics, political science, psychology, sociology, and the medical and
natural sciences. While error is present in all of these fields, the presence of
error does not automatically mean we should discount all of their scholarship. Researchers can still reach perfectly valid conclusions even in the presence
of reasonable error.
Although it is possible that the world would be better off without one
poorly written or shoddy piece of economics “research,” arguing that the entire field of economics should be eliminated is indefensible and akin to throwing the baby
out with the bathwater. Even if U.S. News rankings are horribly flawed,

57 But see Thomas, supra note 55, at 426 (“If the intent of a ranking is merely to display
objective data in numerical order, leaving interpretation and assessment to readers, then the only
challenges one could raise would be to the accuracy or validity of the data.”). Although Thomas
does acknowledge this difference between U.S. News and other rankings when critiquing the U.S.
News methodology, he fails to make the same distinction in his later analysis, and paints all law
school rankings in a negative light. See infra note 69.
58 Thomas, supra note 55, at 456.
59 See Thomas, supra note 55, at 431 (“Implicit… is the assumption that ‘good’ students -
those who are a positive influence on their fellow students - can be identified solely or principally
by their GPA or LSAT score. More important, but far less subject to assessment, are whether a
law school’s students are emotionally stable, free of substance abuse or other self-defeating
behaviors, honest, kind and compassionate, and balanced in a professional environment that is
often intensely driven and beset by materialism. These factors cannot be measured by reported
statistics or by rankings….”).

arguing that all formal law school rankings result in a net negative effect,
regardless of the rankings’ methodology or purpose, is an extreme position
that is very difficult, if not impossible, to defend. Not all rankings are
inherently flawed or arbitrary. For example, institutional characteristic
rankings are, by definition, based on objective and verifiable facts. One
could very easily generate an ordinal ranking of law schools based on the
number of hours per week the law library is open, and if a prospective
student places an extremely high value on being able to study in the library,
it would be both rational and efficient to use such a ranking to decide
where to interview. Of course, such a ranking might lack utility if
prospective students place significantly higher values on other factors,
which they often do, but students can consider these issues when evaluating
law school rankings.
Keep in mind that in the absence of formal rankings, stakeholders in
legal education will simply rank law schools informally. Prospective law
students, rather than looking at U.S. News or other rankings, will create
their own personal set of law school rankings based on whatever data they
can informally obtain, potentially containing even greater flaws than U.S.
News.60 If law school rankings did not exist, legal employers would still
have to evaluate law school quality when making hiring decisions, even if
they had little or no knowledge of particular law schools.61

C. Developing an Evaluation Paradigm

Since there are situations where rankings can be highly useful, as well
as situations where they can be useless or even destructive, a blanket
dismissal of all rankings provides little utility to anyone. Rather than saying
all current rankings are flawed, one should develop a standardized method
for evaluating the efficacy of ranking schemes. When evaluating any
ranking, one should pay careful attention to three factors: purpose,
audience, and methodology. We will now discuss each of these factors.

1. Purpose

Why does this ranking matter? What does this ranking measure?

60 See Jan Hoffman, Judge Not, Law Schools Demand Of a Magazine That Ranks Them, N.Y. TIMES (1998), available at http://query.nytimes.com/gst/fullpage.html?res=9806E4DD133FF93AA25751C0A96E958260&sec=&spon=&pagewanted=2, where the Dean of New York University School of Law states that if reputation rankings asked about Princeton Law School, Princeton, which does not have a law school, would nevertheless appear in the top twenty.
61 Michael Berger, Why U.S. News Rankings are Useful and Important, 51 J. LEGAL EDUC.
487, 492 (2001).

These are the questions one should ask when evaluating the purpose of any
ranking.
Obviously, a law school ranking scheme intends to rank law schools,
but what is the ranking’s intended purpose? Does the ranker intend to
measure employment placement, selectivity, or faculty attitudes towards
vanilla ice cream?
One might think this is a very straightforward question, but even after
twenty-two years in existence, there is still no consensus about what U.S.
News is trying to measure. Some authors have assumed that the purpose of
the U.S. News rankings is to rate the quality of legal education.62 Others
state that U.S. News’s primary purpose is to rate schools based on how they
are perceived in the “real world” by employers. U.S. News itself does not
seem to know the purpose of its own rankings. While on one page U.S.
News says that the purpose of its graduate rankings is to provide an
“independent assessment of the academic quality of programs,”63 another
page states that one should use the U.S. News rankings to determine “how
diplomas from various schools will affect [applicants’] earning power” and
“how successful the schools are at preparing graduates for the bar exam.” 64
Unless we are supposed to believe that U.S. News rank perfectly correlates
with academic quality, earning power, and bar exam preparation,65 it
appears that U.S. News provides its readers with contradictory purposes.

2. Audience

Who gains the most utility from these rankings? This is the question
to ask when trying to determine a ranking’s audience. While most rankings
tend to target prospective students, the audience could be any demographic
group, from faculty to law librarians to the general public.
Audience is as crucial to the evaluation of law school rankings as it is
for evaluating legal writing. A 70-page paper might make a splash as a law review article, but that same paper would make a horrible letter to the editor of a local newspaper. An ordinal ranking of law schools based on admissions officer salaries may have a lot of utility for current and
prospective law school admissions officers; however, this same ordinal
ranking would be virtually worthless to prospective students. Like good
writers, effective law school ranking providers will tailor their rankings
towards the appropriate audience.

62 See, e.g., Thomas, supra note 55, at 419.


63 http://www.usnews.com/usnews/edu/grad/rankings/about/faq_meth_brief.php#one.
64 http://www.usnews.com/usnews/edu/grad/rankings/about/06rank.b_brief.php.
65 Even a very quick comparison of U.S. News rankings and U.S. News’s raw data shows that
this is not the case.

3. Methodology

An effective methodology will allow a law school ranking scheme to serve its intended purpose for its intended audience. At a minimum, the
rankings methodology should conform to standard social science research
methods. It is not possible to provide a concise summary of an entire
research methods textbook in one article.66 However, the most important
research methods concepts can be applied to law school rankings as part of
a simple three-part test:

i. Are measurements used meaningful or arbitrary?

It should be self-evident that the tool one uses to measure something should actually measure what one says it measures. Obviously, one does
not use a ruler to measure one’s weight, and one does not use a
thermometer to measure the volume of a partially filled milk carton. Using
those tools as measuring devices in those contexts would bring about
laughable results.
The same holds true for law school rankings. When ranking law
schools, the measurements used should further the intended purpose. For
example, if one’s purpose is to rank law schools based on student:faculty
ratio, one’s methodology should not involve distributing a survey asking
faculty for their favorite ice cream flavor. While measuring faculty ice
cream preferences might be appropriate in other contexts,67 the results
would tell us absolutely nothing about school student:faculty ratios,
because the measurement device used is completely arbitrary. For a set of
rankings to have efficacy, the measuring device must have some relation to
what one intends to measure. If one intends to measure student:faculty
ratios, one should collect data on the total number of students and the total
number of faculty at every law school; one should not use data on faculty
ice cream preferences unless one actually wants to measure faculty ice
cream preferences.

ii. How much error is due to experimenter failings, rather than inherent error?

It is extremely difficult to imagine any type of social science research that does not result in any error whatsoever. Whether measuring public

66 For a more in-depth explanation of social science research methods, see generally (?)
67 For example, Ben & Jerry’s may be opening up an ice cream store in the faculty lounge of
every law school in America and would like to know what flavors different faculty prefer so that
they know approximately how much of a certain flavor to keep in stock at each location.

opinion towards same-sex marriage or developing an empirical model to predict a Supreme Court justice’s vote, error will always exist. However, not all error is equal. When evaluating any piece of research, one must
distinguish between avoidable and unavoidable error.
Unavoidable error is the error that is inherent in any type of empirical
research. As much as an experimenter may want a completely error-free research project, ideal conditions are almost never possible. If a researcher is examining how people’s weekly church attendance changes over a 20-year period, it is a virtual certainty that some individuals will withdraw from the study before the 20-year period ends, perhaps because of death or
incapacitation, or because they simply do not want to participate anymore.
Such unavoidable error is present in many law school ranking
schemes. Any law school ranking system that uses median LSAT as a
factor must deal with the reality that not every individual attending every
law school has taken the LSAT. Northwestern University, for example, has
a three year joint J.D./M.B.A. degree program that requires the GMAT but
not the LSAT for admission to both the business school and the law
school.68 As much as 10% of Northwestern’s incoming first year J.D. class
comes from this program.69 Georgetown University has a sub-
matriculation program where Georgetown undergraduates can apply to
Georgetown University Law Center during their junior year based on their
undergraduate GPA and their SAT/ACT score without submitting an LSAT
score.70 In contrast, most law schools, such as the University of
Pennsylvania Law School, require LSAT scores from every applicant and
will not admit any student to the J.D. program without them, not even
through sub-matriculation or other joint degree programs.71 Thus, one can
say that there is a response rate differential among law schools when it
comes to incoming student LSAT scores. While schools such as the
University of Pennsylvania and Harvard may have 100% reporting rates,
schools like Northwestern may have 90% reporting rates. An individual
constructing a law school ranking system that makes use of LSAT medians
must work with the data available to him or her. Since LSAT scores simply
do not exist for some incoming Northwestern and Georgetown law students
because they were admitted without taking the LSAT, the ranker must deal
with this limitation, perhaps by assuming that the theoretical LSAT scores
of the non-responsive group would not differ significantly from the LSAT

68 http://www.jdmba.northwestern.edu/faq.htm.
69 Northwestern’s class of 2011 has 26 J.D./M.B.A. students out of a total class size of 242,
meaning about 10.7% of the class consists of J.D./M.B.A. students. See
http://www.jdmba.northwestern.edu/classprofile.htm,
http://www.law.northwestern.edu/admissions/profile/
70 http://www.law.georgetown.edu/Admissions/jd_general.html#earlyassurance.
71 http://www.law.upenn.edu/prospective/jd/faq.html#jointdegreeapp.

scores of the rest of the class, or realizing that the number of students
without LSAT scores at these schools is so low that even if their scores differed from those of the rest of the cohort, the medians would not be affected. Alternatively, the researcher could disclose that this problem
exists and explain the biases it introduces, but argue that since LSAT scores
are the best standardized measure of student quality available the benefits
of including LSAT score as a rankings variable outweigh the costs. While
the researcher has several options available to him or her, fixing the error
itself is not one of them.72
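To make the point concrete, the Python sketch below compares a hypothetical class’s “true” median LSAT with the median computed only from students who reported a score; the score distribution and the 26-student non-reporting group are invented, and the example simply assumes the non-reporters do not differ systematically from the rest of the class.

import random
import statistics

random.seed(0)  # reproducible illustration

# Invented class of 242 students with LSAT scores clustered in the high 160s.
full_class = [round(random.gauss(168, 3)) for _ in range(242)]

# Suppose 26 students (roughly 10.7%) were admitted without an LSAT score,
# as through a joint-degree or sub-matriculation program, and so report none.
non_reporters = set(random.sample(range(len(full_class)), 26))
reported = [score for i, score in enumerate(full_class) if i not in non_reporters]

print("Median of the entire class (unknowable in practice):", statistics.median(full_class))
print("Median of reporting students only (what the ranker sees):", statistics.median(reported))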
Avoidable error, in contrast, involves error that does not have to exist
in an experiment or research project. While unavoidable error is inherent to
the project, avoidable error is completely preventable. Avoidable error is
only present due to the researcher’s negligence or inexperience.
Avoidable error can manifest itself in many ways, but it seems
especially common in studies that involve survey research. Poor sampling
has the potential to make an otherwise valid study virtually useless. For
example, if a researcher wanted to examine how the American people feel
about same-sex marriage, he or she would introduce avoidable error if he or
she only included eighteen-year-old college freshmen at Berkeley in the sample. Because eighteen-year-old Berkeley freshmen are not an accurate
reflection of the demographic makeup of the United States, such a survey
would reveal nothing about how Americans in general feel about the issue.
A similar survey about abortion given only to Catholic priests would also
reveal nothing about the general public’s attitudes towards the issue. In
both cases, the error is avoidable because the researcher could have given
the survey to a sample that properly reflects the demographic makeup of
the United States.
Law school rankings, too, may have avoidable error due to poor
sampling techniques. U.S. News in particular has been justifiably criticized
for the poor construction of its practitioner reputation survey.73 However,
other types of avoidable error may exist, because a ranker may fail to make
necessary adjustments. For instance, when ranking schools based on
student:faculty ratio, an individual would introduce avoidable error by only
using full-time J.D. enrollment in the calculation. Such a ranking would

72 Of course, if this specific researcher had access to unlimited time and funds, he or she
could obtain the names of every single student enrolled at Northwestern, Georgetown, etc. who
did not take the LSAT and offer to pay each of them a large sum of money to take the LSAT so
he or she could calculate the “true” median LSAT for each school. While such a scenario is
theoretically possible, in the real world researchers do not have unlimited resources, and various
privacy laws exist that would make this scenario virtually impossible in practice.
73 See, e.g., Brian Leiter, The U.S. News Law School Rankings: A Guide for the Perplexed
(2003), available at http://www.utexas.edu/law/faculty/bleiter/rankings/guide.html (criticizing the
U.S. News practitioner survey for having a very low response rate and a geographically biased
sample).

ignore part-time and LL.M./S.J.D. students, and thus inflate the ranking of
schools with part-time or graduate programs. Other sources of avoidable
error include basic mistakes, such as inputting incorrect data or making a
mathematical error.
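The Python sketch below, using invented enrollment and faculty figures for a single hypothetical school, shows how counting only full-time J.D. students understates the ratio at a school that also enrolls part-time and LL.M./S.J.D. students.

# Invented figures for one hypothetical school.
full_time_jd = 600
part_time_jd = 200
llm_sjd = 100
faculty = 50

# The avoidably erroneous calculation counts only full-time J.D. students.
ratio_full_time_only = full_time_jd / faculty
# Counting all enrolled students yields a noticeably higher ratio.
ratio_all_students = (full_time_jd + part_time_jd + llm_sjd) / faculty

print(f"Full-time J.D. only: {ratio_full_time_only:.1f} students per faculty member")
print(f"All enrolled students: {ratio_all_students:.1f} students per faculty member")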
Note that not all instances of avoidable error will automatically cause
a set of rankings to become invalid. When evaluating law school rankings
with avoidable error, one must consider the impact that error has on the
rankings themselves. Let us consider one example: a law professor ranks
every law school based on median LSAT score. On April Fool’s Day, her
research assistant sneaks into her office, opens the professor’s Excel file
that contains all the rankings, and subtracts five LSAT percentile points
from every law school’s median; thus, a school with a 99.4th percentile
median now appears to have a 94.4th percentile median, a school with a 77th
percentile median appears to have a 72nd percentile median, and so on. The
professor, absent-minded as she is, never notices the change, and uploads
the ordinal rankings to her website. Although there is significant avoidable
error, the ordinal rankings themselves have not been damaged by the error
– the 1st school is still the 1st school, the 69th school is still the 69th school,
and the absolute difference in percentile points between any two schools
remains the same. While the underlying data was tampered with, the
damage done was evenly distributed among every law school, and thus the
rankings may still serve their intended purpose. However, if the research
assistant had altered seven random schools and did not make any changes
to the other schools in the dataset, then the avoidable error would certainly
cause the rankings to lose efficacy and become invalid.
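A short Python sketch of the hypothetical above, with invented percentile medians, shows that subtracting the same amount from every school leaves the ordinal ranking intact, while altering only a few schools reorders it.

# Invented percentile medians for a handful of hypothetical schools.
medians = {"School A": 99.4, "School B": 94.0, "School C": 88.5,
           "School D": 77.0, "School E": 65.2}

def ordinal_rank(data):
    # Order schools from highest median to lowest.
    return [school for school, _ in sorted(data.items(), key=lambda kv: kv[1], reverse=True)]

uniform_shift = {school: value - 5 for school, value in medians.items()}
selective_tamper = dict(medians, **{"School B": 70.0, "School D": 95.0})  # only two schools altered

print("Original order:        ", ordinal_rank(medians))
print("After uniform shift:   ", ordinal_rank(uniform_shift))    # identical order
print("After selective change:", ordinal_rank(selective_tamper))  # order changes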

iii. Can others duplicate the findings?

Repeatability is one of the hallmarks of social science research. It is not enough for a researcher to have a valid research design and avoid as
much error as possible. Researchers, as a quality check measure, should
also disclose their methodology so that others may repeat their experiment
to prove its validity. Like social scientists, individuals creating law school
rankings should explain how they generated their rankings. Regardless of
how one feels towards U.S. News rankings, U.S. News has openly
explained its methodology,74 and has allowed others to make attempts to
reverse engineer them.75 While most rankings providers now willingly
share their methodologies, some, most notably the Gourman Report, do

74 See, e.g., Robert Morse, Changing Law School Ranking Formula, U.S. NEWS, June 26, 2008, http://www.usnews.com/blogs/college-rankings-blog/2008/6/26/changing-the-law-school-ranking-formula.html.
75 See, e.g., http://www.deloggio.com/usnews/usnwrpl.htm (membership required).
not, and thus raise serious questions about their validity.76
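A reader equipped with a disclosed methodology can attempt exactly this sort of check. The sketch below, which uses hypothetical weights and component scores rather than any ranker's actual data, recomputes composite scores from the stated formula and flags any school whose published rank cannot be reproduced:

    # Toy replication check; the weights and component scores are hypothetical.
    weights = {"peer": 0.40, "placement": 0.35, "selectivity": 0.25}

    published = {                    # component scores (0-100) and the published rank
        "School X": ({"peer": 90, "placement": 85, "selectivity": 88}, 1),
        "School Y": ({"peer": 84, "placement": 90, "selectivity": 80}, 2),
        "School Z": ({"peer": 80, "placement": 78, "selectivity": 86}, 3),
    }

    def composite(components):
        return sum(weights[k] * v for k, v in components.items())

    recomputed = sorted(published, key=lambda s: composite(published[s][0]), reverse=True)
    for rank, school in enumerate(recomputed, start=1):
        stated = published[school][1]
        note = "" if rank == stated else "  <-- cannot be reproduced from the stated method"
        print(f"{rank}. {school} (published rank: {stated}){note}")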

III. COMMON FAULTS WITH EXISTING LAW SCHOOL RANKINGS

Having established a paradigm by which one can evaluate the validity of law school rankings, it is now appropriate to apply this system to current
law school ranking systems in order to illustrate some of the most common
flaws.

A. Disconnect Between Purpose and Methodology

1. U.S. News and other comprehensive rankings

Much has been written about the flaws with comprehensive rankings,
although most criticisms have focused on U.S. News. Such a focus is not
surprising, given the enormous influence of the U.S. News rankings. Other
comprehensive rankings, such as the Cooley rankings, have been virtually
ignored by prospective law students and other legal education
stakeholders.77
There is little doubt that prospective law students are U.S. News’s
intended audience. However, the U.S. News rankings suffer from
significant purpose and methodology problems. As discussed earlier in this
paper,78 U.S. News does a poor job explaining what its rankings are
supposed to measure, claiming that its rankings are an “independent
assessment of the academic quality of programs”79 while also stating that
its rankings measure “how diplomas from various schools will affect
[applicants’] earning power” and “how successful the schools are at
preparing graduates for the bar exam.”80
Ultimately, whether U.S. News intends to measure academic quality,
earning power, bar exam preparation, or some combination of all three is a
moot question. The U.S. News rankings do not pass the validity test on
methodological grounds, regardless of which purpose is the true one. While
many articles have been written about the problems with the U.S. News

76 Selengo, supra note 11.


77 See Brian Leiter, How to Rank Law Schools, 81 IND. L.J. 47-52 (2006).
78 See Section II, purpose section.
79 Supra note 75; Best Graduate Schools – Frequently Asked Questions – About The
Rankings, http://www.usnews.com/articles/education/best-graduateschools/2008/03/26/frequently
-asked-questionsrankings.html#2.
80 Supra note 76; Best Graduate Schools – Frequently Asked Questions – About The
Rankings – How To Use Our Lists Wisely, http://www.usnews.com/articles/education/best-
graduate-schools/2009/04/22/how-to-use-our-lists-wisely-2010.html.
methodology,81 Brian Leiter concisely summarizes the fatal flaw of U.S. News and all other comprehensive rankings: “[T]hose elements worth
measuring should be measured separately rather than aggregated on the
basis of unprincipled and unrationalizable schema. One can rank schools
based on SSRN downloads, student LSAT scores, faculty reputation,
scholarly impact as measured by citations, job placement, Supreme Court
clerkships, and so on, but there is no way these criteria can be meaningfully
amalgamated.”82
Put aside for a moment the numerous flaws with how U.S. News
measures academic reputation,83 practitioner reputation,84 student
selectivity,85 faculty resources,86 and employment placement.87 In fact, let

81 See, e.g., http://www.washingtonmonthly.com/features/2000/norc.html, http://www.leiterrankings.com/usnews/2000usnews_compare.shtml, http://www.deloggio.com/data/tiers.html, http://leiterlawschool.typepad.com/leiter/files/how_to_rank_law_schools.doc.
82 Leiter, supra note 77.
83 Academic reputation is measured by surveying “law school deans, deans of academic
affairs, the chair of faculty appointments, and the most recently tenured faculty members” of law
schools. About 70% respond to this survey and this composes a school’s “Peer Assessment
Score,” which accounts for 25% of a school’s ranking. Accordingly, it is the most significant
variable in determining a school’s rank. However, this score may be heavily influenced by the
fact that law school deans and faculty members may hold allegiances to certain schools –
particularly their alma maters and the schools their academic peers attended. Because the
majority of legal academics hail from the same schools and the score itself is highly subjective,
U.S. News may be causing a disservice by basing 25% of its ranking on this survey. See
http://www.usnews.com/articles/education/best-graduate-schools/2008/03/26/law-
methodology.html.
84 U.S. News surveys “legal professionals, including the hiring partners of law firms, state
attorneys general, and selected federal and state judges” to find a school’s practitioner reputation
score. While it is good to include this data in the rankings, a school's practitioner reputation accounts for 15% of its overall ranking, and it is questionable whether this score should be given that much weight. Basing 15% of a law school's overall ranking on practitioner reputation is especially questionable because bar passage rate is given a weight of only 2% of the overall score and employment at graduation a weight of only 4%. Both of these factors are, arguably, more important to an applicant than the practitioner reputation of a school. Additionally,
surveying hiring partners at law firms can make the survey biased to schools that place well in
New York City, where there are more firms than in any other city. Id. See also Brian Leiter, The
U.S. News Law School Rankings: A Guide for the Perplexed (2003), available at
http://www.leiterrankings.com/usnews/guide.shtml.
85 Measuring student selectivity largely by median undergraduate Grade Point Average (GPA) and
LSAT, as U.S. News does, is problematic for two reasons. First, the median LSAT of a school is
a poor measure of selectivity because 50.1% of a school’s population can, for example, have an
LSAT score of 168 but a large portion of the school may nevertheless have an LSAT of 163 or
less. Second, measuring undergraduate GPA as a measure of selectivity ignores the fact that
some of the top students in the country hail from schools known for grade deflation (such as
California Institute of Technology or Swarthmore) or majored in a field of study known for a
harsh grading curve. U.S. News claims that a school with a higher median undergraduate GPA or
LSAT should be ranked higher than a school with lower scores, but unless the quality of
the undergraduate institution attended/major studied and the LSAT scores from the rest of the
class are considered, the magazine does a poor job of measuring student selectivity. See
http://www.usatoday.com/news/education/2007-03-27-princeton-grades_N.htm, discussing grade
us imagine that U.S. News found a way to correct for all of these flaws, but
otherwise kept its methodology the same. What would these new and
improved U.S. News rankings tell us? Would they tell us about academic
quality? Not really – if the purpose of the U.S. News rankings is to
measure academic quality, why does their methodology include factors
such as bar passage rates, percent of the class employed nine months after
graduation, and financial aid expenditures per student? Perhaps the
rankings are meant to measure the earning power of graduates. But if this is
U.S. News’s true purpose, why include factors such as the number of titles
in the law school’s library, or acceptance rate? Such extraneous factors do
nothing but create distortions in the rankings. There is absolutely no
empirical evidence to suggest that purchasing 10,000 library books will
have any impact on graduate earning power, and no reason to believe that
decreasing financial aid expenditures per student will decrease faculty
quality.
These additional factors create nothing but noise even if evidence
shows that there is a relationship between the two variables. For example,
academic reputation is known to strongly correlate with national
employment placement.88 However, why should a rankings scheme whose
purpose is to predict employment prospects or earning power include
academic reputation as an input if information on employment placement is
already available? While there is a correlation between the two variables,
the correlation is not perfect, and even if the correlation is very strong,
actual employment placement data will be by definition better than a

deflation at Princeton University, a top school that produces graduates with lower GPA’s than
many schools below its caliber.
86 The main problem with how U.S. News calculates faculty resource ranking is that at no
point does the magazine consider the strength of faculty in both publishing and teaching ability
(instead, it focuses on expenditures per student, student:faculty ratio, and library resources). No
points are assigned to faculties that are particularly scholarly, such as Chicago’s. Also, there is
no place for the students to rank the quality of faculty/instruction at their school. Thus, although
a school may have a high “faculty resource” score, this in no way means that the faculty at the
school is better than the faculty at another school. See
http://www.usnews.com/articles/education/best-graduate-schools/2008/03/26/law-
methodology.html
87 U.S. News bases 4% of its overall ranking on employment at graduation and 14% on employment 9 months after graduation (the magazine states that “[f]or the nine-month employment rate, 25 percent of those whose status is unknown are counted as employed”). Students, in picking a law school, are hoping to be employed when they graduate. At the very least, students hope to be employed 6 months after graduation, when they are required to start making loan repayments. As a result, U.S. News is acting arbitrarily in placing 14% of its overall ranking on employment 9 months after graduation. See http://www.usnews.com/articles/education/best-graduate-schools/2008/03/26/law-methodology.html.
88 See Anthony Ciolli, The Legal Employment Market: Determinants of Elite Firm Placement
and How Law Schools Stack Up, 45 JURIMETRICS J. 413 (2005).
variable that closely correlates with employment placement.


The only way the current U.S. News methodology is valid is if the
purpose of the U.S. News rankings is also the methodology. That is, the
purpose of U.S. News is to rank law schools by placing a 25% weight each on academic reputation and selectivity, a 20% weight on placement success, and a 15% weight each on practitioner reputation and faculty
resources. If that is the actual purpose of the rankings, then the U.S. News
methodology makes perfect sense. However, this creates a new validity
problem: what utility do prospective law students, or any other audience,
gain from such a ranking? U.S. News has provided no justification as to
why these weights are necessary.89 There is also no evidence that even a
significant minority (let alone a majority) of prospective law students feel
these are appropriate weights. Given these facts, it is difficult to explain
how the U.S. News rankings can have any real utility for prospective law
students.
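The arbitrariness of the weights is easy to demonstrate. In the sketch below, two hypothetical schools trade places depending on whether one applies the 25/25/20/15/15 weighting described above or a weighting that privileges placement; nothing in the underlying data changes, only the ranker's unexplained preferences:

    # Hypothetical standardized component scores (0 to 1) for two schools.
    schools = {
        "School A": {"academic_rep": 0.95, "selectivity": 0.90, "placement": 0.60,
                     "practitioner_rep": 0.90, "faculty_resources": 0.80},
        "School B": {"academic_rep": 0.75, "selectivity": 0.80, "placement": 0.95,
                     "practitioner_rep": 0.80, "faculty_resources": 0.70},
    }

    us_news_style = {"academic_rep": 0.25, "selectivity": 0.25, "placement": 0.20,
                     "practitioner_rep": 0.15, "faculty_resources": 0.15}
    placement_heavy = {"academic_rep": 0.05, "selectivity": 0.05, "placement": 0.70,
                       "practitioner_rep": 0.15, "faculty_resources": 0.05}

    def score(components, weights):
        return sum(weights[k] * components[k] for k in weights)

    for label, w in [("25/25/20/15/15 weights", us_news_style),
                     ("placement-heavy weights", placement_heavy)]:
        ranked = sorted(schools, key=lambda s: score(schools[s], w), reverse=True)
        print(label, "->", ranked)   # the two weightings produce opposite orders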

2. Single Individual Perception Reputation Rankings

U.S. News and other comprehensive rankings are often invalid due to
disconnect between purpose and methodology. Reputation rankings
generally suffer from similar problems, although different subsets of
reputation rankings also experience other validity issues.
The validity problems with single individual perception reputation
rankings closely parallel the problems with U.S. News. Depending on the
ranking's stated purpose, the validity problem may be either methodology or
purpose. A single individual perception ranking will rarely experience a
methodology problem if the purpose of the ranking is to determine how a
specific individual views the law schools.90 But once again, even if such a
ranking is not methodologically flawed, it lacks a useful purpose. After all,
why should a prospective law student care about how Jay Brody ranks the
law schools?91 Are there any tangible benefits to attending Brody’s 4th
ranked law school over his 17th ranked school?92
Not surprisingly, most single individual perception reputation
rankings claim that their rankings strongly correlate with some other factor.

89 Stephen P. Klein and Laura Hamilton, The Validity of the U.S. News and World Report
Ranking of ABA Law Schools (1998), available at http://www.aals.org/reports/validity.html
90 Perhaps the only way such a ranking could experience a methodology problem is if a third
party were asked to provide the rankings on behalf of the other person. For instance, if Justice
Scalia were asked to list Justice Breyer’s top ten law schools, he might provide a different list
from the one Justice Breyer would provide.
91
http://web.archive.org/web/20030727215004/http://brody.com/law/resources/brody_law_school_r
ankings.php.
92 Id.
Jay Brody does not claim that the purpose of his rankings is merely to tell
the world about his own perceptions about law school quality – instead,
Brody says that prospective law students should pay attention to his
rankings because they represent how law schools are perceived in the
employment market.93 In other words, one is led to believe that the purpose
of the Brody rankings is to measure career placement.
Most single individual reputation rankings, therefore, run into a
similar methodology problem as with U.S. News and other comprehensive
rankings: if the purpose of the Brody rankings is to rank law schools based
on career placement, should not the Brody rankings actually try to
meaningfully measure career placement? Brody has made no visible
attempt to include actual career placement variables in his rankings
scheme, nor does he provide evidence that his individual perceptions about
law school quality correlate with a law school’s ability to place its
graduates. When there is such an obvious disconnect between purpose and
methodology, the ranking scheme does not pass the validity test.
Notably, there is a limited situation where a single individual
perception ranking can be both methodologically sound as well as useful to
the intended audience. This can occur when the individual ranking the law
schools is also in a position to make hiring decisions for specific positions
that the audience finds desirable. For example, let us assume that a
Supreme Court justice decided to rank the law schools, in order to let
prospective law students know how he perceives the quality of various
schools. Unlike with Brody, many individuals place great weight on a Supreme Court justice's view of law school quality. This is not just due to
the prestige of the justice’s position, but also because the justice has direct
control over a commodity that is highly desirable to many law students:
which individuals get a clerkship with that justice. Thus, a Supreme Court
justice’s single individual perception reputation rankings could have great
utility, for a prospective law student would know that attending a law
school ranked highly by the justice may enhance their chances of obtaining
a Supreme Court clerkship. Similar situations may occur if the hiring
partner of an elite law firm that many prospective law students aspire to,
such as Wachtell or Cravath, decided to rank the law schools. However, it
seems no such individuals or firms have decided to rank the law schools
based solely on their own perceptions.94

93 Id.
94 Although Judge Richard Posner has written an article about law school rankings that
includes his own ranking scheme, this scheme is based on amalgamating other rankings, and does
not necessarily reflect how Judge Posner himself perceives law schools. See Richard A. Posner, Law School Rankings, 81 IND. L.J. 13 (2006).
B. Disconnect Between Audience and Purpose

1. Faculty Perception: Evaluating the Leiter Report

Brian Leiter’s Educational Quality Survey, the most well known law
school faculty perception rankings, does not suffer from the same
confusion of purpose as U.S. News – Professor Leiter explicitly states that
the purpose of his survey is to measure the scholarly quality of law school
faculty,95 and he does not try to mislead readers into believing that his
rankings are meant to correlate with student employment prospects or any
other unrelated variable.96 Similarly, Professor Leiter’s methodology, while
certainly open to criticism,97 is still sound enough to serve the rankings' intended purpose; that is, Leiter's survey actually attempts to measure how
faculty perceive other faculty.
The Leiter rankings, however, do suffer from a flaw, albeit a
somewhat less serious one. While the purpose and methodology may be
sound, Leiter considers prospective J.D. students his audience.98 Certainly,
some prospective J.D. students may have an interest in Leiter’s findings.
However, whether one likes it or not, faculty perception of the scholarly
quality of faculty at other law schools has little to no direct impact on
prospective law students. This is not to say that faculty quality is not
important to students – however, faculty perception of the quality of other faculties offers little utility to prospective students. Faculty perceptions of
the scholarly quality of other faculties do not correlate well with faculty
teaching quality. In fact, Leiter’s own teaching quality rankings confirm
that there is little to no relation between student evaluations of teaching
quality and faculty perception of scholarly quality.99
Leiter argues that his rankings “may be the decisive factor for those
interested in pursuing careers in law teaching.”100 While it is conceivable
that some prospective law students might already have an interest in legal
academia prior to enrolling in law school and might therefore take faculty
perception of scholarly quality very seriously, basing an enrollment
decision on the Leiter rankings may not be any wiser than basing an

95 http://www.leiterrankings.com/faculty/1999faculty_reputation.shtml.
96 Id.
97 For example, Richard G. Heck, Jr. argues that Leiter’s rankings of graduate schools should
not be based on a single factor, the quality of the faculty’s research, because the correlation
between this and the quality of a student’s graduate education is “arguably small”. Leiter may be
making the same mistake by ranking law schools by faculty research.
http://frege.brown.edu/heck/philosophy/aboutpgr.php.
98 http://www.leiterrankings.com/faculty/1999faculty_reputation.shtml.
99 http://www.leiterrankings.com/faculty/2003faculty_best.shtml.
100 http://www.utexas.edu/law/faculty/bleiter/rankings/students.html (last visited: January
2009).
enrollment decision on U.S. News rankings. In fact, choosing a law school based on the Leiter faculty perception rankings may result in even bigger
enrollment mistakes.
In addition to the overall faculty perception rankings, Leiter also
includes faculty perception rankings for various specialty areas ranging
from administrative law to legal history.101 Leiter emphasizes the
importance of these specialty rankings to prospective law students on his
rankings webpage: “If you have particular intellectual interests or particular
professional goals, make sure the schools you’re considering will meet
them. In some cases, the rankings by faculty quality in the specialty areas
will provide useful information on this score. A school with a strong
faculty in, e.g., international and comparative law, will undoubtedly have
good offerings in those areas.”102 While Leiter cautions that differences in
0.1 or less in the specialty rankings should not be given much weight,103 the
implication by omission is that differences higher than 0.1 should be
treated as significant differences.
These faculty perception specialty rankings have little use even to
students who are seriously considering legal academia. For example, in the
faculty quality in law and philosophy specialty ranking, Cardozo Law
School is ranked 5th with a 4.0 score and the University of San Diego is
ranked 10th with a 3.6 score, while Harvard University is ranked 12th with a
3.4 score, the University of Chicago is ranked 13th with a 3.3 score, and
Stanford University ranks 18th with a 3.1 score.104 For law and economics,
Georgetown University is ranked 19th with a 3.1 score while George Mason
University is ranked 9th with a 4.0 score.105 While these faculty perception
rankings may very well be accurate, they provide very little utility to
prospective law students, including prospective law students who know
with an absolute certainty that they intend to become legal academics in
those specialty fields. Once again, Leiter himself provides the data showing
the limitations of his faculty perception rankings. In his survey of where
tenure-track faculty hired since 1996 went to law school, Leiter finds that
108 Harvard, 104 Yale, and 48 Stanford graduates who had not earned an
LL.M. or other graduate degree secured tenure-track jobs.106 In contrast,
only one Cardozo and one George Mason graduate who had not earned an
additional graduate degree obtained tenure-track jobs, and the Cardozo
graduate works at her own alma mater.107 The University of San Diego

101 http://www.leiterrankings.com/faculty/index.shtml
102 http://www.leiterrankings.com/students/index.shtml
103 http://www.leiterrankings.com/faculty/index.shtml
104 http://www.utexas.edu/law/faculty/bleiter/rankings/philo.html
105 http://www.utexas.edu/law/faculty/bleiter/rankings/economics.html
106 http://www.leiterrankings.com/jobs/2006job_teaching.shtml
107 Id.
produced no graduates who obtained tenure-track jobs who did not earn an
additional graduate degree.108 While faculty members perceive some
second tier schools like Cardozo as having faculty in certain specialty areas
that are significantly better than faculty at top schools such as Harvard and
Yale, the data plainly shows that an individual seeking to enter legal
academia is far better off attending “inferior” J.D. programs at Harvard and
Yale.
Furthermore, even if the faculty perception rankings did not have such
flaws, the nature of the J.D. as a three-year program pursued by future
lawyers and future academics alike makes prospective J.D. students an
inappropriate audience for this ranking. First, many individuals who apply
to J.D. programs thinking they intend to become law professors end up
foregoing the academy without even trying to obtain a tenure-track job.
The reasons for this can range from failings as students (finishing in the
bottom half of the class, not making law review, and so on) to realizing that
academia is simply not a good fit (they discover that they hate writing
academic articles, would prefer to earn more money in private practice, and
so forth). This sort of invisible attrition should not come as a surprise given
that, unlike Ph.D. programs, virtually all law students apply to law school
without ever setting foot in a law school class or knowing anything about
law as an academic discipline. Given that only a small number of
individuals from any law school class will actually seek to become legal
academics, even among those who entered law school with that ambition, it
is irresponsible for J.D. students to make enrollment decisions based on
how their J.D. school’s faculty is perceived by faculty at other law schools.
Second, and perhaps more importantly, faculty perception rankings
are extremely fluid. Even something as minor as one faculty member
switching schools, retiring, or dying can result in a significant shift in how
an entire school’s faculty is perceived by others. Because faculty departures
and additions occur relatively frequently, faculty perception rankings can
change even more significantly than U.S. News rankings between the time
the prospective student makes the enrollment decision, graduates, and
enters the teaching market. While faculty departures will naturally have
little negative impact on schools such as Yale that are already perceived as
highly prestigious by non-academics,109 such departures, even if only
temporary, could have more significant effects on schools that are
traditionally viewed as having less prestige. Leiter himself illustrates this
by quoting a student on his blog: “If you are going to Tufts to work with
Dennett and he is gone or not teaching in one or both of your first
semesters, then it is not clear that Tufts’ distinguished faculty will be of

108 Id.
109 http://leiterlawschool.typepad.com/leiter/2005/08/how_to_rank_law.html.
much help to you.”110 While this comment refers to two-year philosophy M.A. programs, the same principle holds true for J.D. programs: the temporary or
permanent departure of one or two important faculty members can
eliminate any hypothetical advantage to attending a school that traditionally
carries less prestige but is perceived well by law school faculty.
Prospective J.D. students, then, have little use for a faculty perception
ranking, even those interested in becoming law professors. Professor
Leiter’s faculty perception rankings, however, could have considerable
utility for students who are contemplating enrollment in LL.M. or J.S.D.
programs with the intention of entering law teaching afterwards. An
audience of LL.M./J.S.D. applicants would gain more utility from such
rankings than prospective J.D. students for several reasons. First, full time
LL.M. and J.S.D. programs usually do not last longer than one academic
year, except in cases where an individual is pursuing both an LL.M. and a
J.S.D.111 Thus, except in cases of unexpected deaths; informed LL.M. and
J.S.D. applicants will know in advance whether specific faculty members
will be available during their stay in the program. Second, since these
LL.M. and J.S.D. students have already experienced law school classes
during their J.D. programs and are pursuing their advanced degrees to
enhance their chances of obtaining academic jobs, how faculty perceive
their graduate schools truly becomes the decisive factor in the enrollment
decision; the perceptions of other stakeholders, such as elite law firms, are no longer relevant because future hiring decisions will be made by faculty members. Third, LL.M. and J.S.D. students will likely enter the market for
law teaching immediately after earning their graduate degrees. Thus, unlike
J.D. students, they do not need to worry about the prestige of their graduate
institution dropping in the eyes of faculty on hiring committees between
enrolling in the program and going on the market. Fourth, since many
LL.M. and all J.S.D. programs involve significantly more one-on-one interaction with faculty than J.D. programs, one could argue that the prestige of LL.M. and especially J.S.D. students is more closely tied to the prestige of their faculty advisors than is the prestige of J.D. students. Given these realities,
Leiter might consider increasing the validity of his rankings by targeting
prospective graduate law students rather than prospective J.D. students.

110 http://leiterreports.typepad.com/blog/2004/10/ma_programs_in_.html.
111 For example, it would take at least 3 years to obtain a LL.M./J.S.D. in International Human
Rights from Saint Thomas University. http://www.stu.edu/Academics/Programs/
LLMJSDinInterculturalHumanRights/ProgramInformation/tabid/1227/JSDinInterculturalHuman
Rights/tabid/1237/Default.aspx
2. Why Revealed Preference Reveals Little

Revealed preference/student perception rankings such as the Avery and Chalfin studies can provide either a wealth of information or cloud a
reader’s mind with misinformation, depending on how such rankings are
structured. As Cass Sunstein notes, revealed preference rankings serve as a
market test112 – a revealed preference study using proper methodology will illuminate where law schools stand in the minds of students. If such
studies are geared towards deans of admission or other school
administrators, they can provide such audiences with very useful
information that is rather difficult for any one school to obtain on its
own.113
Problems often arise, however, when revealed preference rankings are
geared towards an inappropriate audience. Although the Avery study does
not suffer from an audience mismatch, the Chalfin study, by focusing on
prospective and current law students as its primary audience, provides its
target audience with little useful information. While Chalfin states that
revealed preference rankings can serve as a proxy for employment
placement,114 there is no evidence to show that students are in a position to
actually compare schools based on employment placement. In fact, the very
strong correlation between the Chalfin rankings and the U.S. News rankings
indicates that students in Chalfin's sample are likely not making their own
independent assessment of school quality, but are merely regurgitating a
more popular ranking system.115 If one’s goal is to measure employment
placement, it would be more efficient to study employment placement
itself, rather than conduct a revealed preference study.
Chalfin’s observation about employment placement leads us to a more
serious flaw with revealed preference rankings. Although Chalfin claims
his revealed preference rankings indicate views regarding employment
placement, there is no way of knowing why his survey respondents ranked
schools the way they did. Because different individuals value school
characteristics differently, one cannot tell whether most respondents placed
the greatest weight on employment placement, or perceived academic
reputation, or location, or perhaps some other factor. While the Avery
study is geared towards a more relevant audience than the Chalfin study, it
is unclear just how much utility mere ordinal rankings will have for that
intended audience. For example, Cornell’s dean of admissions may find it
interesting to know that more students would rather enroll at Dartmouth

112 Cass R. Sunstein, Ranking Law Schools: A Market Test?, 81 IND. L.J. 25, 28 (2006).
113 While schools could gather similar information through looking at cross-admit yields,
response rates may be so low as to make such data misleading.
114 http://www.autoadmit.com/studies/chalfin/chalfin.pdf.
115 Id.
than Cornell. However, the Avery study does not explain why students
prefer Dartmouth to Cornell. Do they think Dartmouth is more prestigious,
or places its graduates in better jobs at graduation? Or do most students
prefer Hanover to Ithaca? Perhaps students feel Dartmouth has better
dorms. In any case, we simply do not know why Avery’s revealed
preference survey respondents prefer Dartmouth to Cornell – thus, Cornell
administrators do not know in what areas their school is perceived as weak relative to Dartmouth. As a result, Cornell cannot determine how it
can improve itself as an institution. Does it need to market the benefits of
living in Ithaca better, or should it spend its resources improving its on-campus dining services? Maybe it is all just a perceived lack of selectivity.
While there is some utility in knowing that students prefer Dartmouth,
Cornell cannot do anything to improve its situation if it does not know why
Dartmouth is ranked higher in the revealed preference survey.

C. Poor Methodology & Avoidable Error

Even when there is no apparent disconnect between audience, purpose, and methodology, poor methodological techniques that introduce significant avoidable error can result in a heavily misleading ranking
system. Perhaps the best examples of ranking systems doomed by
avoidable error are three studies of law school career placement conducted
by John Wehrli, Brian Leiter, and Michael Sullivan.
Of these three, the most well known was conducted by Leiter in
2003.116 Leiter attempted to measure nationwide placement by studying 45
firms from the 2003 edition of the Vault Guide to the Top 100 Law
Firms.117 Leiter chose these 45 firms by selecting Vault’s top three firms in
each city/region where there were at least three firms on the Vault list.

116 Brian Leiter, The Most National Law School Based on Job Placement in Elite Law Firms
(2003), available at http://www.leiterrankings.com/jobs/2003job_national.shtml
117 The following are Leiter's 45 firms, broken down by city: Atlanta: King & Spalding, Alston & Bird, Kilpatrick Stockton; Boston: Hale and Dorr, Ropes & Gray, Goodwin Procter; Chicago: Kirkland & Ellis, Sidley Austin Brown & Wood, Mayer Brown Rowe & Maw; Cleveland: Jones Day Reavis & Pogue, Baker & Hostetler; Dallas: Akin Gump Strauss Hauer & Feld, Haynes and Boone; Washington DC: Covington & Burling, Williams & Connolly, Wilmer Cutler & Pickering; Houston: Baker Botts, Fulbright & Jaworski, Vinson & Elkins; Los Angeles: Latham & Watkins, Gibson Dunn & Crutcher, O’Melveny & Myers; Miami: Holland & Knight, Greenberg Traurig; Milwaukee: Foley & Lardner; Minneapolis: Dorsey & Whitney; New York: Cravath, Swaine & Moore, Wachtell Lipton Rosen & Katz, Sullivan & Cromwell; Philadelphia: Morgan Lewis & Bockius, Dechert, Drinker Biddle & Reath; Pittsburgh: Kirkpatrick & Lockhart; Portland: Miller Nash, Stoel Rives; Richmond: Hunton & Williams, McGuireWoods; San Diego: Gray Cary Ware & Freidenrich; San Francisco: Morrison & Foerster, Wilson Sonsini Goodrich & Rosati, Orrick Herrington & Sutcliffe; Seattle: Perkins Coie, Davis Wright Tremaine, Foster Pepper & Shefelman; and St. Louis: Bryan Cave.
Leiter then searched the online version of Martindale-Hubbell for attorneys at these 45 firms who graduated from a pre-selected list of 22 law
schools.118 Using this data, Leiter ranked these law schools based on their
placement in these firms. Leiter based 40% of this rank on the per capita
rate of placement (total number of attorneys at the firms divided by the size
of the school’s yearly graduating class), another 40% on the number of law
firms that have at least one graduate of the school in the firm, and the final
20% based on the number of law firms that have at least five graduates of
the school in the firm. Using these measures, Leiter concluded that the top
ten law schools in terms of elite firm placement were, in order, Harvard,
Chicago, Yale, Virginia, Michigan, Stanford, Columbia, Georgetown,
Duke, and Penn. In March 2005, Michael Sullivan, then an undergraduate
student at Calvin College and currently a law student at Duke, updated
Leiter’s study, applying the same methodology but using more current data
from Martindale-Hubbell.119
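Leiter's 40/40/20 weighting can be expressed as a short computation. The sketch below uses invented attorney counts and class sizes, and it normalizes each component against the highest-scoring school before weighting, since the studies themselves do not spell out how the three components were placed on a common scale:

    # Hypothetical attorney counts per firm and yearly graduating class sizes.
    counts = {
        "School A": {"Firm 1": 12, "Firm 2": 3, "Firm 3": 0, "Firm 4": 7},
        "School B": {"Firm 1": 2,  "Firm 2": 2, "Firm 3": 2, "Firm 4": 1},
    }
    class_size = {"School A": 550, "School B": 200}

    def components(school):
        by_firm = counts[school]
        per_capita = sum(by_firm.values()) / class_size[school]
        firms_with_one = sum(1 for n in by_firm.values() if n >= 1)
        firms_with_five = sum(1 for n in by_firm.values() if n >= 5)
        return per_capita, firms_with_one, firms_with_five

    def normalize(values):
        top = max(values)
        return [v / top if top else 0.0 for v in values]

    names = list(counts)
    per_cap, breadth1, breadth5 = (normalize(col) for col in
                                   zip(*(components(s) for s in names)))
    scores = {s: 0.4 * per_cap[i] + 0.4 * breadth1[i] + 0.2 * breadth5[i]
              for i, s in enumerate(names)}
    print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))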
While Leiter’s study is in wider circulation, John E. Wehrli’s
employment placement rankings predate Leiter’s by seven years.120 In
1996, Wehrli, then a law student jointly enrolled at the University of
California-Hastings College of Law and the University of California-
Berkeley’s Haas School of Business, published a list of the top 30 law
schools at the top 100 law firms 121 , adjusted for class size. Like Leiter,
Wehrli searched the online Martindale-Hubbel directory for his data –
however, unlike Leiter, Wehrli searched every ABA-accredited law school,
rather than using a pre-selected list. Wehrli concluded that the top ten law
schools in terms of elite firm placement included, in order, the following:
Harvard, Yale, Chicago, Columbia, Stanford, Michigan, Virginia, NYU,
Berkeley, and Northwestern.122 At the same time, Wehrli developed a
secondary set of rankings, where he ranked law schools based on the

118 Leiter’s 22 schools are Yale, Harvard, Stanford, Chicago, Columbia, NYU, Michigan, Virginia, Texas, Penn, Cornell, Georgetown, Northwestern, Berkeley, Duke, Vanderbilt, UCLA, Emory, Washington & Lee, Notre Dame, Minnesota, and George Washington.
119 Michael Sullivan, Law School Job Placement (2005), available at
http://www.calvin.edu/admin/csr/students/sullivan/law/index.htm.
120 John Wehrli, Top 30 Law Schools at the Top 100 Law Firms, Adjusted for School Size
(1996), available at
http://web.archive.org/web/19980520150138/http://wehrli.ilrg.com/amlawnormt30.html.
121 Wehrli used the American Lawyer’s list of the top 100 firms for his study. John Wehrli,
Top 30 Law Schools at the Top 100 Law Firms, Adjusted for School Size (1996), available at
http://web.archive.org/web/19980520150138/http://wehrli.ilrg.com/amlawnormt30.html
122 The schools Wehrli ranked 11-30 are, in order: Cornell, Penn, Duke, Georgetown, UCLA,
USC, Texas, George Washington, Illinois, Boston University, Vanderbilt, Florida, Fordham, UC
Hastings, Boston College, Washington & Lee, Notre Dame, UC Davis, Washington University in
St. Louis, and Case Western. John Wehrli, Top 30 Law Schools at the Top 100 Law Firms,
Adjusted for School Size (1996), available at
http://web.archive.org/web/19980520150138/http://wehrli.ilrg.com/amlawnormt30.html
number of top 100 law firms that have at least one attorney present from
the school.123 The top ten schools according to that ranking are, in order,
the following schools: Harvard, Virginia, Columbia, Georgetown, NYU,
Yale, Michigan, Chicago, Penn, and Cornell.124
National Placement Rankings, Wehrli vs. Leiter vs. Sullivan

Rank   Wehrli (1996)    Leiter (2003)    Sullivan (2005)
1      Harvard          Harvard          Harvard
2      Yale             Chicago          Chicago
3      Chicago          Yale             Virginia
4      Columbia         Virginia         Yale
5      Stanford         Michigan         Stanford
6      Michigan         Stanford         Michigan
7      Virginia         Columbia         Columbia
8      NYU              Georgetown       Duke
9      Berkeley         Duke             Penn
10     Northwestern     Penn             NYU

Although several years separate the Wehrli, Leiter, and Sullivan studies, and although the Wehrli study employs a different methodology,
these studies are similar enough to explain jointly. Wehrli, Leiter, and
Sullivan’s studies are reflective of several major improvements over the
early law school employment studies. Unlike previous studies, Wehrli,
Leiter, and Sullivan made a good faith effort to adjust for law school class
size. In addition, these researchers attempted to measure truly nationwide
placement. However, despite these improvements, all three studies contain
several significant methodological flaws – most of which Leiter and
Sullivan acknowledge, but which Wehrli does not address – that make
them of questionable use to prospective students.
First, none of the studies distinguish between recent hires and

123 John Wehrli, The 31 Truly National Law Schools (1996), available at
http://web.archive.org/web/19980520150145/wehrli.ilrg.com/amlawnational.html.
124 In this set of rankings, Harvard and Virginia appear to be tied for 1st place with 100 firms
each, while Columbia, Georgetown, NYU, and Yale are tied for 3rd place with 99 firms each and
Michigan and Chicago are tied for 7th place with 98 firms and Cornell and George Washington
are tied at 10th with 94 firms. However, for unstated reasons, Wehrli did not designate these ties
but gave schools absolute ordinal ranks. The schools ranked 11th to 31st are, in order, George
Washington, Stanford, Duke, Berkeley, Boston University, Northwestern, UCLA, Vanderbilt,
Texas, Fordham, Boston College, UC Hastings, Notre Dame, Rutgers (both campuses), Emory,
Wisconsin, USC, Illinois, Washington University in St. Louis, Tulane, and Minnesota. Wehrli
says that he deliberately stopped his rankings at #31 because the gap between #31 Minnesota and
the unspecified school ranked at #32 was very significant, and thus #31 is his cutoff for the “truly
national schools.” Listing schools in an ordinal ranking without explaining how ties were
decided is inherently unfair and unhelpful to a law school consumer.
individuals hired a very long time ago. Neither Wehrli, Leiter, nor Sullivan limited
their studies to associates hired within the last few years. Instead, they
studied all attorneys hired at these firms, whether they were non-
partnership track counsel who graduated in the early 1990s, senior partners
who graduated in the 1960s, or, in Leiter’s case, first year associates who
graduated in 2002. Leiter acknowledges this problem and concedes that his
study “reflect[s] not only who Ropes & Gray was hiring in 1995, but some
of whom they were hiring in 1970.” Leiter also acknowledges that this bias
had a significant impact on his rankings: schools like Michigan and Duke,
which Leiter claims were more prominent in the past, may be artificially
inflated in his rankings, while schools like NYU, which may not have been
as well regarded in the past but have risen to greater prominence in recent
years, may be low ranked relative to their contemporary placement. Leiter
attempts to circumvent this problem by stating that his study “invariably
reflects a school’s reputation among elite firms over a long period of time.”
Although this may be true, and while all three studies have academic merit,
they have little practical value for prospective law students or other
interested parties, who are primarily concerned with contemporary hiring
trends.
Second, the researchers have not adjusted for varying career
preferences amongst students from different schools. When making per
capita adjustments, Wehrli, Leiter, and Sullivan divide the total number of
attorneys by graduating class size. However, the percentage of students
who choose to go into law firms is not constant among all the law schools
in their studies. According to the 2005 edition of U.S. News and World
Report’s law school rankings guide, 80% of Columbia Law School’s
graduating class of 2002 was employed at a law firm. In contrast, only 72%
of NYU’s graduating class of 2002 was employed at a law firm.125 Since
NYU graduates are entering private practice in lower proportions than
Columbia graduates, one can expect that using total class size, all else
equal, would artificially inflate Columbia’s ranking relative to NYU’s.
Although Leiter and Sullivan do not address this problem, Wehrli openly
acknowledges it, noting that Harvard ranked higher than Yale even though
its class is three times larger because “a higher % of Yale grades [sic] enter
government service and politics than Harvard.” A far better way to adjust
for the effect of differing class size would have been to divide the total
number of attorneys by the number of graduating students employed at law
firms, since this would have virtually eliminated this problem of sectoral
self-selection.

125 U.S. News, The Top 100 Law Schools (2005), available at http://grad-
schools.usnews.rankingsandreviews.com/best-graduate-schools/top-law-schools/rankings (U.S.
News premium login required for access.) .
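The effect of this adjustment is easy to see with a back-of-the-envelope calculation. In the Python sketch below, the attorney counts and class sizes are hypothetical placeholders, while the 80% and 72% law-firm employment shares are the figures quoted above; dividing by firm-bound graduates rather than the whole class largely erases the apparent Columbia advantage:

    # Hypothetical attorney counts and class sizes; the law-firm shares are from the text.
    data = {
        "Columbia": {"attorneys_found": 400, "class_size": 400, "share_at_firms": 0.80},
        "NYU":      {"attorneys_found": 380, "class_size": 420, "share_at_firms": 0.72},
    }

    for school, d in data.items():
        per_graduate = d["attorneys_found"] / d["class_size"]
        per_firm_bound = d["attorneys_found"] / (d["class_size"] * d["share_at_firms"])
        print(f"{school}: {per_graduate:.2f} per graduate, "
              f"{per_firm_bound:.2f} per firm-bound graduate")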
Third, the researchers did not adjust for the geographical employment
preferences of students. This is probably the most significant
methodological flaw of all three studies. Just as student career preferences vary
from school to school, the geographical preferences of a school’s students
differ as well. For example, according to the 2005 edition of U.S. News and
World Report’s law school rankings guide, 78% of Columbia’s 2002
graduating class settled in the Middle Atlantic (NY, PA, NJ) region, while
only 6% settled in the Pacific (AK, CA, HI, OR, WA) region. In contrast,
9% of Berkeley’s 2002 graduating class settled in the Middle Atlantic
region, while 75% settled in the Pacific region.126
While Leiter does not mention the Wehrli study, he correctly states
that the AmLaw Top 100 list, which Wehrli used, is heavily biased in favor
of the Northeast, particularly New York City-based firms. Thus, one can
assume that Leiter’s attempt at selecting only the best three firms in a given
region was an attempt to fix the geographical bias present in Wehrli’s study
against schools that do not send many graduates to the northeast. However,
by doing this, Leiter, and later Sullivan, created a new bias. For example,
of the 45 firms Leiter and Sullivan included in their study, only seven are
located in the Middle Atlantic region, while twelve are located in the
Pacific region. The problem is even more pronounced than it first appears. Within
these regions certain states dominated – 77% of Columbia graduates stayed
in New York, and 69% of Berkeley graduates stayed in California.127
However, while only three New York firms are in Leiter and Sullivan’s
studies, Leiter and Sullivan include seven California firms, which
artificially boosts the rankings of schools like Berkeley, UCLA, and
Stanford, while artificially lowering the rankings of schools that place a
large proportion of graduates in the Northeast such as Columbia, NYU,
Penn and Cornell. A better means of selecting firms would have been to
select a number of firms proportional to the market share of each regional
market relative to the national legal market.
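Such an allocation is straightforward to compute. The sketch below apportions a 45-firm sample across regions in proportion to each region's share of the national large-firm market, using a largest-remainder rule so the counts sum correctly; the market-share figures are hypothetical:

    # Hypothetical shares of the national large-firm legal market by region.
    market_share = {"Middle Atlantic": 0.34, "Pacific": 0.22, "South Atlantic": 0.18,
                    "East North Central": 0.14, "Other": 0.12}
    total_firms = 45   # same overall sample size as the Leiter/Sullivan studies

    # Largest-remainder apportionment so the per-region counts sum to total_firms.
    quotas = {r: share * total_firms for r, share in market_share.items()}
    alloc = {r: int(q) for r, q in quotas.items()}
    leftovers = sorted(quotas, key=lambda r: quotas[r] - alloc[r], reverse=True)
    for r in leftovers[: total_firms - sum(alloc.values())]:
        alloc[r] += 1

    print(alloc)   # e.g. {'Middle Atlantic': 15, 'Pacific': 10, ...}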
Although firm prestige certainly has a major impact on choice of job,
it is fair to say that most students pick their jobs based first on
location. Leiter concedes this point in his study and poses the rhetorical
question, “How many [students] pick Columbia with the goal of big firm
practice in Dallas or Portland?” I agree with Leiter’s point; as the
employment location charts in U.S. News show, few Columbia students
choose to practice in Dallas or Portland. However, in order to consider
Leiter and Sullivan’s studies wholly accurate, one needs to assume that
students do not have any geographic preferences whatsoever, and will
choose jobs based solely upon prestige. In other words, for these rankings

126 Id.
127 Id.
to have efficacy, one would have to assume that a Columbia student would
rather work at the #1 Dallas firm rather than the #5 New York firm – an
assumption that neither I nor probably Professor Leiter are willing to
accept.
Fourth, Wehrli, Leiter, and Sullivan do not properly deal with the bias
introduced by graduates of LL.M programs. Leiter and Sullivan include
LL.M classes as part of a school’s total class size. Leiter justifies this
inclusion by pointing out that Martindale-Hubbell’s search engine does not
distinguish between J.D. graduates and LL.M graduates, and therefore he
has to include LL.M classes as part of total class size to avoid artificially
raising the rankings of schools with large LL.M programs, such as
Georgetown and NYU. Although including LL.M classes does avoid
positive bias towards schools with large LL.M programs geared towards
domestic students, it results in negative bias towards schools with LL.M
programs geared towards international students – many grades of such
programs do not intend to practice in the United States but instead return to
their home countries to work after graduation. Moreover, including LL.M
graduates in the rankings introduces yet another element that makes these
rankings a poor resource for prospective J.D. students who do not intend to
pursue an LL.M. Wehrli’s study, however, introduces the exact opposite
bias – although schools with LL.M programs geared towards internationals
are not artificially penalized, schools with large domestic LL.M programs
like NYU are artificially benefited. The only way any researcher could
have avoided introducing either bias to his study would have been not to
use Martindale-Hubbell’s search engine to gather the data.
Fifth, Martindale-Hubbell’s online directory is an incomplete and
inconsistent source of information. For most law firms, Martindale-Hubbell
includes biographical information on a firm’s associates, partners, and
counsel. However, several law firms only submit biographical information
to Martindale-Hubbell about their partners and counsel, and do not provide
the names of their associates, let alone where they went to law school.
To illustrate just how much damage excluding a firm’s associates can
have, I will use a concrete example. Cravath, Swaine & Moore, one of the
most elite law firms in the world, has not included its associates (except for
senior ones) in the online Martindale-Hubbell directory.128 According to
the dataset Sullivan released, Martindale-Hubbell’s search engine found 19
Columbia Law graduates working at Cravath. However, according to
Cravath’s own search engine, there are 87 Columbia Law graduates

128 Lawyer Locator, Cravath, Swaine & Moore LLP (2005), available at
http://www.martindale.com/ (follow “Find Lawyers & Firms; search “(Search For) Organization
Name” for “Cravath, Swaine & Moore, LLP”).
working there – including 69 associates not in Martindale-Hubbell!129 Since there already is geographical bias present towards schools that send large
percentages of their graduating classes to New York, excluding Cravath
and Sullivan associates while including associates for firms in other regions
horribly prejudices all of these studies, and explains why Columbia, NYU,
and similar schools are ranked so low.
Sullivan elaborates on several additional problems involved with
Martindale-Hubbell. For instance, Sullivan points out that Martindale-
Hubbell sometimes lists the same lawyer twice in its database.130 Sullivan
sums up his discussion of Martindale-Hubbell with this point:
[Given the inadequacies of the database,] a better way of doing this
kind of study would be to use firms that had search engines on their
websites that searched by law school.131
Sixth, Leiter and Sullivan’s choice of elite firms is open to question.
The issue of geographic bias aside, exclusively using the 2003 edition of
Vault’s guide to determine which firms are elite may be misleading. Even if
one assumes that regional Vault rank perfectly correlates with regional firm
prestige, one must remember that Leiter and Sullivan’s studies include
individuals who were hired over a period lasting several decades. While
these 45 firms might be the most elite in their region in 2003, they might
not have been the most elite in their region in 1983, or 1963.
Seventh, Leiter and Sullivan’s methodology favors large schools.
Leiter himself states that “without a doubt, two of the measures used in
calculating the overall rank are sensitive to the number of graduates,” and
concedes that this favors large schools such as Georgetown and Texas and
hurts smaller schools. It is unclear why Leiter and Sullivan chose to include
these two measures in their ranking formula knowing the bias they introduce.
This bias is not present in Wehrli’s per capita rankings since he only
measured per capita representation and nothing else in that set of rankings.
However, I do not believe that Wehrli’s ranking of schools based on the
number of top 100 firms employing at least one graduate reveals anything
useful about law school placement, since differing class sizes and differing
geographical and sectoral preferences are not taken into account.
There is one methodological problem that impacts Leiter and
Sullivan’s studies but not Wehrli’s. Leiter breaks a cardinal rule of research

129 Cravath, Swaine & Moore, Cravath Lawyers (2005), available at http://www.cravath.com/Cravath.html.
130 Sullivan states that he included these double counted lawyers in his study both times they
were listed in order to remain consistent with his stated methodology. See Michael Sullivan,
Methodology (2005), available at http://www.calvin.edu/admin/csr/students/sullivan/law/
method.htm.
131 See Michael Sullivan, Discussion (2005), available at http://www.calvin.edu/admin/
csr/students/sullivan/law/discussion.htm.
methodology – he started the study with a preconceived notion of which law schools are “national.” Leiter states that he studied the “usual suspects
for the top law schools,” which he defines as Yale, Harvard, Stanford,
Chicago, Columbia, NYU, Michigan, Virginia, Texas, Penn, Cornell,
Georgetown, Northwestern, Berkeley, and Duke.132 He also states that he
included Vanderbilt and UCLA because they are “two schools on the cusp
of this elite group.” In addition, he included Emory, Notre Dame,
Minnesota, Washington & Lee, and George Washington in his rankings “as
a check on the reliability of the results” because they are “very reputable,
but presumably less national schools.” These assumptions are unfounded.
To his credit, Leiter does include a prominent disclaimer that other schools
not studied might have outperformed Washington & Lee, Notre Dame,
George Washington, Minnesota, and UCLA – however, no such disclaimer
is present for the “usual suspects” and Vanderbilt. While it might very well
be true that the “usual suspects” are in a league of their own and Vanderbilt
and UCLA are on the cusp of this group, no academic research using
proper methodology exists to support this claim as it applies to employment
prospects. Similarly, while the “usual suspects” might be the top 15
according to U.S. News or Leiter’s faculty rankings, there has been no
evidence showing that U.S. News rank or Leiter’s faculty rankings correlate
with employment placement. In fact, Vanderbilt outperforming two of the
“usual suspects” in Leiter’s study should cast doubt on this assumption.
Rather than coming to such a conclusion before conducting his study and
limiting his research to such a small number of schools, Leiter might have
examined every ABA-accredited law school, as Wehrli did. Although
Sullivan examined three additional schools – Boston College, Boston
University, and Illinois – the same criticism applies to him as well.

IV. BEYOND LAW SCHOOL RANKINGS: A PROPOSAL FOR CHANGE

The overwhelming majority of law school rankings researchers have failed to produce a set of law school rankings that provide any real utility to
their audience, either because of a disconnect between purpose, audience,
and methodology or due to avoidable error introduced by the
experimenter’s poor research techniques. How, then, would one go about
creating a law school ranking study that actually has significant utility for
its intended audience? While most ranking studies are heavily flawed, a
few have managed to produce very useful results.

132 It is important to note that when Leiter included Texas in the “usual suspects for the top law schools” he was a faculty member at the University of Texas School of Law.
http://www.law.uchicago.edu/faculty/leiter/cv.html.
The Avery clerkship study133 shows us that it is very possible to create a set of law school rankings that are non-arbitrary, useful to their audience,
and do not suffer from significant amounts of avoidable error. However,
even this study runs into a pitfall inherent to all ranking schemes:
timeliness. While the Avery clerkship study is useful to prospective law
students seriously considering judicial clerkships, how useful will it be in
the year 2010, or 2040?134 No matter how great a law school ranking
system, it is a given that all rankings will have a short shelf life – while the
results of the Avery study may continue to be reliable for several years
after its initial publication, it is inevitable that as the years continue to pass
the study’s results will become more dated.135 Although the timeliness
issue could be avoided if rankings researchers updated their rankings annually or biannually, other than U.S. News few ranking systems have
actually been updated on a frequent basis.136
Perhaps the best way to remedy the timeliness problem is to move law
school rankings research beyond mere ordinal rankings. Rather than
obsessing over whether Virginia is ranked higher than U.C. Berkeley in a given ranking system, researchers should turn their attention to finding out why (or why not) Virginia is ranked higher than Berkeley. While some researchers have made significant attempts at uncovering the underlying determinants of rankings other than their own,137 most individuals who rank law schools have not attempted to determine the causes of a school's placement, beyond the occasional unsubstantiated speculation.138
The exception to this rule has been the AutoAdmit Law School
Employment Rankings study.139 In this study, data were collected from
1,295 employers on 15,293 law firm associates who graduated from law
school between 2001 and 2003.140 After adjusting for both regional141 and

133 Christopher Avery et al., The Market for Federal Judicial Law Clerks, 68 U. CHI. L. REV.
793 (2001).
134 Even today the study may be outdated: Avery et al. gathered their data from the Spring 2000 edition of the Judicial Yellow Book, and the market for federal judicial clerks has changed significantly since then.
135 For instance, over the decades many judges will retire or die, while others are confirmed to
take their place. These new judges will naturally bring their own biases into the clerkship hiring
process, and thus even within a decade there may be significant changes in the market for judicial
clerks.
136 Sullivan, for example, has never updated his law school rankings.
137 See, e.g., Jeffrey E. Stake, The Interplay Between Law School Rankings, Reputations, and
Resource Allocation: Ways Rankings Mislead, 81 IND. L.J. 229, 250-55 (2006).
138 Leiter, for instance, has occasionally attributed ranking shifts in his educational quality
rankings to various faculty moves. However, Leiter has made no attempt to isolate whether those
faculty moves actually caused those shifts or whether other factors were involved.
http://www.leiterrankings.com/archives/2000archives_appc_changes.shtml.
139 Anthony Ciolli, The Legal Employment Market: Determinants of Elite Firm Placement and
How Law Schools Stack Up, 45 JURIMETRICS J. 413 (2005).
140 Data collection took place between November 2004 and January 2005. The author obtained this information by examining biographies on law firm websites. Id. at 419.

sectoral preferences,142 as well as class size,143 all ABA-accredited law schools were ranked by entry-level associate placement at the nation's most elite law firms144 in nine regions of the United States.145 These regional rankings were then aggregated146 to generate a set of national elite law firm placement rankings for law schools that placed a significant number of their graduates in multiple geographic regions.147
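To make the aggregation step concrete, the sketch below illustrates one way regional placement figures of this kind might be combined into a single market-share-weighted national score. It is a schematic illustration only: the school names, placement counts, class sizes, and weighting formula are hypothetical assumptions rather than the study's actual data or methodology.

# A minimal, hypothetical sketch of the market-share-weighted aggregation
# described above. The school names, placement counts, class sizes, and
# weighting formula are illustrative assumptions, not the study's actual
# data or methodology.

REGIONAL_CUTOFF = 20  # loosely following the cutoffs described in notes 145 and 147

# Elite-firm placements per school and region, 2001-2003 (hypothetical numbers).
placements = {
    "School A": {"Midwest": 180, "Mid Atlantic": 95, "Pacific": 40},
    "School B": {"Midwest": 60, "Mid Atlantic": 210},
    "School C": {"Pacific": 130, "Midwest": 15},  # Midwest falls below the cutoff
}

# Adjusted graduating class sizes over the same period (also hypothetical).
class_size = {"School A": 600, "School B": 750, "School C": 500}

# Total elite-firm hiring per region, used here as the market-share weight.
region_totals = {}
for school_data in placements.values():
    for region, count in school_data.items():
        region_totals[region] = region_totals.get(region, 0) + count

def national_score(school):
    """Weight each qualifying regional placement rate by that region's share of
    total elite-firm hiring; return None if the school qualifies in fewer than
    two regions and therefore would not be ranked nationally."""
    qualifying = {
        region: count / class_size[school]
        for region, count in placements[school].items()
        if count >= REGIONAL_CUTOFF
    }
    if len(qualifying) < 2:
        return None
    total_weight = sum(region_totals[region] for region in qualifying)
    return sum(rate * region_totals[region]
               for region, rate in qualifying.items()) / total_weight

for school in placements:
    print(school, national_score(school))

Under a scheme of this kind, a school's national score rewards strong placement rates in the regions where elite hiring is most concentrated, which is one plausible reading of a market-share-based aggregation.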
However, this study did not stop with ordinal rankings of regional and
national elite law firm placement. Although other employment studies,
such as those by Wehrli and Leiter, stopped after determining which law
school was number one, the AutoAdmit study attempted to find out why the top-ranked school, Chicago, placed first, and why some
schools, such as Yale and Stanford, performed poorly relative to what one
might expect.148 Acknowledging that ordinal rankings “will have only
limited utility to current or prospective students or to career services
professionals,”149 the author decided to explore the determinants of national
elite law firm placement in order to answer these questions, for “[i]f these
deeper questions can be answered, prospective law students will know what
traits to value highly when considering law schools and academic
administrators will be able to identify ways that make their student bodies
more attractive to elite legal employers.”150
Such an approach both diminishes the timeliness problem and
enhances overall utility for the audience. The AutoAdmit study concludes
that, holding academic reputation constant, there are three factors that serve
as significant predictors of national elite firm placement: grading
systems,151 class rank disclosure policies,152 and the number of required first

141 See id. at 423 (“Student geographical preferences vary from school to school…. Failing to
adjust for these preferences can produce distortions.”)
142 See id. (“The percentage of students who choose to go into law firms is not constant among
law schools.”)
143 See id. at 422 (explaining how class size adjustments were made in order to isolate J.D.
students who graduated between 2001 and 2003).
144 Elite firms were defined as any law firm ranked by the American Lawyer or Vault. Id. at
416-418.
145 These nine regions were New England, Mid Atlantic, Midwest, West North Central, South
Atlantic, East South Central, West South Central, Rocky Mountains, and Pacific. Only schools
that sent more than 20 students to a given region between 2001 and 2003 were ranked in that
region. Id. at 424.
146 The regional rankings were aggregated based on market share. Id. at 427.
147 Schools that placed 20 or more graduates in at least two of the nine regions were ranked
nationally. Id.
148 Id. at 428.
149 Id. at 427-28.
150 Id. at 428.
151 Id. at 431-34.
152 Id. at 435-36.

year classes.153 Using this information, prospective law students can identify the factors they should weigh when judging a school's ability to place its graduates, even many years after the initial study was published. Similarly, law school administrators, rather than merely knowing how their law schools stack up against other law schools, will have a better idea of what they can do to improve elite firm placement relative to their peer schools. Based on the AutoAdmit study, one could argue that the best practices law schools should implement in order to maximize placement are a traditional letter-grade system,154 no disclosure of class rank to students or to employers,155 and more required first-year classes.156
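To illustrate the kind of analysis underlying such findings, the sketch below fits a simple regression of elite-firm placement on these three institutional traits while holding academic reputation constant. It is a schematic example under stated assumptions: the variable coding and the data are invented for demonstration, and the original study's actual model specification may well differ.

# Schematic example: regressing elite-firm placement on institutional traits
# while holding academic reputation constant. The data are synthetic and the
# variable coding is assumed for illustration; this is not the study's actual
# dataset or model specification.
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.DataFrame({
    "placement_rate":      [0.42, 0.35, 0.28, 0.22, 0.18, 0.15, 0.12, 0.10],  # share placed at elite firms
    "reputation":          [4.8, 4.6, 4.3, 4.0, 3.8, 3.6, 3.4, 3.2],          # peer reputation score (control)
    "letter_grades":       [1, 1, 0, 1, 0, 1, 0, 0],    # 1 = traditional letter grades
    "rank_nondisclosure":  [1, 0, 1, 0, 1, 0, 0, 1],    # 1 = class rank not disclosed
    "required_1l_courses": [7, 6, 7, 5, 6, 5, 4, 4],    # number of required first-year courses
})

model = smf.ols(
    "placement_rate ~ reputation + letter_grades"
    " + rank_nondisclosure + required_1l_courses",
    data=schools,
).fit()

# The coefficients on the three institutional traits show their association
# with placement after controlling for reputation; with synthetic data the
# estimates are meaningless, but the structure of the analysis is the point.
print(model.params)

With real data, the signs and significance of the three coefficients would indicate whether each trait is associated with stronger placement once reputation is held constant.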

153 Id. at 434-35.
154 See id. at 432 (“A system that generates outliers at the extremes, but leaves an
undifferentiated middle, might not benefit those in the very top of the class, who could one day
apply for Supreme Court clerkships, or those at the very bottom, whose failings are obvious, but
seems to strongly benefit the class as a whole, since most of the class ends up in the middle.”).
155 See id. at 435 (stating that class rank non-disclosure policies discourage the use of rigid
class rank cutoffs by employers and provide psychological benefits to law students during the
employment-seeking process).
156 See id. at 434 (“The more classes an individual takes, the more possibilities for
differentiation emerge.”).
