ANTHONY CIOLLI*
JOSHUA R. GARBER*
This article discusses the various systems used to rank law schools and methods for
evaluating their validity. The authors conclude that outcome-based rankings are likely
to offer the most utility to prospective students.
* Anthony Ciolli is Appellate Law Clerk to Chief Justice Rhys S. Hodge, Supreme Court of the
Virgin Islands. The opinions in this article are the author’s alone and do not reflect the views of
Chief Justice Hodge, the Supreme Court of the Virgin Islands, or the Virgin Islands judiciary.
* Joshua R. Garber is a Policy Fellow at the American Civil Liberties Union of Northern
California’s Death Penalty Program. The opinions in this article are the author’s alone and do not
reflect the views of the American Civil Liberties Union, its members, donors, or supporters.
1 Symposium, The Next Generation of Law School Rankings, 81 IND. L.J. (2006).
2 See LAW SCH. ADMISSION COUNCIL, NEW MODELS TO ASSURE DIVERSITY, FAIRNESS,
AND APPROPRIATE TEST USE IN LAW SCH. ADMISSIONS 4, 7, 10 (1999)
3 See Daria Roithmayr, Barriers to Entry: A Market Lock-In Model of Discrimination, 86
VA. L. REV. 727, 731-36 (2000) (“[L]aw schools have had to adopt the industry standard that
favors whites…. schools that want to maintain their national ranking or place graduates in
lucrative positions must admit students based on their [LSAT] scores…”)
4 See U.S. News, Law Methodology (2005), available at
http://www.usnews.com/usnews/edu/grad/rankings/about/06law_meth_brief.php.
5 If only a school’s median LSAT is used for rankings purposes, the 25th and 75th percentile
LSAT scores are irrelevant. By definition, half a law school’s incoming class will always have an
LSAT score at or below the median; thus, when only the median is used, it does not matter for
rankings purposes if a school with a 170 LSAT median has a 168 or a 162 as its 25th percentile.
Since blacks and Hispanics as a group have lower LSAT scores than whites and Asians, using
medians as opposed to interquartiles gave law schools the flexibility to admit minorities with
significantly below median LSAT scores without impacting their U.S. News rank. With this
change, some argue that law schools will feel significant pressure to raise their 25th percentile
figures, and thus would have to scale back their affirmative action programs. See
http://www.collegejournal.com/successwork/workplacediversity/20050421-bialik.html?refresh=on.
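The arithmetic behind this footnote can be illustrated with a short Python sketch; every score below is invented for illustration rather than drawn from any school's actual data.

    import statistics

    # Two hypothetical incoming classes with identical 170 medians but
    # different bottom quartiles; none of these scores are real data.
    class_a = [158, 162, 165, 168, 170, 170, 171, 172, 173, 175]
    class_b = [166, 168, 169, 169, 170, 170, 171, 172, 173, 175]

    for name, scores in [("A", class_a), ("B", class_b)]:
        median = statistics.median(scores)
        q25 = statistics.quantiles(scores, n=4)[0]  # 25th percentile estimate
        print(f"Class {name}: median={median}, 25th percentile={q25}")

    # Both classes report the same median (170), so a median-only ranking
    # cannot distinguish them even though their 25th percentiles differ.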
6 Theodore P. Seto, Understanding the U.S. News Law School Rankings, LOYOLA LAW
SCHOOL LEGAL STUDIES PAPER No. 2007-25, 7 (2007).
7 Alvin J. Esau, Competition, Cooperation, or Cartel: A National Law School Accreditation
Process for Canada?, 23 DALHOUSIE L.J. 183, 185 (2000) (“Partly as a result of the recent
surveys and rankings of law schools in Canada, there appears to be increased competition
between law schools to stay on top or move up some kind of ladder of prestige. There is increased
competition between schools to recruit students, charge them higher fees, get more money from
alumni, institute new programs, chairs, and institutes, and brag about job placements for
graduates.”)
Law school rankings come in a wide variety of forms, but one can
easily classify most ranking schemes into one of four distinct categories:
“reputation” rankings, “institutional characteristics” rankings, “outcomes”
rankings, and “comprehensive” rankings. In this section, I will briefly
describe each of these ranking archetypes as well as their common
subcategories.
A. Reputation Rankings
8 David C. Yamada, Same Old, Same Old: Law School Rankings and the Affirmation of
Hierarchy, 31 SUFFOLK U. L. REV. 249, 261-62 (1997).
2. Faculty Perception
3. Attorney/Judge Perception
20 Angela Cheng, Georgetown, Virginia Among Most Mentioned, THE NATIONAL LAW
JOURNAL (2004), available at http://www.law.georgetown.edu/news/releases/documents/nlj_000.pdf.
21 Id.
22 Christopher Avery, Mark E. Glickman, Caroline M. Hoxby & Andrew Metrick, A
Revealed Preference Ranking of U.S. Colleges and Universities (Dec. 2005), NBER Working
Paper No. W10803, available at SSRN: http://ssrn.com/abstract=601105.
23 Id.
24 http://www.autoadmit.com/studies/chalfin/chalfin.pdf
26 See, e.g., The Law School Admission Council, Deans Speak Out (2006), available at
http://www.lsac.org/Choosing/deans-speak-out-rankings.asp.
27 Leigh Jones, Law School Deans Feel Heat From Rankings, NATIONAL LAW JOURNAL
(2006), available at http://www.law.com/jsp/article.jsp?id=1146560723820
28 Russell Korobkin, Harnessing the Positive Power of Rankings: A Response to Posner and
Sunstein, 81 IND. L.J. 35, 37 (2006).
29 This honor goes to Learning & the Law, a magazine that published the first set of law
school rankings in 1975 in an article called “Adding up the Law Schools: A Tabulation and Rating
of Their Resources” by Charles D. Kelso. See
http://taxprof.typepad.com/taxprof_blog/2006/10/the_first_law_s.html,
http://www.elsblog.org/the_empirical_legal_studi/2006/10/the_first_ranki.html
30 U.S. News & World Report, Best Graduate Schools (March 9, 1990).
31 http://www.washingtonmonthly.com/features/2000/norc.html
32 Brian Leiter, From the Bowels of Cyberspace: The Myth of the “the Top 14” (March 19,
2006), available at http://leiterlawschool.typepad.com/leiter/2006/03/from_the_bowels.html.
33 Id.
34 Id.
While much has been written about how higher education rankings
impact educational institutions by shaping admissions policies and
35 In 1991 and 1992, Michigan displaced NYU for the sixth position. In 2007, Penn tied
Chicago for the sixth position, and in 2008 Berkeley was ranked sixth in the country.
36 Rankings available at http://www.cooley.edu/rankings/overall2008.htm
37 http://www.cooley.edu/rankings/intro_10th_general.htm
38 Id.
39 Brian Leiter, The Cooley Law School Rankings (Oct. 2005), http://leiterlawschool.typepad.com/leiter/2005/10/the_cooley_law_.html.
40 http://taxprof.typepad.com/taxprof_blog/2009/01/size-matters-.html
41 Rankings available at http://www.cooley.edu/rankings/overall2008.htm
affecting diversity, even more has been written about the utility of law
school rankings. U.S. News and other ranking systems have been heavily
maligned by both academic and non-academic writers. Although some
writers have defended law school rankings,42 the overwhelming majority of
articles about law school rankings have been negative. Individuals ranging
from university presidents43 to law professors44 to high school seniors45
have extensively criticized rankings for being, among other things,
arbitrary or otherwise useless. Most of these individuals could be best
described as reactionaries, because they not only attack U.S. News and its
methodology, but argue that law school rankings are inherently flawed and
maintain that society would be better off giving these rankings little
credence.
The most publicized reactionary attack against law school rankings
was organized by the Association of American Law Schools in 1998. On
February 18, 1998, a week before the expected release of the 1998 U.S.
News law school rankings, the AALS held a press conference where ten
deans and educators, led by John Sexton, then-dean of the New York
University School of Law, urged U.S. News to stop publishing its annual
law school rankings.46 At the press conference and the accompanying press
release, Sexton stated that the rankings were “misleading and dangerous,”
and argued that “a ranking system inherently assumes that every applicant
has identical needs and desires.”47 The AALS press conference coincided
with the release of an AALS-commissioned U.S. News validity study48 and
a letter denouncing law school rankings sent to 93,000 law school
applicants by the Law School Admission Council.49 The letter, signed by
164 law school deans, urged applicants to gather their own information
about law schools:
The idea that all law schools can be measured by the same yardstick
ignores the qualities that make you and law schools unique, and is
unworthy of being an important influence on the choice you are about to
make. As the deans of law schools that range across the spectrum of several
rating systems, we strongly urge you to minimize the influence of rankings
on your own judgment. In choosing the best school for you, we urge you to
get information about all the schools in which you might have some
interest.50
Although it is difficult to disagree with the letter’s underlying
principles, the deans’ argument rings hollow. While the deans urge
prospective students to “get information about all the schools in which you
might have some interest,” both law schools and the AALS have
historically sought to minimize the dissemination of such information. Paul
Caron and Rafael Gely summarize the AALS impact on 20th century legal
education quite well:
The AALS. . . removed competitive elements from the legal education
market and allowed law schools to behave in a monopolistic manner. Law
schools are different from the typical profit-maximizing firm (which seeks
monopoly power by controlling price and excluding rivals) and the typical
nonprofit firm (which seeks output maximization by distributing “its
bounty as widely and as equitably as possible”). Instead, law schools seek
to maximize “elitist preferences.” These preferences include. . . “freedom
of law schools to teach ‘the best’ law students,” “freedom from faculty
accountability to non-peer groups (such as students),” and “freedom to
operate in a non-commercial atmosphere.”
By eliminating. . . competition among law schools, the AALS
eliminated incentives to measure the success of a law school. To the extent
that all law schools offered the same courses, taught by faculty with similar
credentials, using the same methods, it became more difficult to identify
“winners” and “losers.” This elimination of measures of organizational
success was consistent with the elitist preferences of AALS members.
More established schools had little to fear from allowing lesser schools to
exist - as long as they were not too different (i.e., they operated within rigid
AALS standards). Other law schools not only enjoyed the protection
afforded by the cartel-like activities of the AALS, but also were in a
position to claim equality with their more established counterparts.
Standardization permitted every law school to claim to be as good as
whatever conventional wisdom suggested were the leading law schools in
the country.
The absence of measures of organizational success also eliminated
incentives to measure individual contributions. The ability of law schools
50 Id.
51 Paul L. Caron & Rafael Gely, What Law Schools Can Learn from Billy Beane and the
Oakland Athletics, 82 TEX. L. REV. 1483, 1507-08 (2004).
52 See Scott Baker, Stephen J. Choi & G. Mitu Gulati, The Rat Race as an Information
Forcing Device, 80 IND. L.J. (forthcoming 2005).
53 See id.; see also Korobkin, supra note 28.
54 Thomas, supra note 55, at 425.
55 Id.
56 http://officialguide.lsac.org/
B. The Case for Evaluation: Why the Reactionaries Miss the Point
While Thomas makes valid points, his analysis fails because he does
not properly distinguish U.S. News rankings from other law school
rankings.57 Though his criticisms of both U.S. News and reputation
rankings are justified to an extent, it is inappropriate to attack law school
rankings as a whole due to the perceived failings of one (albeit popular) set
of rankings. Thomas provides little support for this strongly-worded
generalization: “Without question, the world would be better off without
those rankings. They benefit no person or entity other than their
publisher.”58
Thomas’s position is akin to ‘no one should rank law schools because
no ranking can possibly be perfect.’ Because GPA and LSAT are not
perfect predictors of intelligence or law school performance, Thomas
believes they should not be used in a rankings scheme.59
Thomas is correct that LSAT, GPA, and student:faculty ratio are not
completely free of error. However, this does not make them useless
measures. After all, what in the world is completely free of any error
whatsoever? Let us apply Thomas’s high standard beyond just law school
rankings. If one were to take Thomas’s position to its natural conclusion,
one would have to advocate for the elimination of any field of study whose
research does not result in absolutely error-free results, including
economics, political science, psychology, sociology, and the medical and
natural sciences. While error is present in all of these fields, the presence of
error does not automatically mean we should discount all of their scholarship.
Researchers can still reach perfectly valid conclusions even in the presence
of reasonable error.
Although it is possible that the world would be better off without one
poorly written or shoddy piece of economics “research,” arguing that the entire
field of economics should be eliminated is indefensible and akin to throwing the baby
out with the bathwater. Even if U.S. News rankings are horribly flawed,
57 But see Thomas, supra note 55, at 426 (“If the intent of a ranking is merely to display
objective data in numerical order, leaving interpretation and assessment to readers, then the only
challenges one could raise would be to the accuracy or validity of the data.”). Although Thomas
does acknowledge this difference between U.S. News and other rankings when critiquing the U.S.
News methodology, he fails to make the same distinction in his later analysis, and paints all law
school rankings in a negative light. See infra note 69.
58 Thomas, supra note 55, at 456.
59 See Thomas, supra note 55, at 431 (“Implicit… is the assumption that ‘good’ students -
those who are a positive influence on their fellow students - can be identified solely or principally
by their GPA or LSAT score. More important, but far less subject to assessment, are whether a
law school’s students are emotionally stable, free of substance abuse or other self-defeating
behaviors, honest, kind and compassionate, and balanced in a professional environment that is
often intensely driven and beset by materialism. These factors cannot be measured by reported
statistics or by rankings….”).
arguing that all formal law school rankings result in a net negative effect,
regardless of the rankings’ methodology or purpose, is an extreme position
that is very difficult, if not impossible, to defend. Not all rankings are
inherently flawed or arbitrary. For example, institutional characteristic
rankings are, by definition, based on objective and verifiable facts. One
could very easily generate an ordinal ranking of law schools based on the
number of hours per week the law library is open, and if a prospective
student places an extremely high value on being able to study in the library,
it would be both rational and efficient to use such a ranking to decide
where to interview. Of course, such a ranking might lack utility if
prospective students place significantly higher values on other factors,
which they often do, but students can consider these issues when evaluating
law school rankings.
Keep in mind that in the absence of formal rankings, stakeholders in
legal education will simply rank law schools informally. Prospective law
students, rather than looking at U.S. News or other rankings, will create
their own personal set of law school rankings based on whatever data they
can informally obtain, potentially containing even greater flaws than U.S.
News.60 If law school rankings did not exist, legal employers would still
have to evaluate law school quality when making hiring decisions, even if
they had little or no knowledge of particular law schools.61
Since there are situations where rankings can be highly useful, as well
as situations where they can be useless or even destructive, a blanket
dismissal of all rankings provides little utility to anyone. Rather than saying
all current rankings are flawed, one should develop a standardized method
for evaluating the efficacy of ranking schemes. When evaluating any
ranking, one should pay careful attention to three factors: purpose,
audience, and methodology. I will now discuss each of these factors.
1. Purpose
Why does this ranking matter? What does this ranking measure?
60 See Jan Hoffman, Judge Not, Law Schools Demand Of a Magazine That Ranks Them
(1998), available at
http://query.nytimes.com/gst/fullpage.html?res=9806E4DD133FF93AA25751C0A96E958260&sec=&spon=&pagewanted=2,
where the Dean of New York University School of Law states that if reputation surveys asked
about Princeton Law School, Princeton, which does not have a law school, would nevertheless
appear in the top twenty.
61 Michael Berger, Why U.S. News Rankings are Useful and Important, 51 J. LEGAL EDUC.
487, 492 (2001).
These are the questions one should ask when evaluating the purpose of any
ranking.
Obviously, a law school ranking scheme intends to rank law schools,
but what is the ranking’s intended purpose? Does the ranker intend to
measure employment placement, selectivity, or faculty attitudes towards
vanilla ice cream?
One might think this is a very straightforward question, but even after
twenty-two years in existence, there is still no consensus about what U.S.
News is trying to measure. Some authors have assumed that the purpose of
the U.S. News rankings is to rate the quality of legal education.62 Others
state that U.S. News’s primary purpose is to rate schools based on how they
are perceived in the “real world” by employers. U.S. News itself does not
seem to know the purpose of its own rankings. While on one page U.S.
News says that the purpose of its graduate rankings is to provide an
“independent assessment of the academic quality of programs,”63 another
page states that one should use the U.S. News rankings to determine “how
diplomas from various schools will affect [applicants’] earning power” and
“how successful the schools are at preparing graduates for the bar exam.”64
Unless we are supposed to believe that U.S. News rank perfectly correlates
with academic quality, earning power, and bar exam preparation,65 it
appears that U.S. News provides its readers with contradictory purposes.
2. Audience
Who gains the most utility from these rankings? This is the question
to ask when trying to determine a ranking’s audience. While most rankings
tend to target prospective students, the audience could be any demographic
group, from faculty to law librarians to the general public.
Audience is as crucial to the evaluation of law school rankings as it is
for evaluating legal writing. A 70-page paper might make a splash as a law
review article, but that same paper would make a horrible letter to the editor
of a local newspaper. An ordinal ranking of law schools based on
admissions officer salaries may have a lot of utility for current and
prospective law school admissions officers; however, this same ordinal
ranking would be virtually worthless to prospective students. Like good
writers, effective law school ranking providers will tailor their rankings
towards the appropriate audience.
3. Methodology
66 For a more in-depth explanation of social science research methods, see generally (?)
67 For example, Ben & Jerry’s may be opening up an ice cream store in the faculty lounge of
every law school in America and would like to know what flavors different faculty prefer so that
they know approximately how much of a certain flavor to keep in stock at each location.
68 http://www.jdmba.northwestern.edu/faq.htm.
69 Northwestern’s class of 2011 has 26 J.D./M.B.A. students out of a total class size of 242,
meaning about 10.7% of the class consists of J.D./M.B.A. students. See
http://www.jdmba.northwestern.edu/classprofile.htm,
http://www.law.northwestern.edu/admissions/profile/
70 http://www.law.georgetown.edu/Admissions/jd_general.html#earlyassurance.
71 http://www.law.upenn.edu/prospective/jd/faq.html#jointdegreeapp.
scores of the rest of the class, or realizing that the number of students
without LSAT scores at these schools is so low that, even if their scores
differed from the rest of the cohort, the medians would not be
affected. Alternatively, the researcher could disclose that this problem
exists and explain the biases it introduces, but argue that since LSAT scores
are the best standardized measure of student quality available the benefits
of including LSAT score as a rankings variable outweigh the costs. While
the researcher has several options available to him or her, fixing the error
itself is not one of them.72
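A brief Python sketch illustrates the point. The 26-of-242 proportion mirrors the Northwestern figures in footnote 69; the individual scores themselves are invented.

    import random
    import statistics

    random.seed(0)

    # Hypothetical class of 242 students, 26 of whom (J.D./M.B.A.s) have no
    # LSAT score, mirroring footnote 69's proportions. Scores are invented.
    reported = [random.randint(165, 174) for _ in range(242 - 26)]
    median_reported = statistics.median(reported)

    # Worst-case assumptions about the 26 missing scores.
    median_if_all_low = statistics.median(reported + [150] * 26)
    median_if_all_high = statistics.median(reported + [180] * 26)

    print(median_reported, median_if_all_low, median_if_all_high)
    # Because the median depends only on the middle of the distribution,
    # even these extreme assumptions shift it by only a point or so.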
Avoidable error, in contrast, involves error that does not have to exist
in an experiment or research project. While unavoidable error is inherent to
the project, avoidable error is completely preventable. Avoidable error is
only present due to the researcher’s negligence or inexperience.
Avoidable error can manifest itself in many ways, but it seems
especially common in studies that involve survey research. Poor sampling
has the potential to make an otherwise valid study virtually useless. For
example, if a researcher wanted to examine how the American people feel
about same-sex marriage, he or she would introduce avoidable error if he or
she only included eighteen-year-old college freshmen at Berkeley in the
sample. Because eighteen-year-old Berkeley freshmen are not an accurate
reflection of the demographic makeup of the United States, such a survey
would reveal nothing about how Americans in general feel about the issue.
A similar survey about abortion given only to Catholic priests would also
reveal nothing about the general public’s attitudes towards the issue. In
both cases, the error is avoidable because the researcher could have given
the survey to a sample that properly reflects the demographic makeup of
the United States.
Law school rankings, too, may have avoidable error due to poor
sampling techniques. U.S. News in particular has been justifiably criticized
for the poor construction of its practitioner reputation survey.73 However,
other types of avoidable error may exist, because a ranker may fail to make
necessary adjustments. For instance, when ranking schools based on
student:faculty ratio, an individual would introduce avoidable error by only
using full-time J.D. enrollment in the calculation. Such a ranking would
72 Of course, if this specific researcher had access to unlimited time and funds, he or she
could obtain the names of every single student enrolled at Northwestern, Georgetown, etc. who
did not take the LSAT and offer to pay each of them a large sum of money to take the LSAT so
he or she could calculate the “true” median LSAT for each school. While such a scenario is
theoretically possible, in the real world researchers do not have unlimited resources, and various
privacy laws exist that would make this scenario virtually impossible in practice.
73 See, e.g., Brian Leiter, The U.S. News Law School Rankings: A Guide for the Perplexed
(2003), available at http://www.utexas.edu/law/faculty/bleiter/rankings/guide.html (criticizing the
U.S. News practitioner survey for having a very low response rate and a geographically biased
sample).
ignore part-time and LL.M./S.J.D. students, and thus inflate the ranking of
schools with part-time or graduate programs. Other sources of avoidable
error include basic mistakes, such as inputting incorrect data or making a
mathematical error.
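A short sketch, using invented enrollment figures, shows how large the distortion from counting only full-time J.D. students can be.

    # Hypothetical enrollment figures; none of these numbers come from the
    # article.
    full_time_jd = 600
    part_time_jd = 200
    llm_sjd = 100
    faculty = 50

    naive = full_time_jd / faculty                                # 12.0
    adjusted = (full_time_jd + part_time_jd + llm_sjd) / faculty  # 18.0

    print(f"full-time J.D. only: {naive}:1; all students: {adjusted}:1")
    # A ranking built on the naive figure flatters schools with large
    # part-time or LL.M./S.J.D. programs, the avoidable error described
    # above.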
Note that not all instances of avoidable error will automatically cause
a set of rankings to become invalid. When evaluating law school rankings
with avoidable error, one must consider the impact that error has on the
rankings themselves. Let us consider one example: a law professor ranks
every law school based on median LSAT score. On April Fool’s Day, her
research assistant sneaks into her office, opens the professor’s Excel file
that contains all the rankings, and subtracts five LSAT percentile points
from every law school’s median; thus, a school with a 99.4th percentile
median now appears to have a 94.4th percentile median, a school with a 77th
percentile median appears to have a 72nd percentile median, and so on. The
professor, absent-minded as she is, never notices the change, and uploads
the ordinal rankings to her website. Although there is significant avoidable
error, the ordinal rankings themselves have not been damaged by the error
– the 1st school is still the 1st school, the 69th school is still the 69th school,
and the absolute difference in percentile points between any two schools
remains the same. While the underlying data was tampered with, the
damage done was evenly distributed among every law school, and thus the
rankings may still serve their intended purpose. However, if the research
assistant had altered seven random schools and did not make any changes
to the other schools in the dataset, then the avoidable error would certainly
cause the rankings to lose efficacy and become invalid.
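A small Python sketch makes the distinction precise; the school names and percentiles are invented.

    # Hypothetical median LSAT percentiles keyed by school.
    medians = {"School A": 99.4, "School B": 91.0,
               "School C": 77.0, "School D": 74.0}

    def ordinal(scores):
        """Schools sorted from highest to lowest score."""
        return sorted(scores, key=scores.get, reverse=True)

    # The prank: subtract five percentile points from every school.
    uniform = {school: pct - 5 for school, pct in medians.items()}

    # The damaging alternative: alter only some schools.
    selective = dict(medians)
    selective["School C"] -= 5  # School C (77.0) now falls below School D

    print(ordinal(medians) == ordinal(uniform))    # True: order preserved
    print(ordinal(medians) == ordinal(selective))  # False: order changed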
74 See, e.g., Robert Morse, Changing Law School Ranking Formula, U.S. NEWS, June 26,
2008, http://www.usnews.com/blogs/college-rankings-blog/2008/6/26/changing-the-law-school-ranking-formula.html.
75 See, e.g., http://www.deloggio.com/usnews/usnwrpl.htm (membership required).
Much has been written about the flaws with comprehensive rankings,
although most criticisms have focused on U.S. News. Such a focus is not
surprising, given the enormous influence of the U.S. News rankings. Other
comprehensive rankings, such as the Cooley rankings, have been virtually
ignored by prospective law students and other legal education
stakeholders.77
There is little doubt that prospective law students are U.S. News’s
intended audience. However, the U.S. News rankings suffer from
significant purpose and methodology problems. As discussed earlier in this
paper,78 U.S. News does a poor job explaining what its rankings are
supposed to measure, claiming that its rankings are an “independent
assessment of the academic quality of programs”79 while also stating that
its rankings measure “how diplomas from various schools will affect
[applicants’] earning power” and “how successful the schools are at
preparing graduates for the bar exam.”80
Ultimately, whether U.S. News intends to measure academic quality,
earning power, bar exam preparation, or some combination of all three is a
moot question. The U.S. News rankings do not pass the validity test on
methodological grounds, regardless of which purpose is the true one. While
many articles have been written about the problems with the U.S. News
methodology, let us imagine that U.S. News found a way to correct for all of these flaws, but
otherwise kept its methodology the same. What would these new and
improved U.S. News rankings tell us? Would they tell us about academic
quality? Not really – if the purpose of the U.S. News rankings is to
measure academic quality, why does their methodology include factors
such as bar passage rates, percent of the class employed nine months after
graduation, and financial aid expenditures per student? Perhaps the
rankings are meant to measure the earning power of graduates. But if this is
U.S. News’s true purpose, why include factors such as the number of titles
in the law school’s library, or acceptance rate? Such extraneous factors do
nothing but create distortions in the rankings. There is absolutely no
empirical evidence to suggest that purchasing 10,000 library books will
have any impact on graduate earning power, and no reason to believe that
decreasing financial aid expenditures per student will decrease faculty
quality.
Such additional factors create nothing but noise, even when evidence
shows a relationship between a given factor and the outcome of interest. For example,
academic reputation is known to strongly correlate with national
employment placement.88 However, why should a rankings scheme whose
purpose is to predict employment prospects or earning power include
academic reputation as an input if information on employment placement is
already available? While there is a correlation between the two variables,
the correlation is not perfect, and even if the correlation is very strong,
actual employment placement data will by definition be better than a proxy.
deflation at Princeton University, a top school that produces graduates with lower GPAs than
many schools below its caliber.
86 The main problem with how U.S. News calculates faculty resource ranking is that at no
point does the magazine consider the strength of faculty in both publishing and teaching ability
(instead, it focuses on expenditures per student, student:faculty ratio, and library resources). No
points are assigned to faculties that are particularly scholarly, such as Chicago’s. Also, there is
no place for the students to rank the quality of faculty/instruction at their school. Thus, although
a school may have a high “faculty resource” score, this in no way means that the faculty at the
school is better than the faculty at another school. See
http://www.usnews.com/articles/education/best-graduate-schools/2008/03/26/law-methodology.html.
87 U.S. News bases 4% of its overall ranking on employment at graduation and 14% of its
overall ranking on employment nine months after graduation (and the magazine states that “For the
nine-month employment rate, 25 percent of those whose status is unknown are counted as
employed.”). Students, in picking a law school, are hoping to be employed when they graduate.
At the very least, students hope to be employed six months after graduation, when they are required
to start making loan repayments. As a result, U.S. News is acting arbitrarily in placing 14% of its
overall ranking on employment nine months after graduation.
http://www.usnews.com/articles/education/best-graduate-schools/2008/03/26/law-methodology.html.
88 See Anthony Ciolli, The Legal Employment Market: Determinants of Elite Firm Placement
and How Law Schools Stack Up, 45 JURIMETRICS J. 413 (2005).
U.S. News and other comprehensive rankings are often invalid due to
a disconnect between purpose and methodology. Reputation rankings
generally suffer from similar problems, although different subsets of
reputation rankings also experience other validity issues.
The validity problems with single individual perception reputation
rankings closely parallel the problems with U.S. News. Depending on the
ranking’s stated purpose, the validity problem may lie in either methodology or
purpose. A single individual perception ranking will rarely experience a
methodology problem if the purpose of the ranking is to determine how a
specific individual views the law schools.90 But once again, even if such a
ranking is not methodologically flawed, it lacks a useful purpose. After all,
why should a prospective law student care about how Jay Brody ranks the
law schools?91 Are there any tangible benefits to attending Brody’s 4th
ranked law school over his 17th ranked school?92
Not surprisingly, most single individual perception reputation
rankings claim that their rankings strongly correlate with some other factor.
89 Stephen P. Klein and Laura Hamilton, The Validity of the U.S. News and World Report
Ranking of ABA Law Schools (1998), available at http://www.aals.org/reports/validity.html
90 Perhaps the only way such a ranking could experience a methodology problem is if a third
party were asked to provide the rankings on behalf of the other person. For instance, if Justice
Scalia were asked to list Justice Breyer’s top ten law schools, he might provide a different list
from the one Justice Breyer would provide.
91 http://web.archive.org/web/20030727215004/http://brody.com/law/resources/brody_law_school_rankings.php.
92 Id.
Jay Brody does not claim that the purpose of his rankings is merely to tell
the world about his own perceptions about law school quality – instead,
Brody says that prospective law students should pay attention to his
rankings because they represent how law schools are perceived in the
employment market.93 In other words, one is led to believe that the purpose
of the Brody rankings is to measure career placement.
Most single individual reputation rankings, therefore, run into a
similar methodology problem as with U.S. News and other comprehensive
rankings: if the purpose of the Brody rankings is to rank law schools based
on career placement, should not the Brody rankings actually try to
meaningfully measure career placement? Brody has made no visible
attempt to include actual career placement variables in his rankings
scheme, nor does he provide evidence that his individual perceptions about
law school quality correlate with a law school’s ability to place its
graduates. When there is such an obvious disconnect between purpose and
methodology, the ranking scheme does not pass the validity test.
Notably, there is a limited situation where a single individual
perception ranking can be both methodologically sound as well as useful to
the intended audience. This can occur when the individual ranking the law
schools is also in a position to make hiring decisions for specific positions
that the audience finds desirable. For example, let us assume that a
Supreme Court justice decided to rank the law schools, in order to let
prospective law students know how he perceives the quality of various
schools. Unlike Brody, many individuals place great weight on
a Supreme Court justice’s view of law school quality. This is not just due to
the prestige of the justice’s position, but also because the justice has direct
control over a commodity that is highly desirable to many law students:
which individuals get a clerkship with that justice. Thus, a Supreme Court
justice’s single individual perception reputation rankings could have great
utility, for a prospective law student would know that attending a law
school ranked highly by the justice may enhance his or her chances of obtaining
a Supreme Court clerkship. Similar situations may occur if the hiring
partner of an elite law firm that many prospective law students aspire to join,
such as Wachtell or Cravath, decided to rank the law schools. However, it
seems no such individuals or firms have decided to rank the law schools
based solely on their own perceptions.94
93 Id.
94 Although Judge Richard Posner has written an article about law school rankings that
includes his own ranking scheme, this scheme is based on amalgamating other rankings, and does
not necessarily reflect how Judge Posner himself perceives law schools. See Richard Posner,
Law School Rankings, 81 IND. L.J. 13 (2006).
Brian Leiter’s Educational Quality Survey, the most well known law
school faculty perception rankings, does not suffer from the same
confusion of purpose as U.S. News – Professor Leiter explicitly states that
the purpose of his survey is to measure the scholarly quality of law school
faculty,95 and he does not try to mislead readers into believing that his
rankings are meant to correlate with student employment prospects or any
other unrelated variable.96 Similarly, Professor Leiter’s methodology, while
certainly open to criticism,97 is still sound enough to serve the rankings’
intended purpose; that is, Leiter’s survey actually attempts to measure how
faculty perceive other faculty.
The Leiter rankings, however, do suffer from a flaw, albeit a
somewhat less serious one. While the purpose and methodology may be
sound, Leiter considers prospective J.D. students his audience.98 Certainly,
some prospective J.D. students may have an interest in Leiter’s findings.
However, whether one likes it or not, faculty perception of the scholarly
quality of faculty at other law schools has little to no direct impact on
prospective law students. This is not to say that faculty quality is not
important to students – however, faculty perception of the quality of other
faculties bears little utility for prospective students. Faculty perceptions of
the scholarly quality of other faculties do not correlate well with faculty
teaching quality. In fact, Leiter’s own teaching quality rankings confirm
that there is little to no relation between student evaluations of teaching
quality and faculty perception of scholarly quality.99
Leiter argues that his rankings “may be the decisive factor for those
interested in pursuing careers in law teaching.”100 While it is conceivable
that some prospective law students might already have an interest in legal
academia prior to enrolling in law school and might therefore take faculty
perception of scholarly quality very seriously, basing an enrollment
decision on the Leiter rankings may not be any wiser than basing an
95 http://www.leiterrankings.com/faculty/1999faculty_reputation.shtml.
96 Id.
97 For example, Richard G. Heck, Jr. argues that Leiter’s rankings of graduate schools should
not be based on a single factor, the quality of the faculty’s research, because the correlation
between this and the quality of a student’s graduate education is “arguably small”. Leiter may be
making the same mistake by ranking law schools by faculty research.
http://frege.brown.edu/heck/philosophy/aboutpgr.php.
98 http://www.leiterrankings.com/faculty/1999faculty_reputation.shtml.
99 http://www.leiterrankings.com/faculty/2003faculty_best.shtml.
100 http://www.utexas.edu/law/faculty/bleiter/rankings/students.html (last visited: January
2009).
101 http://www.leiterrankings.com/faculty/index.shtml
102 http://www.leiterrankings.com/students/index.shtml
103 http://www.leiterrankings.com/faculty/index.shtml
104 http://www.utexas.edu/law/faculty/bleiter/rankings/philo.html
105 http://www.utexas.edu/law/faculty/bleiter/rankings/economics.html
106 http://www.leiterrankings.com/jobs/2006job_teaching.shtml
107 Id.
produced no graduates who obtained tenure-track jobs without also earning an
additional graduate degree.108 While faculty members perceive some
second tier schools like Cardozo as having faculty in certain specialty areas
that are significantly better than faculty at top schools such as Harvard and
Yale, the data plainly shows that an individual seeking to enter legal
academia is far better off attending “inferior” J.D. programs at Harvard and
Yale.
Furthermore, even if the faculty perception rankings did not have such
flaws, the nature of the J.D. as a three year program pursued by future
lawyers and future academics alike makes prospective J.D. students an
inappropriate audience for this ranking. First, many individuals who apply
to J.D. programs thinking they intend to become law professors end up
foregoing the academy without even trying to obtain a tenure-track job.
The reasons for this can range from failings as students (finishing in the
bottom half of the class, not making law review, and so on) to realizing that
academia is simply not a good fit (they discover that they hate writing
academic articles, would prefer to earn more money in private practice, and
so forth). This sort of invisible attrition should not come as a surprise given
that, unlike applicants to Ph.D. programs, virtually all law students apply to law school
without ever setting foot in a law school class or knowing anything about
law as an academic discipline. Given that only a small number of
individuals from any law school class will actually seek to become legal
academics, even among those who entered law school with that ambition, it
is irresponsible for J.D. students to make enrollment decisions based on
how their J.D. school’s faculty is perceived by faculty at other law schools.
Second, and perhaps more importantly, faculty perception rankings
are extremely fluid. Even something as minor as one faculty member
switching schools, retiring, or dying can result in a significant shift in how
an entire school’s faculty is perceived by others. Because faculty departures
and additions occur relatively frequently, faculty perception rankings can
change even more significantly than U.S. News rankings between the time
the prospective student makes the enrollment decision, graduates, and
enters the teaching market. While faculty departures will naturally have
little negative impact on schools such as Yale that are already perceived as
highly prestigious by non-academics,109 such departures, even if only
temporary, could have more significant effects on schools that are
traditionally viewed as having less prestige. Leiter himself illustrates this
by quoting a student on his blog: “If you are going to Tufts to work with
Dennett and he is gone or not teaching in one or both of your first
semesters, then it is not clear that Tufts’ distinguished faculty will be of
108 Id.
109 http://leiterlawschool.typepad.com/leiter/2005/08/how_to_rank_law.html.
110 http://leiterreports.typepad.com/blog/2004/10/ma_programs_in_.html.
111 For example, it would take at least three years to obtain an LL.M./J.S.D. in Intercultural
Human Rights from Saint Thomas University. http://www.stu.edu/Academics/Programs/LLMJSDinInterculturalHumanRights/ProgramInformation/tabid/1227/JSDinInterculturalHumanRights/tabid/1237/Default.aspx
112 Cass R. Sunstein, Ranking Law Schools: A Market Test?, 81 IND. L.J. 25, 28 (2006).
113 While schools could gather similar information through looking at cross-admit yields,
response rates may be so low as to make such data misleading.
114 http://www.autoadmit.com/studies/chalfin/chalfin.pdf.
115 Id.
than Cornell. However, the Avery study does not explain why students
prefer Dartmouth to Cornell. Do they think Dartmouth is more prestigious,
or places its graduates in better jobs at graduation? Or do most students
prefer Hanover to Ithaca? Perhaps students feel Dartmouth has better
dorms. In any case, we simply do not know why Avery’s revealed
preference survey respondents prefer Dartmouth to Cornell – thus, Cornell
administrators do not know what areas they are perceived as weak relative
to Dartmouth administrators. As a result, Cornell cannot determine how it
can improve itself as an institution. Does it need to market the benefits of
living in Ithaca better, or should it spend its resources improving its on
campus dining services? Maybe it is all just a perceived lack of selectivity.
While there is some utility in knowing that students prefer Dartmouth,
Cornell cannot do anything to improve its situation if it does not know why
Dartmouth is ranked higher in the revealed preference survey.
116 Brian Leiter, The Most National Law School Based on Job Placement in Elite Law Firms
(2003), available at http://www.leiterrankings.com/jobs/2003job_national.shtml
117 The following are Leiter’s 45 firms, broken down by city. Atlanta: King & Spalding,
Alston & Bird, Kilpatrick Stockton; Boston: Hale and Dorr, Ropes & Gray, Goodwin Procter;
Chicago: Kirkland & Ellis, Sidley Austin Brown & Wood, Mayer Brown Rowe & Maw;
Cleveland: Jones Day Reavis & Pogue, Baker & Hostetler;
Dallas: Akin Gump Strauss Hauer & Feld, Haynes and Boone; Washington DC: Covington &
Burling, Williams & Connolly, Wilmer Cutler & Pickering; Houston: Baker Botts, Fulbright &
Jaworski, Vinson & Elkins; Los Angeles: Latham & Watkins, Gibson Dunn & Crutcher,
O’Melveny & Myers; Miami: Holland & Knight, Greenberg Traurig; Milwaukee: Foley &
Lardner; Minneapolis: Dorsey & Whitney; New York: Cravath, Swaine & Moore, Wachtell
Lipton Rosen & Katz, Sullivan & Cromwell; Philadelphia: Morgan Lewis & Bockius, Dechert,
Drinker Biddle & Reath; Pittsburgh: Kirkpatrick & Lockhart; Portland: Miller Nash, Stoel
Rives; Richmond: Hunton & Williams, McGuireWoods; San Diego: Gray Cary Ware &
Freidenrich; San Francisco: Morrison & Foerster, Wilson Sonsini Goodrich & Rosati, Orrick
Herrington & Sutcliffe; Seattle: Perkins Coie, Davis Wright Tremaine, Foster Pepper &
Shefelman; and St. Louis: Bryan Cave.
118 Leiter’s 22 schools are Yale, Harvard, Stanford, Chicago, Columbia, NYU, Michigan,
Virginia, Texas, Penn, Cornell, Georgetown, Northwestern, Duke, Vanderbilt, UCLA, Emory,
Washington & Lee, Notre Dame, Minnesota, and George Washington.
119 Michael Sullivan, Law School Job Placement (2005), available at
http://www.calvin.edu/admin/csr/students/sullivan/law/index.htm.
120 John Wehrli, Top 30 Law Schools at the Top 100 Law Firms, Adjusted for School Size
(1996), available at
http://web.archive.org/web/19980520150138/http://wehrli.ilrg.com/amlawnormt30.html.
121 Wehrli used the American Lawyer’s list of the top 100 firms for his study. John Wehrli,
Top 30 Law Schools at the Top 100 Law Firms, Adjusted for School Size (1996), available at
http://web.archive.org/web/19980520150138/http://wehrli.ilrg.com/amlawnormt30.html
122 The schools Wehrli ranked 11-30 are, in order: Cornell, Penn, Duke, Georgetown, UCLA,
USC, Texas, George Washington, Illinois, Boston University, Vanderbilt, Florida, Fordham, UC
Hastings, Boston College, Washington & Lee, Notre Dame, UC Davis, Washington University in
St. Louis, and Case Western. John Wehrli, Top 30 Law Schools at the Top 100 Law Firms,
Adjusted for School Size (1996), available at
http://web.archive.org/web/19980520150138/http://wehrli.ilrg.com/amlawnormt30.html
number of top 100 law firms that have at least one attorney present from
the school.123 The top ten schools according to that ranking are, in order,
the following schools: Harvard, Virginia, Columbia, Georgetown, NYU,
Yale, Michigan, Chicago, Penn, and Cornell.124
National Placement Rankings, Wehrli vs. Leiter vs. Sullivan
123 John Wehrli, The 31 Truly National Law Schools (1996), available at
http://web.archive.org/web/19980520150145/wehrli.ilrg.com/amlawnational.html.
124 In this set of rankings, Harvard and Virginia appear to be tied for 1st place with 100 firms
each, while Columbia, Georgetown, NYU, and Yale are tied for 3rd place with 99 firms each and
Michigan and Chicago are tied for 7th place with 98 firms and Cornell and George Washington
are tied at 10th with 94 firms. However, for unstated reasons, Wehrli did not designate these ties
but gave schools absolute ordinal ranks. The schools ranked 11th to 31st are, in order, George
Washington, Stanford, Duke, Berkeley, Boston University, Northwestern, UCLA, Vanderbilt,
Texas, Fordham, Boston College, UC Hastings, Notre Dame, Rutgers (both campuses), Emory,
Wisconsin, USC, Illinois, Washington University in St. Louis, Tulane, and Minnesota. Wehrli
says that he deliberately stopped his rankings at #31 because the gap between #31 Minnesota and
the unspecified school ranked at #32 was very significant, and thus #31 is his cutoff for the “truly
national schools.” Listing schools in an ordinal ranking without explaining how ties were
decided is inherently unfair and unhelpful to a law school consumer.
individuals hired a very long time ago. Neither Wehrli, Leiter, nor Sullivan limited
their studies to associates hired within the last few years. Instead, they
studied all attorneys hired at these firms, whether they were non-
partnership track counsel who graduated in the early 1990s, senior partners
who graduated in the 1960s, or, in Leiter’s case, first year associates who
graduated in 2002. Leiter acknowledges this problem, and concedes that his
study “reflect[s] not only who Ropes & Gray was hiring in 1995, but some
of whom they were hiring in 1970.” Leiter also acknowledges that this bias
had a significant impact on his rankings: schools like Michigan and Duke,
which Leiter claims were more prominent in the past, may be artificially
inflated in his rankings, while schools like NYU, which may not have been
as well regarded in the past but have risen to greater prominence in recent
years, may be ranked low relative to their contemporary placement. Leiter
attempts to circumvent this problem by stating that his study “invariably
reflects a school’s reputation among elite firms over a long period of time.”
Although this may be true, and while all three studies have academic merit,
they have little practical value for prospective law students or other
interested parties, who are primarily concerned with contemporary hiring
trends.
Second, the researchers have not adjusted for varying career
preferences amongst students from different schools. When making per
capita adjustments, Wehrli, Leiter, and Sullivan divide the total number of
attorneys by graduating class size. However, the percentage of students
who choose to go into law firms is not constant among all the law schools
in their studies. According to the 2005 edition of U.S. News and World
Report’s law school rankings guide, 80% of Columbia Law School’s
graduating class of 2002 was employed at a law firm. In contrast, only 72%
of NYU’s graduating class of 2002 was employed at a law firm.125 Since
NYU graduates are entering private practice in lower proportions than
Columbia graduates, one can expect that using total class size, all else
equal, would artificially inflate Columbia’s ranking relative to NYU’s.
Although Leiter and Sullivan do not address this problem, Wehrli openly
acknowledges it, noting that Harvard ranked higher than Yale even though
its class is three times larger because “a higher % of Yale grades [sic] enter
government service and politics than Harvard.” A far better way to adjust
for the effect of differing class size would have been to divide the total
number of attorneys by the number of graduating students employed at law
firms, since this would have virtually eliminated this problem of sectoral
self-selection.
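The difference between the two adjustments is easy to see in a brief sketch. The 80% and 72% firm-employment rates come from the U.S. News figures quoted above; the attorney counts and class sizes are invented for illustration.

    # Per-capita placement computed two ways. The firm-employment shares
    # come from the text above; attorney counts and class sizes are invented.
    schools = {
        # school: (attorneys at elite firms, class size, share entering firms)
        "Columbia": (400, 380, 0.80),
        "NYU": (400, 430, 0.72),
    }

    for school, (attorneys, class_size, firm_share) in schools.items():
        per_graduate = attorneys / class_size                   # studies' method
        per_firm_bound = attorneys / (class_size * firm_share)  # correction
        print(f"{school}: {per_graduate:.2f} per graduate, "
              f"{per_firm_bound:.2f} per firm-bound graduate")

    # Normalizing by graduates who actually entered firms removes the
    # penalty on schools whose students disproportionately choose
    # government or academia.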
125 U.S. News, The Top 100 Law Schools (2005), available at
http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-law-schools/rankings
(U.S. News premium login required for access).
Third, the researchers did not adjust for the geographical employment
preferences of students. This is probably the most significant
methodological flaw of all three studies. Just as student career preferences vary
from school to school, the geographical preferences of a school’s students
differ as well. For example, according to the 2005 edition of U.S. News and
World Report’s law school rankings guide, 78% of Columbia’s 2002
graduating class settled in the Middle Atlantic (NY, PA, NJ) region, while
only 6% settled in the Pacific (AK, CA, HI, OR, WA) region. In contrast,
9% of Berkeley’s 2002 graduating class settled in the Middle Atlantic
region, while 75% settled in the Pacific region.126
While Leiter does not mention the Wehrli study, he correctly states
that the AmLaw Top 100 list, which Wehrli used, is heavily biased in favor
of the Northeast, particularly New York City-based firms. Thus, one can
assume that Leiter’s attempt at selecting only the best three firms in a given
region was an attempt to fix the geographical bias present in Wehrli’s study
against schools that do not send many graduates to the northeast. However,
by doing this, Leiter, and later Sullivan, created a new bias. For example,
of the 45 firms Leiter and Sullivan included in their study, only seven are
located in the Middle Atlantic region, while twelve are located in the
Pacific region. The problem is even greater than it first appears. Within
these regions certain states dominated – 77% of Columbia graduates stayed
in New York, and 69% of Berkeley graduates stayed in California.127
However, while only three New York firms are in Leiter and Sullivan’s
studies, Leiter and Sullivan include seven California firms, which
artificially boosts the rankings of schools like Berkeley, UCLA, and
Stanford, while artificially lowering the rankings of schools that place a
large proportion of graduates in the Northeast such as Columbia, NYU,
Penn and Cornell. A better means of selecting firms would have been to
select a number of firms proportional to the market share of each regional
market relative to the national legal market.
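A sketch of that proportional selection rule follows; the market-share figures below are placeholders rather than actual data.

    # Allocate the 45 survey slots to regions in proportion to each region's
    # share of the national legal market. The shares below are invented.
    market_share = {
        "Middle Atlantic": 0.30,
        "Pacific": 0.18,
        "South Atlantic": 0.15,
        "Midwest": 0.14,
        "Other": 0.23,
    }
    TOTAL_FIRMS = 45  # the size of Leiter and Sullivan's firm list

    allocation = {region: round(share * TOTAL_FIRMS)
                  for region, share in market_share.items()}
    print(allocation)
    # {'Middle Atlantic': 14, 'Pacific': 8, 'South Atlantic': 7,
    #  'Midwest': 6, 'Other': 10}, versus the seven Middle Atlantic and
    # twelve Pacific firms actually used. (A real implementation would
    # reconcile rounding so the allocations always sum to the total.)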
Although firm prestige certainly has a major impact on choice of job,
it is fair to say that most students primarily pick their jobs first based on
location. Leiter concedes this point in his study and poses the rhetorical
question, “How many [students] pick Columbia with the goal of big firm
practice in Dallas or Portland?” I agree with Leiter’s point; as the
employment location charts in U.S. News show, few Columbia students
choose to practice in Dallas or Portland. However, in order to consider
Leiter and Sullivan’s studies wholly accurate, one needs to assume that
students do not have any geographic preferences whatsoever, and will
choose jobs based solely upon prestige. In other words, for these rankings
126 Id.
127 Id.
to have efficacy, one would have to assume that a Columbia student would
rather work at the #1 Dallas firm than at the #5 New York firm – an
assumption that neither I nor, in all likelihood, Professor Leiter is willing to
accept.
Fourth, Wehrli, Leiter, and Sullivan do not properly deal with the bias
introduced by graduates of LL.M programs. Leiter and Sullivan include
LL.M classes as part of a school’s total class size. Leiter justifies this
inclusion by pointing out that Martindale-Hubbell’s search engine does not
distinguish between J.D. graduates and LL.M graduates, and therefore he
has to include LL.M classes as part of total class size to avoid artificially
raising the rankings of schools with large LL.M programs, such as
Georgetown and NYU. Although including LL.M classes does avoid
positive bias towards schools with large LL.M programs geared towards
domestic students, it results in negative bias towards schools with LL.M
programs geared towards international students – many graduates of such
programs do not intend to practice in the United States but instead return to
their home countries to work after graduation. Moreover, including LL.M
graduates in the rankings introduces yet another element that makes these
rankings a poor resource for prospective J.D. students who do not intend to
pursue an LL.M. Wehrli’s study, however, introduces the exact opposite
bias – although schools with LL.M programs geared towards internationals
are not artificially penalized, schools with large domestic LL.M programs
like NYU are artificially benefited. The only way any researcher could
have avoided introducing either bias to his study would have been not to
use Martindale-Hubbell’s search engine to gather the data.
Fifth, Martindale-Hubbell’s online directory is an incomplete and
inconsistent source of information. For most law firms, Martindale-Hubbell
includes biographical information on a firm’s associates, partners, and
counsel. However, several law firms only submit biographical information
to Martindale-Hubbell about their partners and counsel, and do not provide
the names of their associates, let alone where they went to law school.
To illustrate just how much damage excluding a firm’s associates can
have, I will use a concrete example. Cravath, Swaine & Moore, one of the
most elite law firms in the world, has not included its associates (except for
senior ones) in the online Martindale-Hubbell directory.128 According to
the dataset Sullivan released, Martindale-Hubbell’s search engine found 19
Columbia Law graduates working at Cravath. However, according to
Cravath’s own search engine, there are 87 Columbia Law graduates
128 Lawyer Locator, Cravath, Swaine & Moore LLP (2005), available at
http://www.martindale.com/ (follow “Find Lawyers & Firms”; search “(Search For) Organization
Name” for “Cravath, Swaine & Moore, LLP”).
132 It is important to note that when Leiter included Texas in the “usual suspects for the top
law schools” he was on the faculty of the University of Texas School of Law.
http://www.law.uchicago.edu/faculty/leiter/cv.html.
133 Christopher Avery et al., The Market for Federal Judicial Law Clerks, 68 U. CHI. L. REV.
793 (2001).
134 Even today the study may be outdated: Avery et al. gathered their data from the Spring
2000 edition of the Judicial Yellow Book, and since then the market for federal judicial clerks has
changed significantly.
135 For instance, over the decades many judges will retire or die, while others are confirmed to
take their place. These new judges will naturally bring their own biases into the clerkship hiring
process, and thus even within a decade there may be significant changes in the market for judicial
clerks.
136 Sullivan, for example, has never updated his law school rankings.
137 See, e.g., Jeffrey E. Stake, The Interplay Between Law School Rankings, Reputations, and
Resource Allocation: Ways Rankings Mislead, 81 IND. L.J. 229, 250-55 (2006).
138 Leiter, for instance, has occasionally attributed ranking shifts in his educational quality
rankings to various faculty moves. However, Leiter has made no attempt to isolate whether those
faculty moves actually caused those shifts, or if other factors were involved.
http://www.leiterrankings.com/archives/2000archives_appc_changes.shtml.
139 Anthony Ciolli, The Legal Employment Market: Determinants of Elite Firm Placement and
How Law Schools Stack Up, 45 JURIMETRICS J. 413 (2005).
140 Data collection took place between November 2004 and January 2005. The author
obtained this information by examining biographies on law firm websites. Id. at 419.
141 See id. at 423 (“Student geographical preferences vary from school to school…. Failing to
adjust for these preferences can produce distortions.”)
142 See id. (“The percentage of students who choose to go into law firms is not constant among
law schools.”)
143 See id. at 422 (explaining how class size adjustments were made in order to isolate J.D.
students who graduated between 2001 and 2003).
144 Elite firms were defined as any law firm ranked by the American Lawyer or Vault. Id. at
416-418.
145 These nine regions were New England, Mid Atlantic, Midwest, West North Central, South
Atlantic, East South Central, West South Central, Rocky Mountains, and Pacific. Only schools
that sent more than 20 students to a given region between 2001 and 2003 were ranked in that
region. Id. at 424.
146 The regional rankings were aggregated based on market share. Id. at 427.
147 Schools that placed 20 or more graduates in at least two of the nine regions were ranked
nationally. Id.
148 Id. at 428.
149 Id. at 427-28.
150 Id. at 428.
151 Id. at 431-34.
152 Id. at 435-36.
year classes.153 Using this information, prospective law students can
identify the factors they should weigh when judging a school’s
ability to place its graduates, even many years after the initial study was
published.
knowing how their law schools stack up to other law schools, will have a
better idea of what they can do to improve elite firm placement relative to
their peer schools. Based on the AutoAdmit study, one could argue that the
best practices law schools should implement in order to maximize
placement are a traditional letter grade grading system,154 no class rank
disclosure to students or to employers,155 and more required first year
classes.156