Scott Patrick
Professor Abbey
GOVT 610
21 April 2011
People like facts. Facts, as the saying goes, do not lie; they are proven reflections of reality,
separate from all prejudice and normative judgment. In an uncertain world, they are the anchors we can
hold on to, trust in and rely upon. Yet what we often consider "factual" is sometimes anything but. In his
2001 book, Damned Lies and Statistics: Untangling Numbers From the Media, Politicians, and Activists,
sociologist Joel Best expounds on how, despite the importance popularly accorded to them, facts – in the form of statistics about the social sciences – fall victim to a litany of blunders, distortions and
exploitation. As they travel from the methodology blueprint to our eyes and ears, statistics are poorly
formed, garbled in their relay to consumers or used as weapons in ideological battles in which truth is
among the casualties. Best's book contains valuable lessons for students, scholars and those seeking to
apply their knowledge to the real world, and aids in making his readers more astute when dealing with
statistics and their perversion – including the statistical follies we see in the media today.
Best devotes his book to examining the use of social statistics in an age when most people accept such statistics as facts, even when they are wrong. This stems from a modern propensity to require that facts be empirical and definitive; ideally, they should not be open to interpretation. Using statistics in sociology therefore reflected the desire "to bring the authority of science to
debates about social policy" (Best, 2001, p. 13). Yet statistics do not simply emerge immaculate from the
ether. People create them, and people have agendas. Activists generally wish to create awareness about
a problem, and as such, may be inclined to distort statistics to make a social problem seem larger than it
is. In response, those with interests opposed to an activist's agenda may manipulate statistics to
minimize the issue (pp. 15-17). Meanwhile, the public often does not know how to interpret the
numbers thrown around in these conflicting campaigns. Many people are innumerate, meaning that
certain numerical concepts bewilder them, prompting them to accept massaged statistics rather than
scrutinize them closely (pp. 19-20). Best encourages people to think critically about who creates or cites
a statistic and what hidden motives they may possess, if any. Once we identify the source of a statistic,
we can then make reasonable assumptions about what response they are trying to provoke from us (pp.
27-28).
Having prompted us to ask about the who and why behind statistics, Best analyzes how
people generate faulty statistics, including how estimates become facts. Studying society is not as
straightforward as counting a readily observable phenomenon; it may take some time before some
social problems attract notice, and even established ones like illegal drug use and rape depend on
people accurately reporting when and how they happen. The uncertainty surrounding these cases creates a "dark figure" of unknown instances, leading activists to guess what the "dark figure" is – and,
because they are usually predisposed to exaggerate the problem, they tend to guess higher than the
reality (pp. 33-34). The "dark figure" necessitates guessing, but such guessing becomes troublesome
when advocates treat the guess as a truth, investing significant passion in defending it as such (p. 36).
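The "dark figure" can be made concrete with a short sketch. All numbers below are hypothetical and are mine, not Best's; they simply show how an assumed reporting rate drives the resulting estimate:

```python
# Hypothetical illustration of the "dark figure": unreported instances
# of a phenomenon that official counts miss.

reported_cases = 10_000      # what surveys or police records capture
reporting_rate = 0.40        # assume only 40% of incidents are reported

# If the reporting rate were known, the true total could be estimated:
estimated_total = reported_cases / reporting_rate
dark_figure = estimated_total - reported_cases

print(f"Estimated total incidents: {estimated_total:,.0f}")  # 25,000
print(f"Dark figure (unreported):  {dark_figure:,.0f}")      # 15,000

# In practice the reporting rate itself is a guess, so an activist
# assuming only 20% reporting would double the estimate:
print(f"Activist's estimate:       {reported_cases / 0.20:,.0f}")  # 50,000
```

The arithmetic is trivial, which is exactly Best's point: the estimate is only as good as the guessed reporting rate hiding inside it.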
Another issue with producing statistics is defining the problem. A researcher who assumes a very broad definition of what constitutes an instance of a phenomenon will find many more cases than one who narrows the definition to very specific instances. Activists, who as noted above
prefer large numbers, may adopt broad definitions and end up including false positives, instances of a
phenomenon that should not be included in the research but are. Opponents of the activists will use a
narrow definition that yields false negatives, instances of a phenomenon that should be included in a study but are not (p. 40). It is essential that we examine the exact wording of a definition and ask whether it is a logical description of the phenomenon (p. 44). It is understandably impossible to forge an unwavering consensus about what researchers should include in a definition, but it is still important that researchers state their definitions explicitly and apply them consistently.
Measuring a phenomenon also proves difficult. Best uses survey research as an example, noting
how questions, when phrased in a particular way, may ignore the nuances behind an opinion (under
what conditions a person might approve of abortion, for instance) or influence respondents to prefer
one response due to the language of a question (pp. 45-48). Furthermore, researchers may decide to interpret what they uncover in a particular way without disclosing their reasoning for that interpretation
(pp. 48-51). This relates to the definition problem and the biases activists and their opponents may bring
to their research, and requires consumers of statistics to return to the who and why questions
mentioned previously.
As researchers usually cannot study every instance of a problem as closely as they would like,
they must draw samples from the overall population and then generalize about the population based on
the sample. If the sample studied does not reflect the characteristics that define a population, however, it does not matter how large the sample is; it will simply produce an erroneous result (pp. 53-54). Activists in particular may latch on to a sensational but atypical
instance of a phenomenon to draw attention to the problem (p. 56). Random sampling helps to ensure
representative samples, but reducing melodrama in statistics requires honesty on promoters' parts.
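Best's warning about unrepresentative samples can be illustrated with a small simulation. The population, rates and scenario below are entirely hypothetical (my own construction, not from the book); they show how a large convenience sample can mislead far worse than a small random one:

```python
import random

random.seed(0)

# Population: 100,000 people, 10% of whom experience some phenomenon.
# (Affected individuals are listed first to simulate clustering in one
# easily reached subgroup.)
population = [1] * 10_000 + [0] * 90_000

def rate(sample):
    """Fraction of a sample that exhibits the phenomenon."""
    return sum(sample) / len(sample)

# A small random sample roughly matches the true 10% rate.
random_sample = random.sample(population, 500)

# A much larger convenience sample drawn mostly from where cases cluster
# (e.g., interviewing only at a crisis center) is badly inflated.
convenience_sample = population[:4_000] + population[10_000:11_000]

print(f"True rate:                  {rate(population):.2f}")         # 0.10
print(f"Random sample (n=500):      {rate(random_sample):.2f}")
print(f"Convenience sample (n=5000): {rate(convenience_sample):.2f}")  # 0.80
```

The convenience sample is ten times larger than the random one, yet it overstates the true rate eightfold, which is why sample size alone guarantees nothing.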
Once research produces statistics, there remains potential for their misuse. Best discusses how
statistics mutate, drastically changing their message. The reasons may be innocent enough: innumerate individuals often misunderstand simple statistics (p. 76) as well as complex ones (p. 82), garbling data when they repeat it, and the garbled versions are then repeated by others. Others, however, deliberately report data in ways calculated to persuade recipients of a particular point of view (p. 94). Regardless
of the precise cause, statistics tend to change because people prefer dramatic figures to moderate ones.
Statistics can be useful in making comparisons, yet we must employ care to guarantee that a fair
comparison is being made. When researchers compare instances of a phenomenon over time, it is
possible that methods of measurement for that phenomenon have changed (p. 99) or are no longer
accurate, as when inflation modifies monetary value (p. 103). At other times, it is vital to keep context in mind, such as in comparisons between geographic locales (p. 109) or different social groups (p. 113), as intervening factors may make an unmodified comparison inappropriate. The circumstances surrounding a comparison matter as much as the numbers themselves.
Finally, Best builds upon his point earlier in the book about people with different interests
utilizing statistics. When these individuals or groups compete against one another, a "stat war" takes
place (p. 129). It behooves observers of these contending causes to question the definitions, numbers and interpretations adopted by each side and to view them from an informed perspective (pp. 151-
152). While competing interests in these wars need not be deceptive in the statistics they use, they are
certainly not neutral messengers. They have motivations that color their research and how they present
it.
Best's book is valuable from several different perspectives. For a student, his exhortation to look critically at statistics fits with a more general imperative not merely to memorize and regurgitate
information one is taught. While he spends a large amount of time discussing how activists, government
officials, the media and others frame data in order to sponsor certain points of view, something similar
occurs in academia. There are different schools of thought within the academic fields, with some being
presently fashionable while others are long-standing orthodoxies. Many social science instructors will
indoctrinate their pupils into a particular school of thought, subtly guiding them into repeating
conventional wisdom. They assign certain classic tomes to their syllabi, and in the pages of these tomes,
the student will find studies (including statistics) that defend the deeply ingrained theories within the
field. While this practice eases newcomers into territory they would not be familiar with, it also
reinforces rather than challenges the facts upon which the field rests. Edward Said (1978), by contrast,
attacked traditional Western studies of the Middle East for adopting false Eurocentric assumptions that
had long served as the baseline for that field. Similarly, Samuel P. Huntington (1968) questioned modernization theory in comparative politics and proposed that crafting strong, autonomous institutions was more crucial in developing countries than economic expansion or form of government. As students of their respective fields, they dared to reject viewpoints resting on false foundations, and did so by being critical of what their instructors were teaching them rather than accepting it wholesale.
Perhaps the most useful element of Best's book for the student, however, is the chapter
warning about comparing "apples and oranges". When learning from case studies especially, students are prone to assume that what holds true in one instance will repeat itself in others. As Best points out, this is not always the case. Usually, instructors warn students against practicing universalism, in which researchers impose values, ideas, policies and other concepts upon other cases without pausing to consider how the setting may differ and therefore produce different results. Context matters, and Best strengthens this basic truth by imploring consumers of statistics to weigh all the facts surrounding a comparison before drawing conclusions.
For the political scientist or public administrator, Best's book serves as a blueprint for using
statistics more honestly – or dishonestly. Although Best intends the book to pass lessons on to average individuals so they can be more cautious when encountering statistics, someone working in a
political profession could resolve to avoid the sort of errors Best claims activists, researchers and
officials fall prey to, as well as the tricks they may consciously use to further their objectives. Conversely, a
less scrupulous person in the same vocation could take note of the bad practices Best describes and use
them to his or her advantage. A political scientist conducting a survey may note Best's mention of how
some researchers word survey questions in such a way as to get a particular response and strive to
phrase questions so they are as neutral as possible. Another political scientist doing the same research,
working for a think tank with an ideological bent and hoping to please his employers, may heed the
same passage, realize that this is a practice within the field (even if an ethically questionable one) and
phrase the questions to get responses in accordance with the think tank's ideas. Despite his designs to
tell a cautionary tale, Best's book could act as a handbook for manipulating data. After all, as he states
many times, the problems surrounding the misuse of statistics have abounded due to the innumeracy of
the masses and the general tendency among the media to grab on to big numbers. Until it becomes as aware of the abuse of statistics as Best hopes, the public remains vulnerable to manipulation.
As an audience for statistics, however, managers and public administrators could use Best's
book as a warning to be more discerning when interest groups influence them to set policy. Activists will
target them in order to get them to enact one scheme over another and, unless they are discriminating,
managers or administrators may become dupes of misleading statistics that serve narrow interests
rather than the general welfare. They might recognize that research conducted in their municipality or
within their organization derives from a sample that is unrepresentative of the citizens in the area or the
organization's members. Alternatively, they might glean that researchers have predicated projections for crime rates or fiscal performance on questionable estimates or incorrect reports. At participatory events like a town hall meeting or an employer-employee interview, they might hear again and again a false statistic that has made its way into common knowledge and be able to correct it (even if those repeating it resist the correction).
A scholar may decide to use Best's book when designing his or her research, again for good or
for ill. If, as a student, he or she became aware of the false facts troubling his or her field, he or she may
use Best's book as a guideline to do new, original research devoid of the problems presently plaguing
established studies. Alternatively, the scholar may choose to adopt underhanded ploys to "tweak" his
or her research in order to turn heads once his or her work is published. (Of course, scholars who are
published tend to be peer-reviewed, which means there is a greater chance their methodology will be
questioned than if the research was presented for mass consumption, as is usually the case with think
tanks' reports.) Ideally, however, the scholar will avoid the pitfalls outlined by Best: define concepts clearly, consistently present estimates as estimates, and employ measurement methodologies free of the fallacies of unrepresentative sampling. The scholar should also refrain from interpreting data in such a way as to arrive at conclusions the data does not overtly support; and if
interpretation does take place, it must have a rational and overtly stated basis.
The scholar may also see Best's book as an indication that some of the authority attributed to
quantitative data in the social sciences has been overblown. Numbers and formulas appear to be hard, untainted objects, but when a scholar applies them to the social sciences, rife with abstractions that are
difficult to pin down and study, it may be more appropriate to use qualitative research. Qualitative
research, though less precise, has the benefit of putting a researcher into the thick of the phenomenon under analysis. Rather than surveying a sample with identical questions to obtain limited
responses that can then be measured, a scholar might attain more enlightenment about an issue by
going among a group, experiencing what they experience and building up trust and eventually access to
previously unknown data. Despite being more time-consuming and convoluted than quantitative
research, qualitative research can reveal facets of social problems that, as Best states, are more complicated than quantitative research sometimes makes them out to be.
Though Best's book is now a decade old, the statistical errors and manipulations it depicts are just as topical today as they were then. An exploration of recent news articles supplies a few examples. In a
recent online Computer World article, data from the Bureau of Labor Statistics showed a higher
unemployment rate for computer support specialists (7.7%) in 2010 relative to other white-collar
professions, such as lawyers (1.5%) and pharmacists (1.2%) (Machlis, 2011). In a subsequent interview
with Computer World, David Foote, CEO of IT workforce analyst firm Foote Partners, claimed that the
statistic was misleading because the Labor Department's Standard Occupational Classification defines computer support specialists in overly narrow "old pure-technology" terms (limited to network engineers
and administrators), failing to include workers who oversee online security or social media development
(Eckle, 2011). This complaint falls into the category of bad definitions, as it argues that the Labor Department excludes jobs that should be counted as information technology jobs – an example of false negatives (Best, 2001, p. 40). Foote does not counter with his own statistics in his
interview with Computer World, but it is worth noting that he does have an interest in questioning the
Bureau of Labor Statistics data. As the head of an IT workforce analyst firm, it is to his advantage for the
IT industry to appear less vulnerable to high unemployment rates. If it were particularly vulnerable, the industry might contract or disappear altogether, leaving his firm with less work to do. His complaint is not
coming from a neutral position and it is right to recognize this, although whether his point is valid or not
depends on a subjective judgment about whether the Labor Department's definition of what constitutes
an IT job is too broad or too narrow. Best encourages us to make our own decision based on sensible deliberation, and considering that many workers have had to adopt skills associated with information technology to adjust to innovations like social media, Foote's argument has merit, even if it is not the last word.
"Dark figures" also continue to cast a long shadow in the media, as do the guesses that
accompany them. On March 29, 2011, potential U.S. presidential candidate and former Republican U.S.
Senator Rick Santorum said that the "reason Social Security is in big trouble is we don't have enough
workers to support the retirees" because "a third of all the young people in America are not in America
today because of abortion." The Web site PolitiFact.com, which is operated by the St. Petersburg Times
and investigates claims made by political figures, looked into the statement to see if abortions were
really having such a significant effect on the U.S. population. In addition to disproving the contention
that a third of pregnancies end in abortion (it is less than a quarter, according to 2003 statistics from the
U.S. Census Bureau and the Centers for Disease Control and Prevention), Louis Jacobson, one of
PolitiFact.com's researchers, also took issue with Santorum's assumption that every abortion lowers the
U.S. population by one person. Jacobson consulted with the non-profit Guttmacher Institute, whose
spokesperson pointed to a 2005 survey of 1,200 women who reported having had abortions. A
"substantial minority" said they "expected to have children after the abortion" (Jacobson, 2011).
Abortion carries a number of "dark figures", the most obvious of which is that women who have them
performed may not report them, perhaps because of some sense of shame or privacy. In this case,
however, the "dark figure" was the degree to which aborted pregnancies reduced the overall U.S.
population, something that would be extremely difficult to measure without following the lives of
women who have had abortions for an extended period (and, once again, that would be limited to the
women willing to acknowledge that they had abortions). Santorum made an assumption (based on a statistic not borne out by the data) without phrasing it as such; to the casual listener, it sounded like an established fact.
According to a recent blog post by David Kreutzer (2011) of The Heritage Foundation, President Barack Obama engaged in what Best refers to as an "apples and oranges" comparison – that
is, using statistics to compare data that do not make for a fair comparison. President Obama has stated
that, because the United States only has about 2% of the world's oil reserves, drilling those reserves
would fail to meet the United States' consumption of 25% of the world's oil (actually 22%, according to
Kreutzer). Kreutzer argues that President Obama is comparing oil reserves to oil production, which are two distinct quantities:
Imagine that we cut our use of petroleum to one barrel per year, and the rest of the world eliminated its consumption entirely. We would still only have 2 percent of the world's reserves, but we would consume 100 percent of the world's production. (Kreutzer, 2011)
The Heritage Foundation is a well-known, widely cited conservative think tank, so it is not surprising that it would criticize a statistic used by a Democratic president, but again we should judge Kreutzer's criticism on whether it is reasonable. Regardless of where one's politics fall on the left-to-right spectrum, oil reserves and oil production are indeed two very distinct things and should not be conflated with one another. Kreutzer's criticism is valid, and it would seem as though President Obama did in fact compare apples and oranges.
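Kreutzer's distinction reduces to simple arithmetic. The sketch below uses only the rounded percentages cited above, purely as illustration; the "arbitrary units" are my own device for showing that the two shares have different denominators:

```python
# Reserves share and consumption share are ratios over different totals
# (a stock of oil in the ground vs. an annual flow of production), so
# one does not constrain the other.

world_reserves = 100.0      # arbitrary units of oil in the ground
world_production = 100.0    # arbitrary units produced per year

us_reserves = 2.0           # ~2% of world reserves (cited figure)
us_consumption = 22.0       # ~22% of world consumption (Kreutzer's figure)

print(us_reserves / world_reserves)        # 0.02
print(us_consumption / world_production)   # 0.22

# Kreutzer's thought experiment: shrink world production to match a
# tiny U.S. demand; the U.S. then consumes 100% of production while
# still holding only 2% of reserves.
world_production = us_consumption = 1.0
print(us_consumption / world_production)   # 1.0
print(us_reserves / world_reserves)        # still 0.02
```

Because the denominators move independently, a country's reserves share can stay fixed at 2% while its share of production swings anywhere from 0% to 100%.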
While the title of his book comes from a cynical quote about statistics (that they are a type of
lie), Best does not desire us to be cynics but rather critics. Just as our automatic reaction when
presented with data should not be to ingest it without looking at it sideways first, we should not dismiss
the data out of hand because it must be inherently corrupt. There are many stages at which data may be
corrupted, from the production stage to the point where it becomes repeatedly circulated among
elements of the population or even the population at large. People are flawed creatures, and as much as
we may love facts for their positive assurance, the human imprint upon them means "facts" can be
flawed as well. Examples of this proliferate, up to and including the present day. The solution lies in taking personal responsibility and treating statistics with healthy skepticism as well as a good dose of fairness. Only through rigorous examination will we be able to separate the true, factual statistics from the damned lies.
Works Cited
Best, Joel (2001). Damned Lies and Statistics: Untangling Numbers From the Media, Politicians, and Activists. Berkeley, CA: University of California Press.
Eckle, Jamie (2011, April 18). Career Watch: Misleading government stats on IT employment. Computer World. Retrieved from
http://www.computerworld.com/s/article/355798/Career_Watch_Misleading_government_sta
ts_on_IT_employment
Huntington, Samuel (1968). Political Order in Changing Societies. New Haven, CT: Yale University Press.
Jacobson, Louis (2011, March 31). Rick Santorum says one of every three pregnancies ends in an abortion. PolitiFact.com.
meter/statements/2011/mar/31/rick-santorum/rick-santorum-says-one-every-three-
pregnancies-end/
Kreutzer, David (2011, March 31). Quit Repeating Nonsensical Oil Statistics! The Heritage Foundation.
Machlis, Sharon (2011, February 18). Tech unemployment higher than white-collar average. Computer World. Retrieved from
http://www.computerworld.com/s/article/9210078/Tech_unemployment_higher_than_white_
collar_average
Said, Edward (1978). Orientalism. New York: Pantheon Books.