
A fallacy is an argument that uses poor reasoning. An argument can be fallacious whether or not its conclusion is true.[1][2] A fallacy can be either formal or informal. An error that stems from a poor logical form is sometimes called a formal fallacy or simply an invalid argument. An informal fallacy is an error in reasoning that does not originate in improper logical form.[3] Arguments committing informal fallacies may be formally valid, but still fallacious.[4]

Fallacies of presumption fail to prove the conclusion by assuming the conclusion in the proof. Fallacies of weak inference fail to prove the conclusion due to insufficient evidence. Fallacies of distraction fail to prove the conclusion due to irrelevant evidence, like emotion. Fallacies of ambiguity fail to prove the conclusion due to vagueness in words, phrases, or grammar.[5]

Some fallacies are committed intentionally (to manipulate or persuade by deception),
others unintentionally due to carelessness or ignorance.
Formal fallacy
Main article: Formal fallacy
A formal fallacy is a pattern of reasoning that is always wrong. This is due to a flaw in the
logical structure of the argument which renders the argument invalid.
The presence of a formal fallacy in a deductive argument does not imply anything about
the argument's premises or its conclusion. Both may actually be true, or may even be
more probable as a result of the argument, but the deductive argument is still invalid
because the conclusion does not follow from the premises in the manner described. By
extension, an argument can contain a formal fallacy even if the argument is not a
deductive one: for instance an inductive argument that incorrectly applies principles of
probability or causality can be said to commit a formal fallacy.
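As a brief illustration (an example added here, not taken from the cited sources), the invalid pattern known as affirming the consequent can be written schematically as:

% A minimal LaTeX sketch (amsmath/amssymb assumed) of the invalid form
% "affirming the consequent": the conclusion does not follow even when
% both premises happen to be true.
\[
\begin{array}{l}
P \rightarrow Q \\
Q \\
\hline
\therefore\ P
\end{array}
\]

For instance, "if it rained, then the street is wet; the street is wet; therefore it rained" follows this pattern: even if both premises are true, the street could be wet for some other reason, so the conclusion does not follow.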
Common examples
Main article: List of fallacies § Formal fallacies
Aristotle's Fallacies
Aristotle was the first to systematize logical errors into a list. His "Sophistical Refutations" (De Sophisticis Elenchis) identifies thirteen fallacies, which he divided into two major types: those depending on language and those not depending on language.[6] These are called verbal fallacies and material fallacies, respectively. A material fallacy is an error in what the arguer is talking about, while a verbal fallacy is an error in how the arguer is talking. Verbal fallacies are those in which a conclusion is obtained by improper or ambiguous use of words.[7]

Whately's grouping of fallacies
Richard Whately divided fallacies into two groups: logical and material. According to Whately, logical fallacies are arguments where the conclusion does not follow from the premises. Material fallacies are not logical errors because the conclusion does follow from the premises. He then divided the logical group into two groups: purely logical and semi-logical. The semi-logical group included all of Aristotle's sophisms except ignoratio elenchi, petitio principii, and non causa pro causa, which are in the material group.[8]

Intentional fallacies
Sometimes a speaker or writer uses a fallacy intentionally. In any context, including academic debate, a conversation among friends, political discourse, or advertising, the arguer may use fallacious reasoning to try to persuade the listener or reader, by means other than offering relevant evidence, that the conclusion is true.
Examples include the speaker or writer: diverting the argument to unrelated issues with a red herring (ignoratio elenchi); insulting someone's character (argumentum ad hominem); assuming they are right by "begging the question" (petitio principii); making jumps in logic (non sequitur); identifying a false cause and effect (post hoc ergo propter hoc); asserting that everyone agrees (bandwagoning); creating a "false dilemma" ("either-or fallacy") in which the situation is oversimplified; selectively using facts (card-stacking); making false or misleading comparisons (false equivalence and false analogy); and generalizing quickly and sloppily (hasty generalization).[9]

In humor, errors of reasoning are used for comical purposes. Groucho Marx used fallacies of amphiboly, for instance, to make ironic statements; Gary Larson employs fallacious reasoning in many of his cartoons. Wes Boyer and Samuel Stoddard have written a humorous essay teaching students how to be persuasive by means of a whole host of informal and formal fallacies.[10]

Deductive fallacy
Main articles: Deductive fallacy and formal fallacy
In philosophy, the term formal fallacy is used for logical fallacies and is defined formally as a flaw in the structure of a deductive argument that renders the argument invalid. The term is preferred in this strict sense because logic refers to valid reasoning, whereas a fallacy is an argument that uses poor reasoning, so the phrase "logical fallacy" is, in that sense, an oxymoron. However, the same terms are used in informal discourse to mean an argument which is problematic for any reason. A logical form such as "A and B" is independent of any particular conjunction of meaningful propositions. Logical form alone can guarantee that, given true premises, a true conclusion must follow. However, formal logic makes no such guarantee if any premise is false; the conclusion can then be either true or false. Any formal error or logical fallacy similarly invalidates the deductive guarantee. Both a valid logical form and true premises are required to guarantee that the conclusion is true.
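As a small worked example (added here for illustration, not part of the original text), consider the valid form modus ponens:

% Modus ponens, a valid form (LaTeX sketch; amsmath/amssymb assumed).
% Validity guarantees a true conclusion only when every premise is true.
\[
\begin{array}{l}
P \rightarrow Q \\
P \\
\hline
\therefore\ Q
\end{array}
\]

If the premises are "if a number is even, it is divisible by four" and "six is even," the form is still valid, yet the conclusion "six is divisible by four" is false, because the first premise is false.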
Paul Meehl's Fallacies
In Why I Do Not Attend Case Conferences (1973),[11] psychologist Paul Meehl discusses several fallacies that can arise in medical case conferences that are primarily held to diagnose patients. These fallacies can also be considered more general errors of thinking that all individuals (not just psychologists) are prone to making.
Barnum effect: Making a statement that is trivial, and true of everyone, e.g. of all patients, but which appears to have special significance to the diagnosis.
Sick-sick fallacy ("pathological set"): The tendency to generalize from personal
experiences of health and ways of being, to the identification of others who are
different from ourselves as being "sick". Meehl emphasizes that though psychologists
claim to know about this tendency, most are not very good at correcting it in their
own thinking.
"Me too" fallacy: The opposite of Sick-sick. Imagining that "everyone does this" and
thereby minimizing a symptom without assessing the probability of whether a
mentally healthy person would actually do it. A variation of this is Uncle George's
pancake fallacy. This minimizes a symptom through reference to a friend/relative
who exhibited a similar symptom, thereby implying that it is normal. Meehl points out
that consideration should be given that the patient is not healthy by comparison but
that the friend/relative is unhealthy.
Multiple Napoleons fallacy: "It's not real to us, but it's 'real' to him." A relativism that
Meehl sees as a waste of time. There is a distinction between reality and delusion
that is important to make when assessing a patient and so the consideration of
comparative realities can mislead and distract from the importance of a patient's
delusion to a diagnostic decision.
Hidden decisions: Decisions based on factors that we do not own up to or challenge, which, for example, result in placing middle- and upper-class patients in therapy while lower-class patients are given medication. Meehl identifies these decisions as related to an implicit ideal patient who is young, attractive, verbal, intelligent, and successful (YAVIS). He sees YAVIS patients as being preferred by psychotherapists because they can pay for long-term treatment and are more enjoyable to interact with.
The spun-glass theory of the mind: The belief that the human organism is so fragile that minor negative events, such as criticism, rejection, or failure, are bound to cause major trauma to the system. Essentially, this amounts to not giving humans, and sometimes patients, enough credit for their resilience and ability to recover.[11]

Fallacies of Measurement
Increasing availability and circulation of big data are driving proliferation of new metrics for scholarly authority,[12][13] and there is lively discussion regarding the relative usefulness of such metrics for measuring the value of knowledge production in the context of an "information tsunami."[14] Where mathematical fallacies are subtle mistakes in reasoning leading to invalid mathematical proofs, measurement fallacies are unwarranted inferential leaps involved in the extrapolation of raw data to a measurement-based value claim. The ancient Greek Sophist Protagoras was one of the first thinkers to propose that humans can generate reliable measurements through his "human-measure" principle and the practice of dissoi logoi (arguing multiple sides of an issue).[15][16] This history helps explain why measurement fallacies are informed by informal logic and argumentation theory.
Anchoring fallacy: Anchoring is a cognitive bias, first theorized by Amos Tversky and Daniel Kahneman, that "describes the common human tendency to rely too heavily on the first piece of information offered (the 'anchor') when making decisions." In measurement arguments, anchoring fallacies can occur when unwarranted weight is given to data generated by metrics that the arguers themselves acknowledge are flawed. For example, limitations of the Journal Impact Factor (JIF) are well documented,[17] and even JIF pioneer Eugene Garfield notes, "while citation data create new tools for analyses of research performance, it should be stressed that they supplement rather than replace other quantitative and qualitative indicators."[18] To the extent that arguers jettison acknowledged limitations of JIF-generated data in evaluative judgments, or leave behind Garfield's "supplement rather than replace" caveat, they court commission of anchoring fallacies.
Naturalistic Fallacy: In the context of measurement, a naturalistic fallacy can occur in a reasoning chain that makes an unwarranted extrapolation from "is" to "ought," as in the case of sheer quantity metrics based on the premise "more is better"[14] or, in the case of developmental assessment in the field of psychology, "higher is better."[19]

False Analogy: In the context of measurement, this error in reasoning occurs when claims are supported by unsound comparisons between data points, hence the false analogy's informal nickname of the "apples and oranges" fallacy.[20] For example, the Scopus and Web of Science bibliographic databases have difficulty distinguishing between citations of scholarly work that are arms-length endorsements, ceremonial citations, or negative citations (indicating the citing author withholds endorsement of the cited work).[21] Hence, measurement-based value claims premised on the uniform quality of all citations may be questioned on false analogy grounds.
Argumentum ex Silentio: An argument from silence features an unwarranted
conclusion advanced based on the absence of data. For example, Academic
Analytics' Faculty Scholarly Productivity Index purports to measure overall faculty
productivity, yet the tool does not capture data based on citations in books. This
creates a possibility that low productivity measurements using the tool may
constitute argumentum ex silentio fallacies, to the extent that such measurements
are supported by the absence of book citation data.
Ecological Fallacy: An ecological fallacy is committed when one draws an inference from data based on the premise that qualities observed for groups necessarily hold for individuals; for example, "if countries with more Protestants tend to have higher suicide rates, then Protestants must be more likely to commit suicide."[22] In metrical argumentation, ecological fallacies can be committed when one measures scholarly productivity of a sub-group of individuals (e.g. "Puerto Rican" faculty) via reference to aggregate data about a larger and different group (e.g. "Hispanic" faculty).[23]

External links
http://www.thefreedictionary.com/Broken+logic
http://www.hfu.edu.tw/~cchi/critical%20thinking%20web/0Fallacy/
http://baike.baidu.com/view/295755.htm
