
Analysis of variance

Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences among group means and their associated procedures (such as “variation” among and between groups), developed by statistician and evolutionary biologist Ronald Fisher. In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. ANOVAs are useful for comparing (testing) three or more means (groups or variables) for statistical significance. It is conceptually similar to multiple two-sample t-tests, but is more conservative (results in less type I error) and is therefore suited to a wide range of practical problems.

1 History
While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler.[1] These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s.[2] The development of least-squares methods by Laplace and Gauss circa 1800 provided an improved method of combining observations (over the existing practices then used in astronomy and geodesy). It also initiated much study of the contributions to sums of squares. Laplace soon knew how to estimate a variance from a residual (rather than a total) sum of squares.[3] By 1827 Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides.[4] Before 1800 astronomers had isolated observational errors resulting from reaction times (the “personal equation”) and had developed methods of reducing the errors.[5] The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology[6] which developed strong (full factorial) experimental methods to which randomization and blinding were soon added.[7] An eloquent non-mathematical explanation of the additive effects model was available in 1885.[8]

Ronald Fisher introduced the term variance and proposed its formal analysis in a 1918 article The Correlation Between Relatives on the Supposition of Mendelian Inheritance.[9] His first application of the analysis of variance was published in 1921.[10] Analysis of variance became widely known after being included in Fisher’s 1925 book Statistical Methods for Research Workers.

Randomization models were developed by several researchers. The first was published in Polish by Neyman in 1923.[11]

One of the attributes of ANOVA which ensured its early popularity was computational elegance. The structure of the additive model allows solution for the additive coefficients by simple algebra rather than by matrix calculations. In the era of mechanical calculators this simplicity was critical. The determination of statistical significance also required access to tables of the F function which were supplied by early statistics texts.

2 Motivating example

The analysis of variance can be used as an exploratory tool to explain observations. A dog show provides an example. A dog show is not a random sampling of the breed: it is typically limited to dogs that are adult, pure-bred, and exemplary. A histogram of dog weights from a show might plausibly be rather complex, like the yellow-orange distribution shown in the illustrations. Suppose we wanted to predict the weight of a dog based on a certain set of characteristics of each dog. Before we could do that, we would need to explain the distribution of weights by dividing the dog population into groups based on those characteristics. A successful grouping will split dogs such that (a) each group has a low variance of dog weights (meaning the group is relatively homogeneous) and (b) the mean of each group is distinct (if two groups have the same mean, then it isn’t reasonable to conclude that the groups are, in fact, separate in any meaningful way).


[Illustration: No fit.]

In the illustrations to the right, each group is identified as X1, X2, etc. In the first illustration, we divide the dogs according to the product (interaction) of two binary groupings: young vs old, and short-haired vs long-haired (thus, group 1 is young, short-haired dogs, group 2 is young, long-haired dogs, etc.). Since the distributions of dog weight within each of the groups (shown in blue) have a large variance, and since the means are very close across groups, grouping dogs by these characteristics does not produce an effective way to explain the variation in dog weights: knowing which group a dog is in does not allow us to make any reasonable statements as to what that dog’s weight is likely to be. Thus, this grouping fails to fit the distribution we are trying to explain (yellow-orange).

[Illustration: Fair fit]

An attempt to explain the weight distribution by grouping dogs as (pet vs working breed) and (less athletic vs more athletic) would probably be somewhat more successful (fair fit). The heaviest show dogs are likely to be big strong working breeds, while breeds kept as pets tend to be smaller and thus lighter. As shown by the second illustration, the distributions have variances that are considerably smaller than in the first case, and the means are more reasonably distinguishable. However, the significant overlap of distributions, for example, means that we cannot reliably say that X1 and X2 are truly distinct (i.e., it is perhaps reasonably likely that splitting dogs according to the flip of a coin—by pure chance—might produce distributions that look similar).

[Illustration: Very good fit]

An attempt to explain weight by breed is likely to produce a very good fit. All Chihuahuas are light and all St Bernards are heavy. The difference in weights between Setters and Pointers does not justify separate breeds. The analysis of variance provides the formal tools to justify these intuitive judgments. A common use of the method is the analysis of experimental data or the development of models. The method has some advantages over correlation: not all of the data must be numeric and one result of the method is a judgment in the confidence in an explanatory relationship.

3 Background and terminology

ANOVA is a particular form of statistical hypothesis testing heavily used in the analysis of experimental data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result, when a probability (p-value) is less than a threshold (significance level), justifies the rejection of the null hypothesis, but only if the a priori probability of the null hypothesis is not high.

In the typical application of ANOVA, the null hypothesis is that all groups are simply random samples of the same population. For example, when studying the effect of different treatments on similar samples of patients, the null hypothesis would be that all treatments have the same effect (perhaps none). Rejecting the null hypothesis would imply that different treatments result in altered effects.
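
As an illustrative sketch (not part of the original article), the decision just described can be carried out in a few lines of Python. The group values below are invented, and SciPy's f_oneway is assumed to be available.

    import numpy as np
    from scipy import stats

    # Hypothetical response measurements for three treatment groups.
    group_a = np.array([6.1, 5.8, 6.4, 6.0, 5.9])
    group_b = np.array([6.9, 7.2, 6.8, 7.0, 7.3])
    group_c = np.array([6.2, 6.0, 6.5, 6.1, 6.3])

    alpha = 0.05  # significance level chosen before looking at the data
    f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)

    # Reject the null hypothesis (all groups drawn from the same population)
    # only when the p-value falls below the significance level.
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    print("reject H0" if p_value < alpha else "fail to reject H0")
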

By construction, hypothesis testing limits the rate of Type I errors (false positives) to a significance level. Experimenters also wish to limit Type II errors (false negatives). The rate of Type II errors depends largely on sample size (the rate will increase for small numbers of samples), significance level (when the standard of proof is high, the chances of overlooking a discovery are also high) and effect size (a smaller effect size is more prone to Type II error).

The terminology of ANOVA is largely from the statistical design of experiments. The experimenter adjusts factors and measures responses in an attempt to determine an effect. Factors are assigned to experimental units by a combination of randomization and blocking to ensure the validity of the results. Blinding keeps the weighing impartial. Responses show a variability that is partially the result of the effect and is partially random error.

ANOVA is the synthesis of several ideas and it is used for multiple purposes. As a consequence, it is difficult to define concisely or precisely.

“Classical ANOVA for balanced data does three things at once:

1. As exploratory data analysis, an ANOVA is an organization of an additive data decomposition, and its sums of squares indicate the variance of each component of the decomposition (or, equivalently, each set of terms of a linear model).

2. Comparisons of mean squares, along with an F-test ... allow testing of a nested sequence of models.

3. Closely related to the ANOVA is a linear model fit with coefficient estimates and standard errors.”[12]

In short, ANOVA is a statistical tool used in several ways to develop and confirm an explanation for the observed data.

Additionally:

1. It is computationally elegant and relatively robust against violations of its assumptions.

2. ANOVA provides industrial strength (multiple sample comparison) statistical analysis.

3. It has been adapted to the analysis of a variety of experimental designs.

As a result: ANOVA “has long enjoyed the status of being the most used (some would say abused) statistical technique in psychological research.”[13] ANOVA “is probably the most useful technique in the field of statistical inference.”[14]

ANOVA is difficult to teach, particularly for complex experiments, with split-plot designs being notorious.[15] In some cases the proper application of the method is best determined by problem pattern recognition followed by the consultation of a classic authoritative test.[16]

3.1 Design-of-experiments terms

(Condensed from the NIST Engineering Statistics handbook: Section 5.7. A Glossary of DOE Terminology.)[17]

Balanced design: An experimental design where all cells (i.e. treatment combinations) have the same number of observations.

Blocking: A schedule for conducting treatment combinations in an experimental study such that any effects on the experimental results due to a known change in raw materials, operators, machines, etc., become concentrated in the levels of the blocking variable. The reason for blocking is to isolate a systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting randomization.

Design: A set of experimental runs which allows the fit of a particular model and the estimate of effects.

DOE: Design of experiments. An approach to problem solving involving collection of data that will support valid, defensible, and supportable conclusions.[18]

Effect: How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect.

Error: Unexplained variation in a collection of observations. DOEs typically require understanding of both random error and lack of fit error.

Experimental unit: The entity to which a specific treatment combination is applied.

Factors: Process inputs that an investigator manipulates to cause a change in the output.

Lack-of-fit error: Error that occurs when the analysis omits one or more important terms or factors from the process model. Including replication in a DOE allows separation of experimental error into its components: lack of fit and random (pure) error.

Model: Mathematical relationship which relates changes in a given response to changes in one or more factors.

Random error: Error that occurs due to natural variation in the process. Random error is typically assumed to be normally distributed with zero mean and a constant variance. Random error is also called experimental error.

Randomization: A schedule for allocating treatment material and for conducting treatment combinations in a DOE such that the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in the subsequent runs.[nb 1]

Replication: Performing the same treatment combination more than once. Including replication allows an estimate of the random error independent of any lack of fit error.

Responses: The output(s) of a process. Sometimes called dependent variable(s).

Treatment: A treatment is a specific combination of factor levels whose effect is to be compared with other treatments.

4 Classes of models

There are three classes of models used in the analysis of variance, and these are outlined here.

4.1 Fixed-effects models

Main article: Fixed effects model

The fixed-effects model (class I) of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see whether the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.

4.2 Random-effects models

Main article: Random effects model

Random effects model (class II) is used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model.[19]

4.3 Mixed-effects models

Main article: Mixed model

A mixed-effects model (class III) contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types.

Example: Teaching experiments could be performed by a college or university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives.

Defining fixed and random effects has proven elusive, with competing definitions arguably leading toward a linguistic quagmire.[20]

5 Assumptions

The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Note that the model is linear in parameters but may be nonlinear across factor levels. Interpretation is easy when data is balanced across factors but much deeper understanding is needed for unbalanced data.

5.1 Textbook analysis using a normal distribution

The analysis of variance can be presented in terms of a linear model, which makes the following assumptions about the probability distribution of the responses:[21][22][23][24]

• Independence of observations – this is an assumption of the model that simplifies the statistical analysis.

• Normality – the distributions of the residuals are normal.

• Equality (or “homogeneity”) of variances, called homoscedasticity — the variance of data in groups should be the same.

The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed effects models, that is, that the errors (ε) are independent and

ε ∼ N(0, σ²).

5.2 Randomization-based analysis

See also: Random assignment and Randomization test

In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University.[25] Kempthorne and his students make an assumption of unit treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.

5.2.1 Unit-treatment additivity

In its simplest form, the assumption of unit-treatment additivity[nb 2] states that the observed response y_{i,j} from experimental unit i when receiving treatment j can be written as the sum of the unit’s response y_i and the treatment effect t_j, that is[26][27][28]

y_{i,j} = y_i + t_j.

The assumption of unit-treatment additivity implies that, for every treatment j, the jth treatment has exactly the same effect t_j on every experimental unit.

The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant.

The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.
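
The constant-variance consequence is easy to see numerically. The following sketch (not from the original article; the unit responses and treatment effects are invented, and only NumPy is assumed) simulates data that satisfy unit-treatment additivity and shows that every treatment group has the same spread, differing only in location.

    import numpy as np

    rng = np.random.default_rng(0)

    # Baseline responses y_i of 1000 hypothetical experimental units.
    unit_response = rng.normal(loc=50.0, scale=4.0, size=1000)

    # Fixed treatment effects t_j added to every unit (unit-treatment additivity).
    treatment_effects = {"control": 0.0, "A": 2.5, "B": -1.0}

    for name, t_j in treatment_effects.items():
        y = unit_response + t_j  # y_{i,j} = y_i + t_j
        # The mean shifts by t_j, but the variance is identical for every treatment.
        print(f"{name}: mean = {y.mean():.2f}, variance = {y.var(ddof=1):.2f}")
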

5.2.2 Derived linear model

Kempthorne uses the randomization-distribution and the assumption of unit treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously.[29] The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies.[30] However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations.[31][32] In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence. On the contrary, the observations are dependent!

The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.

5.2.3 Statistical models for observational data

However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization.[33] For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald Fisher and his followers. In practice, the estimates of treatment effects from observational studies are often inconsistent. In practice, “statistical models” and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.[34]

5.3 Summary of assumptions

The normal-model based ANOVA analysis assumes the independence, normality and homogeneity of the variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis.

However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA.[35] There are no necessary assumptions for ANOVA in its full generality, but the F-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest.

Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions. The property of unit-treatment additivity is not invariant under a “change of scale”, so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance.[36] Also, a statistician may specify that logarithmic transforms be applied to the responses, which are believed to follow a multiplicative model.[27][37] According to Cauchy’s functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.

6 Characteristics

ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: adding a constant to all observations does not alter significance, and multiplying all observations by a constant does not alter significance. So the ANOVA statistical-significance result is independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry.[38][39] This is an example of data coding.
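
A quick numerical check of this invariance (a sketch with invented data, not part of the original article; NumPy and SciPy assumed) shifts and rescales the same observations and recomputes the one-way F statistic.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    groups = [rng.normal(mu, 1.0, size=8) for mu in (10.0, 11.0, 12.5)]

    # Original data, data with a constant added, and data multiplied by a constant.
    variants = {
        "original": groups,
        "shifted (+100)": [g + 100.0 for g in groups],
        "scaled (x3)": [g * 3.0 for g in groups],
    }

    for label, gs in variants.items():
        f_stat, p_value = stats.f_oneway(*gs)
        # F and p are unchanged by constant bias or by a change of units.
        print(f"{label}: F = {f_stat:.4f}, p = {p_value:.4f}")
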

7 Logic

The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial: “the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean”.[40]

7.1 Partitioning of the sum of squares

Main article: Partition of sums of squares

ANOVA uses traditional standardized terminology. The definitional equation of sample variance is

s² = (1 / (n − 1)) Σ (y_i − ȳ)²,

where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means, and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means.

The fundamental technique is a partitioning of the total sum of squares SS into components related to the effects used in the model. For example, the model for a simplified ANOVA with one type of treatment at different levels is:

SS_Total = SS_Error + SS_Treatments

The number of degrees of freedom DF can be partitioned in a similar way: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for “treatments” if there is no treatment effect.

DF_Total = DF_Error + DF_Treatments

See also Lack-of-fit sum of squares.

7.2 The F-test

Main article: F-test

The F-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic

F = variance between treatments / variance within treatments = MS_Treatments / MS_Error = [SS_Treatments / (I − 1)] / [SS_Error / (n_T − I)]

(where MS is mean square, I = number of treatments and n_T = total number of cases) to the F-distribution with I − 1, n_T − I degrees of freedom. Using the F-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution.

The expected value of F is 1 + n σ²_Treatment / σ²_Error (where n is the treatment sample size), which is 1 for no treatment effect. As values of F increase above 1, the evidence is increasingly inconsistent with the null hypothesis. Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls.

There are two methods of concluding the ANOVA hypothesis test, both of which produce the same result:

• The textbook method is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the degrees of freedom of the numerator and the denominator and the significance level (α). If F ≥ F_Critical, the null hypothesis is rejected.

• The computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value. The null hypothesis is rejected if this probability is less than or equal to the significance level (α).

The ANOVA F-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (i.e. maximizing power for a fixed significance level). For example, to test the hypothesis that various medical treatments have exactly the same effect, the F-test’s p-values closely approximate the permutation test’s p-values: the approximation is particularly close when the design is balanced.[30][41] Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum.[nb 3] The ANOVA F-test (of the null hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.[42][nb 4]
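
As a worked sketch of the formulas above (the data are invented; NumPy and SciPy are assumed), the sums of squares, mean squares and F statistic for a one-way layout can be computed directly and checked against scipy.stats.f_oneway.

    import numpy as np
    from scipy import stats

    # Hypothetical observations for I = 3 treatments, 6 cases each (n_T = 18).
    groups = [
        np.array([18.2, 17.9, 19.1, 18.5, 18.0, 18.7]),
        np.array([20.1, 19.8, 20.5, 19.9, 20.3, 20.0]),
        np.array([18.9, 19.2, 18.6, 19.0, 19.4, 18.8]),
    ]
    y_all = np.concatenate(groups)
    n_T, I = y_all.size, len(groups)
    grand_mean = y_all.mean()

    # Partition of the total sum of squares: SS_Total = SS_Error + SS_Treatments.
    ss_treat = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ss_total = ((y_all - grand_mean) ** 2).sum()
    assert np.isclose(ss_total, ss_treat + ss_error)

    # Mean squares and the F statistic with I - 1 and n_T - I degrees of freedom.
    ms_treat = ss_treat / (I - 1)
    ms_error = ss_error / (n_T - I)
    F = ms_treat / ms_error
    p = stats.f.sf(F, I - 1, n_T - I)  # upper tail of the F-distribution

    print(f"F = {F:.3f}, p = {p:.5f}")
    print(stats.f_oneway(*groups))  # should reproduce the same F and p
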
7
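
The claim that the F-test closely approximates a permutation (randomization) test can also be checked with a short simulation. This is an illustrative sketch with invented data, not part of the original article; NumPy and SciPy are assumed.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    groups = [rng.normal(mu, 1.0, size=6) for mu in (10.0, 10.4, 11.2)]
    labels = np.repeat(np.arange(len(groups)), [len(g) for g in groups])
    y = np.concatenate(groups)

    f_obs, p_f = stats.f_oneway(*groups)

    # Randomization test: reshuffle the treatment labels many times and count how
    # often the reshuffled F statistic is at least as large as the observed one.
    n_perm = 10000
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        f_perm, _ = stats.f_oneway(*(y[perm == k] for k in range(len(groups))))
        count += f_perm >= f_obs
    p_perm = (count + 1) / (n_perm + 1)

    print(f"F-test p = {p_f:.4f}, permutation p = {p_perm:.4f}")
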

7.3 Extended logic

ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. “Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients.”[43] “[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ...”[43]

8 For a single factor

Main article: One-way analysis of variance

The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors. A relatively complete discussion of the analysis (models, data summaries, ANOVA table) of the completely randomized experiment is available.

9 For multiple factors

Main article: Two-way analysis of variance

ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases.[44] Consequently, factorial designs are heavily used.

The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz). All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare.[45] The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results.[44]

Caution is advised when encountering interactions; test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. “A significant interaction will often mask the significance of main effects.”[46] Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958).[47] Some interactions can be removed (by transformations) while others cannot.

A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications.[48]

10 Worked numeric examples

Several fully worked numerical examples are available. A simple case uses one-way (a single factor) analysis. A more complex case uses two-way (two-factor) analysis.
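
For readers who want a runnable two-factor illustration, the sketch below uses invented data and assumes the statsmodels and pandas packages are installed; it fits a two-way layout with an interaction term and prints an ANOVA table.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)

    # A 2 x 3 factorial design with 5 replicates per cell (hypothetical data).
    levels_a, levels_b, reps = ["a1", "a2"], ["b1", "b2", "b3"], 5
    rows = []
    for a in levels_a:
        for b in levels_b:
            effect = (1.0 if a == "a2" else 0.0) + {"b1": 0.0, "b2": 0.5, "b3": 1.5}[b]
            for _ in range(reps):
                rows.append({"a": a, "b": b, "y": 10.0 + effect + rng.normal(0, 1)})
    df = pd.DataFrame(rows)

    # 'C(a) * C(b)' expands to main effects for both factors plus their interaction.
    model = smf.ols("y ~ C(a) * C(b)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # sums of squares, F statistics, p-values
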

11 Associated analysis

Some analysis is required in support of the design of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.

11.1 Preparatory analysis

11.1.1 The number of experimental units

In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential.

Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals.

Reporting sample size analysis is generally required in psychology. “Provide information on sample size and the process that led to sample size decisions.”[49] The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards.

Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval.[50]

11.1.2 Power analysis

Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.[51][52][53][54]
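
A brief sketch of such a calculation follows; the effect size, significance level and target power are arbitrary choices, and the statsmodels package is assumed. It solves for the total sample size of a one-way design.

    from statsmodels.stats.power import FTestAnovaPower

    # Assumed effect size (Cohen's f) in the population, for a 4-group one-way ANOVA.
    analysis = FTestAnovaPower()
    n_total = analysis.solve_power(effect_size=0.25, nobs=None,
                                   alpha=0.05, power=0.80, k_groups=4)
    print(f"total observations needed: {n_total:.0f}")
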
compare every group mean with every other group mean
and typically incorporate some method of controlling for
11.1.3 Effect size Type I errors. Comparisons, which are most commonly
planned, can be either simple or compound. Simple com-
Main article: Effect size parisons compare one group mean with one other group
mean. Compound comparisons typically compare two
sets of groups means where one set has two or more
Several standardized measures of effect have been pro-
groups (e.g., compare average group means of group A, B
posed for ANOVA to summarize the strength of the asso-
and C with group D). Comparisons can also look at tests
ciation between a predictor(s) and the dependent variable
2 2 2 of trend, such as linear and quadratic relationships, when
(e.g., η , ω , or ƒ ) or the overall standardized difference
the independent variable involves ordered levels.
(Ψ) of the complete model. Standardized effect-size esti-
mates facilitate comparison of findings across studies and Following ANOVA with pair-wise multiple-comparison
disciplines. However, while standardized effect sizes are tests has been criticized on several grounds.[55][59] There
commonly used in much of the professional literature, a are many such tests (10 in one table) and recommenda-
non-standardized measure of effect size that has immedi- tions regarding their use are vague or conflicting.[60][61]
ately “meaningful” units may be preferable for reporting
purposes.[55]
12 Study designs
11.2 Follow-up analysis
There are several types of ANOVA. Many statisticians
[62]
It is always appropriate to carefully consider outliers. base ANOVA on the design of the experiment, espe-
They have a disproportionate impact on statistical con- cially on the protocol that specifies the random assign-
clusions and are often the result of errors. ment of treatments to subjects; the protocol’s descrip-
tion of the assignment mechanism should include a spec-
ification of the structure of the treatments and of any
11.2.1 Model confirmation blocking. It is also common to apply ANOVA to obser-
vational data using an appropriate statistical model.
It is prudent to verify that the assumptions of ANOVA
have been met. Residuals are examined or analyzed to Some popular designs use the following types of
confirm homoscedasticity and gross normality.[56] Resid- ANOVA:
uals should have the appearance of (zero mean normal
distribution) noise when plotted as a function of any- • One-way ANOVA is used to test for differences
thing including time and modeled data values. Trends among two or more independent groups (means),e.g.
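
A minimal post hoc sketch (invented data; SciPy 1.8 or newer is assumed for tukey_hsd) runs Tukey's range test on three groups after a significant one-way result.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    a = rng.normal(10.0, 1.0, size=8)
    b = rng.normal(11.5, 1.0, size=8)
    c = rng.normal(10.2, 1.0, size=8)

    # Overall one-way test first, then pairwise comparisons with family-wise
    # Type I error control.
    print(stats.f_oneway(a, b, c))
    print(stats.tukey_hsd(a, b, c))  # pairwise mean differences, CIs and p-values
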

12 Study designs

There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment,[62] especially on the protocol that specifies the random assignment of treatments to subjects; the protocol’s description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model.

Some popular designs use the following types of ANOVA:

• One-way ANOVA is used to test for differences among two or more independent groups (means), e.g. different levels of urea application in a crop, or different levels of antibiotic action on several bacterial species,[63] or different levels of effect of some medicine on groups of patients. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test.[64] When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t².

• Factorial ANOVA is used when the experimenter wants to study the interaction effects among the treatments.

• Repeated measures ANOVA is used when the same subjects are used for each treatment (e.g., in a longitudinal study).

• Multivariate analysis of variance (MANOVA) is used when there is more than one response variable.

13 Cautions

Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; unbalanced experiments offer more complexity. For single factor (one way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power.[65] For more complex designs the lack of balance leads to further complications. “The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs.”[66] In the general case, “The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and F-ratios will depend on the order in which the sources of variation are considered.”[43] The simplest techniques for handling unbalanced data restore balance by either throwing out data or by synthesizing missing data. More complex techniques use regression.

ANOVA is (in part) a significance test. The American Psychological Association holds the view that simply reporting significance is insufficient and that reporting confidence bounds is preferred.[55]

While ANOVA is conservative (in maintaining a significance level) against multiple comparisons in one dimension, it is not conservative against comparisons in multiple dimensions.[67]

14 Generalizations

ANOVA is considered to be a special case of linear regression[68][69] which in turn is a special case of the general linear model.[70] All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized.

The Kruskal–Wallis test and the Friedman test are nonparametric tests, which do not rely on an assumption of normality.[71][72]

14.1 Connection to linear regression

Below we make clear the connection between multi-way ANOVA and linear regression. Linearly re-order the data so that the kth observation is associated with a response y_k and factors Z_{k,b}, where b ∈ {1, 2, …, B} denotes the different factors and B is the total number of factors. In one-way ANOVA B = 1 and in two-way ANOVA B = 2. Furthermore, we assume the bth factor has I_b levels. Now, we can one-hot encode the factors into the Σ_{b=1}^{B} I_b dimensional vector v_k.

The one-hot encoding function g_b : I_b ↦ {0, 1}^{I_b} is defined such that the ith entry of g_b(Z_{k,b}) is

g_b(Z_{k,b})_i = 1 if i = Z_{k,b}, and 0 otherwise.

The vector v_k is the concatenation of all of the above vectors for all b. Thus, v_k = [g_1(Z_{k,1}), g_2(Z_{k,2}), …, g_B(Z_{k,B})]. In order to obtain a fully general B-way interaction ANOVA we must also concatenate every additional interaction term in the vector v_k and then add an intercept term. Let that vector be x_k.

With this notation in place, we now have the exact connection with linear regression. We simply regress response y_k against the vector x_k. However, there is a concern about identifiability. In order to overcome such issues we assume that the sum of the parameters within each set of interactions is equal to zero. From here, one can use F-statistics or other methods to determine the relevance of the individual factors.

14.1.1 Example

We can consider the 2-way interaction example where we assume that the first factor has 2 levels and the second factor has 3 levels.

Define a_i = 1 if Z_{k,1} = i and b_i = 1 if Z_{k,2} = i, i.e. a is the one-hot encoding of the first factor and b is the one-hot encoding of the second factor.

With that,

x_k = [a_1, a_2, b_1, b_2, b_3, a_1 × b_1, a_1 × b_2, a_1 × b_3, a_2 × b_1, a_2 × b_2, a_2 × b_3, 1]

where the last term is an intercept term. For a more concrete example suppose that

Z_{k,1} = 2
Z_{k,2} = 1

Then,

x_k = [0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
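
The encoding can be reproduced mechanically. The helper below is a hypothetical sketch (function names invented, plain NumPy assumed) that builds x_k for the 2 × 3 example and checks the concrete case above.

    import numpy as np

    def one_hot(level, n_levels):
        """g_b: return the one-hot encoding of a 1-based factor level."""
        v = np.zeros(n_levels)
        v[level - 1] = 1.0
        return v

    def design_row(z1, z2, levels=(2, 3)):
        """Build x_k = [a, b, all pairwise interactions a_i*b_j, intercept]."""
        a = one_hot(z1, levels[0])
        b = one_hot(z2, levels[1])
        interactions = np.outer(a, b).ravel()  # a1*b1, a1*b2, ..., a2*b3
        return np.concatenate([a, b, interactions, [1.0]])

    # The concrete case from the text: Z_{k,1} = 2, Z_{k,2} = 1.
    print(design_row(2, 1))  # [0. 1. 1. 0. 0. 0. 0. 0. 1. 0. 0. 1.]

Rows like this, stacked over all observations, form the design matrix of the regression described above, with sum-to-zero constraints imposed to make the parameters identifiable.
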
15 See also

• AMOVA (analysis of molecular variance)
• Analysis of covariance (ANCOVA)
• ANORVA (analysis of rhythmic variance)
• ANOVA on ranks
• ANOVA-simultaneous component analysis
• Explained variation
• Mixed-design analysis of variance
• Multivariate analysis of variance (MANOVA)
• One-way analysis of variance
• Permutational analysis of variance
• Repeated measures ANOVA
• Two-way analysis of variance
• Variance decomposition

16 Footnotes

[1] Randomization is a term used in multiple ways in this material. “Randomization has three roles in applications: as a device for eliminating biases, for example from unobserved explanatory variables and selection effects: as a basis for estimating standard errors: and as a foundation for formally exact significance tests.” Cox (2006, page 192) Hinkelmann and Kempthorne use randomization both in experimental design and for statistical analysis.

[2] Unit-treatment additivity is simply termed additivity in most texts. Hinkelmann and Kempthorne add adjectives and distinguish between additivity in the strict and broad senses. This allows a detailed consideration of multiple error sources (treatment, state, selection, measurement and sampling) on page 161.

[3] Rosenbaum (2002, page 40) cites Section 5.7 (Permutation Tests), Theorem 2.3 (actually Theorem 3, page 184) of Lehmann’s Testing Statistical Hypotheses (1959).

[4] The F-test for the comparison of variances has a mixed reputation. It is not recommended as a hypothesis test to determine whether two different samples have the same variance. It is recommended for ANOVA where two estimates of the variance of the same sample are compared. While the F-test is not generally robust against departures from normality, it has been found to be robust in the special case of ANOVA. Citations from Moore & McCabe (2003): “Analysis of variance uses F statistics, but these are not the same as the F statistic for comparing two population standard deviations.” (page 554) “The F test and other procedures for inference about variances are so lacking in robustness as to be of little use in practice.” (page 556) “[The ANOVA F test] is relatively insensitive to moderate nonnormality and unequal variances, especially when the sample sizes are similar.” (page 763) ANOVA assumes homoscedasticity, but it is robust. The statistical test for homoscedasticity (the F-test) is not robust. Moore & McCabe recommend a rule of thumb.

17 Notes

[1] Stigler (1986)
[2] Stigler (1986, p 134)
[3] Stigler (1986, p 153)
[4] Stigler (1986, pp 154–155)
[5] Stigler (1986, pp 240–242)
[6] Stigler (1986, Chapter 7 - Psychophysics as a Counterpoint)
[7] Stigler (1986, p 253)
[8] Stigler (1986, pp 314–315)
[9] The Correlation Between Relatives on the Supposition of Mendelian Inheritance. Ronald A. Fisher. Philosophical Transactions of the Royal Society of Edinburgh. 1918. (volume 52, pages 399–433)
[10] On the “Probable Error” of a Coefficient of Correlation Deduced from a Small Sample. Ronald A. Fisher. Metron, 1: 3-32 (1921)
[11] Scheffé (1959, p 291, “Randomization models were first formulated by Neyman (1923) for the completely randomized design, by Neyman (1935) for randomized blocks, by Welch (1937) and Pitman (1937) for the Latin square under a certain null hypothesis, and by Kempthorne (1952, 1955) and Wilk (1955) for many other designs.”)
[12] Gelman (2005, p 2)
[13] Howell (2002, p 320)
[14] Montgomery (2001, p 63)
[15] Gelman (2005, p 1)
[16] Gelman (2005, p 5)

[17] “Section 5.7. A Glossary of DOE Terminology”. NIST Engineering Statistics handbook. NIST. Retrieved 5 April 2012.
[18] “Section 4.3.1 A Glossary of DOE Terminology”. NIST Engineering Statistics handbook. NIST. Retrieved 14 Aug 2012.
[19] Montgomery (2001, Chapter 12: Experiments with random factors)
[20] Gelman (2005, pp. 20–21)
[21] Snedecor, George W.; Cochran, William G. (1967). Statistical Methods (6th ed.). p. 321.
[22] Cochran & Cox (1992, p 48)
[23] Howell (2002, p 323)
[24] Anderson, David R.; Sweeney, Dennis J.; Williams, Thomas A. (1996). Statistics for business and economics (6th ed.). Minneapolis/St. Paul: West Pub. Co. pp. 452–453. ISBN 0-314-06378-1.
[25] Anscombe (1948)
[26] Kempthorne (1979, p 30)
[27] Cox (1958, Chapter 2: Some Key Assumptions)
[28] Hinkelmann and Kempthorne (2008, Volume 1, Throughout. Introduced in Section 2.3.3: Principles of experimental design; The linear model; Outline of a model)
[29] Hinkelmann and Kempthorne (2008, Volume 1, Section 6.3: Completely Randomized Design; Derived Linear Model)
[30] Hinkelmann and Kempthorne (2008, Volume 1, Section 6.6: Completely randomized design; Approximating the randomization test)
[31] Bailey (2008, Chapter 2.14 “A More General Model” in Bailey, pp. 38–40)
[32] Hinkelmann and Kempthorne (2008, Volume 1, Chapter 7: Comparison of Treatments)
[33] Kempthorne (1979, pp 125–126, “The experimenter must decide which of the various causes that he feels will produce variations in his results must be controlled experimentally. Those causes that he does not control experimentally, because he is not cognizant of them, he must control by the device of randomization.” “[O]nly when the treatments in the experiment are applied by the experimenter using the full randomization procedure is the chain of inductive inference sound. It is only under these circumstances that the experimenter can attribute whatever effects he observes to the treatment and the treatment only. Under these circumstances his conclusions are reliable in the statistical sense.”)
[34] Freedman
[35] Montgomery (2001, Section 3.8: Discovering dispersion effects)
[36] Hinkelmann and Kempthorne (2008, Volume 1, Section 6.10: Completely randomized design; Transformations)
[37] Bailey (2008)
[38] Montgomery (2001, Section 3-3: Experiments with a single factor: The analysis of variance; Analysis of the fixed effects model)
[39] Cochran & Cox (1992, p 2 example)
[40] Cochran & Cox (1992, p 49)
[41] Hinkelmann and Kempthorne (2008, Volume 1, Section 6.7: Completely randomized design; CRD with unequal numbers of replications)
[42] Moore and McCabe (2003, page 763)
[43] Gelman (2008)
[44] Montgomery (2001, Section 5-2: Introduction to factorial designs; The advantages of factorials)
[45] Belle (2008, Section 8.4: High-order interactions occur rarely)
[46] Montgomery (2001, Section 5-1: Introduction to factorial designs; Basic definitions and principles)
[47] Cox (1958, Chapter 6: Basic ideas about factorial experiments)
[48] Montgomery (2001, Section 5-3.7: Introduction to factorial designs; The two-factor factorial design; One observation per cell)
[49] Wilkinson (1999, p 596)
[50] Montgomery (2001, Section 3-7: Determining sample size)
[51] Howell (2002, Chapter 8: Power)
[52] Howell (2002, Section 11.12: Power (in ANOVA))
[53] Howell (2002, Section 13.7: Power analysis for factorial experiments)
[54] Moore and McCabe (2003, pp 778–780)
[55] Wilkinson (1999, p 599)
[56] Montgomery (2001, Section 3-4: Model adequacy checking)
[57] Moore and McCabe (2003, p 755, Qualifications to this rule appear in a footnote.)
[58] Montgomery (2001, Section 3-5.8: Experiments with a single factor: The analysis of variance; Practical interpretation of results; Comparing means with a control)
[59] Hinkelmann and Kempthorne (2008, Volume 1, Section 7.5: Comparison of Treatments; Multiple Comparison Procedures)
[60] Howell (2002, Chapter 12: Multiple comparisons among treatment means)

[61] Montgomery (2001, Section 3-5: Practical interpretation of results)
[62] Cochran & Cox (1957, p 9, “[T]he general rule [is] that the way in which the experiment is conducted determines not only whether inferences can be made, but also the calculations required to make them.”)
[63] One-way/single factor ANOVA. Biomedical Statistics. Archived 7 November 2014 at the Wayback Machine.
[64] “The Probable Error of a Mean”. Biometrika. 6: 1–25. 1908. doi:10.1093/biomet/6.1.1.
[65] Montgomery (2001, Section 3-3.4: Unbalanced data)
[66] Montgomery (2001, Section 14-2: Unbalanced data in factorial design)
[67] Wilkinson (1999, p 600)
[68] Gelman (2005, p.1) (with qualification in the later text)
[69] Montgomery (2001, Section 3.9: The Regression Approach to the Analysis of Variance)
[70] Howell (2002, p 604)
[71] Howell (2002, Chapter 18: Resampling and nonparametric approaches to data)
[72] Montgomery (2001, Section 3-10: Nonparametric methods in the analysis of variance)

18 References

• Anscombe, F. J. (1948). “The Validity of Comparative Experiments”. Journal of the Royal Statistical Society. Series A (General). 111 (3): 181–211. JSTOR 2984159. MR 30181. doi:10.2307/2984159.

• Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9. Pre-publication chapters are available on-line.

• Belle, Gerald van (2008). Statistical rules of thumb (2nd ed.). Hoboken, N.J.: Wiley. ISBN 978-0-470-14448-0.

• Cochran, William G.; Cox, Gertrude M. (1992). Experimental designs (2nd ed.). New York: Wiley. ISBN 978-0-471-54567-5.

• Cohen, Jacob (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge. ISBN 978-0-8058-0283-2.

• Cohen, Jacob (1992). “A power primer”. Psychological Bulletin. 112 (1): 155–159. PMID 19565683. doi:10.1037/0033-2909.112.1.155.

• Cox, David R. (1958). Planning of experiments. Reprinted as ISBN 978-0-471-57429-3.

• Cox, D. R. (2006). Principles of statistical inference. Cambridge, New York: Cambridge University Press. ISBN 978-0-521-68567-2.

• Freedman, David A. (2005). Statistical Models: Theory and Practice. Cambridge University Press. ISBN 978-0-521-67105-7.

• Gelman, Andrew (2005). “Analysis of variance? Why it is more important than ever”. The Annals of Statistics. 33: 1–53. doi:10.1214/009053604000001048.

• Gelman, Andrew (2008). “Variance, analysis of”. The new Palgrave dictionary of economics (2nd ed.). Basingstoke, Hampshire, New York: Palgrave Macmillan. ISBN 978-0-333-78676-5.

• Hinkelmann, Klaus & Kempthorne, Oscar (2008). Design and Analysis of Experiments. I and II (Second ed.). Wiley. ISBN 978-0-470-38551-7.

• Howell, David C. (2002). Statistical methods for psychology (5th ed.). Pacific Grove, CA: Duxbury/Thomson Learning. ISBN 0-534-37770-X.

• Kempthorne, Oscar (1979). The Design and Analysis of Experiments (Corrected reprint of (1952) Wiley ed.). Robert E. Krieger. ISBN 0-88275-105-0.

• Lehmann, E.L. (1959). Testing Statistical Hypotheses. John Wiley & Sons.

• Montgomery, Douglas C. (2001). Design and Analysis of Experiments (5th ed.). New York: Wiley. ISBN 978-0-471-31649-7.

• Moore, David S. & McCabe, George P. (2003). Introduction to the Practice of Statistics (4e). W. H. Freeman & Co. ISBN 0-7167-9657-0.

• Rosenbaum, Paul R. (2002). Observational Studies (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-98967-9.

• Scheffé, Henry (1959). The Analysis of Variance. New York: Wiley.

• Stigler, Stephen M. (1986). The history of statistics: the measurement of uncertainty before 1900. Cambridge, Mass.: Belknap Press of Harvard University Press. ISBN 0-674-40340-1.

• Wilkinson, Leland (1999). “Statistical Methods in Psychology Journals; Guidelines and Explanations”. American Psychologist. 54 (8): 594–604. CiteSeerX 10.1.1.120.4818. doi:10.1037/0003-066X.54.8.594.

19 Further reading

• Box, G. E. P. (1953). “Non-Normality and Tests on Variances”. Biometrika. Biometrika Trust. 40 (3/4): 318–335. JSTOR 2333350. doi:10.1093/biomet/40.3-4.318.

• Box, G. E. P. (1954). “Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, I. Effect of Inequality of Variance in the One-Way Classification”. The Annals of Mathematical Statistics. 25 (2): 290. doi:10.1214/aoms/1177728786.

• Box, G. E. P. (1954). “Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, II. Effects of Inequality of Variance and of Correlation Between Errors in the Two-Way Classification”. The Annals of Mathematical Statistics. 25 (3): 484. doi:10.1214/aoms/1177728717.

• Caliński, Tadeusz; Kageyama, Sanpei (2000). Block designs: A Randomization approach, Volume I: Analysis. Lecture Notes in Statistics. 150. New York: Springer-Verlag. ISBN 0-387-98578-6.

• Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear Models (Third ed.). New York: Springer. ISBN 0-387-95361-2.

• Cox, David R. & Reid, Nancy M. (2000). The theory of design of experiments. (Chapman & Hall/CRC). ISBN 978-1-58488-195-7.

• Fisher, Ronald (1918). “Studies in Crop Variation. I. An examination of the yield of dressed grain from Broadbalk” (PDF). Journal of Agricultural Science. 11: 107–135. doi:10.1017/S0021859600003750. Archived from the original (PDF) on 12 June 2001.

• Freedman, David A.; Pisani, Robert; Purves, Roger (2007). Statistics (4th edition). W.W. Norton & Company. ISBN 978-0-393-92972-0.

• Hettmansperger, T. P.; McKean, J. W. (1998). Edward Arnold, ed. Robust nonparametric statistical methods. Kendall’s Library of Statistics. Volume 5 (First ed.). New York: John Wiley & Sons, Inc. pp. xiv+467. ISBN 0-340-54937-8. MR 1604954.

• Lentner, Marvin; Thomas Bishop (1993). Experimental design and analysis (Second ed.). P.O. Box 884, Blacksburg, VA 24063: Valley Book Company. ISBN 0-9616255-2-X.

• Tabachnick, Barbara G. & Fidell, Linda S. (2007). Using Multivariate Statistics (5th ed.). Boston: Pearson International Edition. ISBN 978-0-205-45938-4.

• Wichura, Michael J. (2006). The coordinate-free approach to linear models. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press. pp. xiv+199. ISBN 978-0-521-86842-6. MR 2283455.

• Phadke, Madhav S. (1989). Quality Engineering using Robust Design. New Jersey: Prentice Hall PTR. ISBN 0-13-745167-9.

20 External links

• SOCR ANOVA Activity and interactive applet.

• Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares, and their analysis in R (University of Southampton).

• NIST/SEMATECH e-Handbook of Statistical Methods, section 7.4.3: “Are the means equal?”

• Analysis of variance: Introduction