
HYPOTHESIS TESTING

NONPARAMETRIC TESTS
• Although parametric
tests are the most
commonly used inferential
techniques, there are
many research situations
in which parametric tests
cannot or should not be
used.
These situations are:
• The data involve
measurements on nominal or
ordinal scales.
• The data do not satisfy the
assumptions underlying
parametric tests.
• The data have extremely high
variance, which can undermine
the likelihood of significance
for a parametric test.
Nonparametric tests
• Nonparametric statistics are used
when our data are measured on a
nominal or ordinal scale of
measurement. Chi-square statistics
and their modifications (e.g.,
McNemar Test) are used for nominal
data. All other nonparametric
statistics are appropriate when data
are measured on an ordinal scale of
measurement.
Scale of measurement → nonparametric tests
• Nominal data: chi-square statistics and their
modifications (e.g., McNemar test)
• Ordinal data: all other nonparametric statistics
• The tests that are available in NPAR
TESTS can be grouped into three
broad categories based on how the
data are organized: one-sample tests,
related-samples tests, and
independent-samples tests. A one-
sample test analyzes one variable. A
test for related samples compares two
or more variables for the same set of
cases. An independent-samples test
analyzes one variable that is grouped
by categories of another variable.
The one-sample tests that are
available in procedure NPAR
TESTS are:
• BINOMIAL
• CHISQUARE
• K-S (Kolmogorov-Smirnov)
• RUNS
Tests for two related samples are:
• MCNEMAR
• SIGN
• WILCOXON
Tests for k related samples are:
• COCHRAN
• FRIEDMAN
• KENDALL
Tests for two independent
samples are:
• M-W (Mann-Whitney)
• K-S (Kolmogorov-Smirnov)
• W-W (Wald-Wolfowitz)
• MOSES
Tests for k independent
samples are:
• K-W (Kruskal-Wallis)
• MEDIAN
Chi-square test for goodness
of fit
• This chi-square test is used in
situations in which the measurement
procedure results in classifying
individuals into distinct categories.
The test uses frequency data from a
single sample to test a hypothesis
about the population distribution. The
null hypothesis specifies the
proportion or percentage of the
population for each category on the
scale of measurement. The
statistic computed is χ².
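As an illustration outside SPSS, the same test can be run with `scipy.stats.chisquare`; the observed frequencies and the equal-proportion null hypothesis below are hypothetical:

```python
from scipy import stats

# Hypothetical observed frequencies for four categories (n = 100)
observed = [28, 22, 30, 20]
# H0: each category holds 25% of the population, so expect 25 per category
expected = [25, 25, 25, 25]

result = stats.chisquare(f_obs=observed, f_exp=expected)
print(result.statistic, result.pvalue)  # X^2 = 2.72 with df = 3
```

Here the discrepancy between observed and expected frequencies is small, so the null hypothesis of equal proportions is not rejected.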
The chi-square test for
independence
• This test serves as an alternative to
the independent-measures t-test or
ANOVA in situations in which the
dependent variable involves
classifying individuals into distinct
categories. The sample data consist
of frequency distributions for two or
more separate samples. The null
hypothesis states that separate
populations all have the same
proportions.
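A sketch with `scipy.stats.chi2_contingency`, using a hypothetical 2×2 contingency table (for 2×2 tables scipy applies Yates' continuity correction by default):

```python
from scipy import stats

# Hypothetical 2x2 table: two samples (rows) classified into two categories
table = [[30, 20],
         [15, 35]]

chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p, dof)
```

`expected` holds the cell frequencies implied by the null hypothesis that both populations have the same proportions.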
The Binomial test
• When the individuals in a
population can be classified into
exactly two categories, the
binomial test uses sample data to
test a hypothesis about the
proportion of the population in
each category. The null hypothesis
specifies the proportion of the
population in each of the two
categories.
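As a sketch, `scipy.stats.binomtest` performs this test; the poll counts below are hypothetical:

```python
from scipy import stats

# Hypothetical poll: 14 of 20 respondents fall in the first category
# H0: the two categories each hold 50% of the population
result = stats.binomtest(k=14, n=20, p=0.5)
print(result.pvalue)
```

With 14 of 20 in one category, the two-sided p-value is about 0.12, so a 50/50 split is not rejected.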
The Sign Test
• This test evaluates the
difference between two
treatment conditions using data
from a repeated-measures or
matched-subjects research
design. The null hypothesis
states that there is no systematic
difference between the two
treatment conditions, so increases
and decreases are equally likely.
The Sign Test
• The sign test is a special application
of the binomial test and requires
only that the difference between
treatment 1 and treatment 2 for
each subject be classified as an
increase or a decrease. This test is
used as an alternative to the
related-samples t-test or the
Wilcoxon test in situations where
the data do not satisfy the more
stringent requirements of these two
powerful tests.
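Because the sign test is a binomial test applied to the signs of the differences, it can be sketched directly; the difference scores below are hypothetical, and zero differences would be discarded:

```python
from scipy import stats

# Hypothetical treatment-2 minus treatment-1 differences for ten subjects
diffs = [3, -1, 2, 4, 1, -2, 5, 2, 1, 3]
increases = sum(d > 0 for d in diffs)  # 8 subjects increased
decreases = sum(d < 0 for d in diffs)  # 2 subjects decreased

# The sign test is the binomial test on the count of increases
result = stats.binomtest(increases, n=increases + decreases, p=0.5)
print(result.pvalue)
```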
The Mann-Whitney U test
• This test uses ordinal data from
two separate samples to test a
hypothesis about the difference
between two populations or two
treatment conditions. This test
is used as an alternative to the
independent-measures t test in
situations in which the sample
data can be rank-ordered. The
statistics computed are U and U′.
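A minimal sketch with `scipy.stats.mannwhitneyu`, using hypothetical scores from two independent samples:

```python
from scipy import stats

# Hypothetical scores from two independent samples
group1 = [12, 15, 9, 20, 17, 11]
group2 = [23, 19, 26, 25, 18, 22]

u, p = stats.mannwhitneyu(group1, group2, alternative='two-sided')
print(u, p)  # scipy reports U for the first sample; U' = n1*n2 - U
```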
The Wilcoxon test
• The Wilcoxon test uses the
data from a repeated-measures
or matched-samples design to
evaluate the difference
between two treatment
conditions. This test is used as
an alternative to the related-
samples t test in situations in
which the sample data can be
rank-ordered. The statistic
computed is T.
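A sketch with `scipy.stats.wilcoxon`, using hypothetical before/after scores from a repeated-measures design:

```python
from scipy import stats

# Hypothetical before/after scores for eight subjects
before = [18, 22, 19, 25, 30, 21, 24, 27]
after  = [24, 27, 20, 32, 39, 25, 27, 25]

# T is the smaller of the two signed-rank sums
t_stat, p = stats.wilcoxon(before, after)
print(t_stat, p)
```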
The Kruskal-Wallis Test
• This test uses ordinal data from
three or more separate
samples to test a hypothesis
about the differences between
three or more populations or
treatments. It is used as an
alternative to the independent-
measures ANOVA in situations
in which data can be rank-
ordered. The statistic computed
is H.
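A sketch with `scipy.stats.kruskal`, using hypothetical scores from three independent samples:

```python
from scipy import stats

# Hypothetical scores from three independent samples
group_a = [27, 2, 4, 18, 7, 9]
group_b = [20, 8, 14, 36, 21, 22]
group_c = [34, 31, 3, 23, 30, 6]

h, p = stats.kruskal(group_a, group_b, group_c)
print(h, p)  # H is referred to a chi-square distribution with k - 1 df
```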
The Friedman test
• This test uses ordinal data to
test a hypothesis about the
differences between three or
more treatment conditions
using data from a repeated-
measures design.
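A sketch with `scipy.stats.friedmanchisquare`, using hypothetical ratings of three treatment conditions by six subjects (each argument is one condition, measured on the same subjects):

```python
from scipy import stats

# Hypothetical ratings of three conditions by the same six subjects
cond1 = [7, 5, 6, 8, 6, 7]
cond2 = [4, 3, 5, 5, 4, 3]
cond3 = [2, 1, 3, 4, 2, 1]

chi2, p = stats.friedmanchisquare(cond1, cond2, cond3)
print(chi2, p)
```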
Runs test
• The Runs Test procedure tests
whether the order of
occurrence of two values of a
variable is random. A run is a
sequence of like observations.
A sample with too many or too
few runs suggests that the
sample is not random.
Runs test
• Example: Suppose that 20
people are polled to find out
whether they would purchase a
product. The assumed
randomness of the sample
would be seriously questioned
if all 20 people were of the
same gender. The runs test can
be used to determine whether
the sample was drawn at
random.
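SciPy does not ship a runs test, so the sketch below implements the large-sample normal approximation by hand; the M/F sequence echoes the poll example and is hypothetical:

```python
import math

def runs_test(sequence):
    """Wald-Wolfowitz runs test for randomness of a sequence with
    exactly two distinct values. A sketch using the normal
    approximation; an exact test is preferable for very small samples."""
    values = list(sequence)
    first, second = sorted(set(values))
    n1 = values.count(first)
    n2 = values.count(second)
    n = n1 + n2
    # A run is a maximal block of identical adjacent values
    runs = 1 + sum(values[i] != values[i - 1] for i in range(1, n))
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    z = (runs - mean) / math.sqrt(var)
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return runs, z, p

# Ten men followed by ten women give only two runs -- far fewer
# than expected under randomness, so randomness is rejected
runs, z, p = runs_test("MMMMMMMMMMFFFFFFFFFF")
print(runs, z, p)
```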
One-Sample Kolmogorov-
Smirnov Test
• The One-Sample Kolmogorov-Smirnov Test
procedure compares the observed cumulative
distribution function for a variable with a
specified theoretical distribution, which may be
normal, uniform, Poisson, or exponential. The
Kolmogorov-Smirnov Z is computed from the
largest difference (in absolute value) between
the observed and theoretical cumulative
distribution functions. This goodness-of-fit test
tests whether the observations could
reasonably have come from the specified
distribution.
One-Sample Kolmogorov-
Smirnov Test
• Example. Many parametric
tests require normally
distributed variables. The one-
sample Kolmogorov-Smirnov
test can be used to test that a
variable (for example, income)
is normally distributed.
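A sketch with `scipy.stats.kstest`, using a simulated income sample (the mean and standard deviation below are hypothetical). Note that the standard K-S p-value assumes the parameters are specified in advance; if they are estimated from the same sample, a corrected test (Lilliefors) is needed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical income sample; H0: normal with the stated mean and sd
income = rng.normal(loc=50_000, scale=10_000, size=200)
d, p = stats.kstest(income, 'norm', args=(50_000, 10_000))
print(d, p)  # small D: the normal hypothesis is not rejected

# A clearly non-normal sample is rejected decisively
d_bad, p_bad = stats.kstest(rng.uniform(0, 1, size=500), 'norm')
print(d_bad, p_bad)
```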
The Spearman Correlation
• The Spearman correlation
measures the degree to which
the relationship between two
variables is consistently one-
directional (monotonic). The Spearman
correlation is used when both
variables, X and Y, are
measured on an ordinal scale
or after two variables have
been transformed to ranks.
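A sketch with `scipy.stats.spearmanr`, using hypothetical ranks of eight cases on two ordinal variables:

```python
from scipy import stats

# Hypothetical ranks of eight cases on two ordinal variables
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 7]

rho, p = stats.spearmanr(x, y)
print(rho, p)  # rho near +1: a strong monotonic relationship
```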
Nonparametric tests
• Nonparametric tests do not
require that samples come from
populations with normal
distributions or have any other
particular distributions.
Consequently, nonparametric
tests are called distribution-free
tests.
Advantages of Nonparametric
Tests
• 1. Can be applied to a wide variety of
situations because they do not have the
more rigid requirements of parametric
methods.
• 2. Do not require normally distributed
populations.
• 3. Can often be applied to categorical data,
such as the genders of survey respondents.
• 4. Usually involve simpler computations
than the corresponding parametric methods
and are therefore easier to understand and
apply.
Disadvantages of
Nonparametric tests
• 1. Tend to waste information because
exact numerical data are often reduced
to a qualitative form.
• 2. Are not as efficient as parametric
tests, so with a nonparametric test we
generally need stronger evidence (such
as a larger sample or greater
differences) before we reject a null
hypothesis.
