
FORMULA/TABLE CARD FOR WEISS'S INTRODUCTORY STATISTICS, FIFTH EDITION

Larry R. Griffey
NOTATION  In the formulas below, unless stated otherwise, we employ the following notation, which may or may not appear with subscripts:

n    sample size
$\bar{x}$    sample mean
s    sample standard deviation
$Q_j$    jth quartile
N    population size
$\mu$    population mean
$\sigma$    population standard deviation
d    paired difference
$\hat{p}$    sample proportion
p    population proportion
O    observed frequency
E    expected frequency
CHAPTER 3  Descriptive Measures

Sample mean: $\bar{x} = \dfrac{\sum x}{n}$

Range: Range = Max − Min

Sample standard deviation: $s = \sqrt{\dfrac{\sum (x - \bar{x})^2}{n - 1}}$  or  $s = \sqrt{\dfrac{\sum x^2 - (\sum x)^2/n}{n - 1}}$

Quartile positions: $\dfrac{n+1}{4}$, $\dfrac{n+1}{2}$, $\dfrac{3(n+1)}{4}$

Interquartile range: $IQR = Q_3 - Q_1$

Lower limit $= Q_1 - 1.5 \cdot IQR$,  Upper limit $= Q_3 + 1.5 \cdot IQR$

Population mean (mean of a variable): $\mu = \dfrac{\sum x}{N}$

Population standard deviation (standard deviation of a variable): $\sigma = \sqrt{\dfrac{\sum (x - \mu)^2}{N}}$  or  $\sigma = \sqrt{\dfrac{\sum x^2}{N} - \mu^2}$

Standardized variable: $z = \dfrac{x - \mu}{\sigma}$
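As a quick numerical check of these descriptive measures, here is a minimal Python sketch (not part of the original card). The data values are made up, and the linear interpolation used at the quartile positions is one common convention that may differ slightly in detail from the text's own method.

```python
# Descriptive measures from Chapter 3, applied to an illustrative sample.
from math import sqrt

data = [3, 7, 8, 5, 12, 14, 21, 13, 18]   # hypothetical sample
n = len(data)

xbar = sum(data) / n                                                      # sample mean
s = sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))                    # defining formula
s_alt = sqrt((sum(x * x for x in data) - sum(data) ** 2 / n) / (n - 1))   # computing formula

# Quartile positions (n+1)/4, (n+1)/2, 3(n+1)/4 applied to the sorted data;
# fractional positions are handled by linear interpolation (one convention).
srt = sorted(data)
def quartile(pos):
    lo = int(pos) - 1                    # 1-based position -> 0-based index
    frac = pos - int(pos)
    return srt[lo] if frac == 0 else srt[lo] + frac * (srt[lo + 1] - srt[lo])

q1, q2, q3 = (quartile((n + 1) * k / 4) for k in (1, 2, 3))
iqr = q3 - q1
lower_limit, upper_limit = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(xbar, s, s_alt, q1, q2, q3, iqr, lower_limit, upper_limit)
```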
CHAPTER 4  Probability Concepts

Probability for equally likely outcomes: $P(E) = \dfrac{f}{N}$,
where f denotes the number of ways event E can occur and N denotes the total number of outcomes possible.

Special addition rule: $P(A \text{ or } B \text{ or } C \text{ or } \cdots) = P(A) + P(B) + P(C) + \cdots$
(A, B, C, ... mutually exclusive)

Complementation rule: $P(E) = 1 - P(\text{not } E)$

General addition rule: $P(A \text{ or } B) = P(A) + P(B) - P(A \,\&\, B)$

Conditional-probability rule: $P(B \mid A) = \dfrac{P(A \,\&\, B)}{P(A)}$

General multiplication rule: $P(A \,\&\, B) = P(A) \cdot P(B \mid A)$

Special multiplication rule: $P(A \,\&\, B \,\&\, C \,\&\, \cdots) = P(A) \cdot P(B) \cdot P(C) \cdots$
(A, B, C, ... independent)

Rule of total probability: $P(B) = \sum_{j=1}^{k} P(A_j) \cdot P(B \mid A_j)$
($A_1, A_2, \ldots, A_k$ mutually exclusive and exhaustive)

Bayes's rule: $P(A_i \mid B) = \dfrac{P(A_i) \cdot P(B \mid A_i)}{\sum_{j=1}^{k} P(A_j) \cdot P(B \mid A_j)}$
($A_1, A_2, \ldots, A_k$ mutually exclusive and exhaustive)

Factorial: $k! = k(k-1) \cdots 2 \cdot 1$

Permutations rule: $(m)_r = \dfrac{m!}{(m-r)!}$

Special permutations rule: $(m)_m = m!$

Combinations rule: $\dbinom{m}{r} = \dfrac{m!}{r!\,(m-r)!}$

Number of possible samples: $\dbinom{N}{n} = \dfrac{N!}{n!\,(N-n)!}$
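The counting rules and Bayes's rule translate directly into code. The sketch below (not from the card) uses the standard library only; all numeric inputs are hypothetical.

```python
# Counting rules and Bayes's rule from Chapter 4.
from math import factorial

def permutations(m, r):          # (m)_r = m!/(m-r)!
    return factorial(m) // factorial(m - r)

def combinations(m, r):          # C(m, r) = m!/(r!(m-r)!)
    return factorial(m) // (factorial(r) * factorial(m - r))

print(permutations(5, 3))        # 60
print(combinations(5, 3))        # 10 possible samples of size 3 from 5 items

# Bayes's rule with k = 2 mutually exclusive and exhaustive events A1, A2
prior = [0.3, 0.7]               # P(A_j), hypothetical
likelihood = [0.9, 0.2]          # P(B | A_j), hypothetical
total = sum(p * l for p, l in zip(prior, likelihood))          # rule of total probability
posterior = [p * l / total for p, l in zip(prior, likelihood)]  # P(A_j | B)
print(total, posterior)
```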
CHAPTER 5  Discrete Random Variables

Mean of a discrete random variable X: $\mu = \sum x\, P(X = x)$

Standard deviation of a discrete random variable X: $\sigma = \sqrt{\sum (x - \mu)^2\, P(X = x)}$  or  $\sigma = \sqrt{\sum x^2\, P(X = x) - \mu^2}$

Factorial: $k! = k(k-1) \cdots 2 \cdot 1$

Binomial coefficient: $\dbinom{n}{x} = \dfrac{n!}{x!\,(n-x)!}$

Binomial probability formula: $P(X = x) = \dbinom{n}{x} p^x (1-p)^{n-x}$,
where n denotes the number of trials and p denotes the success probability.

Mean of a binomial random variable: $\mu = np$

Standard deviation of a binomial random variable: $\sigma = \sqrt{np(1-p)}$

Poisson probability formula: $P(X = x) = e^{-\lambda}\, \dfrac{\lambda^x}{x!}$

Mean of a Poisson random variable: $\mu = \lambda$

Standard deviation of a Poisson random variable: $\sigma = \sqrt{\lambda}$
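A minimal Python sketch of the binomial and Poisson formulas follows (not part of the card); the parameter values n, p, and lambda are chosen only for illustration.

```python
# Binomial and Poisson formulas from Chapter 5.
from math import comb, exp, factorial, sqrt

n, p = 10, 0.3
binom_pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
mu_binom = n * p                          # mean of a binomial random variable
sigma_binom = sqrt(n * p * (1 - p))       # standard deviation

lam = 2.5
poisson_pmf = lambda x: exp(-lam) * lam**x / factorial(x)
mu_pois, sigma_pois = lam, sqrt(lam)      # mean and standard deviation of Poisson

print(sum(binom_pmf), mu_binom, sigma_binom, poisson_pmf(3), mu_pois, sigma_pois)
```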
CHAPTER 7  The Sampling Distribution of the Mean

Mean of the variable $\bar{x}$: $\mu_{\bar{x}} = \mu$

Standard deviation of the variable $\bar{x}$: $\sigma_{\bar{x}} = \sigma / \sqrt{n}$

Standardized version of the variable $\bar{x}$: $z = \dfrac{\bar{x} - \mu}{\sigma / \sqrt{n}}$
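A short sketch of these sampling-distribution quantities, with an assumed population mean and standard deviation chosen purely for illustration:

```python
# Sampling distribution of the mean, Chapter 7.
from math import sqrt

mu, sigma, n = 100.0, 15.0, 36            # hypothetical population values and sample size
mu_xbar = mu                              # mean of x-bar equals the population mean
sigma_xbar = sigma / sqrt(n)              # standard deviation of x-bar
z = lambda xbar: (xbar - mu) / (sigma / sqrt(n))   # standardized version of x-bar
print(mu_xbar, sigma_xbar, z(105))        # 100.0, 2.5, 2.0
```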
CHAPTER 8  Confidence Intervals for One Population Mean

z-interval for $\mu$ ($\sigma$ known, normal population or large sample): $\bar{x} \pm z_{\alpha/2} \cdot \dfrac{\sigma}{\sqrt{n}}$

Margin of error for the estimate of $\mu$: $E = z_{\alpha/2} \cdot \dfrac{\sigma}{\sqrt{n}}$

Sample size for estimating $\mu$: $n = \left( \dfrac{z_{\alpha/2} \cdot \sigma}{E} \right)^2$,
rounded up to the nearest whole number.

Studentized version of the variable $\bar{x}$: $t = \dfrac{\bar{x} - \mu}{s / \sqrt{n}}$

t-interval for $\mu$ ($\sigma$ unknown, normal population or large sample): $\bar{x} \pm t_{\alpha/2} \cdot \dfrac{s}{\sqrt{n}}$
with df = n − 1.
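A sketch of the z-interval and sample-size formulas (not from the card); the summary statistics are invented, and 1.96 is used as the familiar 95% value of $z_{\alpha/2}$ (t critical values would come from the t table).

```python
# z-interval and sample-size formula from Chapter 8.
from math import sqrt, ceil

xbar, sigma, n = 52.0, 8.0, 64
z_crit = 1.96                             # z_{alpha/2} for a 95% confidence level
E = z_crit * sigma / sqrt(n)              # margin of error
interval = (xbar - E, xbar + E)           # z-interval for mu

E_target = 1.5                            # desired margin of error
n_needed = ceil((z_crit * sigma / E_target) ** 2)   # rounded up to a whole number
print(E, interval, n_needed)
```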
CHAPTER 9  Hypothesis Tests for One Population Mean

z-test statistic for $H_0\colon \mu = \mu_0$ ($\sigma$ known, normal population or large sample):
$z = \dfrac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}$

t-test statistic for $H_0\colon \mu = \mu_0$ ($\sigma$ unknown, normal population or large sample):
$t = \dfrac{\bar{x} - \mu_0}{s / \sqrt{n}}$
with df = n − 1.

Wilcoxon signed-rank test statistic for $H_0\colon \mu = \mu_0$ (symmetric population):
W = sum of the positive ranks
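The one-sample t statistic is easy to compute directly; the sketch below uses made-up data and leaves the comparison with the t-table critical value to the reader.

```python
# One-sample t-test statistic for H0: mu = mu0, Chapter 9.
from math import sqrt
from statistics import mean, stdev

sample = [12.1, 11.8, 12.6, 12.4, 11.9, 12.3, 12.0, 12.5]   # hypothetical data
mu0 = 12.0
n = len(sample)
t = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))         # stdev uses n - 1
print(n - 1, t)   # df = n - 1; compare t with the t-table critical value
```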
CHAPTER 10  Inferences for Two Population Means

Pooled sample standard deviation: $s_p = \sqrt{\dfrac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$

Pooled t-test statistic for $H_0\colon \mu_1 = \mu_2$ (independent samples, normal populations or large samples, and equal population standard deviations):
$t = \dfrac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{(1/n_1) + (1/n_2)}}$
with df = $n_1 + n_2 - 2$.

Pooled t-interval for $\mu_1 - \mu_2$ (independent samples, normal populations or large samples, and equal population standard deviations):
$(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \cdot s_p \sqrt{(1/n_1) + (1/n_2)}$
with df = $n_1 + n_2 - 2$.

Degrees of freedom for nonpooled-t procedures:
$\Delta = \dfrac{\left( s_1^2/n_1 + s_2^2/n_2 \right)^2}{\dfrac{\left( s_1^2/n_1 \right)^2}{n_1 - 1} + \dfrac{\left( s_2^2/n_2 \right)^2}{n_2 - 1}}$,
rounded down to the nearest integer.

Nonpooled t-test statistic for $H_0\colon \mu_1 = \mu_2$ (independent samples, and normal populations or large samples):
$t = \dfrac{\bar{x}_1 - \bar{x}_2}{\sqrt{(s_1^2/n_1) + (s_2^2/n_2)}}$
with df = $\Delta$.

Nonpooled t-interval for $\mu_1 - \mu_2$ (independent samples, and normal populations or large samples):
$(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \cdot \sqrt{(s_1^2/n_1) + (s_2^2/n_2)}$
with df = $\Delta$.

Mann–Whitney test statistic for $H_0\colon \mu_1 = \mu_2$ (independent samples, same-shape populations, and $n_1 \le n_2$):
M = sum of the ranks for sample data from Population 1

Paired t-test statistic for $H_0\colon \mu_1 = \mu_2$ (paired sample, and normal differences or large sample):
$t = \dfrac{\bar{d}}{s_d / \sqrt{n}}$
with df = n − 1.

Paired t-interval for $\mu_1 - \mu_2$ (paired sample, and normal differences or large sample):
$\bar{d} \pm t_{\alpha/2} \cdot \dfrac{s_d}{\sqrt{n}}$
with df = n − 1.

Wilcoxon paired-sample signed-rank test statistic for $H_0\colon \mu_1 = \mu_2$ (paired sample and symmetric differences):
W = sum of the positive ranks
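The pooled and nonpooled (Welch-type) statistics and the nonpooled degrees of freedom can be checked with the short sketch below; the two samples are invented for illustration.

```python
# Pooled and nonpooled two-sample t statistics, Chapter 10.
from math import sqrt
from statistics import mean, stdev

x1 = [23, 25, 28, 22, 26, 27]
x2 = [20, 21, 24, 19, 23]
n1, n2 = len(x1), len(x2)
m1, m2, s1, s2 = mean(x1), mean(x2), stdev(x1), stdev(x2)

# Pooled procedure (assumes equal population standard deviations)
sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
t_pooled = (m1 - m2) / (sp * sqrt(1 / n1 + 1 / n2))           # df = n1 + n2 - 2

# Nonpooled procedure with the degrees-of-freedom formula (rounded down)
v1, v2 = s1**2 / n1, s2**2 / n2
t_nonpooled = (m1 - m2) / sqrt(v1 + v2)
df_delta = int((v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1)))
print(t_pooled, t_nonpooled, df_delta)
```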
CHAPTER 11  Inferences for Population Standard Deviations

$\chi^2$-test statistic for $H_0\colon \sigma = \sigma_0$ (normal population):
$\chi^2 = \dfrac{n - 1}{\sigma_0^2}\, s^2$
with df = n − 1.

$\chi^2$-interval for $\sigma$ (normal population):
$\sqrt{\dfrac{n - 1}{\chi^2_{\alpha/2}}}\; s$  to  $\sqrt{\dfrac{n - 1}{\chi^2_{1-\alpha/2}}}\; s$
with df = n − 1.

F-test statistic for $H_0\colon \sigma_1 = \sigma_2$ (independent samples and normal populations):
$F = s_1^2 / s_2^2$
with df = $(n_1 - 1,\ n_2 - 1)$.

F-interval for $\sigma_1 / \sigma_2$ (independent samples and normal populations):
$\dfrac{1}{\sqrt{F_{\alpha/2}}} \cdot \dfrac{s_1}{s_2}$  to  $\dfrac{1}{\sqrt{F_{1-\alpha/2}}} \cdot \dfrac{s_1}{s_2}$
with df = $(n_1 - 1,\ n_2 - 1)$.
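A brief sketch of the two test statistics in this chapter (not from the card); the sample standard deviations and sizes are hypothetical, and the critical values would still be read from the chi-square and F tables.

```python
# Chi-square statistic for H0: sigma = sigma0, and F statistic for H0: sigma1 = sigma2.
s, n, sigma0 = 2.9, 25, 2.5
chi2 = (n - 1) * s**2 / sigma0**2          # compare with chi-square table, df = n - 1

s1, s2, n1, n2 = 3.1, 2.4, 16, 21
F = s1**2 / s2**2                          # compare with F table, df = (n1 - 1, n2 - 1)
print(chi2, F)
```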
CHAPTER 12  Inferences for Population Proportions

Sample proportion: $\hat{p} = \dfrac{x}{n}$,
where x denotes the number of members in the sample that have the specified attribute.

One-sample z-interval for p: $\hat{p} \pm z_{\alpha/2} \cdot \sqrt{\hat{p}(1 - \hat{p})/n}$
(Assumption: both x and n − x are 5 or greater)

Margin of error for the estimate of p: $E = z_{\alpha/2} \cdot \sqrt{\hat{p}(1 - \hat{p})/n}$

Sample size for estimating p:
$n = 0.25 \left( \dfrac{z_{\alpha/2}}{E} \right)^2$  or  $n = \hat{p}_g (1 - \hat{p}_g) \left( \dfrac{z_{\alpha/2}}{E} \right)^2$,
rounded up to the nearest whole number (g = educated guess)

One-sample z-test statistic for $H_0\colon p = p_0$:
$z = \dfrac{\hat{p} - p_0}{\sqrt{p_0(1 - p_0)/n}}$
(Assumption: both $np_0$ and $n(1 - p_0)$ are 5 or greater)

Pooled sample proportion: $\hat{p}_p = \dfrac{x_1 + x_2}{n_1 + n_2}$

Two-sample z-test statistic for $H_0\colon p_1 = p_2$:
$z = \dfrac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}_p(1 - \hat{p}_p)}\, \sqrt{(1/n_1) + (1/n_2)}}$
(Assumptions: independent samples; $x_1$, $n_1 - x_1$, $x_2$, $n_2 - x_2$ are all 5 or greater)

Two-sample z-interval for $p_1 - p_2$:
$(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2} \cdot \sqrt{\hat{p}_1(1 - \hat{p}_1)/n_1 + \hat{p}_2(1 - \hat{p}_2)/n_2}$
(Assumptions: independent samples; $x_1$, $n_1 - x_1$, $x_2$, $n_2 - x_2$ are all 5 or greater)

Margin of error for the estimate of $p_1 - p_2$:
$E = z_{\alpha/2} \cdot \sqrt{\hat{p}_1(1 - \hat{p}_1)/n_1 + \hat{p}_2(1 - \hat{p}_2)/n_2}$

Sample size for estimating $p_1 - p_2$:
$n_1 = n_2 = 0.5 \left( \dfrac{z_{\alpha/2}}{E} \right)^2$  or  $n_1 = n_2 = \left( \hat{p}_{1g}(1 - \hat{p}_{1g}) + \hat{p}_{2g}(1 - \hat{p}_{2g}) \right) \left( \dfrac{z_{\alpha/2}}{E} \right)^2$,
rounded up to the nearest whole number (g = educated guess)
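The one-sample interval and the two-sample test statistic for proportions are shown in the sketch below (not from the card); the counts are made up and 1.96 again stands in for the 95% value of $z_{\alpha/2}$.

```python
# One- and two-sample z procedures for proportions, Chapter 12.
from math import sqrt

# One-sample 95% z-interval for p
x, n = 57, 120
phat = x / n
z_crit = 1.96
half_width = z_crit * sqrt(phat * (1 - phat) / n)
interval = (phat - half_width, phat + half_width)

# Two-sample z-test statistic for H0: p1 = p2
x1, n1, x2, n2 = 72, 150, 54, 140
p1, p2 = x1 / n1, x2 / n2
pp = (x1 + x2) / (n1 + n2)                                 # pooled sample proportion
z = (p1 - p2) / (sqrt(pp * (1 - pp)) * sqrt(1 / n1 + 1 / n2))
print(interval, z)
```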
CHAPTER 13  Chi-Square Procedures

Expected frequencies for a chi-square goodness-of-fit test: $E = np$

Test statistic for a chi-square goodness-of-fit test: $\chi^2 = \sum (O - E)^2 / E$
with df = k − 1, where k is the number of possible values for the variable under consideration.

Expected frequencies for a chi-square independence test: $E = \dfrac{R \cdot C}{n}$,
where R = row total and C = column total.

Test statistic for a chi-square independence test: $\chi^2 = \sum (O - E)^2 / E$
with df = (r − 1)(c − 1), where r and c are the number of possible values for the two variables under consideration.
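A compact sketch of the independence test computed from a small contingency table of made-up counts:

```python
# Chi-square independence test from a 2x3 contingency table, Chapter 13.
observed = [[20, 30, 25],
            [30, 20, 25]]                         # hypothetical observed frequencies
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, O in enumerate(row):
        E = row_totals[i] * col_totals[j] / n     # expected frequency E = (R * C)/n
        chi2 += (O - E) ** 2 / E
df = (len(observed) - 1) * (len(observed[0]) - 1) # (r - 1)(c - 1)
print(chi2, df)                                   # compare with the chi-square table
```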
CHAPTER 14  Descriptive Methods in Regression and Correlation

$S_{xx}$, $S_{xy}$, and $S_{yy}$:
$S_{xx} = \sum (x - \bar{x})^2 = \sum x^2 - (\sum x)^2 / n$
$S_{xy} = \sum (x - \bar{x})(y - \bar{y}) = \sum xy - (\sum x)(\sum y) / n$
$S_{yy} = \sum (y - \bar{y})^2 = \sum y^2 - (\sum y)^2 / n$

Regression equation: $\hat{y} = b_0 + b_1 x$, where
$b_1 = \dfrac{S_{xy}}{S_{xx}}$  and  $b_0 = \dfrac{1}{n}\left( \sum y - b_1 \sum x \right) = \bar{y} - b_1 \bar{x}$
Total sum of squares: $SST = \sum (y - \bar{y})^2 = S_{yy}$

Regression sum of squares: $SSR = \sum (\hat{y} - \bar{y})^2 = S_{xy}^2 / S_{xx}$

Error sum of squares: $SSE = \sum (y - \hat{y})^2 = S_{yy} - S_{xy}^2 / S_{xx}$

Regression identity: SST = SSR + SSE

Coefficient of determination: $r^2 = \dfrac{SSR}{SST}$

Linear correlation coefficient:
$r = \dfrac{\frac{1}{n-1} \sum (x - \bar{x})(y - \bar{y})}{s_x s_y}$  or  $r = \dfrac{S_{xy}}{\sqrt{S_{xx} S_{yy}}}$
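The computing formulas of this chapter chain together naturally; the sketch below (illustrative x, y pairs, not from the card) produces the slope, intercept, sums of squares, and correlation coefficient.

```python
# Descriptive regression and correlation formulas, Chapter 14.
from math import sqrt

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 2.9, 3.8, 5.1, 5.9, 7.2]
n = len(x)

Sxx = sum(xi**2 for xi in x) - sum(x)**2 / n
Syy = sum(yi**2 for yi in y) - sum(y)**2 / n
Sxy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / n

b1 = Sxy / Sxx                        # slope
b0 = sum(y) / n - b1 * sum(x) / n     # intercept
SST, SSR = Syy, Sxy**2 / Sxx
SSE = SST - SSR                       # regression identity: SST = SSR + SSE
r2 = SSR / SST                        # coefficient of determination
r = Sxy / sqrt(Sxx * Syy)             # linear correlation coefficient
print(b0, b1, SSE, r2, r)
```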
CHAPTER 15  Inferential Methods in Regression and Correlation

Population regression equation: $y = \beta_0 + \beta_1 x$

Standard error of the estimate: $s_e = \sqrt{\dfrac{SSE}{n - 2}}$

Test statistic for $H_0\colon \beta_1 = 0$:
$t = \dfrac{b_1}{s_e / \sqrt{S_{xx}}}$
with df = n − 2.

Confidence interval for $\beta_1$:
$b_1 \pm t_{\alpha/2} \cdot \dfrac{s_e}{\sqrt{S_{xx}}}$
with df = n − 2.

Confidence interval for the conditional mean of the response variable corresponding to $x_p$:
$\hat{y}_p \pm t_{\alpha/2} \cdot s_e \sqrt{\dfrac{1}{n} + \dfrac{(x_p - \sum x / n)^2}{S_{xx}}}$
with df = n − 2.

Prediction interval for an observed value of the response variable corresponding to $x_p$:
$\hat{y}_p \pm t_{\alpha/2} \cdot s_e \sqrt{1 + \dfrac{1}{n} + \dfrac{(x_p - \sum x / n)^2}{S_{xx}}}$
with df = n − 2.

Test statistic for $H_0\colon \rho = 0$:
$t = \dfrac{r}{\sqrt{\dfrac{1 - r^2}{n - 2}}}$
with df = n − 2.

Test statistic for a correlation test for normality:
$R_p = \dfrac{\sum x w}{\sqrt{S_{xx} \sum w^2}}$,
where x and w denote, respectively, observations of the variable and the corresponding normal scores.
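The slope inference builds directly on the Chapter 14 quantities. The sketch below is a small helper (not from the card); the numeric inputs are hypothetical values of the kind the previous sketch produces, and the critical value for the interval would still come from the t table.

```python
# Standard error of the estimate and t statistic for H0: beta1 = 0, Chapter 15.
from math import sqrt

def slope_t(b1, SSE, Sxx, n):
    se = sqrt(SSE / (n - 2))          # standard error of the estimate
    t = b1 / (se / sqrt(Sxx))         # compare with t table, df = n - 2
    return se, t

# Hypothetical inputs for illustration only
print(slope_t(b1=1.02, SSE=0.12, Sxx=17.5, n=6))
```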
CHAPTER 16  Analysis of Variance (ANOVA)

Notation in one-way ANOVA:
k = number of populations
n = total number of observations
$\bar{x}$ = mean of all n observations
$n_j$ = size of sample from Population j
$\bar{x}_j$ = mean of sample from Population j
$s_j^2$ = variance of sample from Population j
$T_j$ = sum of sample data from Population j

Defining formulas for sums of squares in one-way ANOVA:
$SST = \sum (x - \bar{x})^2$
$SSTR = \sum n_j (\bar{x}_j - \bar{x})^2$
$SSE = \sum (n_j - 1) s_j^2$

One-way ANOVA identity: SST = SSTR + SSE

Computing formulas for sums of squares in one-way ANOVA:
$SST = \sum x^2 - (\sum x)^2 / n$
$SSTR = \sum (T_j^2 / n_j) - (\sum x)^2 / n$
$SSE = SST - SSTR$

Mean squares in one-way ANOVA: $MSTR = \dfrac{SSTR}{k - 1}$,  $MSE = \dfrac{SSE}{n - k}$

Test statistic for one-way ANOVA (independent samples, normal populations, and equal population standard deviations):
$F = \dfrac{MSTR}{MSE}$
with df = (k − 1, n − k).

Confidence interval for $\mu_i - \mu_j$ in the Tukey multiple-comparison method (independent samples, normal populations, and equal population standard deviations):
$(\bar{x}_i - \bar{x}_j) \pm \dfrac{q_{\alpha}}{\sqrt{2}} \cdot s \sqrt{(1/n_i) + (1/n_j)}$,
where $s = \sqrt{MSE}$ and $q_{\alpha}$ is obtained for a q-curve with parameters k and n − k.

Test statistic for a Kruskal–Wallis test (independent samples, same-shape populations, all sample sizes 5 or greater):
$H = \dfrac{SSTR}{SST/(n - 1)}$  or  $H = \dfrac{12}{n(n + 1)} \sum \dfrac{R_j^2}{n_j} - 3(n + 1)$,
where SSTR and SST are computed for the ranks of the data, and $R_j$ denotes the sum of the ranks for the sample data from Population j. H is approximately chi-square with df = k − 1.
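The one-way ANOVA sums of squares and F statistic can be verified with the short sketch below (three made-up samples, not from the card); the critical value would come from the F table with df = (k − 1, n − k).

```python
# One-way ANOVA computations, Chapter 16.
samples = [[64, 72, 68, 77], [56, 62, 60, 61, 63], [71, 76, 70, 74]]   # hypothetical
k = len(samples)
all_data = [x for s in samples for x in s]
n = len(all_data)
grand_mean = sum(all_data) / n

SSTR = sum(len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in samples)  # treatment SS
SST = sum((x - grand_mean) ** 2 for x in all_data)                          # total SS
SSE = SST - SSTR                       # one-way ANOVA identity

MSTR = SSTR / (k - 1)
MSE = SSE / (n - k)
F = MSTR / MSE                         # compare with F table, df = (k - 1, n - k)
print(SSTR, SSE, F)
```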