
Research Methods

Lecture 10
Inferential Statistics:
Two Group Design

Topics
We have looked at statistics that
compare a sample to the
population. Now we deal with
statistics that compare samples:
Parametric statistics: interval or ratio data
Nonparametric tests: ordinal or nominal data

Parametric Statistics
A reminder of the terminology:
Null hypothesis tested in a two-group design using a two-tailed test (there is no difference):
H0: μ1 = μ2

Alternative hypothesis (two-tailed):
Ha: μ1 ≠ μ2
Parametric Statistics
For a one-tailed test, the null hypothesis is either
H0: μ1 ≤ μ2 or H0: μ1 ≥ μ2
depending on which alternative hypothesis is being tested:
Ha: μ1 > μ2 or Ha: μ1 < μ2

Alpha is typically set at .05 (α = .05)

Parametric Statistics
Parametric tests make assumptions about the samples:
The data fit a bell-shaped (normal) distribution
Certain parameters are known:
Mean
Standard deviation

The data are interval or ratio scale

Independent groups t test


Independent-groups t test: a
parametric inferential test for
comparing sample means of two
independent groups of scores
Example: 20 subjects study some material. One group studies it in a single 6-hour block (massed); the other group studies it in two 3-hour blocks (spaced). They then answer some questions on the material.

Example
Spaced study score    Massed study score
23                    15
18                    20
25                    21
22                    15
20                    14
24                    16
21                    18
24                    19
21                    14
22                    17
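A brief Python sketch of this analysis, assuming the scores pair off the two columns of the table as shown above (the slides give only the raw table, so the grouping is reconstructed, not stated numerically):

```python
# Sketch: independent-groups t test for the spaced vs. massed study example.
# The pairing of scores into the two columns is reconstructed from the table.
import numpy as np
from scipy import stats

spaced = np.array([23, 18, 25, 22, 20, 24, 21, 24, 21, 22])
massed = np.array([15, 20, 21, 15, 14, 16, 18, 19, 14, 17])

print("Spaced mean:", spaced.mean(), "Massed mean:", massed.mean())

# Two-tailed, pooled-variance t test (homogeneity of variance assumed)
t, p = stats.ttest_ind(spaced, massed)
print(f"t({len(spaced) + len(massed) - 2}) = {t:.2f}, p = {p:.4f}")
```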

Independent groups t test


Formula for an independent-groups
t test

Standard error of the difference between means (next slide)
The standard deviation of the sampling distribution of differences between the means of independent samples in a two-sample experiment

Independent groups t test


Formula for the standard error of the difference between means:
s(X̄1 − X̄2) = √(s1²/n1 + s2²/n2)
Independent groups t test


Putting all of this together, the formula for determining t is:
t = (X̄1 − X̄2) / s(X̄1 − X̄2), with df = n1 + n2 − 2
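A minimal sketch of this formula in Python; the groups below are made-up illustrative scores, not the lecture's data:

```python
# Sketch: independent-groups t computed directly from the formula above.
import math

def independent_t(group1, group2):
    """Return t and df, using t = (mean1 - mean2) / SE of the difference."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    se_diff = math.sqrt(v1 / n1 + v2 / n2)   # standard error of the difference
    return (m1 - m2) / se_diff, n1 + n2 - 2

# Hypothetical scores for two independent groups of equal size
t, df = independent_t([12, 15, 14, 16, 13], [10, 11, 13, 9, 12])
print(f"t({df}) = {t:.2f}")
```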

Independent groups t test


In APA style the result is reported as:
t(df) = obtained t value, p value
When a result is significant, the p value is reported as less than (<) .05

p value or alpha level: indicates the probability of a Type I error
We want this probability to be small

Statistical Power
Three aspects of a study can
increase power:
Greater differences produced by the
independent variable
Less variability of raw scores in each
condition
Increased sample size

These all increase the t value
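A small Monte Carlo sketch (not from the lecture) illustrating these three claims; the means, standard deviations, and sample sizes below are made up:

```python
# Sketch: estimate power by simulating many two-group experiments and
# counting how often the t test comes out significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power(mean_diff, sd, n, sims=2000, alpha=0.05):
    """Proportion of simulated experiments with p < alpha."""
    hits = 0
    for _ in range(sims):
        g1 = rng.normal(0.0, sd, n)
        g2 = rng.normal(mean_diff, sd, n)
        if stats.ttest_ind(g1, g2).pvalue < alpha:
            hits += 1
    return hits / sims

print(power(mean_diff=5, sd=10, n=10))   # baseline
print(power(mean_diff=8, sd=10, n=10))   # bigger effect of the IV
print(power(mean_diff=5, sd=5, n=10))    # less variability in each condition
print(power(mean_diff=5, sd=10, n=40))   # larger sample size
```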

Effect size
Effect size
The proportion of variance in the
dependent variable that is accounted
for by the manipulation of the
independent variable

Expressed as Cohen's d or r²

Cohen's d
Formula for Cohen's d:
d = (X̄1 − X̄2) / s_pooled, where s_pooled is the pooled standard deviation of the two groups

Report Cohen's d together with the t score (see the sketch after the r² slide below)
Pearson's r²
Measure effect size for the independent-groups t test using r²
Use the formula:
r² = t² / (t² + df)

If r² is .01, the effect size is small
If r² is .09, it is medium
If r² is .25, it is large
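A short sketch covering both Cohen's d (previous slide) and r², using the spaced/massed scores as reconstructed earlier; the pooled-SD form of d is a common convention and an assumption here:

```python
# Sketch: Cohen's d and r-squared for the spaced vs. massed example.
import numpy as np
from scipy import stats

spaced = np.array([23, 18, 25, 22, 20, 24, 21, 24, 21, 22])
massed = np.array([15, 20, 21, 15, 14, 16, 18, 19, 14, 17])

t, p = stats.ttest_ind(spaced, massed)
df = len(spaced) + len(massed) - 2

# Pooled SD for equal group sizes (root mean square of the two SDs)
s_pooled = np.sqrt((spaced.var(ddof=1) + massed.var(ddof=1)) / 2)

d = (spaced.mean() - massed.mean()) / s_pooled
r_squared = t**2 / (t**2 + df)
print(f"d = {d:.2f}, r^2 = {r_squared:.2f}")
```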

Confidence Intervals
Formula for the 95% confidence interval (see the sketch below):
95% CI = (X̄1 − X̄2) ± t_cv · s(X̄1 − X̄2), where t_cv is the critical two-tailed t value for df = n1 + n2 − 2

Assumptions of the independent-groups t test:
The data are interval-ratio scale
The underlying distributions are bell-shaped
The observations are independent
Homogeneity of variance
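A brief sketch of the 95% confidence interval for the difference between the spaced and massed means, again using the reconstructed scores:

```python
# Sketch: 95% confidence interval for the difference between two group means.
import numpy as np
from scipy import stats

spaced = np.array([23, 18, 25, 22, 20, 24, 21, 24, 21, 22])
massed = np.array([15, 20, 21, 15, 14, 16, 18, 19, 14, 17])

n1, n2 = len(spaced), len(massed)
df = n1 + n2 - 2

# Standard error of the difference between means
se_diff = np.sqrt(spaced.var(ddof=1) / n1 + massed.var(ddof=1) / n2)

t_cv = stats.t.ppf(0.975, df)            # two-tailed critical value of t
diff = spaced.mean() - massed.mean()
print(f"95% CI: [{diff - t_cv * se_diff:.2f}, {diff + t_cv * se_diff:.2f}]")
```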

Paired or correlated groups t test


Correlated-groups t test
A parametric inferential test used to compare the means of two related (within-subjects or matched-subjects) samples

Difference scores
Scores representing the difference between subjects' performance in one condition and their performance in a second condition

Paired or correlated groups t test


Standard error of the difference scores
The standard deviation of the sampling distribution of mean differences between dependent samples in a two-group experiment
D̄ = mean of the difference scores

sD = √( Σ(D − D̄)² / (N − 1) )

Standard error of the difference scores:
sD̄ = sD / √N

t = D̄ / sD̄
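A minimal sketch of these difference-score formulas in Python, on made-up paired data (not the lecture's concrete/abstract example), with SciPy's paired test as a cross-check:

```python
# Sketch: correlated-groups t computed from difference scores.
import numpy as np
from scipy import stats

condition_1 = np.array([13, 11, 13, 12, 11, 12, 13, 14])   # hypothetical scores
condition_2 = np.array([10, 9, 13, 10, 8, 10, 11, 12])

D = condition_1 - condition_2          # difference scores
D_bar = D.mean()                       # mean of the difference scores
s_D = D.std(ddof=1)                    # SD of the difference scores
se_D = s_D / np.sqrt(len(D))           # standard error of the difference scores

t = D_bar / se_D
print(f"t({len(D) - 1}) = {t:.2f}")

print(stats.ttest_rel(condition_1, condition_2))   # should match
```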

Example
Eight people are asked to memorise two lists of 20 words each, one list containing concrete words (desk, lamp, etc.) and the other abstract words (love, hate, etc.)
They are then tested on their recall of each list

Example
Participant    Concrete    Abstract
Scores: 13, 10, 11, 19, 13, 13, 12, 15, 11, 10, 12, 10, 13, 13

t(7) = 3.85, p < .05 (one-tailed)

Effect size
For the correlated-groups t test, the formulas for Cohen's d and r² are:
d = D̄ / sD

r² = t² / (t² + df)

Assumptions for the correlated-groups t test

The assumptions are the same as those for the independent-groups t test, except that the observations are correlated

Nonparametric Tests
Nonparametric tests are used with nominal and ordinal data:
Wilcoxon rank-sum test: the equivalent of the independent-groups t test, for ordinal data
Wilcoxon matched-pairs signed-ranks test: the equivalent of the paired-groups t test, for ordinal data
Chi-square test of independence: the equivalent of the chi-square goodness-of-fit test, but comparing two samples rather than a sample and a population, for nominal data

Wilcoxon rank-sum test


(also known as the Mann-Whitney U test)
Wilcoxon rank-sum test: a nonparametric inferential test for comparing sample medians of two independent groups of scores
Sometimes it may look as if a t test could be performed, but if the distribution of the data is not normal this test should be used instead

Example
A teacher compares the number of
books read by the girls and boys in
a class
She finds that the data distribution
is skewed and so chooses to
compare the results with a
Wilcoxon rank-sum test (two
independent groups)

Example
Books read - GIRLS    Books read - BOYS
20                    10
24                    17
29                    23
33                    19
57                    22
35                    21

Ws(n1 = 6, n2 = 6) = 24, p < .05 (one-tailed)
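A quick check with SciPy's Mann-Whitney U test, using the two columns as reconstructed above; U and the slide's rank-sum statistic Ws describe the same comparison and differ only by a constant:

```python
# Sketch: Wilcoxon rank-sum / Mann-Whitney U test for the books-read example.
from scipy import stats

girls = [20, 24, 29, 33, 57, 35]
boys = [10, 17, 23, 19, 22, 21]

# One-tailed test: do the girls read more books than the boys?
result = stats.mannwhitneyu(girls, boys, alternative='greater')
print(result)

# The slide's Ws = 24 is the sum of the boys' ranks; it equals their U value
# plus n2*(n2 + 1)/2.
u_boys = len(girls) * len(boys) - result.statistic
print("Ws (boys) =", u_boys + len(boys) * (len(boys) + 1) / 2)
```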

Wilcoxon rank-sum test


Assumptions of the Wilcoxon rank-sum
test:
The data are ratio, interval, or ordinal
in scale
All of which must be converted to
ranked (ordinal) data
The underlying distribution is not
normal
The observations are independent

Wilcoxon matched-pairs signed-ranks T test


This test is similar to the correlated
(paired) groups t test except that it
is non-parametric and compares
medians rather than means

Wilcoxon matched-pairs signed-ranks T test


Assumptions of the Wilcoxon matched-pairs signed-ranks T test:
The data are ratio, interval, or ordinal
in scale
All of which must be converted to
ranked (ordinal) data
The underlying distribution is not
normal
The observations are dependent or
related

Example
Following on from the previous example, the teacher implements a programme to encourage reading and again records the number of books read by the same boys and girls, but now treats them as a single group
She hopes that the programme results in more books being read

Before programme    After programme
10                  15
17                  23
19                  20
20                  20
21                  28
22                  26
23                  24
24                  29
29                  37
33                  40
57                  50
35                  55

T(N = 11) = 8, p < .05 (one-tailed)
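A quick check with SciPy, using the before/after pairs as reconstructed above; the zero-difference pair is dropped, which is why N = 11. Note that SciPy's reported statistic may be the positive-rank sum rather than the smaller rank sum T = 8, depending on the alternative chosen:

```python
# Sketch: Wilcoxon matched-pairs signed-ranks test for the reading programme.
from scipy import stats

before = [10, 17, 19, 20, 21, 22, 23, 24, 29, 33, 57, 35]
after = [15, 23, 20, 20, 28, 26, 24, 29, 37, 40, 50, 55]

# One-tailed test: are more books read after the programme?
# zero_method='wilcox' discards the (20, 20) pair, leaving N = 11.
result = stats.wilcoxon(after, before, zero_method='wilcox', alternative='greater')
print(result)
```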

Chi-square test of independence


Chi-square (χ²) test of independence
A nonparametric inferential test used when frequency data have been collected, to determine how well an observed breakdown of people over various categories fits some expected breakdown
Formula:
χ² = Σ (O − E)² / E, where O is the observed frequency and E the expected frequency in each category

Example
Data are collected from a group of people, some of whom have been employed as babysitters. They are asked whether they have taken a First Aid course

Example
                   Taken course    Not taken
Babysitters        65              35
Non-babysitters    43              47

χ²(1, N = 190) = 5.507, p < .05
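A quick way to run this test in Python on the table above; the uncorrected Pearson χ² is shown, so the value may differ slightly from the slide's 5.507 depending on rounding or continuity correction:

```python
# Sketch: chi-square test of independence for the babysitter / First Aid data.
import numpy as np
from scipy import stats

observed = np.array([[65, 35],    # babysitters: taken course, not taken
                     [43, 47]])   # non-babysitters: taken course, not taken

# correction=False gives the uncorrected Pearson chi-square
chi2, p, dof, expected = stats.chi2_contingency(observed, correction=False)
print(f"chi2({dof}, N = {observed.sum()}) = {chi2:.3f}, p = {p:.4f}")
```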

Effect size: the phi coefficient


Phi coefficient: a measure of effect size for a chi-square test
For a 2 × 2 contingency table (as in the example), we use the phi coefficient:
φ = √(χ² / N)
where:
0.1 = small effect
0.3 = medium effect
0.5 = large effect
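A one-line follow-up computing phi from the χ² value reported in the example:

```python
# Sketch: phi coefficient for the 2 x 2 babysitter example.
import math

chi2, N = 5.507, 190
phi = math.sqrt(chi2 / N)
print(f"phi = {phi:.2f}")   # about 0.17, a small-to-medium effect
```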

Chi-square test of independence


Assumptions of the χ² test of independence
The sample is random
The observations are independent
The data are nominal

Summary
Use the appropriate statistic to analyze data
Consider whether the statistic should be parametric or nonparametric
Based on:
The type of data collected
The type of distribution to which the data
conform
Whether any parameters of the
distribution are known
Consider whether a between-subjects or
correlated-groups design has been used
