
# Student's t-test

A t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It can be used to determine whether two sets of data are significantly different from each other, and is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. When the scaling term is unknown and is replaced by an estimate based on the data, the test statistic (under certain conditions) follows a Student's t-distribution.
## History
The t-statistic was introduced in 1908 by William Sealy Gosset, a chemist working for
the Guinness brewery in Dublin, Ireland ("Student" was his pen name). Gosset had been hired
due to Claude Guinness's policy of recruiting the best graduates from Oxford and Cambridge to
apply biochemistry and statistics to Guinness's industrial processes. Gosset devised the t-test as
an economical way to monitor the quality of stout. Gosset's work on the t-test was submitted to and
accepted by the journal Biometrika, and published in 1908.
## Definition
A t-test's statistical significance indicates whether the difference between two groups'
averages most likely reflects a real difference in the populations from which the groups were
sampled.

The t-test assesses whether the means of two groups are statistically different from each other.
This analysis is appropriate whenever you want to compare the means of two groups, and
especially appropriate as the analysis for the posttest-only two-group randomized experimental
design.
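As a minimal sketch of what the test computes, the independent two-sample t statistic with pooled variance can be written in plain Python. The treated and control scores below are made up purely for illustration:

```python
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    # Unbiased sample variance (divides by n - 1)
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def two_sample_t(a, b):
    """Independent two-sample t statistic with pooled variance
    (assumes roughly equal variances in the two groups)."""
    na, nb = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * sample_var(a) + (nb - 1) * sample_var(b)) / (na + nb - 2)
    # Difference between group means, scaled by its standard error
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical posttest scores for illustration only
treated = [5.1, 4.9, 5.6, 5.3, 5.0]
control = [4.2, 4.4, 4.0, 4.5, 4.1]
t = two_sample_t(treated, control)
```

The resulting statistic would be compared against a Student's t-distribution with `na + nb - 2` degrees of freedom; in practice a statistics library handles that lookup.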

The figure shows the distributions for the treated (blue) and control (green) groups in a study.
Strictly speaking, it shows idealized distributions; the actual distributions would usually be
depicted with a histogram or bar graph. The figure indicates where the control and treatment
group means are located. The question the t-test addresses is whether those means are
statistically different.
The first thing to notice about the three situations is that the difference between the means is the
same in all three. But you should also notice that the three situations don't look the same: they
tell very different stories. The top example shows a case with moderate variability of scores
within each group. The second situation shows the high-variability case, and the third shows the
case with low variability. Clearly, we would conclude that the two groups appear most different or
distinct in the bottom, low-variability case. Why? Because there is relatively little overlap
between the two bell-shaped curves. In the high-variability case, the group difference appears
least striking because the two bell-shaped distributions overlap so much.
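This effect of within-group variability can be illustrated numerically. The sketch below (with made-up scores) keeps the difference between the group means fixed while scaling each score's spread around its group mean, and the t statistic shrinks as variability grows:

```python
from math import sqrt

def t_stat(a, b):
    # Independent two-sample t with pooled variance (equal-variance assumption)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    sp2 = ((len(a) - 1) * va + (len(b) - 1) * vb) / (len(a) + len(b) - 2)
    return (ma - mb) / sqrt(sp2 * (1 / len(a) + 1 / len(b)))

def scale_spread(xs, k):
    # Scale each score's deviation from the group mean by k:
    # the group mean stays the same, only the variability changes
    m = sum(xs) / len(xs)
    return [m + k * (x - m) for x in xs]

# Hypothetical scores: treated mean 50, control mean 45 in every scenario
treated = [52.0, 48.0, 55.0, 45.0, 50.0]
control = [47.0, 43.0, 50.0, 40.0, 45.0]

t_moderate = t_stat(treated, control)
t_high = t_stat(scale_spread(treated, 3.0), scale_spread(control, 3.0))
t_low = t_stat(scale_spread(treated, 0.5), scale_spread(control, 0.5))
# Same mean difference throughout, yet t_high < t_moderate < t_low
```

Because the mean difference is identical in all three scenarios, the ordering of the t values comes entirely from the overlap of the distributions, just as in the figure.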