5. Drawing a Conclusion
Q35
Q37.
A test of significance is a formal procedure for
comparing observed data with a claim (also
called a hypothesis) whose truth is being
assessed.
• The claim is a statement about a parameter,
such as the population proportion p or the
population mean µ.
• The results of a significance test are expressed
in terms of a probability that measures how well
the data and the claim agree.
A significance test starts with a careful
statement of the claims being compared. The
claim tested by a statistical test is called the null
hypothesis (H0). The test is designed to assess
the strength of the evidence against the null
hypothesis. Often the null hypothesis is a
statement of “no difference.” The claim about
the population for which evidence is being
sought is the alternative hypothesis (Ha). The
alternative is one-sided if it states that a
parameter is larger or smaller than the null
hypothesis value. It is two-sided if it states that
the parameter is different from the null value (it
could be either smaller or larger).
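The one-sided and two-sided alternatives differ only in which tail probabilities count as evidence against H0. As a minimal sketch, assuming a standard normal test statistic z (an illustrative choice, not a method fixed by these notes), the three p-values can be computed with the standard library:

```python
import math

def p_values(z):
    """p-values for a standard normal test statistic z (illustrative example)."""
    upper = 0.5 * math.erfc(z / math.sqrt(2))      # one-sided, Ha: parameter > null value
    lower = 0.5 * math.erfc(-z / math.sqrt(2))     # one-sided, Ha: parameter < null value
    two_sided = math.erfc(abs(z) / math.sqrt(2))   # two-sided, Ha: parameter != null value
    return upper, lower, two_sided

upper, lower, two_sided = p_values(2.0)
print(upper, lower, two_sided)
```

Note that the two one-sided p-values always sum to 1, and the two-sided p-value is twice the smaller tail.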
When conducting a significance test, the goal is
to provide evidence to reject the null hypothesis.
If the evidence is strong enough to reject the null
hypothesis, then the alternative hypothesis can
automatically be accepted. However, if the
evidence is not strong enough, researchers fail
to reject the null hypothesis.
It is important to know how small the p-value
needs to be in order to reject the null
hypothesis.
(DIAGRAM)
The cut-off value for p is called alpha, or the
significance level. The researcher establishes
the value of alpha prior to beginning the
statistical analysis. In social sciences, alpha is
typically set at 0.05 (or 5%). This represents the
amount of acceptable error, or the probability of
rejecting a null hypothesis that is in fact true. It
is also called the probability of a Type I error.
Once the alpha level has been selected and the
p-value has been computed: if the p-value is
larger than alpha, fail to reject the null
hypothesis; if the p-value is smaller than alpha,
reject the null hypothesis and accept the
alternative hypothesis.
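This decision rule can be sketched in a few lines. The example below assumes a one-sample z-test with known sigma; the sample mean, n, and sigma values are purely hypothetical:

```python
import math

def z_test_p_value(sample_mean, mu0, sigma, n):
    """Two-sided p-value for a one-sample z-test with known sigma."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

def decide(p_value, alpha=0.05):
    """Decision rule: reject H0 when the p-value falls below alpha."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

# Hypothetical numbers: n = 100 observations with mean 52,
# testing H0: mu = 50 against Ha: mu != 50, with sigma = 10.
p = z_test_p_value(sample_mean=52.0, mu0=50.0, sigma=10.0, n=100)
print(round(p, 4), decide(p))  # → 0.0455 reject H0
```

With alpha set at 0.05 in advance, the computed p-value of about 0.0455 falls just below the cut-off, so H0 is rejected.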
Q38.
A key part of your dissertation or thesis is the
methodology. This is not quite the same as
‘methods’.
Q39.
Definition
Statistical background
In statistical test theory, the notion of a
statistical error is an integral part of hypothesis
testing. The test involves choosing between two
competing propositions: the null hypothesis,
denoted H0, and the alternative hypothesis,
denoted H1. This is conceptually similar to the
judgement in a court trial. The null hypothesis
corresponds to the position of the defendant:
just as the defendant is presumed innocent until
proven guilty, so the null hypothesis is presumed
to be true until the data provide convincing
evidence against it. The alternative hypothesis
corresponds to the position against the
defendant.
If the result of the test corresponds with reality,
then a correct decision has been made. However,
if the result of the test does not correspond with
reality, then an error has occurred. There are
two situations in which the decision is wrong.
The null hypothesis may be true, whereas we
reject H0. On the other hand, the alternative
hypothesis H1 may be true, whereas we do not
reject H0. Two types of error are distinguished:
Type I error and type II error.[3]
Type I error
The first kind of error is the rejection of a true
null hypothesis as the result of a test procedure.
This kind of error is called a type I error and is
sometimes called an error of the first kind.
In terms of the courtroom example, a type I error
corresponds to convicting an innocent
defendant.
Type II error
The second kind of error is the failure to reject a
false null hypothesis as the result of a test
procedure. This sort of error is called a type II
error and is also referred to as an error of the
second kind.
In terms of the courtroom example, a type II
error corresponds to acquitting a guilty
defendant.
A perfect test would have zero false positives
and zero false negatives. However, statistics
deals in probabilities, and it cannot be known for
certain whether statistical conclusions are
correct. Whenever there is uncertainty, there is
the possibility of making an error. Because of
this probabilistic nature, every statistical
hypothesis test has some probability of making
type I and type II errors.[6]
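These error probabilities can be checked empirically: when the null hypothesis is actually true, a test conducted at alpha = 0.05 should commit a Type I error (a false positive) in roughly 5% of repeated trials. A minimal simulation sketch, using a one-sample z-test and purely illustrative parameters:

```python
import math
import random
import statistics

def z_test_p_value(sample, mu0, sigma):
    """Two-sided p-value for a one-sample z-test with known sigma."""
    z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(len(sample)))
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)   # fixed seed so the sketch is reproducible
alpha = 0.05
trials = 2000

# H0 is true in every trial: the data really come from N(0, 1),
# so each rejection counted here is a Type I error.
false_positives = sum(
    z_test_p_value([random.gauss(0, 1) for _ in range(30)], mu0=0.0, sigma=1.0) < alpha
    for _ in range(trials)
)
rate = false_positives / trials
print(rate)  # long-run rate close to alpha = 0.05
```

The observed rejection rate hovers near alpha, illustrating that the significance level is exactly the Type I error probability the researcher agreed to tolerate.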
Reliability vs validity
What is reliability?
Reliability refers to how consistently a method
measures something. If the same result can be
consistently achieved by using the same
methods under the same circumstances, the
measurement is considered reliable.
What is validity?
Validity refers to how accurately a method
measures what it is intended to measure. If
research has high validity that means it produces
results that correspond to real properties,
characteristics, and variations in the physical or
social world.