
PSYB07 Lecture 5 Gabriel Baylon October 2nd, 2013

Probability
- Be clear about which event you are talking about, and be clear about your independent and dependent variables.
- Probability is still probability regardless of its size; it is a frequency.
  o As long as a probability is above 0, the event has some chance of occurring, no matter how small the value is.

Laws of Probability
- What is probability?
  o Seeing structure and order within chaotic, chance events
  o Defines the boundary between what is mere chance and what probably is not
- Coin toss example
  o Asymptotic trend: since there is a 50/50 chance on each toss, as the number of tosses increases, the gap between the observed and expected proportions closes by progressively smaller amounts
- Probability as rationality raised to mathematical precision
  o Probability can be seen as a ratio: quotient = numerator/denominator
  o Probability describes the structure that exists within a population of events
- Relative frequencies
  o A probability equals the number of possibilities favourable for the event divided by the total number of outcomes
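A minimal simulation sketch of the asymptotic trend, assuming a fair coin; the sample sizes and the function name are arbitrary choices for illustration. As the number of tosses grows, the observed proportion of heads approaches 0.5 by progressively smaller amounts.

```python
import random

def observed_proportion(n_tosses, p_heads=0.5, seed=1):
    """Toss a coin n_tosses times and return the observed proportion of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(n_tosses))
    return heads / n_tosses

# The gap between the observed proportion and the expected 0.5 shrinks as n grows.
for n in (10, 100, 1_000, 10_000, 100_000):
    prop = observed_proportion(n)
    print(f"n={n:>6}  observed={prop:.4f}  gap from 0.5={abs(prop - 0.5):.4f}")
```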

Analytic View
- Rational, logical: the expected frequency
- If an event can occur in A ways and fail to occur in B ways, then

  P(event) = A / (A + B)

Relative Frequency View
- Empirical
- Sampling with replacement
- Probability as the limit of the relative frequency

Subjective View
- Belief in the likelihood of an event

Terminology

Independence
- Two events are independent when the occurrence of one does not influence the probability of the other occurring

Event
- That with which we are concerned
- Knowing that one event has occurred does not necessarily mean being able to predict the probability of another

Mutually Exclusive
- Two events are mutually exclusive if the occurrence of one event precludes the occurrence of the other
  o Ex: one is either a man or a woman; being one precludes the other

Exhaustive
- A set of events is exhaustive if the set includes ALL possible outcomes
  o Ex: roll of a die (1, 2, 3, 4, 5, 6)

- Probability can range from 0.0 to 1.0
- Ex: P(1) = (1/6) / (1/6 + 5/6) = 1/6 ≈ 0.167
- Probability is always the number of outcomes favourable to the event over the total number of possible outcomes

Joint Probabilities
- The probability of the co-occurrence of two or more events, if they are independent, is given by P(A, B) = P(A)P(B)
- When they are not independent, you cannot simply multiply their probabilities to get the joint probability
- Ex:
  o P(Blue) = 0.5
  o P(Blue/Male) = 0.4
    - "/" means "given that this event has occurred": the probability of having blue eyes given that the person is male
  o P(Blue and Male) = 0.2
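A minimal sketch, using the eye-colour numbers above, of how the multiplication rule is used to check independence. P(Male) = 0.5 is an assumed value chosen so the figures are consistent; it is not given in the notes.

```python
# Illustrative values from the notes: P(Blue) and P(Blue and Male); P(Male) is assumed.
p_blue = 0.5
p_male = 0.5
p_blue_and_male = 0.2

# If eye colour and sex were independent, the joint probability would equal the product.
expected_if_independent = p_blue * p_male          # 0.25
print(expected_if_independent, p_blue_and_male)    # 0.25 vs 0.2, so the events are not independent

# Conditional probability: P(Blue/Male) = P(Blue and Male) / P(Male)
p_blue_given_male = p_blue_and_male / p_male
print(p_blue_given_male)                           # 0.4, matching the value in the notes
```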

Laws of Probability (For realz)
- Disjunctive (A or B): P(A or B) = P(A) + P(B)
- Conjunctive (A and B): P(A and B) = P(A) * P(B)

Conjunctive: Multiplication Law
- Numerator is the number of favourable events; denominator is the total number of possible events
- Restriction: all events must be independent

Disjunctive: Addition Law
- Events must be mutually exclusive
- The probability of tossing a 1 or tossing a 3 is equal to the sum of the probabilities of the two separate events
- When events are not mutually exclusive, subtract the overlap:
  P(H/A or H/B) = P(H/A) + P(H/B) - P(H/A and H/B)

P(X heads in a row) = P(head)^X
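A minimal sketch of both laws, using a fair die and a fair coin; the number of tosses (5) is an arbitrary choice for illustration.

```python
from fractions import Fraction

# Addition law (mutually exclusive events): P(roll a 1 or roll a 3) on a fair die
p_one = Fraction(1, 6)
p_three = Fraction(1, 6)
print(p_one + p_three)          # 1/3

# Multiplication law (independent events): P(X heads in a row) = P(head)^X
p_head = Fraction(1, 2)
x = 5
print(p_head ** x)              # 1/32
```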

Lecture 6 October 9th, 2013 Gabriel Baylon

Bayes' Theorem
- An application of the laws of probability: P(A if B) = P(A and B) / P(B)
  o Joint probabilities always get smaller: P(A and B) can never be larger than P(A) or P(B) alone
- What do we know?
  o P(A) = probability that an undergraduate has the disease = 0.001
  o P(B/A) = probability of testing positive if you have the disease = 0.99
  o P(B/NOT A) = probability of a false positive = 0.02
- What do we want to know?
  o P(A/B) = probability of having the disease if you test positive = ?
- Begin with the multiplicative law of probability *Look up chart on lecture slides*
- Numbers don't mean anything by themselves
  o We forget to take base rates into account
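A minimal sketch of the calculation, using the numbers above; the variable names are only for illustration.

```python
# Values from the notes
p_disease = 0.001            # P(A): base rate of the disease
p_pos_given_disease = 0.99   # P(B/A): true positive rate
p_pos_given_healthy = 0.02   # P(B/NOT A): false positive rate

# Total probability of testing positive: P(B) = P(B/A)P(A) + P(B/NOT A)P(NOT A)
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(A/B) = P(B/A)P(A) / P(B)
p_disease_given_positive = p_pos_given_disease * p_disease / p_positive
print(round(p_disease_given_positive, 4))  # ~0.0472: under 5%, because the base rate is so low
```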

Testing a Hypothesis
- Testing a treatment
  o Descriptive statistics cannot determine whether differences are due to chance. A sampling error occurs when apparent differences arise by chance alone
- Example of differences due to chance alone:
  o μ1 = 100, μ2 = 100
  o ȳ1 = 107, ȳ2 = 117
- Sampling distribution: describes the amount of sample-to-sample variability to expect for a given statistic
- Standard error of the mean: s_ȳ = s / √n
  o n = number of observations
  o s = standard deviation of the scores

  o s_ȳ = standard deviation of the mean (the standard error)

Simplifying Hypothesis Testing
1. Develop a research (experimental) hypothesis
2. Obtain a sample (or samples) of observations
3. Construct a null hypothesis, e.g. ȳ = μ, ȳ1 - ȳ2 = 0, μ1 = μ2 = μ

4. Obtain an appropriate sampling distribution
5. Reject or fail to reject the null hypothesis

Null Hypothesis
- Most statistical tests test the null hypothesis
  o Either reject or fail to reject the null based on chance alone
- Assume the samples come from the same population and that the two sample means (even though they may be different) are estimating the same value (the population mean)
  o Why? Method of contradiction: we can only demonstrate that a hypothesis is false
- If we thought that the IQ boosting programme worked, what would we actually test? What value of IQ would we test? (A worked sketch of the five steps follows below.)
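A minimal sketch of the five steps for the IQ example, assuming the conventional population values μ = 100 and σ = 15 and a made-up sample; these numbers are illustrative, not from the lecture.

```python
import math
from statistics import NormalDist

# Assumed population values for IQ (illustrative): mean 100, SD 15
mu, sigma = 100, 15

# Step 2: a hypothetical sample of post-programme IQ scores (made up for illustration)
sample = [104, 99, 112, 108, 101, 95, 110, 107, 103, 106]
n = len(sample)
ybar = sum(sample) / n

# Step 3: null hypothesis H0: the programme has no effect, so the sample mean estimates 100
# Step 4: the sampling distribution of the mean has standard error sigma / sqrt(n)
se = sigma / math.sqrt(n)
z = (ybar - mu) / se

# Step 5: reject H0 if the two-tailed p-value falls below alpha = 0.05
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"ybar={ybar:.1f}, z={z:.2f}, p={p_value:.4f}")
print("reject H0" if p_value < 0.05 else "fail to reject H0")
```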

Rejection or Non-Rejection of the Null Hypothesis
- If we reject, we say that we have evidence for our experimental hypothesis, e.g. that our IQ boosting programme works
- If we fail to reject, we DO NOT prove the null to be true

Failure to reject the null hypothesis ≠ proving the null hypothesis true

Type I and Type II Errors


Type I: the null hypothesis is true, but we reject it. The probability of a Type I error is set at 0.05 and is called alpha (α)

Type II: the null hypothesis is false, but we fail to reject it. The probability of a Type II error is called beta (β). *Look at lecture slide for chart!*

                    Reject the Null              Fail to Reject
H0 is True          Type I Error (α)             Correct (1 - α)
H0 is False         Correct: Power = (1 - β)     Type II Error (β)

- The bigger the standard deviation, the more difficult it is to reject the null hypothesis
- What affects power?
  o Anything that increases the z-score
  o We only "have power" when the null hypothesis is rejected
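A minimal sketch of how power can be computed for a one-tailed z-test; the effect size, σ, n, and α used here are assumptions for illustration, not values from the lecture.

```python
import math
from statistics import NormalDist

# Illustrative assumptions: true mean 105 vs. null mean 100, sigma = 15, n = 25, alpha = 0.05
mu0, mu_true, sigma, n, alpha = 100, 105, 15, 25, 0.05
se = sigma / math.sqrt(n)

# Critical sample mean for a one-tailed test at alpha
z_crit = NormalDist().inv_cdf(1 - alpha)
cutoff = mu0 + z_crit * se

# Power = P(reject H0 | H0 is false) = P(ybar > cutoff when the true mean is mu_true)
beta = NormalDist(mu_true, se).cdf(cutoff)   # Type II error probability
power = 1 - beta
print(f"beta={beta:.3f}, power={power:.3f}")
```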

Lecture 7 October 23rd, 2013 Gabriel Baylon

T-test
Sampling Distribution of the Mean
- Central Limit Theorem
  o Given the population mean and variance, the sampling distribution of the mean will have:
    - a mean: μ_x̄ = μ
    - a variance: σ²_x̄ = σ² / N
    - a standard error of the mean: σ_x̄ = σ / √N
  o As N increases, the shape of the sampling distribution becomes normal, whatever the shape of the population
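A minimal simulation sketch of the Central Limit Theorem, assuming a skewed (exponential) population; the population, sample size, and number of samples are arbitrary choices. The means of repeated samples cluster around the population mean with spread close to σ/√N.

```python
import random
import statistics

random.seed(0)
N = 30            # size of each sample
n_samples = 5000  # number of repeated samples

# A skewed (exponential) population with mean 1 and SD 1
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(N))
    for _ in range(n_samples)
]

print(f"mean of sample means: {statistics.mean(sample_means):.3f}   (population mean = 1)")
print(f"SD of sample means:   {statistics.stdev(sample_means):.3f}   (sigma/sqrt(N) = {1 / N ** 0.5:.3f})")
```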

Testing a Hypothesis: σ Known
- We could test a hypothesis concerning a population and a single score by z = (x - μ) / σ
- As N increases, efficiency increases; as efficiency increases, the variability in the means due to chance alone decreases

Factors Affecting the Magnitude of t & the Decision
1. Difference between ȳ (sample mean) and μ (population mean)
   a. The larger the numerator, the larger the t value
2. Size of s²
   a. As s² decreases, t increases
3. Size of N
   a. As N increases, the denominator decreases, so t increases
4. α level
5. One-tailed or two-tailed test

Confidence Limits on the Mean
- Point estimate
  o A specific value taken as the estimator of a parameter

What does the t-value tell you?
- It tells you whether to reject the null hypothesis or not, by comparing the sample's t value against the significance cut-off (the critical value)
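A minimal sketch of a one-sample t test with confidence limits on the mean, using SciPy; the sample data and the hypothesized mean of 100 are made up for illustration.

```python
import math
import statistics
from scipy import stats

# Hypothetical sample of scores (made up for illustration)
sample = [104, 99, 112, 108, 101, 95, 110, 107, 103, 106]
mu0 = 100   # hypothesized population mean under H0

# One-sample t test: t = (ybar - mu0) / (s / sqrt(n))
t_stat, p_value = stats.ttest_1samp(sample, mu0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% confidence limits on the mean: ybar +/- t_crit * s / sqrt(n)
n = len(sample)
ybar = statistics.mean(sample)
s = statistics.stdev(sample)
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * s / math.sqrt(n)
print(f"95% CI: ({ybar - half_width:.1f}, {ybar + half_width:.1f})")
```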

- Rule of thumb: bigger variance / smaller variance < 4 for the variances to be considered homogeneous
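A quick sketch of that rule of thumb with made-up samples; the threshold of 4 is the one given in the notes.

```python
import statistics

# Made-up samples from two conditions
group_a = [12, 15, 14, 10, 13, 16, 11]
group_b = [22, 30, 18, 27, 25, 35, 20]

var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
ratio = max(var_a, var_b) / min(var_a, var_b)
print(f"variance ratio = {ratio:.2f}")
print("variances considered homogeneous" if ratio < 4 else "variances not homogeneous")
```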

Regression/Correlation
- Are the two conditions different?
  o In both cases, y is a random variable beyond the control of the experimenter
  o In the case of correlation, x is also a random variable
  o In the case of regression, x is treated as a fixed variable
- Regression: we wish to predict the value of y on the basis of the value of x
- Correlation: we wish to express the degree of the relation between x and y
- General Linear Model
- Covariance: a number reflecting the degree to which two variables vary or change in value together
  o *Get the equation from the lecture slides*
  o The closer the correlation (the covariance standardized by the two standard deviations) is to ±1, the more strongly the two variables are related
  o USED TO TEST FOR LINEAR RELATIONSHIPS
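A minimal sketch, with made-up x and y values, showing covariance, the Pearson correlation, and the least-squares regression line. The covariance formula used here (sum of cross-products over N - 1) is the usual sample definition and may differ in notation from the slides.

```python
import statistics

# Made-up data for illustration
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.8, 8.2, 8.9]

n = len(x)
xbar, ybar = statistics.mean(x), statistics.mean(y)

# Sample covariance: how much x and y vary together
cov_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)

# Pearson correlation: covariance standardized by the two standard deviations
r = cov_xy / (statistics.stdev(x) * statistics.stdev(y))

# Least-squares regression of y on x: y_hat = a + b*x
b = cov_xy / statistics.variance(x)
a = ybar - b * xbar

print(f"cov = {cov_xy:.3f}, r = {r:.3f}, y_hat = {a:.2f} + {b:.2f}x")
```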
