
Introduction to Probability Theory

K. Suresh Kumar
Department of Mathematics
Indian Institute of Technology Bombay

August 19, 2017



LECTURES 9 - 10

0.1 Notion of Statistical Independence


The concept of statistical independence (independence, for short) is fundamental
in probability. We often see statements like 'toss a coin repeatedly
in an independent fashion' or 'independent trials of a random experiment',
etc. What do such statements mean, or how does one give a precise
meaning to the word 'independence' appearing in those sentences? To this
end, let us look at the definition of the Cartesian product from a different angle.
Observe that, given two sets A and B with n and m elements respectively, the
Cartesian product A × B is obtained through independent selection, in the
sense that the choice of a ∈ A does not play any role in selecting b ∈ B. Also
note that #(A × B) = #A · #B. Put together, this implies the following:
'independent selections multiply the number of possibilities'! This is indeed
the essence of the following example.

Example 0.1 Toss an unbiased coin twice. Here Ω = {H, T } × {H, T } =
{(H, H), (H, T ), (T, H), (T, T )}; here we assumed selection without dependence! Now

P {(H, H)} = 1/4 = P {H} × P {H},

where the RHS is the product of the probabilities corresponding to the first and
the second toss separately; i.e., 'independent selections/trials lead to the product
rule for probabilities'.

The above discussion suggests that one way of formalizing the notion of
independence is through the product rule for probabilities, as follows.
Definition 4.2 (Independence) Two events A, B are said to be independent
if
P (AB) = P (A) P (B) .

Remark 0.1 If P (A) > 0, then A and B are independent (i.e. satisfy the product
rule for probabilities) iff P (B|A) = P (B). This confirms the intuition
behind the notion of independence, “the occurrence of one event doesn't have
any effect on the probability of the occurrence of the other”; i.e., the product
rule for probabilities is a good model for the notion of independence.

Example 0.2 Define the probability space (Ω, F, P ) as follows.

Ω = {HH, HT, T H, T T }, F = P(Ω)



and
P ({ω}) = 1/4, ω ∈ Ω.
Consider the events A = {HH, HT }, B = {HH, T H} and C = {HT, T T }.
Then A and B are independent and B and C are dependent.
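For a finite sample space such as this one, the product rule can be verified by direct enumeration. The following is a minimal Python sketch (the names omega, A, B, C, P are ours, not part of the notes), assuming all four outcomes are equally likely:

omega = {'HH', 'HT', 'TH', 'TT'}
A = {'HH', 'HT'}   # first toss is H
B = {'HH', 'TH'}   # second toss is H
C = {'HT', 'TT'}   # second toss is T

def P(E):
    # uniform probability on omega: P(E) = #E / #omega
    return len(E) / len(omega)

print(P(A & B), P(A) * P(B))   # 0.25 0.25 -> product rule holds: A, B independent
print(P(B & C), P(B) * P(C))   # 0.0  0.25 -> product rule fails: B, C dependent

Note that B ∩ C = ∅ while P (B), P (C) > 0, so B and C cannot be independent.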

The notion of independence defined above can be extended to independence
of three or more events in the following manner.

Definition 4.3 (Independence of three events). The events A, B, C are
independent (mutually) if
(i) A, B ; B, C and C, A are independent and
(ii) P (ABC) = P (A) P (B) P (C) .

If the events A, B, C satisfy only (i), then A, B, C are said to be pairwise
independent.

Example 0.3 Define Ω = {1, 2, 3, 4}, F = P(Ω) and


P ({i}) = 1/4, i = 1, 2, 3, 4 .
Consider the events
A = {1, 2}, B = {1, 3}, C = {1, 4} .
Then A, B, C are pairwise independent but not (mutually) independent. This is
one type of dependency.
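The pairwise and triple product rules in this example can be checked with a few lines of Python (a sketch in the same spirit as the one after Example 0.2; the names are ours):

omega = {1, 2, 3, 4}
A, B, C = {1, 2}, {1, 3}, {1, 4}

def P(E):
    return len(E) / len(omega)   # uniform probability, each point has mass 1/4

print(P(A & B) == P(A) * P(B), P(B & C) == P(B) * P(C), P(A & C) == P(A) * P(C))
# True True True: pairwise independent
print(P(A & B & C), P(A) * P(B) * P(C))
# 0.25 vs 0.125: the triple product rule fails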

Example 0.4 Define (Ω, F, P ) as follows. Ω = {1, 2, 3, 4, 5, 6, 7, 8}, F = P(Ω) and
P ({i}) = 1/8, i = 1, . . . , 8 .
Let
A = {1, 2, 3, 4}, B = {3, 4, 5, 6}, C = {1, 3, 5, 7} .
Then A, B, C are independent.
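Again, a short sketch (with the same uniform-measure helper idea and our own names) confirms that all the required product rules hold here:

omega = set(range(1, 9))
A, B, C = {1, 2, 3, 4}, {3, 4, 5, 6}, {1, 3, 5, 7}

def P(E):
    return len(E) / len(omega)   # each point has probability 1/8

pairwise = (P(A & B) == P(A) * P(B) and
            P(A & C) == P(A) * P(C) and
            P(B & C) == P(B) * P(C))
triple = P(A & B & C) == P(A) * P(B) * P(C)
print(pairwise, triple)   # True True: A, B, C are (mutually) independent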

Example 0.5 Let Ω be given by the ’tossing an unbiased coin thrice’ and
all sample points are equally likely. Consider the events
A = {HHH, HHT, HT H, HT T },
B = {HHT, HHH, T HT, T HH},
C = {HT H, HHH, T T H, T T T }.

Now see that (exercise)

P (ABC) = P (A)P (B)P (C), but P (BC) ≠ P (B)P (C).

This is another type of dependency.
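The exercise can also be verified numerically; the sketch below (names are ours) shows the triple product rule holding while the pair B, C fails it:

omega = {'HHH', 'HHT', 'HTH', 'HTT', 'THH', 'THT', 'TTH', 'TTT'}
A = {'HHH', 'HHT', 'HTH', 'HTT'}
B = {'HHT', 'HHH', 'THT', 'THH'}
C = {'HTH', 'HHH', 'TTH', 'TTT'}

def P(E):
    return len(E) / len(omega)   # equally likely sample points

print(P(A & B & C), P(A) * P(B) * P(C))   # 0.125 0.125 -> triple product rule holds
print(P(B & C), P(B) * P(C))              # 0.125 0.25  -> B and C are dependent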

Definition 4.4 The events {A1 , A2 , . . . , An } ⊆ F are said to be independent
if for any distinct Ai1 , . . . , Aim , m ≥ 2, from {A1 , A2 , . . . , An }

P (Ai1 . . . Aim ) = P (Ai1 ) . . . P (Aim ) .

Now we extend the definition of independence to a denumerable (countably
infinite) collection of events, because we sometimes consider random experiments
like 'toss a coin repeatedly, countably infinitely many times, in an independent way'.
Definition 4.5 The events {A1 , A2 , · · · } ⊆ F are said to be independent
if any finite subcollection of events from {A1 , A2 , · · · } is independent, i.e.
for any m ≥ 1 and {i1 , i2 , · · · , im } ⊆ N, the events Ai1 , Ai2 , · · · , Aim are independent.

In the above definition, we do not include a product rule for probabilities
for the infinite collection, because in practice independent trials are
performed successively and hence it is only meaningful/practical to talk
about independence among finite collections (note that, given n independent trials,
one can perform a further trial which is independent of the trials that have already occurred);
moreover, this is sufficient in view of the continuity property of probabilities.

Example 0.6 Consider the random experiment of 'tossing an unbiased coin
countably infinitely many times', i.e. any finite collection of tosses is independent.
Let An denote the event that the nth toss results in an H, for n ≥ 1. Then
{A1 , A2 , · · · } are independent (exercise - would like to see how you write down
the solution!).

One can define conditional independence using the product rule for probabilities
with conditional probabilities.
Definition 4.6 (Conditional independence) We say that two events A, B ∈
F are conditionally independent given the event C ∈ F, P (C) > 0, if

P (AB|C) = P (A|C) P (B|C).

Example 0.7 There are two coins: one is a 1/6-coin and another is a 5/6-coin
(we use 'p-coin' to denote a coin with probability p of getting H). The experiment
is the following. Pick a coin at random and toss the selected coin twice independently.
Let A denote the event that the first toss gives an H, B the event that the second
toss gives an H, and C the event that the first coin is picked. Now A
and B are dependent, but A and B are conditionally independent given C (exercise).
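The claim can be checked by direct computation. Below is a minimal Python sketch (all names are ours) that enumerates the outcomes (coin, toss 1, toss 2) with their probabilities:

from itertools import product

# Coin 1 is the 1/6-coin, coin 2 is the 5/6-coin.
def p_heads(coin):
    return 1/6 if coin == 1 else 5/6

def weight(outcome):
    # Probability of one outcome (coin, toss1, toss2): pick a coin with
    # probability 1/2, then two independent tosses of that coin.
    coin, t1, t2 = outcome
    p = p_heads(coin)
    w = 0.5
    w *= p if t1 == 'H' else 1 - p
    w *= p if t2 == 'H' else 1 - p
    return w

outcomes = list(product([1, 2], 'HT', 'HT'))

def P(pred):
    return sum(weight(o) for o in outcomes if pred(o))

A = lambda o: o[1] == 'H'   # first toss is H
B = lambda o: o[2] == 'H'   # second toss is H
C = lambda o: o[0] == 1     # the 1/6-coin was picked

print(P(lambda o: A(o) and B(o)), P(A) * P(B))   # 0.3611... vs 0.25 -> A, B dependent
pC = P(C)
lhs = P(lambda o: A(o) and B(o) and C(o)) / pC
rhs = (P(lambda o: A(o) and C(o)) / pC) * (P(lambda o: B(o) and C(o)) / pC)
print(lhs, rhs)   # both equal 1/36 -> A, B conditionally independent given C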

Using the notion of independence of events, one can define the indepen-
dence of σ-fields as follows.

Definition 4.7 Two σ-fields F1 , F2 of subsets of Ω are said to be independent
if for any A ∈ F1 , B ∈ F2

P (AB) = P (A) P (B) .

Definition 4.8 The σ-fields F1 , F2 , . . . , Fn of subsets of Ω are said to be
independent if A1 , A2 , . . . , An are independent whenever Ai ∈ Fi , i = 1, 2, . . . , n.

One can go a step further to define independence of any family of σ-fields
as follows.

Definition 4.9 A family of σ-fields {Fi |i ∈ I}, where I is an index set, are
independent if for any finite subset {α1 , · · · , αn } ⊆ I, the σ-fields Fα1 , · · · , Fαn
are independent.
In particular, when I = N, we get the definition of independence for a count-
ably infinite collection.

Finally, one can introduce the notion of independence of random variables
through the corresponding generated σ-fields. It is natural to define
independence of random variables using the corresponding σ-fields, since
the σ-field generated by a random variable carries the collective information
about the random variable.

Definition 4.10 Two random variables X and Y are independent if σ(X)
and σ(Y ) are independent.

Definition 4.11 A family of random variables {Xi | i ∈ I} are independent
if {σ(Xi ) | i ∈ I} are independent.

Example 0.8 Events A, B are independent iff σ(A) and σ(B) are independent
σ-fields iff the random variables IA and IB are independent.

Proof. Suppose A and B are independent. Consider


P (AB c ) = P (A) − P (AB) = P (A)(1 − P (B)) = P (A) P (B c ) .
Hence A and B c are independent. Interchanging the roles of A and B, we get that
Ac and B are independent. Now, Ac and B being independent implies that Ac
and B c are independent.
Since
σ(A) = {∅, A, Ac , Ω}, σ(B) = {∅, B, B c , Ω} ,
it follows that σ(A) and σ(B) are independent. The converse statement is obvious.
The second part follows from
σ(A) = σ(IA ), σ(B) = σ(IB ) .
Example 0.9 The trivial σ-field F0 = {∅, Ω} is independent of any σ-field
of subsets of Ω.
Example 0.10 Define a probability space (Ω, F, P ) as follows:
Ω = {(a1 , a2 , . . ., an ) | ai = 0, 1; i = 1, . . . , n}, F = P(Ω)
and
P ({(a1 , a2 , . . ., an )}) = 1/2^n for all (a1 , . . ., an ) ∈ Ω .
For i = 1, 2, · · ·, n, set
Ai = {(a1 , a2 , · · ·, an )|ai = 1} .
Then A1 , A2 , · · ·, An are independent.

This can be seen from the following.


P (Ai ) = 2^{n−1}/2^n = 1/2 for all i = 1, 2, · · ·, n .
Note that
Ai1 . . . Aim = {(a1 , . . . , an ) | ai1 = · · · = aim = 1} .
Hence
P (Ai1 . . . Aim ) = 2^{n−m}/2^n = 1/2^m for all distinct i1 , . . . , im .
Also
P (Ai1 ) · · · P (Aim ) = (1/2) · · · (1/2) (m times) = 1/2^m .

Example 0.11 Let (Ω, F, P ) be given by

Ω = {HH, HT, T H, T T }, F = P(Ω)

and
P ({ω}) = 1/4, ω ∈ Ω.
Define two random variables X1 , X2 as

X1 (HH) = X1 (HT ) = 1, X1 (T H) = X1 (T T ) = 0 ,
X2 (HH) = X2 (T H) = 1, X2 (HT ) = X2 (T T ) = 0 .

Then X1 and X2 are independent. Here note that

σ(X1 ) = {∅, {HH, HT }, {T H, T T }, Ω}

and
σ(X2 ) = {∅, {HH, T H}, {HT, T T }, Ω} .
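Independence can be confirmed by comparing the joint distribution of (X1 , X2 ) with the product of the marginals; here is a small Python sketch (the names are ours):

omega = ['HH', 'HT', 'TH', 'TT']                 # each outcome has probability 1/4
X1 = {'HH': 1, 'HT': 1, 'TH': 0, 'TT': 0}        # X1 = indicator of 'first toss is H'
X2 = {'HH': 1, 'TH': 1, 'HT': 0, 'TT': 0}        # X2 = indicator of 'second toss is H'

def P(pred):
    return sum(1 for w in omega if pred(w)) / len(omega)

for x in (0, 1):
    for y in (0, 1):
        joint = P(lambda w: X1[w] == x and X2[w] == y)
        marg = P(lambda w: X1[w] == x) * P(lambda w: X2[w] == y)
        print((x, y), joint, marg)   # joint == marg == 0.25 in every case

Checking the product rule on these four pairs suffices, since every event in σ(X1 ) (respectively σ(X2 )) is a disjoint union of the sets {X1 = x} (respectively {X2 = y}).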
