
Telecommunications Networks

Review of Probability
•  ‘RANDOM’ means unpredictable.
•  If the receiver at the end of the channel knew in advance the message output from the originating source, there would be no need for communication.
•  So there is randomness in the message source.
•  Noise waveforms that accompany the signal are also unpredictable.
Random Experiments (1)
• A random experiment is any situation, process or phenomenon whose
outcome is not known and not predictable in advance, i.e., subject to
random or chance effects
• A trial is a single performance or run of the experiment
• The result of the trial is called an outcome or sample point
• The sample space S of an experiment is the set of all possible
outcomes
• An event E is a subset of the sample space
• A single outcome is called a simple event (cf. compound event)
• Probability theory is closely related to set theory
Random Experiments (2)
• Φ = null event, i.e., event comprising no sample points
• If (A ∩ B) = Φ, A and B are said to be mutually exclusive or disjoint events
– In other words: events are mutually exclusive if the occurrence of one
event precludes the occurrence of the other event (cf. non-mutually
exclusive events)
– Example: Rolling a die (A = get even number, B = get odd number versus
C = divisible by 2, D = divisible by 3)
• Aᶜ = complement of event A, i.e., all points in the sample space which are not in A
• Since the experiment must result in some outcome, Sᶜ = Φ
Definition of Probability (1)
• If the sample space S of a random experiment consists of a finite number of
sample points that are equally likely to occur, then the probability P(A) of
an event A is defined as:

P(A) = (number of sample points in A) / (number of sample points in S)

It follows that
P(S) = 1
But what if the outcomes are neither finite in number nor
equally likely?
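The counting definition above can be sketched directly in code; a minimal illustration (the fair-die event is an assumed example, not from the slides):

```python
from fractions import Fraction

# Sample space for one roll of a fair die: six equally likely outcomes.
S = {1, 2, 3, 4, 5, 6}

# Event A: an even number is rolled.
A = {s for s in S if s % 2 == 0}

# P(A) = (number of sample points in A) / (number of sample points in S)
p_A = Fraction(len(A), len(S))
print(p_A)  # 1/2

# Sanity check of the corollary: P(S) = 1.
p_S = Fraction(len(S), len(S))
print(p_S)  # 1
```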
Definition of Probability (2)
• Probability can be defined more generally as a measure of relative
frequency. The probability of an event A is a measure of how frequently
A occurs in a very large number of trials, i.e., the ratio of the absolute
frequency of A to the number of trials:

f_relative(A) = f_absolute(A) / n = (no. of times A occurs) / (no. of trials)

(1) 0 ≤ f_relative(A) ≤ 1
(2) f_relative(S) = 1
(3) f_relative(A ∪ B) = f_relative(A) + f_relative(B),
A and B mutually exclusive
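The relative-frequency view can be illustrated by simulation; a minimal sketch (the die-roll event and trial count are assumed choices for illustration):

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

n = 100_000  # number of trials
# Event A: rolling a six with a fair die.
count_A = sum(1 for _ in range(n) if random.randint(1, 6) == 6)

# f_relative(A) = (no. of times A occurs) / (no. of trials)
f_relative = count_A / n
print(f_relative)  # close to 1/6 ≈ 0.1667 for large n
```

As n grows, the relative frequency settles near the classical value 1/6, which is the intuition the axioms formalise.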
Definition of Probability (3):
Axioms of Probability
For a random experiment with a sample space S,
each event A of S is associated with a probability
P(A) such that :
(1) 0 ≤ P(A) ≤ 1
(2) P(S) = 1
(3) P(⋃ₙ₌₁^∞ Aₙ) = P(A₁ ∪ A₂ ∪ ⋯) = ∑ₙ₌₁^∞ P(Aₙ),
for any set of mutually exclusive events A₁, A₂, …

Theorems of Probability (in a sample space S)
(1) Complementation Rule
P(Aᶜ) = 1 − P(A)

(2) Addition Rule for Mutually Exclusive Events
P(A₁ ∪ A₂ ∪ ⋯ ∪ Aₘ) = P(A₁) + P(A₂) + ⋯ + P(Aₘ)

(3) Addition Rule for Arbitrary Events
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
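The addition rule for arbitrary events can be verified by direct counting; a sketch reusing the die events from the earlier slide (divisible by 2 and divisible by 3):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {s for s in S if s % 2 == 0}   # C from the earlier example: divisible by 2
B = {s for s in S if s % 3 == 0}   # D from the earlier example: divisible by 3

def P(E):
    """Classical probability: sample points in E over sample points in S."""
    return Fraction(len(E), len(S))

# Addition rule for arbitrary events: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
lhs = P(A | B)
rhs = P(A) + P(B) - P(A & B)
print(lhs, rhs)  # both 2/3
```

Here the subtracted term P(A ∩ B) = 1/6 corrects for the outcome 6, which would otherwise be counted twice.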
Conditional Probability
• The probability of event B occurring, given that event A has (already)
occurred, is called the conditional probability:
P(B|A) = P(A ∩ B) / P(A), provided P(A)≠0
• Similarly, P(A|B) = P(A ∩ B) / P(B), provided P(B)≠0
• Hence, the multiplication rule:
P(A ∩ B) = P(A) × P(B|A) = P(B) × P(A|B), provided P(A)≠0 and P(B)≠0
• A and B are statistically independent events if:
P(A ∩ B) = P(A)×P(B)
i.e., P(A|B) = P(A) and P(B|A) = P(B)
• In other words, two events A and B are statistically independent if knowledge
that B has occurred does not affect the probability that A occurs—and vice
versa. (Cf., statistically dependent events)
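The conditional-probability and independence definitions above can be checked by enumeration; a minimal sketch (the two-dice events are assumed examples for illustration):

```python
from fractions import Fraction
from itertools import product

# Sample space: ordered outcomes of throwing two fair dice.
S = set(product(range(1, 7), repeat=2))

def P(E):
    return Fraction(len(E), len(S))

A = {(d1, d2) for (d1, d2) in S if d1 + d2 == 7}   # A: the sum is seven
B = {(d1, d2) for (d1, d2) in S if d1 == 3}        # B: first die shows three

# Conditional probability: P(A | B) = P(A ∩ B) / P(B)
p_A_given_B = P(A & B) / P(B)
print(p_A_given_B)  # 1/6, the same as P(A)

# Independence test: P(A ∩ B) == P(A) * P(B)?
print(P(A & B) == P(A) * P(B))  # True
```

Since P(A | B) equals P(A), knowing that the first die shows three tells us nothing about whether the sum is seven: the events are statistically independent.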
Statistical Independence:
Examples
• (1) Experiment: Drawing a card at random from a pack of playing
cards. Are the events “spades” and “ace” statistically independent?
• (2) Experiment: Throwing two dice. Are the events “six with first die”
and “even face with second” statistically independent?
• (3) Experiment: Randomly permute letters a, b, c and d. Are the events
“a precedes b” and “c precedes d” statistically independent?
• (4) Experiment: Families with three children. Are the events “the
family has children of both sexes” and “there is at most one girl”
statistically independent? What about for families with two children?
And four children?
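Example (1) can be settled by brute-force enumeration of the pack; a minimal sketch (the deck model below is an assumed representation):

```python
from fractions import Fraction
from itertools import product

# Standard 52-card pack, modelled as (rank, suit) pairs.
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['spades', 'hearts', 'diamonds', 'clubs']
deck = set(product(ranks, suits))

def P(E):
    return Fraction(len(E), len(deck))

spade = {c for c in deck if c[1] == 'spades'}  # 13 cards
ace = {c for c in deck if c[0] == 'A'}         # 4 cards

# Independence test: P(spade ∩ ace) == P(spade) * P(ace)?
print(P(spade & ace) == P(spade) * P(ace))  # True: 1/52 == (1/4)(1/13)
```

The remaining examples can be checked the same way by enumerating their sample spaces.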
Mutual Exclusivity and Statistical Independence
• Consider the simple experiment of throwing a die.
• Events:
– A: Throw a five
– B: Throw an even number
– Show mathematically whether or not A and B are: (i) mutually exclusive?
(ii) statistically independent?
• Suppose we redefine event A as follows (B remains the same):
– A: Throw a six
– Show mathematically whether or not A and B are: (i) mutually exclusive?
(ii) statistically independent?
• What conclusions can we draw from this simple example?
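Both cases can be checked mechanically; a minimal sketch (the derivation of the conclusion is still left as the exercise asks):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}

def P(E):
    return Fraction(len(E), len(S))

def check(A, B):
    exclusive = (A & B) == set()             # (i) mutually exclusive?
    independent = P(A & B) == P(A) * P(B)    # (ii) statistically independent?
    return exclusive, independent

B = {2, 4, 6}                # B: throw an even number
print(check({5}, B))         # A: throw a five -> (True, False)
print(check({6}, B))         # A: throw a six  -> (False, False)
```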
Bayes’ Theorem
Recall that for an arbitrary event A:

P(A | H) = P(A ∩ H) / P(H), for a given hypothesis H

Let H = {H₁, …, Hₙ} with ⋃ⱼ₌₁ⁿ Hⱼ = S, i.e., a set of mutually exclusive events
of which one, and only one, necessarily occurs; i.e., event A
only occurs in conjunction with some Hⱼ. In other words,

A = (A ∩ H₁) ∪ (A ∩ H₂) ∪ ⋯ ∪ (A ∩ Hₙ)

Since the (A ∩ Hⱼ)s are mutually exclusive and P(A ∩ Hⱼ) = P(A | Hⱼ) ⋅ P(Hⱼ),
it follows that

P(A) = ∑ⱼ₌₁ⁿ P(A | Hⱼ) ⋅ P(Hⱼ)

Bayes' Theorem: P(Hₖ | A) = P(A ∩ Hₖ) / P(A) = P(A | Hₖ) ⋅ P(Hₖ) / ∑ⱼ₌₁ⁿ P(A | Hⱼ) ⋅ P(Hⱼ)
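The theorem can be exercised numerically; a sketch with a binary-channel flavour (the priors and likelihoods below are hypothetical numbers chosen only for illustration):

```python
# Hypotheses: H1 = "0 was sent", H2 = "1 was sent" (mutually exclusive,
# exactly one occurs). Event A = "0 was received".
priors = {'H1': 0.6, 'H2': 0.4}        # P(Hj), hypothetical values
likelihood = {'H1': 0.9, 'H2': 0.2}    # P(A | Hj), hypothetical values

# Total probability: P(A) = sum_j P(A | Hj) * P(Hj)
p_A = sum(likelihood[h] * priors[h] for h in priors)

# Bayes' theorem: P(Hk | A) = P(A | Hk) * P(Hk) / P(A)
posterior = {h: likelihood[h] * priors[h] / p_A for h in priors}
print(p_A)        # ≈ 0.62
print(posterior)  # posteriors sum to 1
```

Receiving a 0 raises the probability that a 0 was sent from the prior 0.6 to roughly 0.87, which is exactly the kind of inference a receiver performs.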
SPECIAL DISTRIBUTIONS
Homework
•  RAYLEIGH DISTRIBUTION?
•  RICEAN DISTRIBUTION?
•  Draw the distributions
•  Give examples of where these distributions
are used in communication
