

FACULTY OF ENGINEERING
DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

ENEE 331

ENGINEERING PROBABILITY & STATISTICS

LECTURE NOTES

BY

Dr. WA'EL HASHLAMOUN

SEPTEMBER, 2008

CHAPTER II

FUNDAMENTAL CONCEPTS OF PROBABILITY

Basic Definitions:
We start our treatment of probability theory by introducing some basic definitions.
Experiment:
By an experiment, we mean any procedure that:
1- Can be repeated, theoretically, an infinite number of times.
2- Has a well-defined set of possible outcomes.

Sample Outcome:
Each potential eventuality of an experiment is referred to as a sample outcome (s).
Sample Space:
The totality of sample outcomes is called the sample space (S).
Event:
Any designated collection of sample outcomes, including individual outcomes, the entire
sample space and the null space, constitutes an event.

Occur:
An event is said to occur if the outcome of the experiment is one of the members of that event.

EXAMPLE (2-1):
Consider the experiment of flipping a coin three times.
a- What is the sample space?
b- Which sample outcomes make up the event:
A : Majority of coins show heads.

SOLUTION:
a- Sample Space (S) = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}
b- A = {HHH, HHT, HTH, THH}

§ Algebra of Events:
Let A and B be two events defined over the sample space S, then:
- The intersection of A and B, (A ∩ B), is the event whose outcome belongs to both A and B.

- The union of A and B, (A U B), is the event whose outcome belongs to either A or B or both.

- Events A and B are said to be Mutually Exclusive (or disjoint) if they have no outcomes in
common, that is A ∩ B = Ø, where Ø is the null set (a set which contains no outcomes).
- The complement of A (Ac or Ā) is the event consisting of all outcomes in S other than those
contained in A.
- A Venn diagram is a graphical format often used to simplify the manipulation of complex
events.


[Venn diagrams illustrating: A ∩ B ; A U B ; Ac ; mutually exclusive events (A ∩ B = Ø) ; (A ∩ B)c ; and (A ∩ Bc) U (B ∩ Ac)]
§ De Morgan's Laws:
Use Venn diagrams to show that:
1- (A ∩ B)c = Ac U Bc
2- (A U B)c = Ac ∩ Bc
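As a quick numerical check (an illustrative sketch, not part of the original notes; the sample space and events below are arbitrary choices), the identities can be verified with finite sets in Python:

    # Python sketch: verify De Morgan's laws on an arbitrary finite sample space
    S = set(range(1, 11))          # sample space {1, ..., 10}
    A = {2, 4, 6, 8}               # arbitrary event
    B = {3, 6, 9}                  # arbitrary event

    Ac, Bc = S - A, S - B          # complements relative to S

    assert S - (A & B) == Ac | Bc  # (A ∩ B)c = Ac U Bc
    assert S - (A | B) == Ac & Bc  # (A U B)c = Ac ∩ Bc
    print("De Morgan's laws hold for this example")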

EXAMPLE (2-2):
An experiment has its sample space as:
S = {1, 2, 3, ……, 48, 49, 50}. Define the events
A : set of numbers divisible by 6
B : set of elements divisible by 8
C : set of numbers which satisfy the relation 2^n , n = 1, 2, 3, …
Find: 1- A, B, C 2- A U B U C 3- A ∩ B ∩ C
SOLUTION:
1- Events A, B, and C are:
A = {6, 12, 18, 24, 30, 36, 42, 48}
B = {8, 16, 24, 32, 40, 48}
C = {2, 4, 8, 16, 32}
2- A U B U C = {2, 4, 6, 8, 12, 16, 18, 24, 30, 32, 36, 40, 42, 48}
3- A ∩ B ∩ C = Ø (no number in S is divisible by both 6 and 8 and also a power of 2)

EXAMPLE (2-3):
The sample space of an experiment is:
S = { −20 ≤ x ≤ 14 }. If A = { −10 ≤ x ≤ 5 } and B = { −7 ≤ x ≤ 0 }, find:
1- A U B 2- A ∩ B
SOLUTION:
Since B lies entirely inside A:
1- A U B = { −10 ≤ x ≤ 5 } = A
2- A ∩ B = { −7 ≤ x ≤ 0 } = B


§ Definitions of Probability:
Four definitions of probability have evolved over the years:
- Definition I: Classical (a priori)
If the sample space S of an experiment consists of finitely many outcomes (points) that are
equally likely, then the probability of event A, P(A), is:
P(A) = (Number of outcomes in A) / (Number of outcomes in S)
Thus, in particular, P(S) = 1.
- Definition II: Relative Frequency (a posteriori)
Let an experiment be repeated (n) times under identical conditions. The probability of (A) is then the limit of the relative frequency:
P(A) = lim (n→∞) f(A)/n
where f(A), called the frequency of (A), is the number of times (A) occurs in the (n) trials.
Clearly 0 ≤ f(A)/n ≤ 1:
f(A)/n = 0 if (A) does not occur in the sequence of trials
f(A)/n = 1 if (A) occurs in each of the (n) trials

EXAMPLE (2-4):
In digital data transmission, the bit error probability is (p). If 10,000 bits are transmitted over
a noisy communication channel and 5 bits were found to be in error, find the bit error
probability (p).
SOLUTION:
According to the relative frequency definition we can estimate (p) as: p = 5/10,000 = 0.0005
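The estimate improves as the number of observed bits grows. The simulation below (an illustrative sketch, not part of the original notes; a true error probability of 0.0005 is assumed) shows the relative frequency settling near the true value:

    import random

    random.seed(1)
    p_true = 0.0005                      # assumed true bit error probability
    for n in (10_000, 100_000, 1_000_000):
        errors = sum(random.random() < p_true for _ in range(n))
        print(n, errors / n)             # relative frequency approaches p_true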

- Definition III: Subjective


Probability is defined as a person's measure of belief that some given event will occur.
Example:
What is the probability of establishing an independent Palestinian state in the next 2 years?
Any number we might come up with would be our own personal (subjective) assessment of
the situation.
- Definition IV: Axiomatic
Given a sample space (S), with each event (A) of (S) (subset of S) there is associated a number
P(A), called the probability of (A), such that the following axioms of probability are satisfied:
1- P(A) ≥ 0 ; probability is nonnegative
2- P(S) = 1 ; the sample space is the certain event
3- For mutually exclusive events (A) and (B) (A ∩ B = Ø):
P(A U B) = P(A) + P(B)
4- If (S) is infinite (has infinitely many points), axiom (3) is replaced by:
P(A1 U A2 U A3 U ……) = P(A1) + P(A2) + P(A3) + ……
where A1, A2, A3, …… are mutually exclusive events
(A1 ∩ A2 = Ø , A1 ∩ A3 = Ø , A2 ∩ A3 = Ø , ……)


§ Basic Theorems for Probability:

1- P(Ac) = 1 – P(A)
Proof: S = A U Ac, where A and Ac are mutually exclusive
P(S) = P(A) + P(Ac)
1 = P(A) + P(Ac) ⇒ P(Ac) = 1 – P(A)
2- P(Ø) = 0
Proof:
S = S U Sc ; Sc = Ø
P(S) = P(S) + P(Ø) ⇒ P(Ø) = 0
3- P(A U B) = P(A) + P(B) – P(A ∩ B)
Proof:
For events (A) and (B) in a sample space, split (A U B) into three mutually exclusive pieces (1), (2), (3):
{A U B} = {A ∩ Bc} U {A ∩ B} U {B ∩ Ac}

[Venn diagram: region 1 is A ∩ Bc, region 2 is A ∩ B, region 3 is B ∩ Ac]

Since events (1), (2) and (3) are mutually exclusive:
P(A U B) = P(1) + P(2) + P(3)
P(A) = P(1) + P(2)
P(B) = P(2) + P(3)
⇒ P(A U B) = {P(1) + P(2)} + {P(2) + P(3)} – {P(2)}
⇒ P(A U B) = P(A) + P(B) – P(A ∩ B)
- Theorem:
If A, B, and C are three events, then:
P(A U B U C) = P(A) + P(B) + P(C) – P(A ∩ B) – P(A ∩ C) – P(B ∩ C) + P(A ∩ B ∩ C)

EXAMPLE (2-5):
One integer is chosen at random from the numbers {1, 2, ……, 50}. What is the probability
that the chosen number is divisible by 6? Assume all 50 outcomes are equally likely.

SOLUTION:
S = {1, 2, 3, …………, 50}
A = {6, 12, 18, 24, 30, 36, 42, 48}
P(A) = (Number of elements in A) / (Number of elements in S) = 8/50


EXAMPLE (2-6):
Suppose the probability of occurrence of an even number is twice that of an odd number
in Example (2-5). Find P(A), where A is defined above.
SOLUTION:
P(S) = P(even) + P(odd) = 1
Let (P) be the probability of occurrence of each odd number;
then (2P) is the probability of occurrence of each even number.
(25)(2P) + (25)(P) = 1
(50 + 25)(P) = 1 ⇒ P = 1/75
P(A) = 8 × 2P = 16/75

EXAMPLE (2-7):
Suppose that a company has 100 employees who are classified according to their marital
status and according to whether they are college graduates or not. It is known that 30% of the
employees are married, and the percent of graduate employees is 80%. Moreover, 10
employees are neither married nor graduates. What proportion of married employees are
graduates?
SOLUTION:
Let: M : set of married employees
G : set of graduate employees
N(.) : number of members in any set (.)
⇒ N(S) = 100
N(M) = 0.3 × 100 = 30
N(G) = 0.8 × 100 = 80
N((M U G)c) = 10
⇒ N(M U G) = 100 – 10 = 90
N(M U G) = N(M) + N(G) – N(M ∩ G)
90 = 30 + 80 – N(M ∩ G)
N(M ∩ G) = 30 + 80 – 90 = 20

[Venn diagram: N(M only) = 10 , N(M ∩ G) = 20 , N(G only) = 60 , N((M U G)c) = 10]

⇒ The proportion of married employees who are graduates is 20/30, i.e., two thirds.

EXAMPLE (2-8):
An experiment has two possible outcomes; the first occurs with probability (P), the second
with probability (P2), find (P).
SOLUTION:
P(S) = 1
P + P² = 1
P² + P – 1 = 0
P = (−1 + √5)/2 ; (only the positive root is taken)


EXAMPLE (2-9):
A sample space “S” consists of the integers 1 to 6 inclusive. Each has an associated
probability proportional to its magnitude. If one number is chosen at random, what is the
probability that an even number appears?
SOLUTION:
Sample Space S = {1 , 2 , 3 , 4 , 5 , 6}
Event (A) = {2 , 4 , 6}
P(A) = P(2) + P(4) + P(6)
With P(i) = a·i (probability proportional to magnitude):
P(S) = 1 = Σ (i = 1 to 6) P(i) = a Σ (i = 1 to 6) i = a · 6(6 + 1)/2 = 21a
⇒ The proportionality constant a = 1/21
P(A) = 2/21 + 4/21 + 6/21 = 12/21

EXAMPLE (2-10):
Let (A) and (B) be any two events defined on (S). Suppose that P(A) = 0.4, P(B) = 0.5, and
P(A ∩ B) = 0.1.
Find the probability that:
1- (A) or (B) but not both occur.
2- None of the events (A) or (B) will occur.
3- At least one event will occur.
4- Both events occur.

SOLUTION:
P(A) = P[(A ∩ Bc) U (A ∩ B)]
Using a Venn diagram:
P(A only) = P(A ∩ Bc) = 0.4 – 0.1 = 0.3
P(B only) = P(B ∩ Ac) = 0.5 – 0.1 = 0.4

[Venn diagram: P(A only) = 0.3 , P(A ∩ B) = 0.1 , P(B only) = 0.4 , P(neither) = 0.2]

Note that:
P(A U B) = P(A) + P(B) – P(A ∩ B) = 0.4 + 0.5 – 0.1 = 0.8
1- P(A or B but not both) = 0.3 + 0.4 = 0.7
2- P(none) = P((A U B)c) = 1 – 0.8 = 0.2
3- P(at least one) = P(A U B) = 0.8
4- P(both) = P(A ∩ B) = 0.1


§ Discrete Probability Functions:

If the sample space generated by an experiment contains either a finite or a countably infinite
number of outcomes, then it is called a discrete sample space.
Any probability assignment on that space such that:
a- P(si) ≥ 0
b- Σ (si ∈ S) P(si) = 1
is said to be a discrete probability function.
If (A) is an event defined on (S), then P(A) = Σ (si ∈ A) P(si)
For example, the sample space S = {1 , 2 , 3 , 4 , 5 , 6} is countably finite,
while the set of positive integers, S = {1 , 2 , 3 , ……} is countably infinite.

EXAMPLE (2-11):
The outcome of an experiment is either a success with probability (1/2) or a failure with
probability (1/2). The experiment is repeated until a success comes up for the first time.
What is the probability of that happening on an odd-numbered trial?
SOLUTION:
Trial (i)   Outcome        Probability P(i)
1           S              P(1) = 1/2
2           F S            P(2) = (1/2)(1/2) = (1/2)²
3           F F S          P(3) = (1/2)²(1/2) = (1/2)³
4           F F F S        P(4) = (1/2)³(1/2) = (1/2)⁴
5           F F F F S      P(5) = (1/2)⁴(1/2) = (1/2)⁵
…
k           F F … F S      P(k) = (1/2)^(k–1) (1/2) = (1/2)^k ; k = 1, 2, 3, …

P(A) = P(1) + P(3) + P(5) + …
P(A) = Σ (i = 0 to ∞) (1/2)^(2i+1) = (1/2) Σ (i = 0 to ∞) (1/4)^i
Using the geometric series Σ (k = 0 to ∞) x^k = 1/(1 – x):
P(A) = (1/2) × 1/(1 – 1/4) ⇒ P(A) = 2/3
Note that the sample space of this experiment is countably infinite.
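A simulation check of this series result (an illustrative sketch, not part of the original notes):

    import random

    random.seed(2)
    trials = 100_000
    odd = 0
    for _ in range(trials):
        k = 1
        while random.random() >= 0.5:   # failure with probability 1/2
            k += 1                      # k is the trial of the first success
        odd += (k % 2 == 1)
    print(odd / trials)                 # ≈ 2/3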


EXAMPLE (2-12):
The discrete probability function for the countably infinite sample space S = {1, 2, 3, …} is:
P(x) = C/x² ; x = 1, 2, 3, ……
a- Find the constant C so that P(x) is a valid discrete probability function.
b- Find the probability that the outcome of the experiment is a number less than 4.
SOLUTION:
a- By Axiom 2, P(S) = 1:
Σ (x = 1 to ∞) C/x² = 1 ⇒ C Σ (x = 1 to ∞) 1/x² = 1 ⇒ C (π²/6) = 1 ⇒ C = 6/π²
b- The event A is A = {1 , 2 , 3}
P(A) = P(1) + P(2) + P(3)
⇒ P(A) = (6/π²)(1/1² + 1/2² + 1/3²) = 49/(6π²) = 0.827

§ Continuous Probability Functions:

If the sample space associated with an experiment is an interval of real numbers, then (S) has
an uncountably infinite number of points and (S) is said to be continuous.
A real-valued function f(x) defined on (S) such that:
a- f(x) ≥ 0
b- ∫ (all x) f(x) dx = 1
is called a continuous probability function.
If (A) is an event defined on (S), then P(A) = ∫ (x ∈ A) f(x) dx
For example, the sample space S = { 1 ≤ x ≤ 2 } is uncountably infinite.

EXAMPLE (2-13):
Let the sample space of an experiment be S = { 1 ≤ x ≤ 2 }. The probability function defined over S is:
f(x) = k/x² , 1 ≤ x ≤ 2
a- Find (k) so that f(x) is a valid probability function.
b- Find P(x ≤ 1.5)
SOLUTION:
a- P(S) = ∫ (1 to 2) f(x) dx = 1 ⇒ ∫ (1 to 2) k/x² dx = k (1 – 1/2) = 1 ⇒ k = 2
b- P(x ≤ 1.5) = ∫ (1 to 1.5) 2/x² dx = 2 (1 – 1/1.5) = 2/3
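The two integrals are easy to confirm numerically; the sketch below (illustrative, not part of the original notes) uses the SciPy package:

    from scipy.integrate import quad

    f = lambda x: 2 / x**2        # pdf with k = 2 on [1, 2]
    total, _ = quad(f, 1, 2)      # normalization check: should be 1.0
    p, _ = quad(f, 1, 1.5)        # P(x <= 1.5): should be 2/3
    print(total, p)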


EXAMPLE (2-14):
The length of a pin that is a part of a wheel assembly is supposed to be 6 cm. The machine
that stamps out the parts makes them 6 + x cm long, where x varies from pin to pin according
to the probability function:
f(x) = k(x + x2) ; 0 £ x £ 2
where (k) is a constant. If a pin is longer than 7 cm, it is unusable. What proportion of pins
produced by this machine will be unusable?
SOLUTION:
P(S) = ∫ f(x) dx = 1
k ∫ (0 to 2) (x + x²) dx = 1
k [x²/2 + x³/3] (0 to 2) = k (2 + 8/3) = 1 ⇒ k = 6/28 = 3/14

A pin is unusable if the error x ≥ 1 cm:
P(x ≥ 1) = ∫ (1 to 2) k (x + x²) dx
= k [x²/2 + x³/3] (1 to 2) = (6/28)[(4/2 + 8/3) – (1/2 + 1/3)] = 23/28
P(x ≥ 1) = P(pin length ≥ 7 cm) = 23/28

§ Conditional Probabilities and Statistical Independence:

- Definition:
Given two events (A) and (B) with P(A) > 0 and P(B) > 0, we define the conditional probability
of (A) given that (B) has occurred as:
P(A/B) = P(A ∩ B)/P(B) …………… (1)
and the probability of (B) given that (A) has occurred as:
P(B/A) = P(A ∩ B)/P(A) …………… (2)
In (2), (A) serves as a new (reduced) sample space,
and P(B/A) is the fraction of (A) which corresponds to (A ∩ B).


EXAMPLE (2-15):
A sample space (S) consists of the integers 1 to n inclusive. Each has an associated
probability proportional to its magnitude. One integer is chosen at random, what is the
probability that number 1 is chosen given that the number selected is among the first (m) integers?
SOLUTION:
Let (A) be the event that number 1 occurs: A = {1}
Let (B) be the event that the outcome belongs to the first m integers: B = {1 , 2 , 3 , … , m}
With P(i) = a·i (probability proportional to magnitude):
Σ (i = 1 to n) P(i) = a Σ (i = 1 to n) i = a · n(n + 1)/2 = 1 ⇒ a = 2/(n(n + 1))
P(A/B) = P(A ∩ B)/P(B) = P(1)/P(B) = a/(a Σ (i = 1 to m) i) = a/(a · m(m + 1)/2) ⇒ P(A/B) = 2/(m(m + 1))
A priori probability: P(A) = 2/(n(n + 1))
A posteriori probability: P(A/B) = 2/(m(m + 1))
Clearly P(A/B) > P(A) (since m < n), due to the additional information given by event (B).

- Theorem: Multiplication Rule


If (A) and (B) are events in a sample space (S) and P(A) ≠ 0, P(B) ≠ 0, then:
P(A ∩ B) = P(A) P(B/A) = P(B) P(A/B)
For three events A, B, and C:
P(A ∩ B ∩ C) = P(A) P(B/A) P(C/B,A)

EXAMPLE (2-16):
A certain computer becomes inoperable if two components A and B both fail. The probability
that A fails is 0.001 and the probability that B fails is 0.005. However, the probability that B
fails increases by a factor of 4 if A has failed. Calculate the probability that:
a- The computer becomes inoperable.
b- A will fail if B has failed.
SOLUTION:
P(A) = 0.001
P(B) = 0.005
P(B/A) = 4 × 0.005 = 0.020
a- The computer becomes inoperable when both A and B fail, i.e.,
P(A ∩ B) = P(A) P(B/A) = 0.001 × 0.020 = 0.00002
b- P(A ∩ B) = P(A) P(B/A) = P(B) P(A/B)
P(A/B) = (0.001 × 0.020)/0.005 = 0.004

-10-
FUNDAMENTAL CONCEPTS OF PROBABILITY CHAPTER II

EXAMPLE (2-17):
A box contains 20 non-defective (N) items and 5 defective (D) items. Three items are drawn
without replacement.
a. Find the probability that the sequence of objects obtained is (NND) in the given order.
b. Find the probability that exactly one defective item is obtained.
SOLUTION:
a- P(N ∩ N ∩ D) = P(N) × P(N/N) × P(D/N,N)
P(NND) = (20/25)((20 – 1)/(25 – 1))(5/(25 – 2)) = (20/25)(19/24)(5/23)
b- Exactly one defective item is obtained when any one of the following sequences is obtained:
(NND) , (NDN) , (DNN)
The probability of getting one defective item is the sum of the probabilities of these
sequences:
(20/25)(19/24)(5/23) + (20/25)(5/24)(19/23) + (5/25)(20/24)(19/23) = 3 (20/25)(19/24)(5/23)
In the next chapter we will see that Part (b) can also be solved using the hyper-geometric
distribution.
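A simulation check of part (b) (an illustrative sketch, not part of the original notes):

    import random

    random.seed(3)
    box = ["N"] * 20 + ["D"] * 5        # 20 non-defective, 5 defective items
    trials = 100_000
    one_def = sum(random.sample(box, 3).count("D") == 1 for _ in range(trials))
    print(one_def / trials)             # ≈ 3*(20/25)*(19/24)*(5/23) ≈ 0.413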

Definition: Statistical Independence


Two events (A) and (B) are said to be statistically independent if:
P(A ∩ B) = P(A) P(B)
From this definition we conclude that:
P(A/B) = P(A) P(B)/P(B) = P(A) ⇒ a posteriori probability = a priori probability
P(B/A) = P(A) P(B)/P(A) = P(B)
This means that the probability of (A) does not depend on the occurrence or nonoccurrence of
(B) and vice versa. Hence, the given information does not change our initial perception about
the two given probabilities.

Independence of Three Events:


Events (A), (B) and (C) are independent if the following conditions are satisfied:
- P(A ∩ B) = P(A) P(B)
- P(A ∩ C) = P(A) P(C)
- P(B ∩ C) = P(B) P(C)
- P(A ∩ B ∩ C) = P(A) P(B) P(C)

EXAMPLE (2-18):
Let S = {1 , 2 , 3 , 4} with P(i) = 1/4 for each outcome. Let A = {1 , 2} and B = {2 , 3}. Are (A) and (B) independent?
SOLUTION:
P(A) = 1/2 , P(B) = 1/2 ; (A ∩ B) = {2} , P(A ∩ B) = 1/4
⇒ P(A ∩ B) = P(A) P(B) ⇒ the events are independent

EXAMPLE (2-19):
Consider an experiment in which the sample space contains four outcomes {S1, S2, S3, S4}
such that P(si) = 1/4. Let events (A), (B) and (C) be defined as:
A = {S1, S2} , B = {S1, S3} , C = {S1, S4}
Are these events independent?
SOLUTION:
P(A) = P(B) = P(C) = 1/2
(A ∩ B) = (A ∩ C) = (B ∩ C) = (A ∩ B ∩ C) = {S1}
P(A ∩ B) = P(A ∩ C) = P(B ∩ C) = P(A ∩ B ∩ C) = 1/4
Check the conditions:
P(A ∩ B) = 1/4 = P(A) P(B) = (1/2)(1/2) ; P(A ∩ C) = 1/4 = P(A) P(C) = (1/2)(1/2)
P(B ∩ C) = 1/4 = P(B) P(C) = (1/2)(1/2)
P(A ∩ B ∩ C) = 1/4 ≠ P(A) P(B) P(C) = (1/2)(1/2)(1/2) = 1/8
⇒ The events are not independent (even though the pairwise conditions of independence are satisfied).

EXAMPLE (2-20): “Reliability of a series system”

Suppose that a system is made up of two components connected in series, where each component
has a probability (P) of working (its “reliability”). What is the probability that the system works,
assuming that the components work independently?

[Figure: two components of reliability P connected in series]

SOLUTION:
P(system works) = P(component 1 works ∩ component 2 works)
P(system works) = P × P = P²
* The probability that the system works is also known as the “Reliability” of the system.

EXAMPLE (2-21): “Reliability of a parallel system”

Suppose that a system is made up of two components connected in parallel. The system
works if at least one component works properly. If each component has a probability (P) of
working (its “reliability”) and the components work independently, find the probability that the
system works.

[Figure: two components C1 and C2, each of reliability P, connected in parallel]

SOLUTION:
Reliability of the system = P(system works)
P(system works) = P(C1 or C2 or both C1 and C2 work)
= P(C1 U C2)
= P(C1) + P(C2) – P(C1 ∩ C2)
= P + P – (P × P) = 2P – P²
* This system fails only if both components fail.

EXERCISE:
A pressure control apparatus contains 4 electronic tubes. The apparatus will not work unless
all tubes are operative. If the probability of failure of each tube is 0.03, what is the probability
of failure of the apparatus assuming that all components work independently?

EXERCISE: “Mixed system”


Find the reliability of the shown mixed system, assuming that all components work
independently, and P is the reliability (probability of working) of each component. (A computational sketch follows the figure.)

[Figure: mixed system combining components in series and in parallel, each of reliability P]
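The series and parallel rules compose. The sketch below (illustrative, not part of the original notes) defines the two helpers; since the exact topology of the figure could not be recovered here, the mixed system evaluated at the end is an assumed example (two parallel pairs connected in series):

    def series(*rel):
        # a series system works only if every component works
        out = 1.0
        for p in rel:
            out *= p
        return out

    def parallel(*rel):
        # a parallel system fails only if every component fails
        fail = 1.0
        for p in rel:
            fail *= (1.0 - p)
        return 1.0 - fail

    P = 0.9
    print(series(P, P))                            # Example 2-20: P**2
    print(parallel(P, P))                          # Example 2-21: 2P - P**2
    print(series(parallel(P, P), parallel(P, P)))  # assumed mixed topology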

EXAMPLE (2-22):
A coin may be fair or it may have two heads. We toss it (n) times and it comes up heads on
each occasion. If our initial judgment was that both options for the coin (fair or both sides
heads) were equally likely (probable), what is our revised judgment in the light of the data?

SOLUTION:
Let A : event representing that the coin is fair
B : event representing that the coin has two heads
C : outcome of the experiment, H H H … H (n times)
A priori probabilities:
P(A) = 1/2 , P(B) = 1/2
We need to find P(A/C):
P(A/C) = P(A ∩ C)/P(C) = P(A) P(C/A)/P(C)
P(A/C) = P(A) P(H H … H/fair coin) / [P(A) P(H H … H/fair coin) + P(B) P(H H … H/coin with two heads)]
P(A/C) = (1/2)(1/2)^n / [(1/2)(1/2)^n + (1/2)(1)] = (1/2)^n/[(1/2)^n + 1] = 1/(1 + 2^n)
P(B/C) = 1 – P(A/C) = 1 – 1/(1 + 2^n) = 2^n/(1 + 2^n)


§ Theorem of Total Probability:

Let A1, A2, …, An be a set of events defined over (S) such that:
S = A1 U A2 U … U An ; Ai ∩ Aj = Ø for i ≠ j, and P(Ai) > 0 for i = 1, 2, 3, …, n.
For any event (B) defined on (S):
P(B) = P(A1) P(B/A1) + P(A2) P(B/A2) + …… + P(An) P(B/An)
Proof (illustrated for n = 4):
The events Ai partition (B):
B = {A1 ∩ B} U {A2 ∩ B} U {A3 ∩ B} U {A4 ∩ B}
Since these events are disjoint:
P(B) = P(A1 ∩ B) + P(A2 ∩ B) + P(A3 ∩ B) + P(A4 ∩ B)
But P(A ∩ B) = P(A) P(B/A) = P(B) P(A/B), so:
P(B) = P(A1) P(B/A1) + P(A2) P(B/A2) + P(A3) P(B/A3) + P(A4) P(B/A4)

[Figure: the sample space partitioned into A1, …, A4 with the pieces Ai ∩ B; a tree diagram with first-stage branches P(Ai) and second-stage branches P(B/Ai) leading to B]

EXAMPLE (2-23):
Men constitute 47% of the population and tell the truth 78% of the time, while women tell
the truth 63% of the time. What is the probability that a person selected at random will
answer a question truthfully?
SOLUTION:
A1 = Event representing that the selected person is a man
A2 = Event representing that the selected person is a woman
B = Event representing telling the truth
P(A1) = 0.47 ; P(A2) = 0.53
B = (A1 ∩ B) U (A2 ∩ B) ⇒ P(B) = P(A1 ∩ B) + P(A2 ∩ B)
P(B) = P(A1) P(B/A1) + P(A2) P(B/A2)
P(B) = (0.47 × 0.78) + (0.53 × 0.63) = 0.7005 ≈ 0.7

§ Bayes' Theorem:
If A1, A2, A3, ……, An are disjoint events defined on (S), and (B) is another event defined on
(S) (same conditions as above), then:
P(Aj/B) = P(Aj) P(B/Aj) / [Σ (i = 1 to n) P(Ai) P(B/Ai)] = P(Aj ∩ B)/P(B)


EXAMPLE (2-24):
Suppose that when a machine is adjusted properly, 50% of the items produced by it are of
high quality and the other 50% are of medium quality. Suppose, however, that the machine is
improperly adjusted during 10% of the time and that under these conditions 25% of the items
produced by it are of high quality and 75% are of medium quality.
a- Suppose that one item produced by the machine is selected at random, find the
probability that it is of medium quality.
b- If one item is selected at random, and found to be of medium quality, what is the
probability that the machine was adjusted properly.
SOLUTION:
A1 = Event representing that the machine is properly adjusted
A2 = Event representing that the machine is improperly adjusted
H = Event representing that an item is of high quality
M = Event representing that an item is of medium quality
From the problem statement we have:
P(A1) = 0.9 ; P(A2) = 0.1
P(H/A1) = 0.5 ; P(H/A2) = 0.25
P(M/A1) = 0.5 ; P(M/A2) = 0.75
a- P(M) = P(A1 ∩ M) + P(A2 ∩ M)
P(M) = P(A1) P(M/A1) + P(A2) P(M/A2)
P(M) = (0.9)(0.5) + (0.1)(0.75) = 0.525
b- P(A1/M) = P(A1 ∩ M)/P(M) = P(A1) P(M/A1)/P(M)
P(A1/M) = (0.9)(0.5)/(0.525) = 0.8571

[Tree diagram: P(A1) = 0.9 branching to H with 0.5 and to M with 0.5; P(A2) = 0.1 branching to H with 0.25 and to M with 0.75]


EXAMPLE (2-25):
Consider the problem of transmitting binary data over a noisy communication channel. Due
to the presence of noise, a certain amount of transmission error is introduced. Suppose that
the probability of transmitting a binary 0 is 0.7 (70% of transmitted digits are zeros) and
there is a 0.8 probability that a given 0 or 1 is received properly.
a- What is the probability of receiving a binary 1?
b- If a 1 is received, what is the probability that a 0 was sent?
SOLUTION:
A0 = Event representing that a 0 is sent ; A1 = Event representing that a 1 is sent
B0 = Event representing that a 0 is received ; B1 = Event representing that a 1 is received
From the problem statement we have:
P(A0) = 0.7 ; P(A1) = 0.3
P(B0/A0) = 0.8 ; P(B0/A1) = 0.2
P(B1/A0) = 0.2 ; P(B1/A1) = 0.8

[Channel diagram: A0 → B0 with probability 0.8 and A0 → B1 with probability 0.2; A1 → B1 with probability 0.8 and A1 → B0 with probability 0.2]

a- P(B1) = P(A0) P(B1/A0) + P(A1) P(B1/A1)
P(B1) = (0.7)(0.2) + (0.3)(0.8) = 0.38
P(B0) = 1 – P(B1) = 0.62
b- P(A0/B1) = P(A0 ∩ B1)/P(B1) = P(A0) P(B1/A0)/P(B1)
P(A0/B1) = (0.7)(0.2)/(0.38) = 0.3684
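The same computation in a few lines (an illustrative sketch, not part of the original notes):

    # priors and channel transition probabilities from Example (2-25)
    p_sent = {0: 0.7, 1: 0.3}               # P(bit sent)
    p_recv = {(0, 0): 0.8, (1, 0): 0.2,     # P(received b | sent a), keyed (b, a)
              (0, 1): 0.2, (1, 1): 0.8}

    # total probability: P(B1)
    p_B1 = sum(p_sent[a] * p_recv[(1, a)] for a in (0, 1))
    # Bayes' theorem: P(A0 | B1)
    p_A0_given_B1 = p_sent[0] * p_recv[(1, 0)] / p_B1
    print(p_B1, p_A0_given_B1)              # 0.38, ≈ 0.3684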

EXAMPLE (2-26):
In a factory, four machines produce the same product. Machine A1 produces 10% of the
product, A2 20%, A3 30%, and A4 40%. The proportion of defective items produced by the
machines is as follows:
A1: 0.001 ; A2: 0.005 ; A3: 0.005 ; A4: 0.002
An item selected at random is found to be defective, what is the probability that the item was
produced by machine A1?
SOLUTION:
Let D be the event that the selected item is defective.
P(D) = P(A1) P(D/A1) + P(A2) P(D/A2) + P(A3) P(D/A3) + P(A4) P(D/A4)
P(D) = (0.1 × 0.001) + (0.2 × 0.005) + (0.3 × 0.005) + (0.4 × 0.002)
P(D) = 0.0034
P(A1/D) = P(A1) P(D/A1)/P(D) = (0.1)(0.001)/(0.0034) = 0.0001/0.0034 = 1/34


§ Counting Techniques:
Here we introduce systematic counting of sample points in a sample space. This is necessary
for computing the probability P(A) in experiments with a finite sample space (S) consisting of
(n) equally likely outcomes. Each outcome then has probability 1/n,
and if (A) consists of (m) outcomes, then P(A) = m/n.
- Multiplication Rule:
If operation A can be performed in n1 different ways and operation B in n2 different ways, then
the sequence (operation A , operation B) can be performed in n1 × n2 different ways.

EXAMPLE (2-27):
There are two roads between A and B and four roads between B and C. How many different
routes can one travel between A and C?
SOLUTION:
n = 2 × 4 = 8

§ Permutation:
Consider an urn having (n) distinguishable objects (numbered 1 to n). We perform the
following two experiments:
1- Sampling without replacement:
An object is drawn and its number is recorded; it is then put aside, another object is drawn, its
number is recorded and it is put aside, and the process is repeated (k) times. The total number of
ordered sequences {x1, x2, ……, xk} (repetition is not allowed), called permutations, is:
N = n (n – 1) (n – 2) …… (n – k + 1)
N = n!/(n – k)! …………….. (1)
where n! = n (n – 1) (n – 2) …… (3)(2)(1)
2- Sampling with replacement:
If in the previous experiment each drawn object is dropped back into the urn and the process
is repeated (k) times, the number of possible sequences {x1, x2, ……, xk} of length (k) that
can be formed from the set of (n) distinct objects (repetition allowed) is:
N = n^k …………….. (2)

EXAMPLE (2-28):
How many different five-letter computer passwords can be formed:
a- If a letter can be used more than once.
b- If each word contains each letter no more than once.
SOLUTION:
a- N = (26)⁵
b- N = 26!/(26 – 5)!
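Both counts are available in Python's standard library (an illustrative sketch, not part of the original notes):

    import math

    print(26 ** 5)            # part a: sampling with replacement = 11_881_376
    print(math.perm(26, 5))   # part b: 26!/(26-5)! = 7_893_600
    print(math.comb(26, 5))   # for comparison: unordered choices of 5 letters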


EXAMPLE (2-29):
An apartment building has eight floors (numbered 1 to 8). If seven people get on the elevator
on the first floor, what is the probability that:
a- All get off on different floors?
b- All get off on the same floor?
SOLUTION:
Number of points in the sample space:
Each person can get off at any of the 7 upper floors (floors 2 to 8), so the number of ways the
seven people can get off is:
N = 7 × 7 × 7 × 7 × 7 × 7 × 7 = 7^7
a- Here the problem is to find the number of permutations of 7 objects taken 7 at a time:
P = 7!/7^7
b- There are 7 ways whereby all seven persons get off on the same floor:
P = 7/7^7

EXAMPLE (2-30):
If the number of people getting on the elevator on the first floor is 3:
a- Find the probability they get off the elevator on different floors.
b- Find the probability they get off the elevator on the same floor.
SOLUTION:
Number of points in the sample space: N = 7 × 7 × 7 = 7³
a- P = (7 × 6 × 5)/7³
b- P = 7/7³

EXAMPLE (2-31):
If the number of floors is 5 (numbered 1 to 5) and the number of people getting on the
elevator is 8, find the probability that exactly 2 people get off the elevator on each floor.
SOLUTION:
Number of points in the sample space: N = 4 × 4 × 4 × 4 × 4 × 4 × 4 × 4 = 4^8
(each of the 8 people chooses one of the 4 upper floors)
P = C(8, 2) C(6, 2) C(4, 2) C(2, 2) / 4^8
where C(n, k) = n!/(k!(n – k)!) counts the ways to choose which 2 people get off on a given floor.


EXAMPLE (2-32):
To determine an "odd man out", (n) players each toss a fair coin. If one player's coin turns up
differently from all the others, that person is declared the odd man out. Let (A) be the event
that someone is declared an odd man out.
a- Find P(A)
b- Find the probability that the game is terminated with an odd man out after (k) trials
SOLUTION:
a- P(A) = (Number of outcomes in event (A))/(Number of possible sequences)
The outcomes leading to an odd man out are:
(n – 1) Heads and one Tail: H H H … T , H H … T H , … , T H H … H ⇒ (n) outcomes
(n – 1) Tails and one Head: T T T … H , T T … H T , … , H T T … T ⇒ (n) outcomes
P(A) = 2n/2^n = n/2^(n – 1)
b- With an odd man out, a success is obtained and the game is over.
A second trial is needed when the experiment ends with a failure:
P(a second trial is needed) = 1 – P(A)
For the game to terminate after (k) trials, the first (k – 1) trials must end in failure and trial (k) in success:
P(F F … F S) = P(F)^(k – 1) P(S) = [1 – P(A)]^(k – 1) P(A)

§ Combination:
In permutation, the order of the selected objects is essential. In contrast, a combination of
given objects means any selection of one or more objects without regard to order.
The number of combinations of (n) different objects, taken (k) at a time, without repetition, is
the number of sets that can be made up from the (n) given objects, each set containing (k)
different objects and no two sets containing exactly the same (k) objects. This number is the
binomial coefficient:
C(n, k) = n!/(k!(n – k)!)
Note that arranging (k) objects selected from (n) is the same as first selecting the (k) objects
from (n) and then arranging the (k) selected objects:
N = C(n, k) × k! , where N = n!/(n – k)! ⇒ C(n, k) = N/k! = n!/(k!(n – k)!)


EXAMPLE (2-33):
From four persons (set of elements), how many committees (subsets) of two members
(elements) may be chosen?
SOLUTION:
Let the persons be identified by the initials A, B, C and D.
Subsets: (A , B) , (A , C) , (A , D) , (B , C) , (B , D) , (C , D)
N = C(4, 2) = 4!/(2!(4 – 2)!) = 6
Missing sequences: (A , A) , (B , B) , (C , C) , (D , D) ⇒ repetition is not allowed
Missing sequences: (B , A) , (C , A) , (D , A) , (C , B) , (D , B) , (D , C) ⇒ order is not important

EXAMPLE (2-34):
Consider the rolling of a die twice. How many pairs of numbers can be formed for each case?
SOLUTION:
n = 6 and k = 2
Case I: Permutation
a- With repetition: N = n^k = 6² = 36
b- Without repetition: N = n!/(n – k)! = 6!/(6 – 2)! = 30 (the 36 ordered pairs minus the 6 doubles (i , i))
Case II: Combination
C(n, k) = n!/(k!(n – k)!) = 6!/(2!(6 – 2)!) = 15

[Table: the 36 ordered pairs (D1 , D2) , D1, D2 = 1, …, 6]

EXAMPLE (2-35):
In how many ways can we arrange 5 balls numbered 1 to 5 in 10 baskets, each of which can
accommodate one ball?
SOLUTION:
The number of ways N = n!/(n – k)! = 10!/(10 – 5)! = 10!/5!
NOTE:
If we remove the numbers of the balls so that the balls are no longer distinguishable, then:
The number of ways = C(n, k) = 10!/(5!(10 – 5)!) = 10!/(5! 5!)
This is because the permutation within the 5 balls is no longer needed.

Arrangement of Elements of Two Distinct Types

When a set contains only elements of two distinct types, type (1) consisting of k elements and
type (2) consisting of (n – k) elements, the number of different arrangements of all the
elements in the set is given by the binomial coefficient C(n, k). Suppose, for example, that we
have k ones and (n – k) zeros to be arranged in a row; then the number of binary numbers that
can be formed is C(n, k). If n = 4 and k = 1, the possible binary numbers are (0001, 0010, 0100,
1000).

Exercise: How many different binary numbers of five digits can be formed from the numbers
1, 0? List these numbers.

Exercise: How many different binary numbers of five digits can be formed from the numbers
1, 0 such that each number contains two ones? List these numbers.

Exercise: In how many ways can a group of five persons be seated in a row of 10 chairs?

The Multinomial Coefficient:

The number of ways to arrange n items of which n1 are of one type, n2 of a second type, …, nk
of a k'th type is given by:
N = n!/(n1! n2! … nk!)

- Comment: Stirling's Formula
n! can be approximated by: n! ≈ √(2π) n^(n + 1/2) e^(–n)


CHAPTER III

SINGLE RANDOM VARIABLES AND
PROBABILITY DISTRIBUTIONS

§ Definition:
A real-valued function whose domain is the sample space is called a random variable (r.v).
- The random variable is given an uppercase letter X, Y, Z, … while the values assumed by this
random variable are given lowercase letters x, y, z, …
- The idea behind the r.v is a mapping from the sample space to the real line
via the mapping function X(s).
- Associated with each discrete r.v (X) is a Probability Mass Function P(X = x). This mass
function assigns to each (x) the sum of the probabilities of all outcomes in the sample space that
get mapped into (x) by the mapping function (random variable X).
- Associated with each continuous r.v (X) is a Probability Density Function (pdf) fX(x). This fX(x)
is not the probability that the random variable (X) takes on the value (x); rather, fX(x) is a
continuous curve having the property that:
P(a ≤ X ≤ b) = ∫ (a to b) fX(x) dx

§ Definition:
The cumulative distribution function of a r.v (X) defined on a sample space (S) is given by:
FX(x) = P{X ≤ x}
- Properties of FX(x):
1- FX(–∞) = 0
2- FX(∞) = 1
3- 0 ≤ FX(x) ≤ 1
4- FX(x1) ≤ FX(x2) if x1 ≤ x2
5- FX(x⁺) = FX(x) ; the function is continuous from the right
6- P{x1 < X ≤ x2} = FX(x2) – FX(x1)

EXAMPLE (3-1):
A chance experiment has two possible outcomes, a success with probability 0.75 and a failure
with probability 0.25. Mapping function (random variable X) is defined as:
x = 1 if outcome is a success
x = 0 if outcome is a failure
SOLUTION:
P(X < 0) = 0 ; P(X ≤ 0) = 0.25 ; P(X < 1) = 0.25 ; P(X ≤ 1) = 1

[Figures: the mapping of outcomes F → 0 and S → 1 onto the real line; the probability mass function with P(X = 0) = 0.25 and P(X = 1) = 0.75; the cumulative distribution function P(X ≤ x) stepping from 0 to 0.25 at x = 0 and to 1.0 at x = 1]

EXAMPLE (3-2):
Let the above experiment be conducted three times in a row.
a- Find the sample space.
b- Define a random variable (X) as X = number of successes in the three trials.
c- Find the probability mass function P(X = x).
d- Find the cumulative distribution function FX(x) = P{X £ x}
SOLUTION:
In the table below we show the possible outcomes and the mapping process:

Sample Outcome    P(si)             x    P(X = x)
F F F             (0.25)³           0    (0.25)³ = 0.015625
F F S             (0.75)(0.25)²
S F F             (0.75)(0.25)²     1    3 × (0.75)(0.25)² = 0.140625
F S F             (0.75)(0.25)²
S S F             (0.75)²(0.25)
S F S             (0.75)²(0.25)     2    3 × (0.75)²(0.25) = 0.421875
F S S             (0.75)²(0.25)
S S S             (0.75)³           3    (0.75)³ = 0.421875

[Figures: the probability mass function with masses 0.015625, 0.140625, 0.421875, 0.421875 at x = 0, 1, 2, 3, and the cumulative distribution function FX(x) = P{X ≤ x} stepping through P(0) = 0.015625 , P(0) + P(1) = 0.15625 , P(0) + P(1) + P(2) = 0.578125 , P(0) + P(1) + P(2) + P(3) = 1.0]

This is the Binomial Distribution: (n) = number of trials ; (x) = number of successes in (n) trials.


EXAMPLE (3-3):
Suppose that 5 people including you and your friend line up at random. Let (X) denote the
number of people standing between you and your friend. Find the probability mass function
for the random variable (X).
SOLUTION:
Number of different ways by which the 5 people can arrange themselves = 5!
This is the total number of points in the sample space.
Let (A) denote you, (B) denote your friend, and O denote any other person.
The random variable (X) assumes four possible values 0, 1, 2, 3; the possible position patterns are:
(X = 0): A B O O O , O A B O O , O O A B O , O O O A B ⇒ 4 patterns
(X = 1): A O B O O , O A O B O , O O A O B ⇒ 3 patterns
(X = 2): A O O B O , O A O O B ⇒ 2 patterns
(X = 3): A O O O B ⇒ 1 pattern
Any such pattern can be filled in 2! × 3! ways (2! arrangements of you and your friend, 3! arrangements of the other people).
P(X = 0) = 4 × 2! × 3!/5! = 0.4
P(X = 1) = 3 × 2! × 3!/5! = 0.3
P(X = 2) = 2 × 2! × 3!/5! = 0.2
P(X = 3) = 1 × 2! × 3!/5! = 0.1

[Figure: the probability mass function with masses 0.4, 0.3, 0.2, 0.1 at x = 0, 1, 2, 3]

§ Continuous Random Variables and Distributions:

- Definition:
A random variable and its distribution are said to be of continuous type if the corresponding
cumulative distribution function FX(x) can be given by an integral of the form:
FX(x) = ∫ (–∞ to x) fX(u) du
where fX(x) is the probability density function, related to FX(x) by:
fX(x) = d FX(x)/dx
- Properties of fX(x):
1- fX(x) ≥ 0 ; nonnegative
2- ∫ (–∞ to ∞) fX(x) dx = 1
3- P{x1 ≤ X ≤ x2} = ∫ (x1 to x2) fX(u) du ; the probability is the area under the fX(x) curve between x1 and x2.

EXAMPLE (3-4):
Let (X) have the pdf: fX(x) = 0.75 (1 – x²) ; –1 ≤ x ≤ 1
1- Verify that fX(x) is indeed a valid pdf.
2- Find:
a- FX(x)
b- P{–1/2 ≤ X ≤ 1/2}
SOLUTION:
1- ∫ (–∞ to ∞) fX(x) dx = 2 ∫ (0 to 1) 0.75 (1 – x²) dx = 2 [0.75 x – 0.75 x³/3] (0 to 1) = 2 (0.75 – 0.25) = 1.0
2-a) FX(x) = ∫ (–1 to x) 0.75 (1 – u²) du = 0.5 + 0.75 x – 0.25 x³ ; –1 ≤ x ≤ 1
2-b) P{–1/2 ≤ X ≤ 1/2} = ∫ (–1/2 to 1/2) 0.75 (1 – u²) du = FX(1/2) – FX(–1/2) = 0.6875

[Figure: the pdf fX(x), a parabola peaking at 0.75 over –1 ≤ x ≤ 1, and the corresponding S-shaped cumulative distribution function FX(x)]

EXERCISE:
Find x0 such that FX(x0) = P{X ≤ x0} = 0.95
SOLUTION:
P{X ≤ x0} = 0.5 + 0.75 x0 – 0.25 x0³ = 0.95 ⇒ 3 x0 – x0³ = 1.8 ⇒ x0 ≈ 0.73
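The cubic in the exercise can also be solved numerically; the bisection sketch below (illustrative, not part of the original notes) uses the fact that FX is increasing on [–1, 1]:

    def F(x):
        return 0.5 + 0.75 * x - 0.25 * x**3   # CDF from Example (3-4)

    lo, hi = -1.0, 1.0
    for _ in range(60):                       # bisection for F(x0) = 0.95
        mid = (lo + hi) / 2
        if F(mid) < 0.95:
            lo = mid
        else:
            hi = mid
    print((lo + hi) / 2)                      # ≈ 0.73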


§ Mean and Variance of a Distribution:

- Definition:
The mean value or expected value of a random variable (X) is defined as:
μX = E{X} = Σ xi P(X = xi) if X is discrete
μX = E{X} = ∫ (–∞ to ∞) x fX(x) dx if X is continuous
- Definition:
The variance of a random variable (X) is defined as:
σX² = E{(X – μX)²} = Σ (xi – μX)² P(X = xi) if X is discrete
σX² = E{(X – μX)²} = ∫ (–∞ to ∞) (x – μX)² fX(x) dx if X is continuous
σX = √(σX²) is the standard deviation.
The variance is a measure of the spread of the distribution.

- Definition:
For any random variable (X) and any continuous function Y = g(X), the expected value of
g(X) is defined as:
E{g(X)} = Σ g(xi) P(X = xi) if X is discrete
E{g(X)} = ∫ (–∞ to ∞) g(x) fX(x) dx if X is continuous

- Theorem:
Let (X) be a random variable with mean μX; then:
σX² = E(X²) – μX²
Proof:
σX² = E{(X – μX)²} = ∫ (–∞ to ∞) (x – μX)² fX(x) dx
= ∫ (–∞ to ∞) (x² – 2xμX + μX²) fX(x) dx
= ∫ x² fX(x) dx – 2μX ∫ x fX(x) dx + μX² ∫ fX(x) dx
= E(X²) – 2μX μX + μX²
σX² = E(X²) – μX²


- Illustration:
1. The center of mass for a system of particles of masses m1, m2, …, mn placed at x1, x2, …, xn is:
x_cm = (x1 m1 + x2 m2 + …… + xn mn)/Σ mi
If we let m1 = p1, m2 = p2, …, then:
x_cm = x1 p1 + x2 p2 + …… + xn pn (the mean of a discrete distribution)
2. If ρ(x) is the density of a rigid body along the x-axis, then the center of mass is:
x_cm = (1/M) ∫ x ρ(x) dx , where M = ∫ ρ(x) dx
Again, if ρ(x) is replaced by fX(x), the pdf, then:
x_cm = ∫ x fX(x) dx is the mean of a continuous distribution.
Moment of Inertia:
3. If the particles in (1) above rotate with angular velocity (w), then the moment of inertia is:
I = Σ (i = 1 to n) mi xi²
With mi replaced by pi, we have:
I = Σ (i = 1 to n) pi xi²
4. If the rigid body in (2) rotates with angular velocity (w), then:
I = ∫ x² ρ(x) dx , which parallels E(X²) = ∫ x² fX(x) dx
5. The variance E{(X – μX)²} parallels the moment of inertia about the center of mass.
“Recall the parallel axis theorem”:
I = I_cm + M h² parallels E(X²) = σ² + E²(X)

EXAMPLE (3-5):
In the kinetic theory of gases, the free path (x), the distance a molecule travels between collisions, is described by the exponential function:
fX(x) = (1/λ) e^(–x/λ) ; x > 0
a- Find the mean free path, defined as the average distance between collisions.
SOLUTION:
Mean Free Path = μX = E{X} = ∫ (0 to ∞) x fX(x) dx = ∫ (0 to ∞) x (1/λ) e^(–x/λ) dx = λ
From kinetic theory, λ = 1/(√2 π d² N/V),
where (N/V) is the number of molecules per unit volume and (d) is the molecular diameter.
b- If the average speed of a molecule is ν m/s, what is the average collision rate?
Rate = ν/λ


EXAMPLE (3-6):
Maxwell's Distribution Law:
The speed of gas molecules follows the distribution:
f(ν) = 4π (M/(2πRT))^(3/2) ν² e^(–Mν²/(2RT)) ; ν ≥ 0
where ν is the molecular speed
T is the gas temperature in Kelvin
R is the gas constant (8.31 J/mol·K)
M is the molar mass of the gas
a- Find the average speed ν̄
b- Find the root mean square speed ν_rms
c- Find the most probable speed
SOLUTION:
a- ν̄ = E(ν) = ∫ (0 to ∞) ν f(ν) dν = √(8RT/(πM))
b- E{ν²} = (ν_rms)² = ∫ (0 to ∞) ν² f(ν) dν = 3RT/M ⇒ ν_rms = √(3RT/M)
c- The most probable speed is the speed at which f(ν) attains its maximum value.
Therefore, we differentiate f(ν) with respect to ν, set the derivative to zero and solve
for the maximum. The result is:
Most probable speed = √(2RT/M)
(Read “root mean square” from right to left: square the speed, take the mean, then the root: ν_rms = √(E(ν²)).)

Exercise:
The radial probability density function for the ground state of the hydrogen atom (the pdf of
the electron's distance from the center of the atom) is given by:
f(r) = (4/a³) r² e^(–2r/a) for r > 0
where a is the Bohr radius (a = 52.9 pm).
a- What is the distance from the center of the atom at which the electron is most likely to be
found?
b- Find the average value of r (the mean distance of the electron from the center of the atom).
c- What is the probability that the electron will be found within a sphere of radius a
centered at the origin?

- Theorem:
Let (X) be a random variable with mean μX and variance σX².
Define Y = aX + b, where (a) and (b) are real constants; then:
μY = a μX + b ………… (a)
σY² = a² σX² ………… (b)
Proof:
a- μY = E{aX + b} = ∫ (–∞ to ∞) (ax + b) fX(x) dx
= a ∫ x fX(x) dx + b ∫ fX(x) dx ⇒ μY = a μX + b
b- σY² = E{(Y – μY)²}
= E{[(aX + b) – (a μX + b)]²} = E{[a(X – μX)]²}
= a² E{(X – μX)²} ⇒ σY² = a² σX²

EXAMPLE (3-7):
Find the mean and the variance of the binomial distribution considered earlier (Example 3-2)
with n = 3 and P(S) = 0.75.
SOLUTION:
Mean = μX = E{X} = Σ xi P(X = xi)

x    P(X = x)    x · P(X = x)
0    0.015625    0
1    0.140625    0.140625
2    0.421875    0.843750
3    0.421875    1.265625
                 Σ = 2.25

Σ xi P(X = xi) = 2.25 = 3 × 0.75
⇒ E(X) = n p = number of trials × probability of a success

Variance = σX² = E(X²) – [E(X)]² ; E{X²} = Σ xi² P(X = xi)

x    x²   P(X = x)    x² · P(X = x)
0    0    0.015625    0
1    1    0.140625    0.140625
2    4    0.421875    1.687500
3    9    0.421875    3.796875
                      Σ = 5.625

σX² = 5.625 – (2.25)² = 0.5625 = 3 × 0.75 × 0.25
= number of trials × probability of success × probability of failure ⇒ σX² = n p (1 – p)


EXAMPLE (3-8):
Find the mean and the variance of the uniform distribution:
fX(x) = 1/(b – a) ; a ≤ x ≤ b
SOLUTION:
Mean = μX = E{X} = ∫ (–∞ to ∞) x fX(x) dx
μX = ∫ (a to b) x/(b – a) dx = (a + b)/2
Var(X) = σX² = E(X²) – [E(X)]²
E{X²} = ∫ (a to b) x²/(b – a) dx = (b³ – a³)/(3(b – a)) = (a² + ab + b²)/3
σX² = (a² + ab + b²)/3 – ((a + b)/2)² = (b – a)²/12

EXAMPLE (3-9):
Let Z = (X – μX)/σX (the standardized r.v.); show that the mean of (Z) is zero and the variance is 1.
SOLUTION:
Z can be written as: Z = X/σX – μX/σX = aX + b
Mean = μZ = E{Z} = (1/σX) E{X – μX} = (1/σX){E(X) – μX} = 0
Var(Z) = σZ² = (1/σX²) σX² = 1

- Some useful properties of expectation:


- E{a} = a ; a is a constant
- E{a g(X)} = a E{g(X)} ; a is a constant
- E{ g 1 (X) + g 2 (X)} = E{ g 1 (X)} + E{g 2 (X)}

- The median and the mode:


- Definition:
For a continuous random variable (X), the median of the distribution of (X) is defined to be a
point (x0) such that:
P(X < x0) = P(X ≥ x0)

- Definition:
If a random variable (X) has a pdf fX(x), then the value of (x) for which fX(x) is maximum is
called the mode of the distribution.


§ Common Discrete Random Variables:

I. The Binomial Distribution
- Definition:
A random experiment consisting of (n) repeated trials such that:
a- the trials are independent,
b- each trial results in only two possible outcomes, a success and a failure,
c- the probability of a success (p) on each trial remains constant,
is called a binomial experiment.
The r.v (X) that equals the number of trials that result in a success has a binomial distribution
with parameters (n) and (p).
The probability mass function of (X) is:
P(X = x) = C(n, x) p^x (1 – p)^(n – x) ; x = 0, 1, 2, ……, n
- Theorem:
If (X) is a binomial r.v with parameters (n) and (p), then:
μX = E(X) = n p
σX² = Var(X) = n p (1 – p) = n p q

EXAMPLE (3-10):
Suppose that the probability that any particle emitted by a radioactive material will penetrate
a certain shield is 0.02. If 10 particles are emitted, find the probability that:
a- exactly one particle will penetrate the shield;
b- at least two particles will penetrate the shield.
SOLUTION:
P(X = x) = C(n, x) p^x (1 – p)^(n – x) ; p = 0.02 ; n = 10
a- P(X = 1) = C(10, 1) (0.02)¹ (1 – 0.02)⁹
b- P(X ≥ 2) = Σ (x = 2 to 10) C(10, x) (0.02)^x (1 – 0.02)^(10 – x)
Also: P(X = 0) + P(X = 1) + P(X ≥ 2) = 1
⇒ P(X ≥ 2) = 1 – [P(X = 0) + P(X = 1)]
P(X ≥ 2) = 1 – [C(10, 0) (0.02)⁰ (1 – 0.02)¹⁰ + C(10, 1) (0.02)¹ (1 – 0.02)⁹]
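Evaluating both parts directly (an illustrative sketch, not part of the original notes):

    from math import comb

    n, p = 10, 0.02
    def pmf(x):
        # binomial probability mass function
        return comb(n, x) * p**x * (1 - p)**(n - x)

    print(pmf(1))                  # part a ≈ 0.1667
    print(1 - pmf(0) - pmf(1))     # part b ≈ 0.0162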


EXAMPLE (3-11):
Consider the parallel system shown in the figure. The system fails if at least three of the five
machines making up the system fail. Find the reliability of the system assuming that the
probability of failure of each unit is 0.1 over a given period of time.
SOLUTION:
Let (X) be the number of machines in failure.
(X) has a binomial distribution.
P(system fails) = P(number of machines in failure ≥ 3) = P(X ≥ 3)
= C(5, 3) p³ (1 – p)² + C(5, 4) p⁴ (1 – p) + C(5, 5) p⁵
P(system fails) = 0.00856 ; when p = 0.1
⇒ Reliability = 1 – P(Failure) = 0.99144

EXAMPLE (3-12):
The process of manufacturing screws is checked every hour by inspecting 10 screws selected
at random from the hour’s production. If one or more screws are found defective, the
production process is halted and carefully examined. Otherwise the process continues. From
past experience it is known that 1% of the screws produced are defective. Find the probability
that the process is not halted.
SOLUTION:
Let (X) be the number of defective items in the sample.
P(process is not halted) = P(X = 0) = P(number of defective items is zero)
= C(10, 0) (0.01)⁰ (0.99)¹⁰ = (0.99)¹⁰ = 0.9043

EXAMPLE (3-13):
Thirty students in a class compare birthdays. What is the probability that:
a- 5 of the students have their birthday in January?
b- 5 of the students have their birthday on January 1st?
c- At least one student is born in January?
SOLUTION:
a- P(success) = 1/12 ; P(failure) = 11/12
Number of trials (n) = 30 ; required number of successes (k) = 5
P(5 successes in 30 trials) = C(30, 5) (1/12)⁵ (11/12)²⁵
b- P(success) = 1/365 ; P(failure) = 364/365
Number of trials (n) = 30 ; required number of successes (k) = 5
P(5 successes in 30 trials) = C(30, 5) (1/365)⁵ (364/365)²⁵
c- P(success) = 1/12 ; P(failure) = 11/12
P(X ≥ 1) = 1 – P(X = 0) = 1 – C(30, 0) (1/12)⁰ (11/12)³⁰ = 1 – 0.0735 = 0.9265

EXAMPLE (3-14):
The captain of a navy gunboat orders a volley of 25 missiles to be fired at random along
a 500-foot stretch of shoreline that he hopes to establish as a beachhead. Dug into the beach
is a 30-foot long bunker serving as the enemy's first line of defense. What is the probability
that exactly three shells will hit the bunker?
SOLUTION:
P(success) = 30/500 = 0.06
P(3 successes in 25 shells) = C(n, k) p^k (1 – p)^(n – k) ; p = 0.06 , n = 25
P(3 successes in 25 shells) = C(25, 3) (0.06)³ (1 – 0.06)²² = C(25, 3) (0.06)³ (0.94)²²

II. The Geometric Distribution

Let the outcome of an experiment be either a success with probability (p) or a failure with
probability (1 – p). Let (X) be the number of times the experiment is performed until the first
occurrence of a success. Then (X) is a discrete random variable with integer values ranging from
one to infinity. The probability mass function of (X) is:
P(X = x) = P(F F … F S) = P(F)^(x – 1) P(S) = (1 – p)^(x – 1) p ; x = 1, 2, 3, ……
- Theorem:
The mean and the variance of (X) are:
μX = E(X) = 1/p
σX² = Var(X) = (1 – p)/p²

EXAMPLE (3-15):
Let the probability of occurrence of a flood of magnitude greater than a critical magnitude in
any given year be 0.02. Assuming that floods occur independently, determine the “return period”,
defined as the average number of years between floods.
SOLUTION:
(X), the number of years up to the first such flood, has a geometric distribution with p = 0.02.
μX = E(X) = 1/p = 1/0.02 = 50 years
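A simulation of the return period (an illustrative sketch, not part of the original notes):

    import random

    random.seed(4)
    p = 0.02

    def years_to_first_flood():
        # geometric random variable: count trials until the first success
        k = 1
        while random.random() >= p:
            k += 1
        return k

    n = 20_000
    print(sum(years_to_first_flood() for _ in range(n)) / n)   # ≈ 1/p = 50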


III. Hyper-geometric Distribution

Consider sampling without replacement from a lot of (N) items, (k) of which are of one type
and (N – k) of a second type. The probability of obtaining (x) items of the first type in a selection
of (n) items drawn without replacement obeys the hyper-geometric distribution:
P(X = x) = C(k, x) C(N – k, n – x)/C(N, n) ; x = 0 , 1 , 2 , ……… , min(n , k)

[Diagram: a population of N objects, k of type I and N – k of type II; a sample of size n contains x of type I and n – x of type II]

NOTE:
p = k/N is the ratio of items of type (I) to the total population.
- Theorem:
The mean and the variance of the hyper-geometric random variable are:
μX = E(X) = n k/N = n p
σX² = Var(X) = n k (N – k)(N – n)/(N²(N – 1)) = n (k/N)(1 – k/N)((N – n)/(N – 1)) = n p (1 – p)(N – n)/(N – 1)

EXAMPLE (3-16):
Fifty small electric motors are to be shipped. But before such a shipment is accepted, an
inspector chooses 5 of the motors randomly and inspects them. If none of these tested motors
are defective, the lot is accepted. If one or more are found to be defective, the entire shipment
is inspected. Suppose that there are, in fact, three defective motors in the lot. What is the
probability that the entire shipment is inspected?
SOLUTION:
Let (X) be the number of defective motors found; (X) assumes the values (0 , 1 , 2 , 3).
P(entire shipment is inspected) = P(X ≥ 1) = 1 – P(X = 0)
P(X = x) = C(3, x) C(47, 5 – x)/C(50, 5)
P(X = 0) = C(3, 0) C(47, 5)/C(50, 5) = 0.72 (the lot is accepted)
P(X ≥ 1) = 1 – 0.72 = 0.28
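Evaluating the hyper-geometric probabilities directly (an illustrative sketch, not part of the original notes):

    from math import comb

    N, k, n = 50, 3, 5                 # lot size, defectives, sample size
    def hypergeom(x):
        return comb(k, x) * comb(N - k, n - x) / comb(N, n)

    p0 = hypergeom(0)
    print(p0, 1 - p0)                  # ≈ 0.724, 0.276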


EXAMPLE (3-17):
A committee of seven members is to be formed at random from a class with 25 students of
whom 15 are girls. Find the probability that:
a- No girls are among the committee
b- All committee members are girls
c- The majority of the members are girls
SOLUTION:
Let (X) represent the number of girls in the committee.
a- P(X = 0) = C(15, 0) C(10, 7)/C(25, 7)
b- P(X = 7) = C(15, 7) C(10, 0)/C(25, 7)
c- P(majority are girls) = P(X = 4) + P(X = 5) + P(X = 6) + P(X = 7)
= Σ (x = 4 to 7) C(15, x) C(10, 7 – x)/C(25, 7)

- Theorem:
For large (N), one can use the binomial approximation:
P(X = x) ≈ C(n, x) p^x (1 – p)^(n – x) ; p = k/N
This approximation gives very good results if n/N ≤ 0.1. For the example above:
p = 3/50 = 0.06 ⇒ P(X = 0) ≈ C(5, 0) (0.06)⁰ (1 – 0.06)⁵ = 0.733
IV. Poisson Distribution
- Definition:
A discrete random variable (X) is said to have a Poisson distribution if it has the following
probability mass function:
P(X = x) = e^(–b) b^x/x! ; x = 0 , 1 , 2 , ……… , where (b) is a positive constant.
- Theorem:
If (X) is a Poisson r.v with parameter (b), then its mean and variance are:
μX = E(X) = b
σX² = Var(X) = b


- Poisson Process:
Consider a counting process in which events occur at a rate of (λ) occurrences per unit time.
Let X(t) be the number of occurrences recorded in the interval (0 , t). We define the Poisson
process by the following assumptions:
1- X(0) = 0, i.e., we begin the counting at time t = 0.
2- For non-overlapping time intervals (0 , t1) , (t2 , t3), the numbers of occurrences {X(t1) – X(0)}
and {X(t3) – X(t2)} are independent.
3- The probability distribution of the number of occurrences in any time interval depends only on
the length of that interval.
4- The probability of an occurrence in a small time interval (Δt) is approximately (λ Δt).
Using the above assumptions, one can show that the probability of exactly (x) occurrences in
any time interval of length (T) follows the Poisson distribution:
P(X = x) = e^(–λT) (λT)^x/x! ; x = 0 , 1 , 2 , 3 , ………
- Theorem:
Let (b) be a fixed number and (n) any arbitrary positive integer. For each nonnegative integer (x):
lim (n→∞) C(n, x) p^x (1 – p)^(n – x) = e^(–b) b^x/x! ; where p = b/n
That is, the binomial distribution approaches the Poisson distribution when n is large and p = b/n is small.

EXAMPLE (3-18):
Messages arrive at a computer server according to a Poisson distribution with a mean rate
of 10 messages/hour.
a- What is the probability that 3 messages will arrive in one hour?
b- What is the probability that 6 messages will arrive in 30 minutes?
SOLUTION:
a- λ = 10 messages/hour ; T = 1 hour
P(X = x) = e^(–10×1) (10 × 1)^x/x! = e^(–10) (10)^x/x! ; x = 0 , 1 , 2 , 3 , ………
P(X = 3) = e^(–10) (10)³/3!
b- λ = 10 messages/hour ; T = 0.5 hour
P(X = x) = e^(–10×0.5) (10 × 0.5)^x/x! = e^(–5) (5)^x/x! ; x = 0 , 1 , 2 , 3 , ………
P(X = 6) = e^(–5) (5)⁶/6!
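Evaluating both parts (an illustrative sketch, not part of the original notes):

    from math import exp, factorial

    def poisson(x, rate, T):
        m = rate * T                      # mean number of occurrences in (0, T)
        return exp(-m) * m**x / factorial(x)

    print(poisson(3, 10, 1.0))            # part a ≈ 0.0076
    print(poisson(6, 10, 0.5))            # part b ≈ 0.1462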


EXAMPLE (3-19):
The number of cracks in a section of a highway that are significant enough to require repair is
assumed to follow a Poisson distribution with a mean of two cracks per mile.
a- What is the probability that there are no cracks in 5 miles of highway?
b- What is the probability that at least one crack requires repair in ½ mile of highway?
c- What is the probability of at least one crack in 5 miles of highway?
SOLUTION:
a- λ = 2 cracks/mile ; T = 5 miles
P(X = x) = e^(–2×5) (2 × 5)^x/x! = e^(–10) (10)^x/x! ; x = 0 , 1 , 2 , 3 , ………
P(X = 0) = e^(–10)
b- λ = 2 cracks/mile ; T = ½ mile
P(X = x) = e^(–2×0.5) (2 × 0.5)^x/x! = e^(–1) (1)^x/x! = e^(–1)/x! ; x = 0 , 1 , 2 , 3 , ………
P(X ≥ 1) = 1 – P(X = 0) = 1 – e^(–1)
c- λ = 2 cracks/mile ; T = 5 miles
P(X = x) = e^(–10) (10)^x/x! ; x = 0 , 1 , 2 , 3 , ………
P(X ≥ 1) = 1 – P(X = 0) = 1 – e^(–10)

EXAMPLE (3-20):
Given 1000 transmitted bits, find the probability that exactly 10 will be in error. Assume that
the bit error probability is 1/365.
SOLUTION:
X: random variable representing the number of bits in error.
Exact solution:
P(bit error) = 1/365 ; number of trials (n) = 1000
Required number of bits in error (k) = 10
P(X = 10) = C(n, k) p^k (1 – p)^(n – k) = C(1000, 10) (1/365)¹⁰ (364/365)⁹⁹⁰
Approximate (Poisson) solution:
P(X = x) = e^(–b) b^x/x! ; b = n p = 1000 × (1/365) = 1000/365
P(X = 10) = e^(–b) b¹⁰/10!
Exercise:
Perform the computation and compare the difference.


§ Common Continuous Random Variables:


I. Exponential Distribution:
- Definition:
It is said that a random variable (X) has an exponential distribution with a parameter l (l > 0)
if (X) has a continuous distribution for which the pdf fX(x) is given as:

f X ( x ) = l e - lx ; x ³0
The cumulative distribution function is:
F X ( x ) = 1 - e - lx ; x ³0
fX(x) FX(x)

x x

The exponential distribution is often used in practical problems to represent the distribution of
the time that elapses before the occurrence of some event. It has been used to represent
such periods of time as the period for which a machine or an electronic component will
operate without breaking down, the period required to take care of a customer at some service
facility, and the period between the arrivals of two successive customers at a facility.

If the events being considered occur in accordance with a Poisson process, then both the
waiting time until an event occurs and the period of time between any two successive
events have an exponential distribution.

[Sketch: occurrences of events marked along a time axis; the gaps between successive
occurrences are the exponentially distributed inter-arrival times.]
- Theorem:
If the random variable (X) has an exponential distribution with parameter (λ), then:

μ_X = E(X) = 1/λ   and   σ_X² = Var(X) = 1/λ²
Exercise
The number of telephone calls that arrive at a certain office is modeled by a Poisson random
variable. Assume that on the average there are five calls per hour.
a. What is the average (mean) time between phone calls?
b. What is the probability that at least 30 minutes will pass without receiving any
phone call?
c. What is the probability that there are exactly three calls in an observation interval
of two consecutive hours?
d. What is the probability that there is exactly one call in the first hour and exactly two
calls in the second hour of a two-hour observation interval?
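The Poisson/exponential connection used in this exercise can be checked by simulation. A minimal Python sketch (the rate of 5 calls/hour and the 0.5-hour threshold come from the exercise; everything else is ours):

    import random
    from math import exp

    rate = 5.0                          # calls per hour (lambda)
    random.seed(1)
    gaps = [random.expovariate(rate) for _ in range(100_000)]   # inter-arrival times

    print(sum(gaps) / len(gaps))        # ~ 1/lambda = 0.2 hour, part (a)
    frac = sum(g > 0.5 for g in gaps) / len(gaps)
    print(frac, exp(-rate * 0.5))       # ~ e^{-2.5} ≈ 0.082, part (b)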


EXAMPLE (3-21):
Suppose that the depth of water, measured in meters, behind a dam is described by an
exponential random variable with pdf:

f_X(x) = (1/13.5) e^{-x/13.5} ;  x > 0   (and 0 otherwise)

There is an emergency overflow at the top of the dam that prevents the depth from exceeding
40.6 m. There is a pipe placed 32.0 m below the overflow (i.e. at a depth of 8.6 m) that feeds
water to a hydroelectric generator (turbine).

a- What is the probability that water is wasted through the emergency overflow?
b- What is the probability that water will be too low to produce power?
c- Given that water is not wasted in overflow, what is the probability that the generator will
have water to drive it?
SOLUTION:
a- P(water wasted through emergency overflow) = P(X ≥ 40.6) = ∫_{40.6}^{∞} (1/13.5) e^{-x/13.5} dx = e^{-40.6/13.5} ≈ e^{-3}

b- P(water too low to produce power) = P(X < 8.6) = 1 - e^{-8.6/13.5} = 1 - e^{-0.637} = 0.47

c- P(generator has water to drive it / water is not wasted) = P(X > 8.6 / X < 40.6)

= P(8.6 < X < 40.6) / P(X < 40.6) = (e^{-0.637} - e^{-3}) / (1 - e^{-3}) ≈ 0.504

II. Rayleigh Distribution:

The Rayleigh density and distribution functions are:

f_X(x) = (2x/b) e^{-x²/b} ;  x ≥ 0

F_X(x) = 1 - e^{-x²/b} ;  x ≥ 0

The Rayleigh pdf describes the envelope of white noise when passed through a bandpass filter.
It is used in the analysis of errors in various measurement systems.
- Theorem:

μ_X = E(X) = √(πb/4)   and   σ_X² = Var(X) = b(4 - π)/4

III. Cauchy Random Variable:
This random variable has:

f_X(x) = (a/π) / (x² + a²) ,   F_X(x) = 1/2 + (1/π) tan⁻¹(x/a)


IV. Gaussian (Normal) Distribution:

- Definition:
A random variable (X) with pdf:

f_X(x) = (1/√(2π σ_X²)) e^{-(x - μ_X)²/(2σ_X²)} ,   -∞ < x < ∞

has a normal distribution with parameters (μ_X) and (σ_X²), where -∞ < μ_X < ∞ and σ_X² > 0.
Furthermore:

E(X) = μ_X ;   Var(X) = σ_X²

An infinite number of normal distributions can be formed by different combinations of the parameters.

[Sketch: pairs of normal curves with μ1 ≠ μ2, σ1 = σ2 ; μ1 = μ2, σ1 ≠ σ2 ; μ1 ≠ μ2, σ1 ≠ σ2.]

- Definition:
A normal random variable with mean zero and variance one is called a standard normal
random variable. A standard normal random variable is denoted as Z.

[Sketch: fX(x) peaks at 1/√(2π σ_X²) at x = μ_X and falls to 0.607 of that peak at x = μ_X ± σ_X;
fZ(z) is the corresponding curve centered at z = 0.]

- Definition:
The function Φ(z) = P{Z ≤ z} is used to denote the cumulative distribution function of a
standard normal random variable:

Φ(z) = ∫_{-∞}^{z} (1/√(2π)) e^{-u²/2} du

This function is tabulated for z ≥ 0.

For z < 0 :  Φ(z) = 1 - Φ(-z)
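Since Φ(z) has no closed form, tables are used; it can also be evaluated through the error function. A small Python sketch (the erf identity is standard, not from the notes):

    from math import erf, sqrt

    def Phi(z):
        # standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    print(Phi(1.5))                    # ≈ 0.93319, matching the tables
    print(Phi(-1.5), 1 - Phi(1.5))     # checks Phi(-z) = 1 - Phi(z)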


[Sketch: the x-scale μ_X - 4σ_X, μ_X - 3σ_X, …, μ_X + 4σ_X maps onto the z-scale -4, -3, …, 4.]

Useful identities read off the standard normal curve:

P(Z ≤ 1) = Φ(1) ;   P(a ≤ Z ≤ b) = Φ(b) - Φ(a)

P(Z > 1) = 1 - Φ(1) ;   Φ(-1) = 1 - Φ(1)

P(-1 ≤ Z ≤ 1) = Φ(1) - Φ(-1) = Φ(1) - [1 - Φ(1)] = 2Φ(1) - 1


- Cumulative Distribution Function:

P(X ≤ x) = F_X(x) = ∫_{-∞}^{x} (1/√(2π σ_X²)) e^{-(t - μ_X)²/(2σ_X²)} dt

Let u = (t - μ_X)/σ_X  →  du = dt/σ_X  →  dt = σ_X du

F_X(x) = ∫_{-∞}^{(x - μ_X)/σ_X} (1/√(2π)) e^{-u²/2} du

Comparing with Φ(z) = ∫_{-∞}^{z} (1/√(2π)) e^{-u²/2} du shows that F_X(x) = Φ((x - μ_X)/σ_X).

Therefore, we conclude that:

1- P(X ≤ x0) = Φ((x0 - μ_X)/σ_X)

2- P(x0 ≤ X ≤ x1) = Φ((x1 - μ_X)/σ_X) - Φ((x0 - μ_X)/σ_X)

EXAMPLE (3-22):
Suppose the current measurements in a strip of wire are assumed to follow a normal
distribution with a mean of 10 mA and variance 4 (mA)². What is the probability that
a measurement will exceed 13 mA?
SOLUTION:
X = current in mA

Z = (X - μ_X)/σ_X = (X - 10)/2

P(X > 13) = P{Z > (13 - 10)/2} = P{Z > 1.5} = 1 - Φ(1.5)

From tables:  = 1 - 0.93319 = 0.06681

[Sketch: the tail of fX(x) beyond x = 13 corresponds to the tail of fZ(z) beyond z = 1.5.]


EXAMPLE (3-23):
The diameter of a shaft in an optical storage drive is normally distributed with mean 0.25
inch and standard deviation of 0.0005 inch. The specifications on the shaft are 0.25 ± 0.0015
inch. What proportion of shafts conforms to specifications?
SOLUTION:
P(0.2485 < X < 0.2515)

= P{(0.2485 - 0.2500)/0.0005 < Z < (0.2515 - 0.2500)/0.0005}

= P{-3 < Z < 3} = Φ(3) - Φ(-3) = 2Φ(3) - 1

From tables:  = (2 × 0.99865) - 1 = 0.9973

[Sketch: the area of fX(x) between 0.2485 and 0.2515 equals the area of fZ(z) between -3 and 3.]

EXAMPLE (3-24):
Assume that the height of clouds above the ground at some location is a Gaussian random
variable (X) with mean 1830 m and standard deviation 460 m. Find the probability that clouds
will be higher than 2750 m.
SOLUTION:
P(X > 2750) = 1 - P(X ≤ 2750)

= 1 - P{Z ≤ (2750 - 1830)/460} = 1 - P(Z ≤ 2.0) = 1 - Φ(2.0)

From tables:  = 1 - 0.9772

P(X > 2750) = 0.0228

Exercise
The tensile strength of paper is modeled by a normal distribution with a mean of 35 pounds
per square inch and a standard deviation of 2 pounds per square inch.
a. If the specifications require the tensile strength to exceed 33 lb/in², what is the
probability that a given sample will pass the specification test?
b. If 10 samples undergo the specification test, what is the probability that at least 9
will pass the test?
c. If 20 samples undergo the test, what is the expected number of samples that pass
the test?

Exercise
The rainfall over Ramallah district follows the normal distribution with a mean of 600 mm and
a standard deviation of 80 mm. The rainfall is distributed over a 500 km² area. Find:
1. The probability of obtaining a rainwater volume less than 206 MCM (MCM = Million
Cubic Meters).
2. The mean and the standard deviation of the volume (V) of rainfall in MCM.
3. Flooding is declared if the rainfall is higher than 900 mm.
Find the probability of flooding in any given year.

- Remark:
The area under the Gaussian curve within (k) standard deviations of the mean,
P(μ_X - kσ_X ≤ X ≤ μ_X + kσ_X), is given in the following table:

k    Area
1    0.6826
2    0.9544
3    0.9973
4    0.99994   (the total probability outside ±4 standard deviations of the mean is only 0.00006)

§ Normal Approximation of the Binomial and Poisson Distributions:

- Theorem: De Moivre-Laplace
For large (n) the binomial distribution satisfies:

C(n,x) p^x (1 - p)^{n-x} ~ (1/√(2π npq)) e^{-(x - np)²/(2npq)}   (~ : asymptotically equal)

which is a normal distribution with mean (np) and variance (npq). Therefore, if (X) is
a binomial r.v., then Z = (X - np)/√(npq) is approximately a standard normal r.v.
The theorem gives better results when (np > 5) and (npq > 5).

P(a ≤ X ≤ b) = Σ_{x=a}^{b} C(n,x) p^x (1 - p)^{n-x} ~ Φ(β) - Φ(α)   where:

β = (b - np)/√(npq)   and   α = (a - np)/√(npq)

EXAMPLE (3-25):
Consider a binomial experiment with n = 1000 and p = 0.2. If X is the number of successes,
find the probability that X ≤ 240.
SOLUTION:
Exact solution:  P(X ≤ 240) = Σ_{x=0}^{240} C(1000,x) (0.2)^x (0.8)^{1000-x}

Applying the De Moivre-Laplace theorem:

P(X ≤ 240) ≈ Φ((240 - 1000×0.2)/√(1000×0.2×0.8)) = Φ(3.162) = 0.999216
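A quick Python sketch of this comparison (not part of the original notes); the exact sum uses the binomial pmf directly and the approximation uses Φ via the error function:

    from math import comb, erf, sqrt

    n, p, c = 1000, 0.2, 240
    exact = sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

    mu, var = n * p, n * p * (1 - p)               # np = 200, npq = 160
    approx = 0.5 * (1 + erf((c - mu) / sqrt(2 * var)))   # Phi((c - np)/sqrt(npq))
    print(exact, approx)    # both ≈ 0.999, as in the example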


- Theorem:
If (X) is a Poisson r.v. with E(X) = b and Var(X) = b, then Z = (X - b)/√b
is approximately a standard normal r.v. The approximation is good for (b > 5):

e^{-b} b^x / x!  →  (1/√(2πb)) e^{-(x - b)²/(2b)}

EXAMPLE (3-26):
Assume the number of asbestos particles in a cm³ of dust follows a Poisson distribution with
a mean of 1000. If a cm³ of dust is analyzed, what is the probability that less than 950
particles are found in 1 cm³?
SOLUTION:
Exact solution:  P(X ≤ 950) = Σ_{x=0}^{950} e^{-1000} (1000)^x / x!

Approximate:  P(X < 950) ≈ P{Z ≤ (950 - 1000)/√1000} = P{Z ≤ -1.58} = 0.057

§ Transformation of Random Variables:

Let (X) be a random variable with a pdf fX(x). If Y = g(X) is a function of (X), then (Y) is
a random variable whose pdf is to be determined. The function g(X) is assumed to be a
single-valued function of its argument.

I. Discrete Case:

EXAMPLE (3-27):
Let (X) be a binomial r.v. with parameters (n = 3) and (p = 0.75). Let Y = g(X) = 2X + 3.
Since the mapping is one-to-one, P(Y = y) = P(X = x) with y = 2x + 3.
SOLUTION:
The table below shows the (x) and (y) values and their probabilities.

P(X = x) = C(n,x) p^x (1 - p)^{n-x} ;  x ∈ {0, 1, 2, 3}  →  y = 2x + 3 ∈ {3, 5, 7, 9}

[Sketch: the line y = 2x + 3 maps the points x = 0, 1, 2, 3 onto y = 3, 5, 7, 9.]


x    y    P(X = x)       P(Y = y)
0    3    (1 - p)³       (1 - p)³
1    5    3p(1 - p)²     3p(1 - p)²
2    7    3p²(1 - p)     3p²(1 - p)
3    9    p³             p³

EXAMPLE (3-28):
Let (X) have the distribution P{X = x} = 1/6 ;  x = -3, -2, -1, 0, 1, 2.
Define Y = g(X) = X². Find the pdf of the random variable Y.
SOLUTION:

x     y = x²   P(X = x)
-3    9        1/6
-2    4        1/6
-1    1        1/6
0     0        1/6
1     1        1/6
2     4        1/6

The distribution of Y is obtained by adding the probabilities of all x values that map to the same y:

P(Y = 0) = 1/6
P(Y = 1) = 2/6
P(Y = 4) = 2/6
P(Y = 9) = 1/6

[Sketch: fX(x) places mass 1/6 on each of x = -3, …, 2; fY(y) places the above masses on y = 0, 1, 4, 9.]

II. Continuous Case:
Let Y = g(X) be a monotonically increasing or decreasing function of (x). Then:

P(x < X < x + Δx) = P{y(x) < Y < y(x + Δx)} = P{y < Y < y + Δy}

f_X(x) Δx = f_Y(y) Δy

f_Y(y) = f_X(x) Δx/Δy = f_X(x) / |dy/dx| ,  evaluated at x = g⁻¹(y), for y in the range of g.

EXAMPLE (3-29):
Let (X) be a Gaussian r.v. with mean (0) and variance (1).
Let Y = X². Find fY(y).


SOLUTION:
Both roots x = ±√y map to the same y, so the two contributions are summed:

f_Y(y) = 2 f_X(x) / |dy/dx| ,  0 ≤ y < ∞ ;   dy/dx = 2x ,  x = √y

f_Y(y) = (2 / 2√y) (1/√(2π)) e^{-y/2}

f_Y(y) = (1/√(2πy)) e^{-y/2} ;   y ≥ 0
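A Monte Carlo sketch in Python (seed and sample size are arbitrary choices of ours) that checks the derived density by comparing an empirical probability with its theoretical value:

    import random
    from math import erf, sqrt

    random.seed(2)
    ys = [random.gauss(0, 1) ** 2 for _ in range(200_000)]   # samples of Y = X^2

    # P(Y <= 1) = P(-1 <= X <= 1) = 2*Phi(1) - 1 ≈ 0.6827
    emp = sum(y <= 1 for y in ys) / len(ys)
    theory = erf(1 / sqrt(2))          # equals 2*Phi(1) - 1
    print(emp, theory)                 # the two values agree to ~2 decimal places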

EXAMPLE (3-30):
Let (X) be a uniform r.v. in the interval (-1, 4), so fX(x) = 1/5 there. If Y = X², find fY(y).
SOLUTION:
For (-1 ≤ x ≤ 1), both roots x = ±√y lie in the support:

f_Y(y) = 2 f_X(x)/|dy/dx| = 2 (1/5)/(2√y) = 1/(5√y)

For (1 < x ≤ 4), only the root x = +√y lies in the support:

f_Y(y) = f_X(x)/|dy/dx| = (1/5)/(2√y) = 1/(10√y)

f_Y(y) = 1/(5√y) ,   0 < y ≤ 1
       = 1/(10√y) ,  1 < y ≤ 16
       = 0 ,         otherwise


EXAMPLE (3-31):
Let (X) be a r.v. with the exponential pdf:  f_X(x) = λ e^{-λx} , x > 0  (0 for x < 0).
Let Y = 2X + 3. Find fY(y) and the region over which it is defined.
SOLUTION:
f_Y(y) = f_X(x)/|dy/dx| ;   dy/dx = 2 ,  x = (y - 3)/2

f_Y(y) = (1/2) f_X((y - 3)/2)

f_Y(y) = (λ/2) e^{-λ(y-3)/2} ,  y > 3 ;   f_Y(y) = 0 ,  y < 3

NOTE:  P(3 < Y ≤ 5) = P(0 < X ≤ 1) = ∫_0^1 λ e^{-λx} dx = 1 - e^{-λ}

EXAMPLE (3-32):
Let (X) be a Gaussian r.v. with mean (μ_X) and variance (σ_X²).
Let Y = aX + b. Find fY(y).
SOLUTION:
Y = aX + b  →  dy/dx = a  and  x = (y - b)/a

f_Y(y) = (1/a) (1/√(2π σ_X²)) e^{-((y-b)/a - μ_X)²/(2σ_X²)} = (1/√(2π (aσ_X)²)) e^{-(y - b - aμ_X)²/(2(aσ_X)²)}

but from previous results we have μ_Y = aμ_X + b and σ_Y² = a²σ_X², hence:

f_Y(y) = (1/√(2π σ_Y²)) e^{-(y - μ_Y)²/(2σ_Y²)}

Therefore, Y is Gaussian with mean (μ_Y = aμ_X + b) and variance (σ_Y² = a²σ_X²).

SPECIAL CASE:
If (X) is a Gaussian r.v. with mean (μ_X) and variance (σ_X²), then
the r.v. Z = (X - μ_X)/σ_X is Gaussian with mean (μ_Z = 0) and variance (σ_Z² = 1).
That is, (Z) is a standard normal random variable.
GENERAL RESULT:
A linear transformation of a Gaussian random variable is also Gaussian.


CHAPTER III

PROBABILITY DISTRIBUTIONS
FOR MORE THAN ONE
RANDOM VARIABLE

In certain experiments we may be interested in observing several quantities as they occur, such
as the carbon content (X) and hardness (Y) of steel, or the input (X) to a system and the output (Y)
at a given time.

- If we observe two quantities (X) and (Y), each trial gives a pair of values X = x and Y = y,
which represents a point (x,y) in the x-y plane.

- The joint cumulative distribution function of two r.v. X and Y is defined as:

F_XY(x,y) ≜ P{X ≤ x , Y ≤ y}

With Event (A) = {X ≤ x} and Event (B) = {Y ≤ y}, F_XY(x,y) = P(A ∩ B).

[Sketch: the event {X ≤ x, Y ≤ y} is the quadrant of the plane below and to the left of (x,y).]

I. Discrete Two-Dimensional Distribution:

A random variable (X,Y) and its distribution are called discrete if (X,Y) can assume only
countably finite or at most countably infinite pairs of values (x1,y1), (x2,y2), ……
The joint probability mass function of (X) and (Y) is:

P_ij = P{X = x_i , Y = y_j}   such that

F_XY(x,y) = Σ_{x_i ≤ x} Σ_{y_j ≤ y} P_ij    and    Σ_i Σ_j P_ij = 1

II. Continuous Two-Dimensional Distribution:

A random variable (X,Y) and its distribution are called continuous if F_XY(x,y) can be given by:

F_XY(x,y) = ∫_{-∞}^{y} ∫_{-∞}^{x} f_XY(u,v) du dv

where f_XY(x,y) is the joint probability density function (f being continuous and nonnegative).


- Properties of the joint pdf:

1- f_XY(x,y) ≥ 0

2- ∫_{-∞}^{∞} ∫_{-∞}^{∞} f_XY(x,y) dx dy = 1

3- P(x1 < X ≤ x2 , y1 < Y ≤ y2) = ∫_{y1}^{y2} ∫_{x1}^{x2} f_XY(x,y) dx dy

and in general, for a region R of the x-y plane:

P((x,y) ∈ R) = ∫∫_R f_XY(x,y) dx dy

§ Marginal Distributions of a Discrete Distribution:

P(X = x_i) = P(X = x_i , Y arbitrary) = Σ_{y_j} P(X = x_i , Y = y_j)

This is the probability that (X) assumes the value (x_i) while (Y) may assume any value,
which we ignore.
Likewise:

P(Y = y_j) = Σ_{x_i} P(X = x_i , Y = y_j)

§ Marginal Distributions of a Continuous Distribution:

For a continuous distribution we have:

F_X(x) = P(X ≤ x) = ∫_{-∞}^{x} ( ∫_{-∞}^{∞} f_XY(u,y) dy ) du

but f_X(x) = d F_X(x)/dx , so:

f_X(x) = ∫_{-∞}^{∞} f_XY(x,y) dy ;   Marginal pdf

f_Y(y) = ∫_{-∞}^{∞} f_XY(x,y) dx ;   Marginal pdf

§ Independence of Random Variables:

- Theorem:
Two random variables (X) and (Y) are said to be independent if:
F_XY(x,y) = F_X(x) F_Y(y) holds for all (x,y), or equivalently:
f_XY(x,y) = f_X(x) f_Y(y)
Proof:
F_XY(x,y) = P{X ≤ x , Y ≤ y}
Let: A: event {X ≤ x} ,  B: event {Y ≤ y}
A and B are independent if P(A ∩ B) = P(A) P(B), that is:
P(X ≤ x , Y ≤ y) = P(X ≤ x) P(Y ≤ y)


§ Conditional Densities:
Let (X) and (Y) be discrete random variables. The conditional probability density function of
(Y) given (X = x), that is, the probability that (Y) takes on the value (y) given that (X = x), is
given by:

f_{Y/X}(y) = P(Y = y / X = x) = P(X = x , Y = y) / P(X = x)

If (X) and (Y) are continuous, then the conditional pdf of (Y) given (X = x) is given by:

f_{Y/X}(y) = f_{X,Y}(x,y) / f_X(x)

EXAMPLE (4-1):
Let (X) and (Y) be continuous random variables with joint pdf:

f_{X,Y}(x,y) = (1/8)(6 - x - y) ;   0 < x ≤ 2 , 2 < y ≤ 4

1- Find fX(x) and fY(y).
2- Find fY/X(y).
3- Find P(2 < Y ≤ 3).
4- Find P(2 < Y ≤ 3 / X = 1).

SOLUTION:
1- f_X(x) = ∫ f_XY(x,y) dy = ∫_2^4 (1/8)(6 - x - y) dy = (1/8)(6 - 2x) ;   0 < x ≤ 2

f_Y(y) = ∫ f_XY(x,y) dx = ∫_0^2 (1/8)(6 - x - y) dx = (1/4)(5 - y) ;   2 < y ≤ 4

2- f_{Y/X}(y) = f_{X,Y}(x,y)/f_X(x) = [(1/8)(6 - x - y)] / [(1/8)(6 - 2x)] = (6 - x - y)/(6 - 2x) ;  0 < x ≤ 2 , 2 < y ≤ 4

3- P(2 < Y ≤ 3) = ∫_2^3 f_Y(y) dy = ∫_2^3 (1/4)(5 - y) dy = 5/8

4- f_{Y/X}(y / x = 1) = (5 - y)/4 ;   2 < y ≤ 4

P(2 < Y ≤ 3 / X = 1) = ∫_2^3 (5 - y)/4 dy = 5/8

Exercise:
1- Find P(2 < Y ≤ 3 / 0 ≤ X ≤ 1)
2- Find μ_X, μ_Y, σ_X², and σ_Y²
3- Are X and Y independent?


EXAMPLE (4-2):
Suppose that the random variable (X) can take only the values (1, 2, 3) and the random
variable (Y) can take only the values (1, 2, 3, 4). The joint pdf is shown in the table.

X \ Y    1     2     3     4
1        0.1   0     0.1   0
2        0.3   0     0.1   0.2
3        0     0.2   0     0

[Sketch: a 3-D bar plot of f_XY(x,y) over the (x,y) grid with the heights listed above.]

1- Find fX(x) and fY(y).
2- Find P(X ≥ 2).
3- Are (X) and (Y) independent?

SOLUTION:
1- P(X = 1) = 0.2        P(Y = 1) = 0.4
   P(X = 2) = 0.6        P(Y = 2) = 0.2
   P(X = 3) = 0.2        P(Y = 3) = 0.2
                         P(Y = 4) = 0.2
   ∑ = 1.0               ∑ = 1.0

2- P(X ≥ 2) = P(X = 2) + P(X = 3) = 0.6 + 0.2 = 0.8

3- Check all pairs (x,y) for P(X = x , Y = y) = P(X = x) P(Y = y):
P(X = 1 , Y = 1) = 0.1 ≠ (0.2 × 0.4) = 0.08  →  we do not continue
→ X and Y are not independent

Exercise:
- Find μ_X and σ_X²


EXAMPLE (4-3):
Let (X) and (Y) have the joint pdf:

f_XY(x,y) = k x y ,  0 ≤ x ≤ 1 , 0 ≤ y ≤ 1   (0 otherwise)

a- Find (k) so that f_XY(x,y) is a proper pdf.
b- Find P(X > 0.5 , Y > 0.5).
c- Find fX(x) and fY(y).
d- Are (X) and (Y) independent?

SOLUTION:
a- ∫∫ f_XY(x,y) dx dy = 1  →  k ∫_0^1 y ( ∫_0^1 x dx ) dy = k ∫_0^1 (y/2) dy = k/4

k/4 = 1  →  k = 4

b- P(X > 0.5 , Y > 0.5) = ∫_{0.5}^1 ∫_{0.5}^1 4xy dx dy = 4 (x²/2 |_{0.5}^1)(y²/2 |_{0.5}^1)

= (1 - 0.25)(1 - 0.25) = 0.75 × 0.75 = 0.5625

c- f_X(x) = ∫_0^1 4xy dy = 4x (y²/2)|_0^1 = 2x

f_Y(y) = ∫_0^1 4xy dx = 4y (x²/2)|_0^1 = 2y

d- Since f_XY(x,y) = f_X(x) f_Y(y), i.e. 4xy = (2x)(2y)  →  (X) and (Y) are independent.

EXAMPLE (4-5):
For example (4-3), find P(X > Y).
SOLUTION:
P(Y < X) = ∫∫_R 4xy dx dy = ∫_0^1 ∫_0^x 4xy dy dx

= ∫_0^1 4x (y²/2)|_0^x dx = ∫_0^1 2x³ dx = 2 (x⁴/4)|_0^1 = 1/2

[Sketch: R is the part of the unit square below the line y = x.]


EXAMPLE (4-4):
Two random variables (X) and (Y) have the joint pdf:

f_XY(x,y) = (5/16) x² y ,   0 < y < x < 2   (0 otherwise)

a- Verify that f_XY(x,y) is a valid pdf.
b- Find the marginal density functions of X and Y.
c- Are X and Y statistically independent?
d- Find P{X < 1} , P{Y < 0.5} , P{XY < 1}.

SOLUTION:
a- ∫∫_R f_XY(x,y) dy dx = ∫_0^2 ∫_0^x (5/16) x² y dy dx = (5/16) ∫_0^2 x² (y²/2)|_0^x dx

= (5/32) ∫_0^2 x⁴ dx = (5/32)(x⁵/5)|_0^2 = (5/32)(32/5) = 1  →  valid pdf

b- f_X(x) = ∫ f_XY(x,y) dy = ∫_0^x (5/16) x² y dy = (5/16) x² (x²/2) = (5/32) x⁴ ,  0 < x < 2

check:  ∫_0^2 (5/32) x⁴ dx = (5/32)(x⁵/5)|_0^2 = 1  →  OK

f_Y(y) = ∫ f_XY(x,y) dx = ∫_y^2 (5/16) x² y dx = (5/16) y (x³/3)|_y^2 = (5/48) y (8 - y³) ,  0 < y < 2

check:  (5/48) ∫_0^2 (8y - y⁴) dy = (5/48)(4y² - y⁵/5)|_0^2 = (5/48)(16 - 32/5) = 1  →  OK

c- Since f_XY(x,y) ≠ f_X(x) f_Y(y)  →  (X) and (Y) are not statistically independent.

d- P{X < 1} = ∫_0^1 f_X(x) dx = ∫_0^1 (5/32) x⁴ dx = (5/32)(1/5) = 1/32 = 0.03125

P{Y < 0.5} = ∫_0^{0.5} f_Y(y) dy = ∫_0^{0.5} (5/48) y (8 - y³) dy = (5/48)(4y² - y⁵/5)|_0^{0.5} = 53/512 ≈ 0.1035

P{XY < 1} = P{Y < 1/X} = ∫∫_R f_XY(x,y) dx dy

P{Y < 1/X} = ∫_0^1 ∫_0^x (5/16) x² y dy dx + ∫_1^2 ∫_0^{1/x} (5/16) x² y dy dx

[Sketch: the support 0 < y < x < 2; the hyperbola y = 1/x crosses y = x at x = 1 and splits the region.]


§ Addition of Means and Variances:

- Review: (Basic operations on a single random variable)

- E(X) = ∫_{-∞}^{∞} x f_X(x) dx

- E{g(X)} = ∫_{-∞}^{∞} g(x) f_X(x) dx

- E{g1(X) + g2(X)} = E{g1(X)} + E{g2(X)}

- If Y = aX + b:
E(Y) = a E(X) + b  →  μ_Y = a μ_X + b  and  σ_Y² = a² σ_X²

- Definition:
The expected value of a function g(x,y) of two random variables (X) and (Y) is:

E{g(x,y)} = Σ_{x_i} Σ_{y_j} g(x_i, y_j) P(X = x_i , Y = y_j) ;  X and Y discrete

E{g(x,y)} = ∫_{-∞}^{∞} ∫_{-∞}^{∞} g(x,y) f_XY(x,y) dx dy ;  X and Y continuous

Since summation and integration are linear operations, we have:

E{a g1(x,y) + b g2(x,y)} = a E{g1(x,y)} + b E{g2(x,y)}

- Theorem: Addition of Means
The mean or expected value of a sum of random variables is the sum of the expectations:

E(x1 + x2 + …… + xn) = E(x1) + E(x2) + …… + E(xn)

- Theorem: Multiplication of Means
The expected value of the product of independent r.v. equals the product of the expected values:

E(x1 x2 …… xn) = E(x1) E(x2) …… E(xn)

Proof:
If (X) and (Y) are independent random variables, then f_XY(x,y) = f_X(x) f_Y(y), so:

E(XY) = ∫∫ x y f_XY(x,y) dx dy = ∫ x f_X(x) dx · ∫ y f_Y(y) dy = E(X) E(Y)

And in general, if (X) and (Y) are independent, then:

E{g1(X) g2(Y)} = E{g1(X)} E{g2(Y)}

- Theorem: Addition of Variances
To state this theorem we first need the following definition.
- Definition:
The correlation coefficient between two random variables (X) and (Y) is:

ρ_XY ≜ E{(X - μ_X)(Y - μ_Y)} / (σ_X σ_Y) = μ_XY / (σ_X σ_Y)

where μ_XY is called the covariance, and ρ_XY is bounded: -1 ≤ ρ_XY ≤ 1.
When ρ_XY = 0, (X) and (Y) are said to be uncorrelated.
When ρ_XY = ±1, (X) and (Y) are said to be fully correlated.


- Theorem:
Let Y = a1X1 + a2X2 , then:

σ_Y² = a1² σ_X1² + a2² σ_X2² + 2 a1 a2 σ_X1 σ_X2 ρ_X1X2

Proof:
σ_Y² = E{(Y - μ_Y)²} = E{(a1X1 + a2X2 - a1μ_X1 - a2μ_X2)²}
= E{[a1(X1 - μ_X1) + a2(X2 - μ_X2)]²}
= E{a1²(X1 - μ_X1)²} + E{a2²(X2 - μ_X2)²} + 2 a1 a2 E{(X1 - μ_X1)(X2 - μ_X2)}
= a1² σ_X1² + a2² σ_X2² + 2 a1 a2 μ_X1X2

since ρ_X1X2 = μ_X1X2/(σ_X1 σ_X2):  σ_Y² = a1² σ_X1² + a2² σ_X2² + 2 a1 a2 σ_X1 σ_X2 ρ_X1X2

- Theorem:
If (X) and (Y) are independent random variables, then they are uncorrelated.
Proof:
μ_XY = E{(X - μ_X)(Y - μ_Y)}
= E{XY} - μ_Y E{X} - μ_X E{Y} + μ_X μ_Y
= E{XY} - E{X} E{Y}
But since (X) and (Y) are independent, E{XY} = E{X} E{Y}
→ μ_XY = 0  →  ρ_XY = μ_XY/(σ_X σ_Y) = 0

- Theorem:
Let Y = a1X1 + a2X2 , where X1 and X2 are independent random variables; then:

σ_Y² = a1² σ_X1² + a2² σ_X2²

This result follows immediately from the above two theorems:
the variance of a sum of independent random variables equals the sum of the variances of these variables.
§ Functions of Random Variables:
- Let (X) and (Y) be random variables with a joint pdf f_XY(x,y) and let g(x,y) be any continuous
function that is defined for all (x,y). Then Z = g(X,Y) is a random variable; the objective is to find f_Z(z).
- When (X) and (Y) are discrete random variables, we may obtain the probability mass function
P{Z = z} by summing all probabilities for which g(x,y) equals the value (z) considered, thus:

P(Z = z) = Σ Σ_{g(x,y)=z} P(X = x_i , Y = y_j)

- In the case of continuous random variables (X) and (Y) we find F_Z(z) first:

F_Z(z) = P{Z ≤ z} = ∫∫_{g(x,y)≤z} f_XY(x,y) dx dy

Then we find:  f_Z(z) = d F_Z(z)/dz


- Theorem:
Let Z = X + Y, where (X) and (Y) are independent random variables; then:

f_Z(z) = ∫_{-∞}^{∞} f_X(x) f_Y(z - x) dx     (The Convolution Integral)

Proof:
F_Z(z) = P(Z ≤ z) = P(X + Y ≤ z) = P(Y ≤ z - X) = ∫∫_R f_XY(x,y) dx dy

= ∫_{-∞}^{∞} ∫_{-∞}^{z-x} f_XY(x,y) dy dx

since (X) and (Y) are independent random variables, f_XY(x,y) = f_X(x) f_Y(y), so:

F_Z(z) = ∫_{-∞}^{∞} ( ∫_{-∞}^{z-x} f_Y(y) dy ) f_X(x) dx = ∫_{-∞}^{∞} f_X(x) F_Y(z - x) dx

f_Z(z) = d F_Z(z)/dz = ∫_{-∞}^{∞} f_X(x) f_Y(z - x) dx

[Sketch: the region R = {(x,y) : x + y ≤ z} is the half-plane below the line y = z - x.]


EXAMPLE (4-6):
Consider the joint pdf shown in the table (considered before in example 4-2).
Let Z = X + Y.
1- Find the probability mass function of (Z), P{Z = z}.
2- Find P(X = Y).
3- Find E{XY}.

X \ Y    1     2     3     4
1        0.1   0     0.1   0
2        0.3   0     0.1   0.2
3        0     0.2   0     0

SOLUTION:
1- Possible values of (Z) and their probabilities are shown as follows:

Z    P(Z = z)
2    P(X=1, Y=1) = 0.1
3    P(X=1, Y=2) + P(X=2, Y=1) = 0 + 0.3 = 0.3
4    P(X=1, Y=3) + P(X=3, Y=1) + P(X=2, Y=2) = 0.1 + 0 + 0 = 0.1
5    P(X=1, Y=4) + P(X=2, Y=3) + P(X=3, Y=2) = 0 + 0.1 + 0.2 = 0.3
6    P(X=2, Y=4) + P(X=3, Y=3) = 0.2 + 0 = 0.2
7    P(X=3, Y=4) = 0

2- P(Y = X) = summation of probabilities over all values for which x = y
= P(X=1, Y=1) + P(X=2, Y=2) + P(X=3, Y=3) = 0.1 + 0 + 0 = 0.1

3- E{XY} = Σ Σ x_i y_j P(X = x_i , Y = y_j)

= (1)(1) P(X=1, Y=1) + (1)(3) P(X=1, Y=3) + (2)(1) P(X=2, Y=1)
+ (2)(3) P(X=2, Y=3) + (2)(4) P(X=2, Y=4) + (3)(2) P(X=3, Y=2)

= (1)(1)(0.1) + (1)(3)(0.1) + (2)(1)(0.3) + (2)(3)(0.1) + (2)(4)(0.2) + (3)(2)(0.2)

E{XY} = 0.1 + 0.3 + 0.6 + 0.6 + 1.6 + 1.2 = 4.4

Exercise:
Let Z = |X - Y|
- Find the pmf of Z: P(Z = z). (A sketch for automating such table computations follows.)
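A minimal Python sketch (ours, not from the notes); the dictionary simply encodes the joint table above, so the same pattern works for any function of (X, Y):

    from collections import defaultdict

    # joint pmf P(X=x, Y=y) from the table (zero entries omitted)
    pXY = {(1, 1): 0.1, (1, 3): 0.1, (2, 1): 0.3, (2, 3): 0.1,
           (2, 4): 0.2, (3, 2): 0.2}

    pZ = defaultdict(float)
    for (x, y), p in pXY.items():
        pZ[x + y] += p                 # pmf of Z = X + Y
    print(dict(pZ))

    print(sum(p for (x, y), p in pXY.items() if x == y))   # P(X = Y) = 0.1
    print(sum(x * y * p for (x, y), p in pXY.items()))     # E{XY} = 4.4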


EXAMPLE (4-7):
Let (X) and (Y) be two identical and independent random variables, such that:

f_X(x) = a e^{-ax} , x > 0 ;   f_Y(y) = a e^{-ay} , y > 0   (0 otherwise)

Let Z = X + Y. Find f_Z(z).
SOLUTION:
f_Z(z) = ∫_{-∞}^{∞} f_X(x) f_Y(z - x) dx = ∫_0^z a e^{-ax} · a e^{-a(z-x)} dx

f_Z(z) = ∫_0^z a² e^{-ax} e^{-az+ax} dx = a² e^{-az} ∫_0^z dx

f_Z(z) = a² z e^{-az} ;   z ≥ 0

[Sketch: f_Y(z - x) is f_Y reflected and shifted by z; its overlap with f_X(x) is the interval 0 < x < z.]

EXAMPLE (4-8):
Let (X) and (Y) be two identical and independent random variables, such that:

f_X(x) = 1/2 , 0 < x < 2 ;   f_Y(y) = 1/2 , 0 < y < 2   (0 otherwise)

Let Z = X + Y. Find f_Z(z).
SOLUTION:
Z = X + Y  →  0 < z < 4

- For (z < 0):  f_Z(z) = 0

- For (0 < z < 2):

f_Z(z) = ∫_{-∞}^{∞} f_X(x) f_Y(z - x) dx = ∫_0^z (1/2)(1/2) dx = z/4

- For (z = 2):

f_Z(z) = ∫_0^2 (1/2)(1/2) dx = x/4 |_0^2 = 1/2

- For (2 < z < 4):

f_Z(z) = ∫_{-2+z}^{2} (1/2)(1/2) dx = (1/4)[2 - (-2 + z)] = (4 - z)/4

Total area = (1/2)(4)(1/2) = 1  →  it is a pdf

[Sketch: f_Z(z) is a triangle on (0, 4) with peak 1/2 at z = 2.]
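The triangular result can be reproduced by discretizing the convolution integral. A minimal Python sketch (the step size is an arbitrary choice of ours):

    # numerical convolution f_Z = f_X * f_Y for two uniform(0, 2) densities
    dx = 0.01
    xs = [i * dx for i in range(int(6 / dx))]        # grid covering (0, 6)

    def fX(x):
        return 0.5 if 0 < x < 2 else 0.0             # same as fY

    def fZ(z):
        # f_Z(z) = integral of f_X(x) f_Y(z - x) dx, as a Riemann sum
        return sum(fX(x) * fX(z - x) for x in xs) * dx

    for z in (1.0, 2.0, 3.0):
        print(z, fZ(z))      # ≈ 0.25, 0.5, 0.25: the triangle z/4 and (4 - z)/4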


EXAMPLE (4-9):
Let (X) and (Y) be two uniformly distributed and independent random variables, such that:

f_X(x) = 1/2 , 0 < x < 2 ;   f_Y(y) = 1/4 , 0 < y < 4   (0 otherwise)

Let Z = X + Y. Find f_Z(z).
SOLUTION:
- For (z < 0):  f_Z(z) = 0

- For (0 < z < 2):  f_Z(z) = ∫_0^z (1/2)(1/4) dx = z/8

- For (2 < z < 4):  f_Z(z) = ∫_0^2 (1/2)(1/4) dx = (1/8)(2) = 1/4

- For (4 < z < 6):  f_Z(z) = ∫_{-4+z}^{2} (1/2)(1/4) dx = (1/8)[2 - (-4 + z)] = (6 - z)/8

f_Z(z) = z/8 ,        0 < z < 2
       = 1/4 ,        2 < z < 4
       = (6 - z)/8 ,  4 < z < 6

[Sketch: f_Z(z) is a trapezoid on (0, 6) with flat top 1/4 over (2, 4); total area = 1.]


CHAPTER IV

ELEMENTARY STATISTICS

With an experiment in which we observe some quantity (number of defectives, noise,
power, …), there is associated a random variable X whose probability distribution is given by:

F_X(x) = P{X ≤ x}

- In statistics, we take a random sample (x1, x2, ……, xn) of size (n) by performing that
experiment (n) times. The purpose is to draw conclusions from properties of the sample about
properties of the distribution of the corresponding X (the population). We do this by
calculating point estimators or confidence intervals, or by performing a test of parameters or
a test for distribution functions.
- For populations we define numbers called parameters (μ_X and σ_X² in the normal distribution,
(p) in the binomial distribution, λ for the exponential distribution) that characterize important
properties of the distributions. The unknown parameter (θ) is estimated by some appropriate
function of the observations θ̂ = f(x1, x2, ……, xn).
This function is called a statistic or an estimator. A particular value of the estimator is called an
estimate of θ.
- The sample mean μ̂_X is defined as:

μ̂_X = (1/n) Σ_{i=1}^{n} x_i

- The sample variance σ̂_X² is defined as:

σ̂_X² = (1/(n-1)) Σ_{i=1}^{n} (x_i - μ̂_X)²

σ̂_X = √(σ̂_X²) is called the standard deviation of the sample.
- A computationally simpler expression for σ̂_X² is:

σ̂_X² = [n Σ x_i² - (Σ x_i)²] / [n(n - 1)]
§ Regression Techniques:
Suppose in a certain experiment we take measurements in pairs, i.e. (x1,y1), (x2,y2), … (xn,yn).
We suspect that the data can be fit by a straight line of the form y = ax + b.
Suppose that the line is to be fitted to the (n) points and let (∈) denote the sum of the squares
of the vertical distances at the (n) points, then:

∈ = Σ_{i=1}^{n} [y_i - (a x_i + b)]²

The method of least squares specifies the values of a and b so that ∈ is minimized:

∂∈/∂a = -2 Σ (y_i - a x_i - b) x_i = 0

∂∈/∂b = -2 Σ (y_i - a x_i - b) = 0

n b + a Σ x_i = Σ y_i              ……… (1)

b Σ x_i + a Σ x_i² = Σ x_i y_i     ……… (2)

[Sketch: the fitted line with the vertical residual from a data point (x_i, y_i) to the line point (x_i, a x_i + b).]


In matrix form, these equations are:

[ n      Σx_i  ] [b]   [ Σy_i     ]
[ Σx_i   Σx_i² ] [a] = [ Σx_i y_i ]

These two equations are called the normal equations.
Solving the above two equations, we get:

a = [n Σ x_i y_i - Σ x_i Σ y_i] / [n(n - 1) σ̂_X²] = C_XY / σ̂_X²

b = μ̂_Y - a μ̂_X

where C_XY = (1/(n-1)) Σ (x_i - μ̂_X)(y_i - μ̂_Y) = [n Σ x_i y_i - Σ x_i Σ y_i] / [n(n - 1)]

μ̂_X = (1/n) Σ x_i   and   μ̂_Y = (1/n) Σ y_i

A useful formula for a may also be taken as:

a = [Σ x_i y_i - n μ̂_X μ̂_Y] / [Σ x_i² - n μ̂_X²]   (the fitted line passes through (μ̂_X, μ̂_Y))
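A minimal Python sketch implementing these closed-form expressions (the data pairs are made up for illustration):

    def fit_line(xs, ys):
        # least-squares slope a and intercept b for y = a*x + b
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = (sum(x * y for x, y in zip(xs, ys)) - n * mx * my) / \
            (sum(x * x for x in xs) - n * mx * mx)
        b = my - a * mx                 # line passes through (mx, my)
        return a, b

    xs = [1, 2, 3, 4, 5]
    ys = [2.1, 3.9, 6.2, 8.0, 9.8]      # roughly y = 2x
    print(fit_line(xs, ys))             # slope ≈ 2, intercept ≈ 0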

§ Fitting a Polynomial by the Method of Least Squares:

Suppose now that instead of simply fitting a straight line to (n) plotted points, we wish to fit a
polynomial of the form:

y = b1 + b2 x + b3 x²

The method of least squares specifies the constants b1, b2 and b3 so that the sum of the
squares of the errors ∈ is minimized:

∈ = Σ_{i=1}^{n} [y_i - (b1 + b2 x_i + b3 x_i²)]²

Taking partial derivatives of ∈ with respect to b1, b2 and b3 and setting them to zero:

b1 n + b2 Σ x_i + b3 Σ x_i² = Σ y_i               ……… (1)

b1 Σ x_i + b2 Σ x_i² + b3 Σ x_i³ = Σ x_i y_i      ……… (2)

b1 Σ x_i² + b2 Σ x_i³ + b3 Σ x_i⁴ = Σ x_i² y_i    ……… (3)

In matrix form, these equations are:

[ n      Σx_i   Σx_i² ] [b1]   [ Σy_i      ]
[ Σx_i   Σx_i²  Σx_i³ ] [b2] = [ Σx_i y_i  ]
[ Σx_i²  Σx_i³  Σx_i⁴ ] [b3]   [ Σx_i² y_i ]

Then these equations can be solved simultaneously.


§ Fitting an Exponential by the Method of Least Squares:

Suppose that we suspect the data to fit an exponential equation of the form:

y = a e^{bx}   ……… (1)

Taking the natural logarithm:

ln(y) = ln(a) + b x   ……… (2)

Let Y' = ln(y) ;  β' = ln(a) ;  α' = b.
Equation (2) then becomes Y' = β' + α' x,
which is the straight-line case treated first. For each y_i take its natural logarithm to get Y_i'.
The new pairs of data are (x1, ln y1), (x2, ln y2), … (xn, ln yn), the solution of which is known.

EXAMPLE (5-1):
Suppose that the polynomial to be fitted to a set of (n) points is y = b x. It can be shown that:

b = Σ x_i y_i / Σ x_i²

EXAMPLE (5-2):
Let y = a x^b.
Taking the ln of both sides:

ln y = ln a + b ln x

y' = b' + a' x'   (linear regression)

where: y' = ln y , b' = ln a , a' = b , x' = ln x

EXAMPLE (5-3):
If y = 1 - e^{-x^b / a}

Manipulation of this equation yields:

ln ln(1/(1 - y)) = -ln a + b ln x

which is in the standard form:

y' = b' + a' x'   (linear regression)

EXAMPLE (5-4):
If y = L / (1 + e^{a + bx})

This form reduces to:

ln((L - y)/y) = a + b x

which is in the standard form:

y' = b' + a' x'   (linear regression)


§ Model followed:
- Data are the measured values of the random variables obtained from replicates of a random
experiment.
- The random variables X1, X2, …, Xn have the same distribution and are assumed independent.

[Sketch: a population (one distribution) of size N from which a random sample X1, X2, …, Xn
of size (n) is drawn.]

- Definition:
Independent random variables X1, X2, ……, Xn with the same distribution are called a random
sample.

- Definition:
A statistic is a function of a random sample.

- Definition:
The probability distribution of a statistic is called its sampling distribution.

- Theorem:
If X1, X2, …, Xn are independent random variables with E(x_i) = μ_i and Var(x_i) = σ_i²,
and the random variable Y is:
Y = C1X1 + C2X2 + …… + CnXn , then:

E(Y) = C1μ1 + C2μ2 + ……… + Cnμn

σ_Y² = C1²σ1² + C2²σ2² + ……… + Cn²σn²

Furthermore, if X1, X2, …, Xn are independent normal r.v., then (Y) is a normal r.v.

§ Sampling Distribution of the Sample Mean μ̂_X:

- Suppose that a random sample of size (n) is taken from a normal population with mean μ_X
and variance σ_X². Then X1, X2, …, Xn are independent and identically distributed (iid) normal
random variables. Therefore:

μ̂_X = (X1 + X2 + ……… + Xn)/n

E(μ̂_X) = (μ_X + μ_X + ……… + μ_X)/n = n μ_X/n = μ_X

Var(μ̂_X) = (σ_X² + σ_X² + ……… + σ_X²)/n² = n σ_X²/n² = σ_X²/n

→ μ̂_X is a normal random variable with E(μ̂_X) = μ_X and Var(μ̂_X) = σ_X²/n.

- If we are sampling from a population that has an unknown probability distribution, the
sampling distribution of the sample mean will still be approximately normal with
mean E(μ̂_X) = μ_X and variance Var(μ̂_X) = σ_X²/n if the sample size (n) is large.
This is one of the most useful theorems in statistics. It is called the central limit theorem.


§ Central Limit Theorem:

- If X1, X2, …, Xn is a random sample of size (n) taken from a population with mean μ_X and
variance σ_X², and if μ̂_X is the sample mean, then the limiting form of the distribution of:

Z = (μ̂_X - μ_X) / (σ_X/√n)   as n → ∞, is the standard normal distribution.

- In many cases of practical interest, if n ≥ 30, the normal approximation will be satisfactory
regardless of the shape of the population. If n < 30, the central limit theorem will work well if
the distribution of the population is not severely non-normal.

The theorem works well even for small samples (n = 4, 5) when the population has a continuous
distribution.

EXAMPLE (5-5):
An electronics company manufactures resistors that have a mean resistance of 100 Ω and a
standard deviation of 10 Ω. Find the probability that a random sample of n = 25 resistors will
have an average resistance less than 95 Ω.

SOLUTION:
μ̂_X is approximately normal with:

mean = E(μ̂_X) = 100 Ω

Var(μ̂_X) = σ_X²/n = 10²/25 = 4  →  standard deviation = √4 = 2

P{μ̂_X < 95} = P{Z < (95 - 100)/2} = Φ(-2.5) = 0.00621

EXAMPLE (5-6):
The lifetime of a special type of battery is a random variable with mean 40 hours and
standard deviation 20 hours. A battery is used until it fails, then it is immediately replaced by
a new one. Assume we have 25 such batteries, the lifetime of which are independent,
approximate the probability that at least 1100 hours of use can be obtained.

SOLUTION:
Let X1, X2, …, X25 be the lifetimes of the batteries.
Let Y = X1 + X2 + …… + X25 be the overall lifetime of the system
Since Xi are independent, then Y will be approximately normal with mean and variance given


by:

E{Y} = 25 × 40 = 1000 hours

Var(Y) = 25 × 20² = 10,000  →  σ_Y = 100 hours

P{Y ≥ 1100} = P{Z ≥ (1100 - 1000)/100} = P{Z ≥ 1} = 1 - Φ(1) = 0.1587

§ Estimation of Parameters:
- The field of statistical inference consists of those methods used to make decisions or to draw
conclusions about a population. These methods utilize the information contained in a sample
from the population in drawing conclusions.

- Statistical inference may be divided into two major areas:
parameter estimation and hypothesis testing.

- By parameters we mean quantities appearing in distributions, such as:
(p in the binomial, μ_X and σ_X² in the normal, rate λ in a Poisson process).

- We consider two types of estimators: point estimators and interval estimators.

- Point Estimate:
A point estimate of some population parameter (θ) is a single numerical value (θ̂).
- Interval Estimate:
An interval estimate is an interval (confidence interval) obtained from a sample.

Unknown Parameter (θ)   Statistic (θ̂)                           Remarks
μ_X                     μ̂_X = (1/n) Σ x_i                       Used to estimate the mean regardless of whether
                                                                the variance is known or unknown.
σ_X²                    σ̂_X² = (1/(n-1)) Σ (x_i - μ̂_X)²         Used to estimate the variance when the mean is unknown.
σ_X²                    σ̂_X² = (1/n) Σ (x_i - μ_X)²             Used to estimate the variance when the mean is known.
p                       P̂ = x/n                                 Used to estimate the probability of a success in a
                                                                binomial distribution; n: sample size,
                                                                x: number of successes in the sample.
μ_X1 - μ_X2             μ̂_X1 - μ̂_X2 = Σ x_1i/n1 - Σ x_2i/n2     Used to estimate the difference in the means of two populations.
p1 - p2                 P̂1 - P̂2 = x1/n1 - x2/n2                 Used to estimate the difference in the proportions of two populations.

§ Desirable Properties of Point Estimators:

1- An estimator should be close to the true value of the unknown parameter.
- Definition:
A point estimator (θ̂) is an unbiased estimator of (θ) if E(θ̂) = θ.
If the estimator is biased, then E(θ̂) - θ = B is called the bias of the estimator (θ̂).

2- Let θ̂1, θ̂2, …… be unbiased estimators of (θ).
A logical principle of estimation when selecting among several estimators is to choose the one
that has the minimum variance.

[Sketch: two sampling distributions centered at θ; the one with the smaller spread belongs to the
more efficient estimator.]

- Definition:
If we consider all unbiased estimators of (θ), the one with the smallest variance is called
the minimum variance unbiased estimator (MVUE).
When Var(θ̂1) < Var(θ̂2), θ̂1 is called more efficient than θ̂2.
The variance Var(θ̂) = E{[θ̂ - E(θ̂)]²} is a measure of the imprecision of the estimator.

3- The mean square error of an estimator (θ̂) of the parameter (θ) is defined as:

MSE(θ̂) = E(θ̂ - θ)²

This measure of goodness takes into account both the bias and the imprecision.


MSE(θ̂) can also be expressed as:

MSE(θ̂) = E{[θ̂ - E(θ̂) + E(θ̂) - θ]²} = E{[(θ̂ - E(θ̂)) + B]²}     (with B = E(θ̂) - θ)

MSE(θ̂) = E(θ̂ - E(θ̂))² + 2B E{θ̂ - E(θ̂)} + B²     (the middle term is zero since E{θ̂ - E(θ̂)} = 0)

MSE(θ̂) = Var(θ̂) + B²

- Definition:
An estimator whose variance and bias both go to zero as the number of observations goes to
infinity is called consistent.

EXAMPLE (5-7):
Show that the sample variance σ̂_X² is an unbiased estimator of the variance σ_X²
(mean unknown).
SOLUTION:
E{σ̂_X²} = E{(1/(n-1)) Σ (x_i - μ̂_X)²} = (1/(n-1)) E{Σ (x_i² + μ̂_X² - 2 μ̂_X x_i)}

= (1/(n-1)) E{Σ x_i² + n μ̂_X² - 2 μ̂_X (Σ x_i)}

Since Σ x_i = n μ̂_X:

E{σ̂_X²} = (1/(n-1)) E{Σ x_i² + n μ̂_X² - 2n μ̂_X²} = (1/(n-1)) E{Σ x_i² - n μ̂_X²}

= (1/(n-1)) [Σ E{x_i²} - n E{μ̂_X²}]

From previous results we know:

E{x_i²} = μ_X² + σ_X²

E{μ̂_X²} = μ_X² + σ_X²/n

E{σ̂_X²} = (1/(n-1)) [n(μ_X² + σ_X²) - n(μ_X² + σ_X²/n)]

= (1/(n-1)) [n μ_X² + n σ_X² - n μ_X² - σ_X²] = (n - 1) σ_X²/(n - 1) = σ_X²

E{σ̂_X²} = σ_X²


CHAPTER V

ESTIMATION THEORY
AND APPLICATIONS

§ Method for Obtaining Point Estimators: The Maximum Likelihood (ML) Estimator
- Let us take (n) statistically independent samples {X1, X2, …, Xn} from a population.
Let the parameter of the population to be estimated be (θ).
- The joint pdf of the samples, expressed in terms of θ, is:

L(θ) = f{x1, x2, …, xn ; θ} = f{x1 ; θ} · f{x2 ; θ} …… f{xn ; θ}   (due to independence of the x_i)

L(θ) is called the likelihood function. The maximum likelihood technique looks for the value
(θ̂) of the parameter that maximizes the joint pdf of the samples.
- A necessary condition for the maximum likelihood estimator of (θ) is:

d L(θ)/dθ = 0   or equivalently   d ln{L(θ)}/dθ = 0

(ln(·) is a monotonically increasing function of its argument)

EXAMPLE (6-1):
Given a random sample of size (n) taken from a Gaussian population with parameters μ_X and
σ_X², use the ML technique to find estimators for the cases:
a- The mean μ_X when the variance σ_X² is assumed known.
b- The variance σ_X² when the mean μ_X is assumed known.
c- The mean μ_X and the variance σ_X² when both are assumed unknown.
SOLUTION:

L = Π_{i=1}^{n} (1/√(2π σ_X²)) e^{-(x_i - μ_X)²/(2σ_X²)}

ln(L) = -Σ_{i=1}^{n} (x_i - μ_X)²/(2σ_X²) - (n/2) ln(2π σ_X²)

a- Set d ln L/dμ_X = 0, treating σ_X² as a constant:

Σ (x_i - μ̂_X) = 0  →  μ̂_X = (1/n) Σ x_i   ……… (1) Unbiased Estimator

Thus the ML estimator of the mean is the sample average mentioned earlier.

b- Set d ln L/dσ_X² = 0, treating μ_X as a constant.
The result is:

σ̂_X² = (1/n) Σ (x_i - μ_X)²   ……… (2) Unbiased Estimator

Note that the division is by (n) since we are using the known mean of the distribution.

c- Set ∂ ln L/∂μ_X = 0 and ∂ ln L/∂σ_X² = 0.
This results in:

μ̂_X = (1/n) Σ x_i   and   σ̂_X² = (1/n) Σ (x_i - μ̂_X)²

σ̂_X² is a biased estimator since E{σ̂_X²} = (n - 1) σ_X²/n.
For this general case, the unbiased estimator of σ_X² is σ̂_X² = (1/(n-1)) Σ (x_i - μ̂_X)²,
which is the sample variance introduced earlier.
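A minimal Python sketch of these ML estimators on data (the sample is the ten values used in the examples of the previous chapter):

    data = [7.31, 10.80, 11.27, 11.91, 5.51, 8.00, 9.03, 14.42, 10.24, 10.91]
    n = len(data)

    mu_hat = sum(data) / n                                    # ML estimate of the mean
    var_ml = sum((x - mu_hat) ** 2 for x in data) / n         # biased ML variance (case c)
    var_unbiased = sum((x - mu_hat) ** 2 for x in data) / (n - 1)   # sample variance
    print(mu_hat, var_ml, var_unbiased)     # ≈ 9.94, 5.86, 6.51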


§ Finding Interval Estimators for the Mean and Variance:

- An interval estimate of an unknown parameter (θ) is an interval of the form θ1 ≤ θ ≤ θ2,
where the end points θ1 and θ2 depend on the numerical value of the estimate (θ̂) for
a particular sample. From the sampling distribution of (θ̂) we will be able to determine values
of θ1 and θ2 such that:

P(θ1 ≤ θ ≤ θ2) = 1 - α ,   0 < α < 1

where: θ is the unknown parameter,
(1 - α) is the confidence coefficient,
α is called the significance level,
θ1 and θ2 are the lower and the upper confidence limits.

I. Confidence Interval on the Mean: (Variance Known)

- Suppose that the population of interest has a Gaussian distribution with unknown mean μ_X
and known variance σ_X².
The sampling distribution of μ̂_X = (1/n) Σ x_i is Gaussian with mean μ_X and variance σ_X²/n.
Therefore, the distribution of the statistic Z = (μ̂_X - μ_X)/(σ_X/√n) is a standard normal distribution.

P{-z_{α/2} ≤ Z ≤ z_{α/2}} = 1 - α  →  P{-z_{α/2} ≤ (μ̂_X - μ_X)/(σ_X/√n) ≤ z_{α/2}} = 1 - α

P{μ̂_X - z_{α/2} σ_X/√n ≤ μ_X ≤ μ̂_X + z_{α/2} σ_X/√n} = 1 - α

[Sketch: the standard normal curve with area α/2 in each tail beyond ±z_{α/2}; the confidence
interval μ̂_X ± z_{α/2} σ_X/√n covers μ_X with probability 1 - α.]

- Definition:
If μ̂_X is the sample mean of a random sample of size (n) from a population with known
variance σ_X², a 100(1 - α)% confidence interval on μ_X is given by:

μ̂_X - z_{α/2} σ_X/√n ≤ μ_X ≤ μ̂_X + z_{α/2} σ_X/√n

where z_{α/2} is the upper 100(α/2)% point of the standard normal distribution.

§ Choice of the Sample Size:
The definition above means that in using μ̂_X to estimate μ_X, the error E = |μ̂_X - μ_X| is less
than or equal to z_{α/2} σ_X/√n with confidence 100(1 - α)%. In situations where the sample size
can be controlled, we can choose (n) so that we are 100(1 - α)% confident that the error in
estimating μ_X is less than a specified error (E):

choose (n) such that  E = z_{α/2} σ_X/√n  →  n = (z_{α/2} σ_X / E)²


EXAMPLE (6-2):
The following samples are drawn from a population that is known to be Gaussian:

7.31 10.80 11.27 11.91 5.51 8.00 9.03 14.42 10.24 10.91

Find the confidence limits for a 95% confidence level if the variance of the population is 4.
SOLUTION:
From the sample we have:

n = 10 ;  μ̂_X = (1/n) Σ x_i = 9.94 ;  z_{α/2} = 1.96

P{μ̂_X - z_{α/2} σ_X/√n ≤ μ_X ≤ μ̂_X + z_{α/2} σ_X/√n} = 1 - α

P{9.94 - 1.96 √(4/10) ≤ μ_X ≤ 9.94 + 1.96 √(4/10)} = 0.95

P{8.70 ≤ μ_X ≤ 11.18} = 0.95
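A minimal Python sketch of this computation (the table value z_{0.025} = 1.96 is hard-coded):

    from math import sqrt

    data = [7.31, 10.80, 11.27, 11.91, 5.51, 8.00, 9.03, 14.42, 10.24, 10.91]
    n = len(data)
    mu_hat = sum(data) / n          # 9.94
    sigma = 2.0                     # known population std (variance = 4)
    z = 1.96                        # z_{alpha/2} for a 95% interval

    half = z * sigma / sqrt(n)
    print(mu_hat - half, mu_hat + half)   # ≈ (8.70, 11.18)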

II. Confidence Interval on the Mean: (Variance Unknown)

- Suppose that the population of interest has a normal distribution with unknown mean μ_X and
unknown variance σ_X².

- Definition:
Let X1, X2, ……, Xn be a random sample from a normal distribution with unknown mean μ_X
and unknown variance σ_X². The quantity:

T = (μ̂_X - μ_X) / (σ̂_X/√n)

has a (t-distribution) with (n - 1) degrees of freedom.

The t-pdf is:

f_T(t) = [Γ((k+1)/2) / (√(πk) Γ(k/2))] (1 + t²/k)^{-(k+1)/2} ,   -∞ < t < ∞

(k) is the number of degrees of freedom.
The mean of the t-distribution is zero and the variance is k/(k - 2).
The t-distribution is symmetric and unimodal; the maximum is reached at t = 0
(quite similar to the normal distribution; as k → ∞, the t-distribution tends to the normal distribution).

P{-t_{α/2, n-1} ≤ T ≤ t_{α/2, n-1}} = 1 - α

where T = (μ̂_X - μ_X)/(σ̂_X/√n) has the t-distribution with (n - 1) degrees of freedom, and
t_{α/2, n-1} is the upper 100(α/2)% point of the t-distribution with (n - 1) degrees of freedom.

P{-t_{α/2, n-1} ≤ (μ̂_X - μ_X)/(σ̂_X/√n) ≤ t_{α/2, n-1}} = 1 - α

P{μ̂_X - t_{α/2, n-1} σ̂_X/√n ≤ μ_X ≤ μ̂_X + t_{α/2, n-1} σ̂_X/√n} = 1 - α

- Definition:
If μ̂_X and σ̂_X are the mean and standard deviation of a random sample from a normal
distribution with unknown variance σ_X², the 100(1 - α)% confidence interval on μ_X is:

P{μ̂_X - t_{α/2, n-1} σ̂_X/√n ≤ μ_X ≤ μ̂_X + t_{α/2, n-1} σ̂_X/√n} = 1 - α

where t_{α/2, n-1} is the upper 100(α/2)% point of the t-distribution with (n - 1) degrees of freedom.

EXAMPLE (6-3):
For the following samples drawn from a normal population:

7.31 10.80 11.27 11.91 5.51 8.00 9.03 14.42 10.24 10.91

Find a 95% confidence interval for the mean if the variance of the population is unknown.

SOLUTION:
From the sample we have:

μ̂_X = (1/n) Σ x_i = 9.94

σ̂_X² = (1/(n-1)) Σ (x_i - μ̂_X)² = 6.51

From tables of the t-distribution:
Number of degrees of freedom = n - 1 = 10 - 1 = 9 = ν
α = 0.05 → α/2 = 0.025 → t_{0.025, 9} = 2.262

P{9.94 - 2.262 √(6.51/10) ≤ μ_X ≤ 9.94 + 2.262 √(6.51/10)} = 0.95

P{8.11 ≤ μ_X ≤ 11.77} = 0.95


III. Confidence Interval on the Variance of a Normal Population: (Mean Known)

- When the population is normal, the sampling distribution of:

χ² = n σ̂_X²/σ_X² = Σ_{i=1}^{n} ((x_i - μ_X)/σ_X)² ;   σ̂_X² = (1/n) Σ (x_i - μ_X)²

is chi-square with (n) degrees of freedom.
The confidence interval is developed as:

P{χ²_{1-α/2, n} ≤ χ² ≤ χ²_{α/2, n}} = 1 - α

P{χ²_{1-α/2, n} ≤ n σ̂_X²/σ_X² ≤ χ²_{α/2, n}} = 1 - α

P{n σ̂_X²/χ²_{α/2, n} ≤ σ_X² ≤ n σ̂_X²/χ²_{1-α/2, n}} = 1 - α

[Sketch: the chi-square density with area α/2 in each tail beyond χ²_{1-α/2, n} and χ²_{α/2, n};
the area 1 - α lies between them.]

- Definition:
If σ̂_X² is the sample variance from a random sample of (n) observations from a normal
distribution with a known mean and an unknown variance σ_X², then a 100(1 - α)% confidence
interval on σ_X² is:

n σ̂_X²/χ²_{α/2, n} ≤ σ_X² ≤ n σ̂_X²/χ²_{1-α/2, n}

where χ²_{α/2, n} and χ²_{1-α/2, n} are the upper and lower 100(α/2)% points of the chi-square
distribution with (n) degrees of freedom, respectively.

EXAMPLE (6-4):
For the following samples drawn from a normal population:

7.31 10.80 11.27 11.91 5.51 8.00 9.03 14.42 10.24 10.91

Find a 95% confidence interval for the estimation of the variance if the mean of the population
is known to be 10.
SOLUTION:
From the sample we have:

σ̂_X² = (1/n) Σ (x_i - μ_X)² = 5.866

From tables of the χ²-distribution:
Number of degrees of freedom = n = 10 = ν
α = 0.05 → α/2 = 0.025 → χ²_{0.025, 10} = 20.483 and χ²_{0.975, 10} = 3.247

P{10×5.866/20.483 ≤ σ_X² ≤ 10×5.866/3.247} = 0.95

P{2.863 ≤ σ_X² ≤ 18.065} = 0.95


IV. Confidence Interval on the Variance of a Normal Population: (Mean Unknown)

- When the population is normal, the sampling distribution of:

χ² = (n - 1) σ̂_X²/σ_X² = Σ_{i=1}^{n} ((x_i - μ̂_X)/σ_X)² ;   σ̂_X² = (1/(n-1)) Σ (x_i - μ̂_X)²

is chi-square with (n - 1) degrees of freedom.

- Definition:
If σ̂_X² is the sample variance from a random sample of (n) observations from a normal
distribution with an unknown mean and an unknown variance σ_X², then a 100(1 - α)%
confidence interval on σ_X² is:

(n - 1) σ̂_X²/χ²_{α/2, n-1} ≤ σ_X² ≤ (n - 1) σ̂_X²/χ²_{1-α/2, n-1}

where χ²_{α/2, n-1} and χ²_{1-α/2, n-1} are the upper and lower 100(α/2)% points of the chi-square
distribution with (n - 1) degrees of freedom, respectively.

EXAMPLE (6-5):
For the following samples drawn from a normal population:

7.31 10.80 11.27 11.91 5.51 8.00 9.03 14.42 10.24 10.91

Find a 95% confidence interval for the estimation of the variance if the mean of the population
is unknown.
SOLUTION:
From the sample we have:

μ̂_X = (1/n) Σ x_i = 9.94

σ̂_X² = (1/(n-1)) Σ (x_i - μ̂_X)² = 6.51

From tables of the χ²-distribution:
Number of degrees of freedom = n - 1 = 10 - 1 = 9 = ν
α = 0.05 → α/2 = 0.025

χ²_{0.025, 9} = 19.023 and χ²_{0.975, 9} = 2.7

P{(n-1) σ̂_X²/χ²_{α/2, n-1} ≤ σ_X² ≤ (n-1) σ̂_X²/χ²_{1-α/2, n-1}} = 1 - α

P{9×6.51/19.023 ≤ σ_X² ≤ 9×6.51/2.7} = 0.95

P{3.0799 ≤ σ_X² ≤ 21.7} = 0.95


V. Confidence Interval on a Binomial Proportion:

- Suppose that a random sample of size (n) has been taken from a large population and that
x (x ≤ n) observations in this sample belong to a class of interest. Then P̂ = x/n is a point
estimator of the proportion of the population (p) that belongs to this class. Here (n) and (p) are
the parameters of a binomial distribution.
(X) is binomial with mean (np) and variance np(1 - p). Therefore:

P̂ = x/n has mean (p) and variance np(1 - p)/n² = p(1 - p)/n

- As was mentioned earlier (limiting case of the binomial distribution to the normal distribution),
the sampling distribution of P̂ is approximately normal with mean (p) and variance p(1 - p)/n
(if p is not too close to 0 or 1, and (n) is large: {np > 5} and {np(1 - p) > 5}).

- To find a 100(1 - α)% confidence interval on the binomial proportion using the normal
approximation, we construct the statistic:

Z = (X - np)/√(np(1 - p)) = (P̂ - p)/√(p(1 - p)/n)  →  P{-z_{α/2} ≤ Z ≤ z_{α/2}} = 1 - α

P{-z_{α/2} ≤ (P̂ - p)/√(p(1 - p)/n) ≤ z_{α/2}} = 1 - α

P{P̂ - z_{α/2} √(p(1 - p)/n) ≤ p ≤ P̂ + z_{α/2} √(p(1 - p)/n)} = 1 - α

The last equation expresses the upper and lower limits of the confidence interval in terms of
the unknown parameter (p).
- The solution is to replace (p) by P̂ inside √(p(1 - p)/n), so that:

P{P̂ - z_{α/2} √(P̂(1 - P̂)/n) ≤ p ≤ P̂ + z_{α/2} √(P̂(1 - P̂)/n)} = 1 - α

EXAMPLE (6-6):
In a random sample of 85 automobile engine crankshaft bearings, 10 have a surface finish
that is rougher than the specifications allow. A 95% confidence interval for (p) is:

z_{α/2} = z_{0.025} = 1.96  and  P̂ = x/n = 10/85 = 0.12

P{0.12 - 1.96 √(0.12×0.88/85) ≤ p ≤ 0.12 + 1.96 √(0.12×0.88/85)} = 0.95

P{0.05 ≤ p ≤ 0.19} = 0.95
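A minimal Python sketch of the proportion interval (same hard-coded table value z_{0.025} = 1.96):

    from math import sqrt

    n, x = 85, 10
    p_hat = x / n                    # ≈ 0.1176
    z = 1.96
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    print(p_hat - half, p_hat + half)   # ≈ (0.049, 0.186), i.e. about (0.05, 0.19)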


CHAPTER VI

ENGINEERING DECISION

§ Hypothesis Testing:
In the last chapter we illustrated how a parameter can be estimated (point or interval
estimation) from sample data. However, many problems require that we decide whether to
accept or reject a statement about some parameter. The statement is called a hypothesis, and
the decision-making procedure is called hypothesis testing.

Two types of error are possible in such a decision process:

1- We decide that the null hypothesis H0 is false when it is really correct.
This is called a type I error and its probability is denoted by α.
α is called the significance level or size of the test.

2- We decide that the null hypothesis H0 is correct when it is really false.
This is called a type II error and its probability is denoted by β.

- Definition:
The power of the test (1 - β) is the probability of accepting the alternative hypothesis when the
alternative hypothesis is true.

§ One-Sided and Two-Sided Hypotheses:

A test of hypothesis such as:

H0 : θ = θ0
H1 : θ ≠ θ0

is called a two-sided test.

[Sketch: the axis of θ̂ split into a central acceptance region around θ = θ0 and rejection regions
(accept H1 : θ ≠ θ0) on both sides.]

H0 is known as the null hypothesis.
H1 is known as the alternative hypothesis.

Tests such as:

H0 : θ = θ0 ,  H1 : θ > θ0   (rejection region to the right of the acceptance region)
H0 : θ = θ0 ,  H1 : θ < θ0   (rejection region to the left of the acceptance region)

are called one-sided tests.

- α is called the significance level or size of the test.
- The probability of accepting H0 plotted against the true parameter value is called the Operating
Characteristic (OC) curve.


§ Hypothesis Testing on the Mean: Variance Known

- Suppose that we wish to test the hypothesis:

H0 : μ = μ0
H1 : μ ≠ μ0

where μ0 is a specified constant. We have a random sample X1, X2, ……, Xn from the
population (assumed normal). μ̂ is normal with mean μ0 and variance σ²/n when H0 is
assumed true.

- We use the test statistic:  Z = (μ̂ - μ0)/(σ/√n) ;  Z is N(0, 1) when H0 is assumed true.

- If the level of significance is (α), then the probability is (1 - α) that the test statistic (Z) falls
between -z_{α/2} and z_{α/2}.

[Sketch: N(0, 1) with acceptance region between -z_{α/2} and z_{α/2} and critical regions of
area α/2 in each tail.]

- Reject H0 if z > z_{α/2} or z < -z_{α/2};
fail to reject H0 if -z_{α/2} < z < z_{α/2}.

- In terms of μ̂, we reject H0 if:  μ̂ > μ0 + z_{α/2} σ/√n  or  μ̂ < μ0 - z_{α/2} σ/√n.

- Suppose that the null hypothesis is false and that the true value of the mean is μ = μ0 + δ.
When H1 is true, Z is normal with mean:

E(Z/H1) = [E(μ̂/H1) - μ0]/(σ/√n) = (μ0 + δ - μ0)/(σ/√n) = δ√n/σ

and unit variance:  Z ~ N(δ√n/σ , 1)

The probability of a type II error is the probability that (Z) falls between -z_{α/2} and z_{α/2}.
This probability is:

β = Φ(z_{α/2} - δ√n/σ) - Φ(-z_{α/2} - δ√n/σ)

Now if we want to test:

H0 : μ = μ0
H1 : μ > μ0

Z = (μ̂ - μ0)/(σ/√n) ;  Z is N(0, 1) when H0 is assumed true.
If (α) is the level of significance, then
H0 is rejected if z > z_α and accepted if z < z_α.

[Sketch: N(0, 1) with acceptance region z < z_α and a critical region of area α beyond z_α.]


If H1 is true, that is μ = μ0 + δ , δ > 0, then
the type II error probability is the probability that z falls between -∞ and z_α.
This probability is:

β = Φ(z_α - δ√n/σ)

[Sketch: the N(0, 1) density under H0 and the N(δ√n/σ, 1) density under H1; β is the area of the
H1 density to the left of z_α.]

EXAMPLE (7-1):
Aircrew escape systems are powered by a solid propellant. The burning rate of this propellant
is an important product characteristic. Specifications require that the mean burning rate must
be 50 cm/s. We know that the standard deviation of the burning rate is 2 cm/s. The experimenter
decides to specify a type I error probability (significance level) of α = 0.05. He selects a
random sample of n = 25 and obtains a sample average burning rate of μ̂ = 51.3 cm/s. What
conclusions should be drawn?

SOLUTION:
Test H0 : μ = 50 cm/s ,  α = 0.05
     H1 : μ ≠ 50 cm/s
Reject H0 if z > 1.96 or z < -1.96.

For μ̂ = 51.3 cm/s and σ = 2 cm/s:

Z = (μ̂ - μ0)/(σ/√n) = (51.3 - 50)/(2/√25) = 3.25

Since 3.25 > 1.96 we reject H0 and we have strong evidence that the mean burning rate
exceeds 50 cm/s.
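A minimal Python sketch of this test (the rejection threshold 1.96 is z_{0.025} from the tables):

    from math import sqrt

    mu0, sigma, n = 50.0, 2.0, 25
    mu_hat, z_crit = 51.3, 1.96      # sample mean; two-sided 5% threshold

    z = (mu_hat - mu0) / (sigma / sqrt(n))
    print(z)                          # 3.25
    print("reject H0" if abs(z) > z_crit else "fail to reject H0")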


SUMMARY FOR HYPOTHESIS TESTING PROCEDURE

1- H0 : μ = μ0, σ² known.
   Test statistic: Z = (μ̂ − μ0) / (σ/√n), which is N(0, 1).
   H1 : μ ≠ μ0  →  reject if z > z_{α/2} or z < −z_{α/2}
   H1 : μ > μ0  →  reject if z > z_α
   H1 : μ < μ0  →  reject if z < −z_α

2- H0 : μ = μ0, σ² unknown.
   Test statistic: T = (μ̂ − μ0) / (σ̂/√n), student t-distribution with (n − 1) degrees of freedom.
   H1 : μ ≠ μ0  →  reject if t > t_{α/2, n−1} or t < −t_{α/2, n−1}
   H1 : μ > μ0  →  reject if t > t_{α, n−1}
   H1 : μ < μ0  →  reject if t < −t_{α, n−1}

3- H0 : σ² = σ0², μ unknown.
   Test statistic: χ² = (n − 1)σ̂² / σ0², chi-square distribution with (n − 1) degrees of freedom.
   H1 : σ² ≠ σ0²  →  reject if χ² > χ²_{α/2, n−1} or χ² < χ²_{1−α/2, n−1}
   H1 : σ² > σ0²  →  reject if χ² > χ²_{α, n−1}
   H1 : σ² < σ0²  →  reject if χ² < χ²_{1−α, n−1}

4- H0 : σ² = σ0², μ known.
   Test statistic: χ² = nσ̂² / σ0², chi-square distribution with (n) degrees of freedom.
   H1 : σ² ≠ σ0²  →  reject if χ² > χ²_{α/2, n} or χ² < χ²_{1−α/2, n}
   H1 : σ² > σ0²  →  reject if χ² > χ²_{α, n}
   H1 : σ² < σ0²  →  reject if χ² < χ²_{1−α, n}

5- H0 : p = p0.
   Test statistic: Z = (X − np0) / √(np0(1 − p0)) = (P̂ − p0) / √(p0(1 − p0)/n), which is N(0, 1).
   H1 : p ≠ p0  →  reject if z > z_{α/2} or z < −z_{α/2}
   H1 : p > p0  →  reject if z > z_α
   H1 : p < p0  →  reject if z < −z_α
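For the σ² unknown case in the table, scipy packages the one-sample t-test directly; a
minimal sketch on assumed (illustrative) data:

    # One-sample t-test on the mean when sigma^2 is unknown.
    from scipy import stats

    data = [50.2, 51.1, 49.8, 50.7, 51.3, 50.4, 49.9, 50.8]  # assumed sample
    t_stat, p_value = stats.ttest_1samp(data, popmean=50.0)
    print(f"t = {t_stat:.3f}, two-sided p = {p_value:.4f}")
    # Reject H0 : mu = 50 at level alpha if p_value < alpha
    # (equivalently, if |t| > t_{alpha/2, n-1}).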


§ Decision Making for Two Samples:


- So far we have presented hypothesis tests and confidence intervals for a single
  population parameter (the mean μ, the variance σ², or the proportion p). Here we extend
  those results to the case of two independent populations.
- Population (1) has mean μ1 and variance σ1²; population (2) has mean μ2 and variance σ2².
  Inferences will be based on two random samples of sizes (n1) and (n2).
  That is, X11, X12, ……, X1n1 is a random sample of (n1) observations from population 1, and
  X21, X22, ……, X2n2 is a random sample of (n2) observations from population 2.

  [Figure: two independent populations, one with mean μ1 and variance σ1², the other with mean μ2 and variance σ2²]

§ Inferences for a Difference in Means: Variances Known


- Assumptions:
1- X11, X12, ……, X1n1 is a random sample from population 1.
2- X21, X22, ……, X2n2 is a random sample from population 2.
3- The two populations represented by X1 and X2 are independent.
4- Both populations are normal, or if they are not normal, the conditions for the central limit
   theorem apply.

- The test statistic Z = (μ̂1 − μ̂2 − (μ1 − μ2)) / √(σ1²/n1 + σ2²/n2) has an N(0, 1) distribution.

§ Testing Hypothesis on (μ1 − μ2): Variances Known

Null hypothesis: H0 : μ1 − μ2 = Δ0

Test statistic: Z = (μ̂1 − μ̂2 − Δ0) / √(σ1²/n1 + σ2²/n2)

Alternative Hypothesis          Criteria for Rejection
H1 : μ1 − μ2 ≠ Δ0               z > z_{α/2} or z < −z_{α/2}
H1 : μ1 − μ2 > Δ0               z > z_α
H1 : μ1 − μ2 < Δ0               z < −z_α
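A minimal sketch of this two-sample z-test (illustrative means, variances, and sample sizes
assumed; scipy is used only for the normal quantile):

    # Two-sample z-test on mu1 - mu2 with known variances.
    from scipy.stats import norm

    m1_hat, m2_hat = 121.5, 119.2              # assumed sample means
    v1, v2 = 4.0, 5.0                          # known population variances (assumed)
    n1, n2, d0, alpha = 30, 35, 0.0, 0.05

    z = (m1_hat - m2_hat - d0) / (v1 / n1 + v2 / n2) ** 0.5
    print(f"z = {z:.3f}; reject H0: {abs(z) > norm.ppf(1 - alpha / 2)}")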


- Definition: Confidence Interval on the Difference in Two Means: Variances Known.

If μ̂1 and μ̂2 are the means of independent random samples of sizes (n1) and (n2) with known
variances σ1² and σ2², then a 100(1 − α)% confidence interval for (μ1 − μ2) is:

  { μ̂1 − μ̂2 − z_{α/2} √(σ1²/n1 + σ2²/n2) ≤ μ1 − μ2 ≤ μ̂1 − μ̂2 + z_{α/2} √(σ1²/n1 + σ2²/n2) }

where z_{α/2} is the upper α/2 % point of the standard normal distribution.
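The same quantities give the confidence interval; a sketch under the assumptions of the
z-test example above:

    # 100(1 - alpha)% confidence interval for mu1 - mu2, variances known.
    from scipy.stats import norm

    m1_hat, m2_hat = 121.5, 119.2              # assumed sample means
    v1, v2, n1, n2, alpha = 4.0, 5.0, 30, 35, 0.05

    half = norm.ppf(1 - alpha / 2) * (v1 / n1 + v2 / n2) ** 0.5
    d = m1_hat - m2_hat
    print(f"CI: ({d - half:.3f}, {d + half:.3f})")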

§ Inferences for a Difference in Means of Two Normal Distributions: Variances Unknown


- Hypothesis tests for the difference in means:

CASE I: σ1² = σ2² = σ²

- The pooled estimator of σ², denoted by s_p², is defined as:

  s_p² = [ (n1 − 1)s1² + (n2 − 1)s2² ] / (n1 + n2 − 2)

- The statistic T = (μ̂1 − μ̂2 − (μ1 − μ2)) / ( s_p √(1/n1 + 1/n2) ) has a t-distribution
  with (n1 + n2 − 2) degrees of freedom when H0 is true.

- The Two-Sample Pooled t-test:

Null hypothesis: H0 : μ1 − μ2 = Δ0

Test statistic: T = (μ̂1 − μ̂2 − Δ0) / ( s_p √(1/n1 + 1/n2) )

Alternative Hypotheses          Criteria for Rejection
H1 : μ1 − μ2 ≠ Δ0               t > t_{α/2, n1+n2−2} or t < −t_{α/2, n1+n2−2}
H1 : μ1 − μ2 > Δ0               t > t_{α, n1+n2−2}
H1 : μ1 − μ2 < Δ0               t < −t_{α, n1+n2−2}
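scipy implements exactly this pooled test; a sketch on assumed illustrative data
(equal_var=True pools the sample variances as in the s_p² formula above):

    # Two-sample pooled t-test, equal variances assumed.
    from scipy import stats

    sample1 = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]   # assumed data
    sample2 = [11.5, 11.9, 11.6, 11.4, 11.8, 11.7]   # assumed data
    t_stat, p_value = stats.ttest_ind(sample1, sample2, equal_var=True)
    print(f"t = {t_stat:.3f}, two-sided p = {p_value:.4f}")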

- Definition: Confidence Interval on the Difference in Means of Two Normal Distributions:
  Variances Unknown and Equal.

If μ̂1, μ̂2, S1², and S2² are the means and variances of two random samples of sizes (n1) and
(n2) respectively from two independent normal populations with unknown but equal variances,
then a 100(1 − α)% confidence interval on the difference in means (μ1 − μ2) is:

  { μ̂1 − μ̂2 − t_{α/2, n1+n2−2} s_p √(1/n1 + 1/n2) ≤ μ1 − μ2 ≤ μ̂1 − μ̂2 + t_{α/2, n1+n2−2} s_p √(1/n1 + 1/n2) }


CASE II: σ1² ≠ σ2²

If H0 : μ1 − μ2 = Δ0 is true, then the test statistic

  T* = (μ̂1 − μ̂2 − Δ0) / √(s1²/n1 + s2²/n2)

is distributed approximately as t with degrees of freedom given by:

  ν = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 + 1) + (s2²/n2)²/(n2 + 1) ] − 2

if H0 is true.

- Definition: Confidence Interval on the Difference in Means of Two Normal Distributions:
  Variances Unknown and Unequal.

If μ̂1, μ̂2, S1², and S2² are the means and variances of two random samples of sizes (n1) and
(n2) respectively from two independent normal populations with unknown and unequal
variances, then an approximate 100(1 − α)% confidence interval on the difference in means
(μ1 − μ2) is:

  { μ̂1 − μ̂2 − t_{α/2, ν} √(S1²/n1 + S2²/n2) ≤ μ1 − μ2 ≤ μ̂1 − μ̂2 + t_{α/2, ν} √(S1²/n1 + S2²/n2) }

where ν is the degrees of freedom given above.
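A sketch of the unequal-variance case on assumed data; note that scipy's ttest_ind with
equal_var=False uses the Welch–Satterthwaite degrees of freedom, which differ slightly from
the ν formula in these notes (the version with (ni + 1) denominators and the −2 correction),
so the latter is computed explicitly for the interval:

    # Unequal-variance t-test and approximate CI for mu1 - mu2.
    import numpy as np
    from scipy import stats

    x1 = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3])   # assumed data
    x2 = np.array([11.5, 11.9, 11.6, 11.4, 11.8])         # assumed data
    t_stat, p_value = stats.ttest_ind(x1, x2, equal_var=False)

    a, b = x1.var(ddof=1) / len(x1), x2.var(ddof=1) / len(x2)
    nu = (a + b) ** 2 / (a ** 2 / (len(x1) + 1) + b ** 2 / (len(x2) + 1)) - 2
    half = stats.t.ppf(0.975, nu) * (a + b) ** 0.5        # 95% CI half-width
    d = x1.mean() - x2.mean()
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}, nu = {nu:.1f}, "
          f"CI = ({d - half:.3f}, {d + half:.3f})")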

§ Inferences on the Variances of Two Normal Populations:


Next, we introduce tests and confidence intervals for two population variances. Both
populations are assumed normal.

- Definition:
Let X11, X12, ……, X1n1 be a random sample from a normal population with mean μ1 and
variance σ1², and let X21, X22, ……, X2n2 be a random sample from a second normal
population with mean μ2 and variance σ2². Assume that both normal populations are
independent. Let S1² and S2² be the sample variances; then the ratio:

  F = (S1²/σ1²) / (S2²/σ2²)

has an F distribution with (n1 − 1) numerator degrees of freedom and (n2 − 1) denominator
degrees of freedom.

- Hypothesis testing procedure:

A hypothesis testing procedure for the equality of two variances is based on the following:

Null hypothesis: H0 : σ1² = σ2²

Test statistic: F0 = S1² / S2²


Alternative Hypotheses          Rejection Criterion
H1 : σ1² ≠ σ2²                  f0 > f_{α/2, n1−1, n2−1} or f0 < f_{1−α/2, n1−1, n2−1}
H1 : σ1² > σ2²                  f0 > f_{α, n1−1, n2−1}
H1 : σ1² < σ2²                  f0 < f_{1−α, n1−1, n2−1}
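scipy has no packaged two-sample variance F-test, but the procedure is a few lines; a sketch
on assumed data:

    # F-test for equality of two normal variances.
    import numpy as np
    from scipy import stats

    x1 = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3])   # assumed data
    x2 = np.array([11.5, 11.9, 11.6, 11.4, 11.8])         # assumed data
    n1, n2, alpha = len(x1), len(x2), 0.05

    f0 = x1.var(ddof=1) / x2.var(ddof=1)
    f_hi = stats.f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)     # upper alpha/2 point
    f_lo = stats.f.ppf(alpha / 2, n1 - 1, n2 - 1)         # lower alpha/2 point
    print(f"f0 = {f0:.3f}; reject H0: {f0 > f_hi or f0 < f_lo}")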

- Definition: Confidence Interval on the Ratio of Variances of Two Normal Distributions.

If S1² and S2² are the sample variances of random samples of sizes (n1) and (n2) respectively
from two independent normal populations with unknown variances σ1² and σ2², then a
100(1 − α)% confidence interval on the ratio σ1²/σ2² is:

  (S1²/S2²) f_{1−α/2, n2−1, n1−1} ≤ σ1²/σ2² ≤ (S1²/S2²) f_{α/2, n2−1, n1−1}

where f_{α/2, n2−1, n1−1} and f_{1−α/2, n2−1, n1−1} are the upper and lower α/2 % points of the
F distribution with (n2 − 1) numerator degrees of freedom and (n1 − 1) denominator degrees of
freedom respectively.

  [Figure: F density f_X(x) with lower point f_{1−α/2, u, ν} and upper point f_{α/2, u, ν} each cutting off a tail area of α/2, with probability 1 − α between them]

- Remark: f_{1−α/2, u, ν} = 1 / f_{α/2, ν, u}
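A sketch of the interval, with illustrative sample variances and sizes assumed; scipy's
f.ppf(α/2, ·, ·) returns the lower α/2 point, i.e. f_{1−α/2} in the upper-point notation above:

    # 100(1 - alpha)% CI on the ratio sigma1^2 / sigma2^2.
    from scipy import stats

    s1sq, s2sq, n1, n2, alpha = 1.05, 0.98, 10, 12, 0.05
    ratio = s1sq / s2sq

    f_lower = stats.f.ppf(alpha / 2, n2 - 1, n1 - 1)      # f_{1-alpha/2, n2-1, n1-1}
    f_upper = stats.f.ppf(1 - alpha / 2, n2 - 1, n1 - 1)  # f_{alpha/2, n2-1, n1-1}
    print(f"CI: ({ratio * f_lower:.3f}, {ratio * f_upper:.3f})")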
§ Inferences on Two Population Proportions:
Now we consider the case where there are two binomial parameters of interest, p1 and p2, and
we wish to draw inferences about these proportions.

- Large Sample Test for H0 : p1 = p2

Suppose that two independent random samples of sizes (n1) and (n2) are taken from two
populations, and let X1 and X2 represent the number of observations that belong to the class of
interest in the samples. Furthermore, suppose that the normal approximation is applied to each
population so that the estimators of the population proportions:

  P̂1 = X1/n1 and P̂2 = X2/n2

have approximate normal distributions.
- Hypothesis testing procedure:

Null hypothesis: H0 : p1 = p2

Test statistic:

  Z0 = (P̂1 − P̂2) / √( P̂(1 − P̂)(1/n1 + 1/n2) ) , where P̂ = (X1 + X2) / (n1 + n2)


Alternative Hypotheses          Rejection Criterion
H1 : p1 ≠ p2                    z0 > z_{α/2} or z0 < −z_{α/2}
H1 : p1 > p2                    z0 > z_α
H1 : p1 < p2                    z0 < −z_α
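A sketch of the pooled two-proportion test on assumed counts:

    # Large-sample test of H0 : p1 = p2.
    from scipy.stats import norm

    x1, n1 = 27, 300        # successes / trials in sample 1 (assumed)
    x2, n2 = 19, 300        # successes / trials in sample 2 (assumed)
    alpha = 0.05

    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_hat = (x1 + x2) / (n1 + n2)              # pooled estimate under H0
    z0 = (p1_hat - p2_hat) / (p_hat * (1 - p_hat) * (1 / n1 + 1 / n2)) ** 0.5
    print(f"z0 = {z0:.3f}; reject H0: {abs(z0) > norm.ppf(1 - alpha / 2)}")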

- Confidence Interval for p1 − p2:

The confidence interval for p1 − p2 can be found from the statistic:

  Z = ( P̂1 − P̂2 − (p1 − p2) ) / √( p1(1 − p1)/n1 + p2(1 − p2)/n2 )

which is approximately a standard normal r.v.
The 100(1 − α)% confidence interval on p1 − p2 is:

  P̂1 − P̂2 − z_{α/2} √( p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2 ) ≤ p1 − p2 ≤ P̂1 − P̂2 + z_{α/2} √( p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2 )
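The interval follows directly; a sketch reusing the assumed counts from the test above:

    # 100(1 - alpha)% CI on p1 - p2.
    from scipy.stats import norm

    x1, n1, x2, n2, alpha = 27, 300, 19, 300, 0.05
    p1_hat, p2_hat = x1 / n1, x2 / n2

    half = norm.ppf(1 - alpha / 2) * (p1_hat * (1 - p1_hat) / n1
                                      + p2_hat * (1 - p2_hat) / n2) ** 0.5
    d = p1_hat - p2_hat
    print(f"CI: ({d - half:.4f}, {d + half:.4f})")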
