
Probability and Probability Distribution

Sample Space:- The set of all possible outcomes (events) for a random
experiment is called the sample space (or event space).
Event:- An event is an outcome of an experiment.
Eg. In throwing a die, the events are {1, 2, 3, 4, 5, 6}.
Experiment:- An experiment is the process of making an observation or taking a
measurement which results in different possible outcomes.
Eg. Throwing a die.
Equally likely:- Each outcome of an experiment has the same chance of
appearing as any other.
Eg. The occurrences of 1, 2, 3, 4, 5, 6 in throwing a die are equally likely
events.
Independent Event:- Two events are said to be independent if information about
one tells nothing about the occurrence of the other.
Or
The outcome of one event does not affect, and is not affected by, the other
event.
Eg. The outcome of each toss of a coin is independent of the preceding
tosses.
Complementary event:- Let A be any event defined on a sample space S. Then
the event containing all those sample points of S which do not belong to A is
called the complementary event. It is denoted by A^c.
Null Event:- An event having no sample points is called a null event and is
denoted by ∅.
Simple event:- An event consisting of only one sample point of a sample space
is called a simple event.
Eg. Let a die be rolled once and A be the event that face number 5 is turned
up; then A is a simple event.
Probability (Classical or a priori definition):- Probability is the ratio of
the number of favourable cases to the total number of equally likely cases. If
the probability of occurrence of A is denoted by P(A), then by this definition:

P(A) = (Number of favourable cases) / (Total number of equally likely cases) = a/n
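The classical definition above can be sketched directly by counting cases. The die and the event "an even number turns up" are hypothetical illustrations, not from the notes:

```python
from fractions import Fraction

# Classical probability for one roll of a fair die (hypothetical example).
sample_space = {1, 2, 3, 4, 5, 6}   # total equally likely cases, n
favourable = {2, 4, 6}              # event A = "an even number turns up", a

# P(A) = number of favourable cases / total number of equally likely cases
p_A = Fraction(len(favourable), len(sample_space))
print(p_A)  # 1/2
```

Using Fraction keeps the ratio exact instead of introducing floating-point rounding.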
Statistical or Empirical definition of Probability:- If a random experiment
under consideration is repeated n times and an event A occurs n(A) times, then
the relative frequency n(A)/n of the occurrence of A approaches a definite
limit p as the number of trials increases to infinity. This limiting value of
the relative frequency of the occurrence of the event A is called the
probability of the occurrence of event A and is denoted by P(A):

P(A) = lim (n → ∞) n(A)/n = p

N.P.Singh (RATM), Statistics
Limitations:
1. The assumption that the random experiment is repeated under identical
conditions may not hold, because in the present scientific era the
situations change frequently.
2. An infinite number of random experiments or trials is a theoretical and
impractical concept.
3. The measurement of statistical probability is not fully accurate; it is
only approximately accurate, because it depends on experience and trials.
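The empirical definition can be illustrated by simulation. This is a sketch with a hypothetical fair die: the relative frequency of a six drifts toward the true probability 1/6 as the number of trials grows, though the limit itself is never reached in practice (limitation 2 above):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Relative frequency n(A)/n of rolling a six in n_trials rolls of a fair die.
def relative_frequency(n_trials):
    hits = sum(1 for _ in range(n_trials) if random.randint(1, 6) == 6)
    return hits / n_trials

freqs = [relative_frequency(n) for n in (100, 10_000, 100_000)]
print(freqs)  # values approaching 1/6 ≈ 0.1667 as n grows
```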
Axiomatic probability:
Let S be a sample space of a random experiment and A an event. Then the
probability P(A) of happening of the event A is a real number which is
associated with A by a function rule P(·) such that the following axioms are
satisfied:
Axiom I: 0 ≤ P(A) ≤ 1 for each event A of S.
Axiom II: P(S) = 1 for the certain event S.
Axiom III: P(A1 ∪ A2 ∪ …) = P(A1) + P(A2) + … for mutually exclusive
events A1, A2, … of the sample space S.
Addition rule of Probability:-
If n events are mutually exclusive, then the probability of happening of any
one of them is equal to the sum of the probabilities of the happening of the
separate events. In other words, if E1, E2, …, En be n events and
P(E1), P(E2), …, P(En) be their respective probabilities, then

P(E1 ∪ E2 ∪ … ∪ En) = P(E1) + P(E2) + … + P(En)

For two mutually exclusive events A and B,

P(A ∪ B) = P(A or B) = P(A) + P(B)

and in general, when A and B may overlap,

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
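Both forms of the addition rule can be checked by counting on a single die roll. The events below are hypothetical illustrations:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {1, 2}      # "number less than 3"
B = {5, 6}      # "number greater than 4"
C = {2, 3, 4}   # overlaps with A

def P(event):
    # classical probability: favourable cases / total cases
    return Fraction(len(event & S), len(S))

# Mutually exclusive events: P(A ∪ B) = P(A) + P(B)
assert A & B == set()
print(P(A | B), P(A) + P(B))  # 2/3 2/3

# Overlapping events: P(A ∪ C) = P(A) + P(C) − P(A ∩ C)
assert P(A | C) == P(A) + P(C) - P(A & C)
```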
Multiplication rule of Probability:-
The multiplication rule states that if two events A and B are independent, then
the probability of the occurrence of both of them (A and B) is the product of
the individual probabilities of A and B:

P(A and B) = P(A ∩ B) = P(A) . P(B)
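The multiplication rule can be verified by enumerating the sample space of two dice rolls, which are independent. The two events chosen here are hypothetical:

```python
from fractions import Fraction
from itertools import product

# Sample space of two rolls: all 36 ordered pairs, equally likely.
S2 = set(product(range(1, 7), repeat=2))

def P(event):
    return Fraction(len(event), len(S2))

A = {(a, b) for a, b in S2 if a == 6}      # first die shows 6
B = {(a, b) for a, b in S2 if b % 2 == 0}  # second die is even

# Independent events: P(A ∩ B) = P(A) · P(B)
print(P(A & B), P(A) * P(B))  # 1/12 1/12
```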
Conditional Probability:- Two events A and B are said to be dependent when B
can occur only when A is known to have occurred (or vice versa). The
probability associated with such an event is called the conditional probability
and is denoted by P(A|B); in other words, the probability of A given that B has
occurred. If two events A and B are dependent, then the conditional probability
of B given A is:

P(B|A) = P(AB)/P(A) = P(A ∩ B)/P(A)

and, similarly,

P(A|B) = P(A ∩ B)/P(B)
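The conditional-probability formula can be checked by counting on one die roll. The events "even" and "greater than 3" are hypothetical examples:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # "an even number"
B = {4, 5, 6}   # "a number greater than 3"

def P(event):
    return Fraction(len(event), len(S))

# P(B|A) = P(A ∩ B) / P(A): of the three even faces, two exceed 3.
p_B_given_A = P(A & B) / P(A)
print(p_B_given_A)  # 2/3
```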
Combination:-
Number of ways of drawing r out of n:

nCr = n! / (r! (n − r)!)

Where 0 ≤ r ≤ n.
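The factorial formula for nCr can be computed directly, and cross-checked against the standard library. The 52-card poker-hand count is a stock illustration, not from the notes:

```python
import math
from math import factorial

# nCr = n! / (r! (n - r)!), the number of ways of drawing r out of n.
def n_C_r(n, r):
    return factorial(n) // (factorial(r) * factorial(n - r))

print(n_C_r(52, 5))      # 2598960 possible five-card hands
print(math.comb(52, 5))  # 2598960, the same via the standard library
```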
BAYES' THEOREM:
Statement:- An event A can be affected by n mutually exclusive and exhaustive
causes (events) B1, B2, …, Bn. If the prior probabilities
P(B1), P(B2), …, P(Bn) are known and the conditional probabilities
P(A|B1), P(A|B2), …, P(A|Bn) are also known, then the posterior probability
P(Bm|A) is given by

P(Bm|A) = P(Bm) P(A|Bm) / Σ (i = 1 to n) P(Bi) P(A|Bi)

where P(Bm|A) means "the probability that event A occurred due to Bm, when it
is known that A has occurred."
Proof:-
Let B1, B2, …, Bn be any n mutually exclusive and exhaustive events and A be
any other event of a random experiment. Then

A = (A ∩ B1) ∪ (A ∩ B2) ∪ … ∪ (A ∩ Bn)

and

P(A) = P(A ∩ B1) + P(A ∩ B2) + … + P(A ∩ Bn)

i.e.  P(A) = Σ (i = 1 to n) P(A ∩ Bi)        …(1)

Again, from the multiplication rule,

P(A ∩ Bm) = P(A) P(Bm|A)        …(2)
P(A ∩ Bm) = P(Bm) P(A|Bm)       …(3)

From (2) and (3), we get

P(A) P(Bm|A) = P(Bm) P(A|Bm)

P(Bm|A) = P(Bm) P(A|Bm) / P(A)

        = P(Bm) P(A|Bm) / Σ (i = 1 to n) P(A ∩ Bi)        from (1)

        = P(Bm) P(A|Bm) / Σ (i = 1 to n) P(Bi) P(A|Bi)

Proved.
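Bayes' theorem can be sketched numerically. The scenario and all numbers below are hypothetical: three machines B1, B2, B3 produce 50%, 30%, and 20% of a factory's output, with defect rates 2%, 3%, and 4%; A is the event "a randomly chosen item is defective":

```python
priors = [0.50, 0.30, 0.20]        # P(B_m), hypothetical machine shares
likelihoods = [0.02, 0.03, 0.04]   # P(A | B_m), hypothetical defect rates

# Denominator: P(A) = Σ P(B_i) P(A|B_i), the total probability of A
p_A = sum(p * l for p, l in zip(priors, likelihoods))

# Posterior: P(B_m | A) = P(B_m) P(A|B_m) / P(A)
posteriors = [p * l / p_A for p, l in zip(priors, likelihoods)]
print(round(p_A, 4), [round(x, 4) for x in posteriors])
# 0.027 [0.3704, 0.3333, 0.2963]
```

Since the causes are exhaustive, the posteriors sum to 1.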
Theoretical Probability Distributions
Binomial Distribution:-
Binomial distribution is a discrete probability distribution which is obtained
when the probability p of the happening of an event is the same in all the
trials, and there are only two possible outcomes in each trial.

P(r) = nCr p^r q^(n−r),   r = 0, 1, 2, 3, …, n

The set of the numbers of successes (r) along with the corresponding
probabilities is known as the Binomial Probability Distribution. The set of the
numbers of successes along with the corresponding theoretical frequencies is
called the Theoretical Binomial Distribution with parameters n and p.
Assumptions of Binomial Distribution:-
1. The Bernoulli trials are independent.
2. The number (n) of trials is finite and fixed.
3. The trials are repeated.
4. There are two mutually exclusive possible outcomes of each trial, which
are referred to as "success" and "failure".
5. In a single trial the probability of success is p and the probability of
failure is q, where p + q = 1.
Constants of Binomial Distribution:-
Let the binomial distribution be P(r) = nCr p^r q^(n−r).

Mean = np
Standard Deviation = √(npq)
Variance = npq
Second central moment μ2 = variance = npq
Third central moment μ3 = npq(q − p)
Fourth central moment μ4 = 3n²p²q² + npq(1 − 6pq) = npq[1 + 3(n − 2)pq]

Karl Pearson's coefficients:

β1 = (q − p)² / (npq),   γ1 = (q − p) / √(npq),
β2 = 3 + (1 − 6pq) / (npq),   γ2 = (1 − 6pq) / (npq)
Poisson Distribution:-
Let X be a discrete random variable where the probability of X = r is given by:

P(r) = e^(−m) m^r / r!,   r = 0, 1, 2, 3, …

where m = np, i.e. when the value of n is very large. Here e is the base of
the natural logarithm, whose value is approximately 2.7183.
Poisson distribution may be obtained as a limiting case of binomial probability
distribution under the following conditions:
1. n, the number of trials, is large.
2. p, the constant probability of success for each trial, is small.
3. m = np is finite.
Constants of Poisson Distribution:
Mean = variance = np = m
Second central moment μ2 = variance = m
Third central moment μ3 = m
Fourth central moment μ4 = m + 3m²

Karl Pearson's coefficients:

β1 = 1/m,   γ1 = 1/√m,
β2 = 3 + 1/m,   γ2 = 1/m
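The limiting relationship to the binomial can be sketched numerically: for a large n and a small p with m = np held fixed, the binomial probability is already very close to the Poisson one. The values m = 2, n = 10 000 are hypothetical:

```python
from math import exp, factorial, comb

m = 2.0  # hypothetical mean m = np

# Poisson pmf: P(r) = e^(-m) m^r / r!
def poisson_pmf(r):
    return exp(-m) * m**r / factorial(r)

# A binomial with large n, small p, and np = m gives nearly the same value.
n = 10_000
p = m / n
binom_p3 = comb(n, 3) * p**3 * (1 - p)**(n - 3)

print(round(poisson_pmf(3), 5), round(binom_p3, 5))  # nearly equal
```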
Normal Distribution:
A random variable X with parameters μ and σ² is said to follow a normal
distribution when its probability density function is given by:

P(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²)),   −∞ < x < ∞
Characteristics of Normal Distribution:-
1. It is a continuous probability distribution. It is also called the Gaussian
distribution.
2. The graph of the normal distribution is bell-shaped, having a single peak;
it is unimodal.
3. The mean of a normal distribution lies at the centre of its normal curve.
4. Mean = Median = Mode in a normal distribution; it is also called a
symmetrical distribution.
Standard normal distribution or standard normal variate:-
It is useful to transform a normally distributed variable into such a form that
a single table of areas under the normal curve is applicable regardless of the
units of the original distribution.
Let X be a random variable distributed normally with mean X̄ and standard
deviation σ. Then we define a new random variable Z using the transformation:

Z = (X − X̄) / σ
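The standardisation above can be sketched in code: a z-score converts any normal variable to the standard normal, so one cumulative table (here, the error-function form of the standard normal CDF) serves every distribution. The data X ~ N(100, 15²) and the query P(X ≤ 130) are hypothetical:

```python
from math import erf, sqrt

# Z = (X - mean) / sd: the standard normal variate.
def z_score(x, mean, sd):
    return (x - mean) / sd

# CDF of the standard normal, expressed via the error function.
def std_normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical example: X ~ N(mean=100, sd=15); find P(X <= 130).
z = z_score(130, 100, 15)
print(z, round(std_normal_cdf(z), 4))  # 2.0 0.9772
```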