MTH-202
(Graph Theory and Probability)
Topic: Discuss the various distributions of a continuous random variable.
Submitted by:
MANMEET SINGH
Roll No.: RE2801B40
Reg. No.: 10801620
Course: B.Tech.(IT)-M.Tech.
Submitted to:
Lect. Rajpreet Kaur
Acknowledgement
Manmeet Singh
(Student's sign)
Table of Contents:
• Introduction
• Probability Distributions
• Properties
• Expected Value
• Variance
INTRODUCTION
Random variable:
In mathematics, random variables are used in the study of probability. They were developed to assist in
the analysis of games of chance, stochastic events, and the results of scientific experiments by capturing
only the mathematical properties necessary to answer probabilistic questions. Further formalizations have
firmly grounded the concept in the theoretical framework of mathematics by making use of measure theory.
The language and structure of random variables can be grasped at various levels of mathematical fluency.
Beyond an introductory level, set theory and calculus are fundamental. The concept of a random variable
is closely linked to the term "random variate": a random variate is a particular outcome of a random
variable. There are two types of random variables: discrete and continuous.
For any continuous random variable with probability density function f(x), we have that the integral of
f(x)dx over all possible values of x equals 1. This is a useful fact.
Example: X is a continuous random variable with probability density function given by f(x) = cx for
0 ≤ x ≤ 1, where c is a constant. Find c.
If we integrate f(x) between 0 and 1 we get c/2. Hence c/2 = 1 (from the useful fact above!), giving c = 2.
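The normalisation argument above can be checked numerically. The sketch below (not part of the original solution) approximates the integral of f(x) = cx over [0, 1] with a midpoint rule; the helper name `integrate` is an assumption for illustration.

```python
# Numerical check of the example: f(x) = c*x on [0, 1] must integrate to 1,
# which forces c = 2.

def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

c = 2.0
pdf = lambda x: c * x           # density f(x) = 2x on [0, 1]

total = integrate(pdf, 0.0, 1.0)
print(round(total, 6))          # ≈ 1.0, confirming c = 2 normalises the density
```

The midpoint rule is exact for linear functions, so the result matches c/2 · c-free algebra: c/2 = 1 gives c = 2.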
Probability Distributions:
• A continuous random variable can assume any value in an interval on the real line or in a
collection of intervals.
• The function f(x) is the probability density function (or probability distribution function) of the
continuous random variable x.
• Unlike a discrete random variable, we cannot simply plug values of the random variable into this
function and get probability information directly.
• For continuous random variables it is impossible to talk about the probability of the random
variable assuming a particular value.
• Instead, we talk about the probability of the random variable assuming a value within a given
interval.
• In order to determine the probability that a continuous random variable assumes a value in an
interval, you integrate the density function f(x) over that interval.
• The probability of a continuous random variable assuming a value within some given interval
from x1 to x2 is defined to be the area under the graph of the probability density function
between x1 and x2.
• The probability of a continuous random variable assuming a specific value is zero (there is no
area under any graph at an exact point).
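The points above can be sketched numerically: interval probability is area under the density, and a single point carries zero area. This uses the earlier example density f(x) = 2x on [0, 1]; the helper `area_under` is a name assumed for illustration.

```python
# Interval probability P(x1 <= X <= x2) as the area under f(x) between x1 and x2.

def area_under(f, x1, x2, n=100_000):
    """Midpoint-rule approximation of the area under f between x1 and x2."""
    h = (x2 - x1) / n
    return sum(f(x1 + (i + 0.5) * h) for i in range(n)) * h

pdf = lambda x: 2.0 * x          # density from the earlier example

# P(0.25 <= X <= 0.5) = 0.5^2 - 0.25^2 = 0.1875
print(round(area_under(pdf, 0.25, 0.5), 4))   # 0.1875

# P(X = 0.5): the interval [0.5, 0.5] has zero width, hence zero area.
print(area_under(pdf, 0.5, 0.5))              # 0.0
```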
Expected Value:
The expected value (or population mean) of a random variable indicates its average or central
value. It is a useful summary value (a number) of the variable's distribution.
Stating the expected value gives a general impression of the behaviour of some random variable
without giving full details of its probability distribution (if it is discrete) or its probability density
function (if it is continuous).
Two random variables with the same expected value can have very different distributions. There
are other useful descriptive measures which affect the shape of the distribution, for example
variance.
The expected value of a random variable X is symbolised by E(X) or µ.
If X is a discrete random variable with possible values x1, x2, x3, ..., xn, and p(xi) denotes P(X =
xi), then the expected value of X is defined by:
E(X) = Σ xi·p(xi),
where the sum is taken over all possible values of the random variable X.
If X is a continuous random variable with probability density function f(x), then the expected
value of X is defined by:
E(X) = ∫ x·f(x) dx, integrated over all possible values of x.
Example:
Discrete case: When a fair die is thrown, each of the possible faces 1, 2, 3, 4, 5, 6 (the xi's) has a
probability of 1/6 (the p(xi)'s) of showing. The expected value of the face showing is therefore:
E(X) = 1·(1/6) + 2·(1/6) + 3·(1/6) + 4·(1/6) + 5·(1/6) + 6·(1/6) = 21/6 = 3.5
Notice that, in this case, E(X) is 3.5, which is not a possible value of X.
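Both definitions of E(X) can be computed directly. A minimal sketch, reusing the die example and the density f(x) = 2x from earlier; the `integrate` helper is assumed for illustration.

```python
# E(X) for the fair die (discrete) and for f(x) = 2x on [0, 1] (continuous).
from fractions import Fraction

# Discrete case: faces 1..6, each with probability 1/6.
p = Fraction(1, 6)
expectation = sum(x * p for x in range(1, 7))
print(float(expectation))       # 3.5 -- not itself a possible face

# Continuous case: E(X) = integral of x * f(x) dx.
def integrate(f, a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(round(integrate(lambda x: x * 2.0 * x, 0.0, 1.0), 4))   # 2/3 ≈ 0.6667
```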
Variance:
The (population) variance of a random variable is a non-negative number which gives an idea of
how widely spread the values of the random variable are likely to be; the larger the variance, the
more scattered the observations on average.
Stating the variance gives an impression of how closely concentrated round the expected value
the distribution is; it is a measure of the 'spread' of a distribution about its average value.
V(X) = E(X^2) - [E(X)]^2
Notes:
1. The larger the variance, the further that individual value of the random variable
(observations) tends to be from the mean, on average;
2. The smaller the variance, the closer that individual values of the random variable
(observations) tend to be to the mean, on average;
3. Taking the square root of the variance gives the standard deviation, i.e. SD(X) = √V(X);
4. The variance and standard deviation of a random variable are always non-negative.
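The formula V(X) = E(X^2) - [E(X)]^2 can be exercised on the fair-die example. A minimal sketch using exact fractions:

```python
# Variance and standard deviation of the face shown by a fair die.
from fractions import Fraction

p = Fraction(1, 6)
e_x  = sum(x * p for x in range(1, 7))        # E(X)   = 7/2
e_x2 = sum(x * x * p for x in range(1, 7))    # E(X^2) = 91/6

variance = e_x2 - e_x ** 2                    # 91/6 - 49/4 = 35/12
print(variance)                               # 35/12 (non-negative, as required)
print(float(variance) ** 0.5)                 # standard deviation ≈ 1.708
```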
The uniform random variable has the following probability density function:
f(x) = 1/(b - a) for a ≤ x ≤ b,
     = 0 elsewhere.
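The uniform density is flat over its interval and zero elsewhere, so interval probabilities reduce to width times height. A minimal sketch, with the endpoints a and b chosen purely for illustration:

```python
# Uniform density f(x) = 1/(b - a) on [a, b], zero elsewhere.
def uniform_pdf(x, a, b):
    """Density of the uniform random variable on [a, b]."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

a, b = 2.0, 6.0
print(uniform_pdf(3.0, a, b))   # 0.25 -- constant height inside [a, b]
print(uniform_pdf(7.0, a, b))   # 0.0  -- zero elsewhere
# Interval probability is width * height: P(3 <= X <= 5) = 2 * 0.25 = 0.5
```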
For a discrete random variable, the cumulative distribution function is found by summing up the
probabilities as in the example below.
For a continuous random variable, the cumulative distribution function is the integral of its
probability density function.
Example
Discrete case: Suppose a random variable X has the following probability distribution p(xi):
xi 0 1 2 3 4 5
p(xi) 1/32 5/32 10/32 10/32 5/32 1/32
This is actually a binomial distribution: Bi(5, 0.5) or B(5, 0.5). The cumulative distribution
function F(x) is then:
xi 0 1 2 3 4 5
F(xi) 1/32 6/32 16/32 26/32 31/32 32/32
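The cumulative column above is just a running sum of the p(xi) row. A minimal sketch that rebuilds both rows of the B(5, 0.5) table from binomial coefficients:

```python
# F(xi) as running sums of p(xi) for the binomial table B(5, 0.5) above.
from itertools import accumulate
from math import comb

p_num = [comb(5, k) for k in range(6)]    # numerators 1, 5, 10, 10, 5, 1 over 32
assert sum(p_num) == 32                   # the probabilities sum to 1

F_num = list(accumulate(p_num))           # cumulative numerators
print([f"{n}/32" for n in F_num])         # ['1/32', '6/32', '16/32', '26/32', '31/32', '32/32']
```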
More formally, the probability density function, f(x), of a continuous random variable X is the
derivative of the cumulative distribution function F(x): f(x) = d/dx F(x)
Since F(x) = P(X ≤ x), it follows that the integral of f(x)dx from a to b = F(b) - F(a) = P(a < X ≤ b).
The probability density function f(x) must satisfy two conditions:
1. The total probability for all possible values of the continuous random variable X is 1:
the integral of f(x)dx over all x = 1.
2. The probability density function can never be negative: f(x) ≥ 0 for all x.
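The relation f(x) = d/dx F(x) can be checked numerically. A minimal sketch for the earlier density f(x) = 2x, whose CDF on [0, 1] is F(x) = x^2; the central-difference helper is an assumption for illustration.

```python
# Checking f(x) = dF/dx numerically for f(x) = 2x, F(x) = x^2 on [0, 1].
def F(x):
    return max(0.0, min(1.0, x)) ** 2       # CDF of the density f(x) = 2x

def f_numeric(x, h=1e-6):
    """Central-difference approximation of dF/dx."""
    return (F(x + h) - F(x - h)) / (2 * h)

print(round(f_numeric(0.4), 4))   # ≈ 0.8 = 2 * 0.4, matching f(0.4)
```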
Independent Random Variables:
Two random variables X and Y are independent if knowledge of the value of X does not affect the
probability distribution of Y, and vice versa. Thus there is no relationship between the values of
independent random variables.
For continuous independent random variables, their probability density functions are related by
f(x,y) = g(x)·h(y) for all pairs (x, y),
where g(x) and h(y) are the marginal density functions of the random variables X and Y respectively.
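The product rule for independent densities can be sketched with a concrete pair of marginals; the choice of two uniform densities on [0, 1] here is purely hypothetical.

```python
# Joint density of two independent random variables as the product of marginals.
def g(x):                  # marginal density of X: uniform on [0, 1]
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def h(y):                  # marginal density of Y: uniform on [0, 1]
    return 1.0 if 0.0 <= y <= 1.0 else 0.0

def joint(x, y):           # f(x, y) = g(x) * h(y) under independence
    return g(x) * h(y)

print(joint(0.3, 0.7))     # 1.0 inside the unit square
print(joint(0.3, 1.5))     # 0.0 once either coordinate leaves its support
```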
*************************************************************************************