
Benghazi University

Faculty of Engineering
Electrical and Electronics Engineering Department

Telecommunications Lab. II
(EE 496)
EXPERIMENT NO.2

Limit Theorems, Transformation of RVs, Bivariate Distributions and Joint Statistics

Student's Name: ABRAR ALI AHMED
Student's Number: 2
Date of Experiment: 22 / 4 / 2019
Objective:
1- Verification of the CLT and the De Moivre–Laplace theorem.
2- Generate new distribution from known distributions
(Transformation of RVs).
3- Generate 2D Jointly Normal data and plot its density and
distribution functions.
4- Learn how to calculate Correlation, Covariance and Correlation
Coefficient.
Results and Discussion:
Part 1: Limit Theorems:
a) Central Limit Theorem (C.L.T):

Here we have n independent random variables; when n is large, the sum of the RVs converges to a Gaussian RV.
The mean of y is the sum of the means of the individual RVs, and the variance of y is the sum of their variances.
When we increased n from 3 to 30, the distribution became closer to a Gaussian RV; the figures are shown below.
When n=3

The mean and the variance change with the value of n.
We can read the mean directly from the figure: it is the midpoint of the distribution.
As the figure shows, the mean does not affect the shape of the distribution; it only shifts it.
When n=30
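The summation rules stated above can be checked numerically. The sketch below (illustrative Python, not the lab's own code) sums n standard-uniform RVs over many trials and confirms that the mean and variance of the sum are n times those of a single Uniform(0,1) RV, i.e. n·0.5 and n·(1/12):

```python
import random
import statistics

random.seed(0)

def sum_of_uniforms(n, trials=20000):
    """Sum n independent Uniform(0,1) RVs, repeated over many trials."""
    return [sum(random.random() for _ in range(n)) for _ in range(trials)]

for n in (3, 30):
    y = sum_of_uniforms(n)
    # CLT scaling: mean of the sum = n*0.5, variance of the sum = n*(1/12)
    print(n, round(statistics.mean(y), 2), round(statistics.variance(y), 2))
```

Increasing n from 3 to 30 scales the mean from 1.5 to 15 and the variance from 0.25 to 2.5, matching the summation rules, while the histogram of y grows visibly closer to a Gaussian bell shape.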
b) De-Moivre - Laplace Theorem:

When we increased the value of n to 200, the binomial distribution became closer to a Gaussian.
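The De Moivre–Laplace approximation can be verified by comparing the binomial pmf directly with the Gaussian density of mean np and standard deviation √(npq). The sketch below is illustrative Python (the values n = 200, p = 0.5 are assumptions for the demo, not taken from the lab script):

```python
import math

def binom_pmf(n, k, p):
    """Binomial probability mass function P(X = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 200, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
# near the mean, the Gaussian density closely tracks the binomial pmf
for k in (90, 100, 110):
    print(k, round(binom_pmf(n, k, p), 4), round(normal_pdf(k, mu, sigma), 4))
```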

c) Binomial to Poisson Approximation:

Here X is a binomial distribution with large n and p = 0.05.
The smaller p is, the closer the distribution is to a Poisson RV distribution.
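A quick numerical check of the Poisson approximation with the stated p (illustrative Python sketch; n = 200 is an assumed value, giving λ = np = 10):

```python
import math

def binom_pmf(n, k, p):
    """Binomial probability mass function P(X = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability mass function with rate lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

n, p = 200, 0.05
lam = n * p  # λ = np = 10
# for large n and small p, the two pmfs nearly coincide
for k in (5, 10, 15):
    print(k, round(binom_pmf(n, k, p), 4), round(poisson_pmf(k, lam), 4))
```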
Part 2: Transformation of RVs
a) Normal to Lognormal:

The transformation that takes a normal RV to a lognormal RV is the exponential: if X is normal, then Y = e^X is lognormal.
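This transformation is easy to verify numerically: exponentiating standard-normal samples and then taking the log recovers a normal with the original mean and variance (illustrative Python sketch, not the lab's own code):

```python
import math
import random
import statistics

random.seed(1)
x = [random.gauss(0, 1) for _ in range(50000)]   # standard normal samples
y = [math.exp(v) for v in x]                     # Y = e^X is lognormal

# ln(Y) recovers the normal samples, so its mean ≈ 0 and variance ≈ 1
logs = [math.log(v) for v in y]
print(round(statistics.mean(logs), 2), round(statistics.variance(logs), 2))
```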
b) Standard Uniform to Exponential:

This part generates an exponential random variable from a standard uniform random variable.
The transformation g(x) is the inverse of the CDF of the desired distribution (random variable).
When the input is a uniform RV and the output is an exponential RV, our system g(x) is the inverse of the exponential CDF, which involves a logarithm.
The inverse-CDF method applies when the input is a standard uniform RV.
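The exponential CDF is F(x) = 1 − e^(−λx), so the inverse is g(u) = F⁻¹(u) = −ln(1 − u)/λ. A sketch of this inverse-CDF method in Python (λ = 2 is an assumed rate for the demo):

```python
import math
import random
import statistics

random.seed(2)
lam = 2.0
u = [random.random() for _ in range(50000)]   # standard uniform input
# g(u) = F^{-1}(u) = -ln(1-u)/lam, the inverse CDF of the exponential
x = [-math.log(1 - v) / lam for v in u]
print(round(statistics.mean(x), 2))   # exponential mean is 1/lam = 0.5
```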
c) Two independent zero mean Normal to Rayleigh and Uniform:

There are two conditions for the magnitude to be Rayleigh and the phase uniform: the two normal RVs should have zero mean and the same variance.
The input is a complex quantity built from two normal RVs. The absolute value of this complex quantity has a Rayleigh distribution.
Its phase angle is uniformly distributed, which means the phase can take any value in its range with equal probability.
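This magnitude/phase claim can be checked by simulation (illustrative Python sketch; σ = 1 is an assumed common standard deviation). The Rayleigh mean is σ√(π/2) ≈ 1.25, and a Uniform(−π, π) phase has mean ≈ 0:

```python
import math
import random
import statistics

random.seed(3)
sigma = 1.0
xs = [random.gauss(0, sigma) for _ in range(50000)]   # real part
ys = [random.gauss(0, sigma) for _ in range(50000)]   # imaginary part
r = [math.hypot(a, b) for a, b in zip(xs, ys)]        # magnitude: Rayleigh
theta = [math.atan2(b, a) for a, b in zip(xs, ys)]    # phase: Uniform(-pi, pi)
print(round(statistics.mean(r), 2), round(statistics.mean(theta), 2))
```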
Part 3: Bivariate Normal Distribution:
X1 and X2 do not need to have the same mean and variance.
When X1 and X2 are uncorrelated, each dimension has a Gaussian pdf; for jointly Gaussian RVs, being uncorrelated implies independence.
When we changed the off-diagonal entries of the covariance matrix, the shape of the joint pdf changed: when X1 and X2 are correlated the pdf is still jointly Gaussian, but its contours become tilted ellipses instead of circles.
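One standard way to generate correlated jointly Gaussian samples is to mix two independent standard normals, X2 = ρ·Z1 + √(1−ρ²)·Z2 (illustrative Python sketch; ρ = 0.8 is an assumed value for the demo):

```python
import math
import random

random.seed(4)
N = 50000
rho = 0.8
z1 = [random.gauss(0, 1) for _ in range(N)]
z2 = [random.gauss(0, 1) for _ in range(N)]
x1 = z1
# mixing independent standard normals imposes correlation rho while
# keeping x2 marginally Gaussian with unit variance
x2 = [rho * a + math.sqrt(1 - rho**2) * b for a, b in zip(z1, z2)]

m1 = sum(x1) / N
m2 = sum(x2) / N
cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2)) / (N - 1)
print(round(cov, 2))   # sample covariance ≈ rho = 0.8
```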
Part 4: Correlation, Covariance and Correlation Coefficient:
X and Y are independent, X and W are dependent, Y and W are
dependent.
Rxy = 0.6689    Cxy = -0.0740    Rho = -0.0203

Rxw = 3.3910    Cxw = 3.5817    Rho2 = 0.4400

Cov Matrix of X, Y =
     3.4340   -0.0740
    -0.0740    3.8906

CorrCoeff_Matrix_of_X_and_Y =
     1.0000   -0.0203
    -0.0203    1.0000

Cov Matrix of X, W =
     3.4340    3.5820
     3.5820   19.2923

CorrCoeff_Matrix_of_X_and_W =
     1.0000    0.4401
     0.4401    1.0000
The correlation coefficient ranges between -1 and 1. It equals 1 when X and Y are totally correlated (in part 4, an RV with itself), and it equals zero when X and Y are totally uncorrelated (in part 4, X and Y are uncorrelated).
The values of correlation, covariance and correlation coefficient between (X and Y) and (X and W) are very close to the values in the example.
Entry (1,1) of the covariance matrix gives the variance of the first random variable, X.
Entries (1,2) and (2,1) give the covariance between the two random variables X and Y.
Entry (2,2) gives the variance of the second random variable, Y.
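The three quantities above can be reproduced in outline (illustrative Python sketch; the data model is an assumption chosen so that X and Y are independent while W depends on X, roughly echoing the reported variances, not the lab's actual data):

```python
import math
import random

random.seed(5)
N = 50000
X = [random.gauss(0, math.sqrt(3.43)) for _ in range(N)]   # Var(X) ≈ 3.43
Y = [random.gauss(0, math.sqrt(3.89)) for _ in range(N)]   # independent of X
W = [x + random.gauss(0, 4) for x in X]                    # W depends on X (assumed model)

def corr_cov_rho(a, b):
    """Return correlation E[ab], covariance, and correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    R = sum(x * y for x, y in zip(a, b)) / n                 # correlation R
    C = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n   # covariance C
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return R, C, C / math.sqrt(va * vb)                      # rho = C / sqrt(VaVb)

print([round(v, 2) for v in corr_cov_rho(X, Y)])  # rho ≈ 0 (independent)
print([round(v, 2) for v in corr_cov_rho(X, W)])  # rho clearly positive
```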

Conclusion:
The central limit theorem is used to approximate the sum of two or more random variables by a normal (Gaussian) distribution.
It states that the sum of independent random variables eventually approaches a Gaussian distribution; the number of terms needed for the sum to look clearly Gaussian depends on the underlying random variable, so some random variables converge to Gaussian faster than others.
The binomial distribution can be approximated by a Gaussian distribution as n goes to infinity, with η = np and σ = √(npq).
This normal approximation holds when n is large, p is not too close to zero, and npq is greater than 1.
For large n and small p, the binomial distribution can instead be approximated by a Poisson distribution with λ = np.
The uniform random variable is used to generate other random variables by applying different transformations g(x), chosen according to the desired random variable.
The sum of two independent normal random variables is also normal.
However, if the two normal random variables are dependent (and not jointly normal), their sum is not necessarily normal.
Bivariate normal distributions:
The "regular" normal distribution has one random variable; a bivariate normal distribution is built from two random variables. Both variables in a bivariate normal are normally distributed, and any linear combination of them (such as their sum) is also normal. Visually, the bivariate normal density is a three-dimensional bell surface.
In the Rayleigh channel model, a circularly symmetric random variable is of the form Z = X + jY, where the real and imaginary parts X and Y are zero-mean, independent and identically distributed Gaussian random variables.
In communications, it is important to work with the Rayleigh distribution.
Expectation:
The expected value of a function h(X, Y) of the discrete RVs (X, Y) can be found directly from the joint probability function of (X, Y).
The covariance matrix has information of the covariance relation
between the two random variables and the covariance relation of the
random variable with itself.
The covariance of X and Y is defined as:
Cov(X, Y ) = E[{X − E(X)}{Y − E(Y )}] = E(XY ) − E(X)E(Y )
Correlation is a statistical measure that indicates the extent to which
two or more variables fluctuate together. A positive correlation indicates
the extent to which those variables increase or decrease in parallel; a
negative correlation indicates the extent to which one variable increases
as the other decreases.
A correlation coefficient is a statistical measure of the degree to which changes in the value of one variable predict changes in the value of another. When the fluctuation of one variable reliably predicts a similar fluctuation in another variable, there is often a tendency to assume that the change in one causes the change in the other, but correlation alone does not establish causation.
The correlation coefficient of X and Y is defined as:
ρ(X, Y) = Cov(X, Y) / √(Var(X) · Var(Y))
Skewness: a measure of the lack of symmetry in the pdf.
