
Appendix A

Miscellaneous Definitions and List of Distributions

A.1 Indicator Function
The often-used indicator symbol I{·} is defined as

I{·} = 1, if the condition in {·} is true; 0, otherwise. (A.1)

In addition, on occasion we will also utilise I[·].

A.2 Gamma Function


The standard gamma function Γ(α) is defined as

Γ(α) = ∫₀^∞ t^(α−1) e^(−t) dt, α > 0. (A.2)

A.3 Discrete Distributions


A.3.1 POISSON DISTRIBUTION

A Poisson distribution function is denoted as Poisson(λ). The random variable N has a Poisson
distribution, denoted N ∼ Poisson(λ), if its probability mass function is

p(k) = Pr[N = k] = (λ^k / k!) e^(−λ), λ > 0, (A.3)

Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk,
First Edition. Marcelo G. Cruz, Gareth W. Peters, and Pavel V. Shevchenko.
© 2015 John Wiley & Sons, Inc. Published 2015 by John Wiley & Sons, Inc.

for all k ∈ {0, 1, 2, . . .}. Expectation, variance, and variational coefficient of a random variable
N ∼ Poisson(λ) are

E[N] = λ, Var[N] = λ, Vco[N] = 1/√λ. (A.4)
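As a quick numerical sanity check of the Poisson moment formulas above, the following sketch evaluates them with `scipy.stats` for an illustrative rate λ = 3.5 (the parameter value and the use of SciPy are ours, not part of the text):

```python
import math
from scipy.stats import poisson

lam = 3.5  # illustrative rate parameter
mean, var = poisson.stats(lam, moments="mv")
vco = math.sqrt(var) / mean  # variational coefficient = std. dev. / mean

# E[N] = Var[N] = lambda, Vco[N] = 1/sqrt(lambda), as in (A.4)
assert abs(mean - lam) < 1e-12
assert abs(var - lam) < 1e-12
assert abs(vco - 1 / math.sqrt(lam)) < 1e-12
```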

A.3.2 BINOMIAL DISTRIBUTION

The Binomial distribution function is denoted as Binomial(n, p). The random variable N has
a Binomial distribution, denoted N ∼ Binomial(n, p), if its probability mass function is
 
p(k) = Pr[N = k] = (n choose k) p^k (1 − p)^(n−k), p ∈ (0, 1), n ∈ {1, 2, . . .} (A.5)

for all k ∈ {0, 1, 2, . . . , n}. Expectation, variance, and variational coefficient of a random
variable N ∼ Binomial(n, p) are

E[N] = np, Var[N] = np(1 − p), Vco[N] = √((1 − p)/(np)). (A.6)

Remark A.1 N is the number of successes in n independent trials, where p is the probability of a
success in each trial.
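A numerical check of the Binomial moments, E[N] = np, Var[N] = np(1 − p), and Vco[N] = √((1 − p)/(np)), again via `scipy.stats` with illustrative parameters (our choice):

```python
import math
from scipy.stats import binom

n, p = 20, 0.3  # illustrative number of trials and success probability
mean, var = binom.stats(n, p, moments="mv")
vco = math.sqrt(var) / mean

assert abs(mean - n * p) < 1e-12
assert abs(var - n * p * (1 - p)) < 1e-12
assert abs(vco - math.sqrt((1 - p) / (n * p))) < 1e-12
```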

A.3.3 NEGATIVE BINOMIAL DISTRIBUTION

A Negative Binomial distribution function is denoted as NegBinomial(r, p). The random vari-
able N has a Negative Binomial distribution, denoted N ∼ NegBinomial(r, p), if its probability
mass function is
 
p(k) = Pr[N = k] = (r + k − 1 choose k) p^r (1 − p)^k, p ∈ (0, 1), r ∈ (0, ∞) (A.7)

for all k ∈ {0, 1, 2, . . .}. Here, the generalized Binomial coefficient is


 
(r + k − 1 choose k) = Γ(k + r) / (k! Γ(r)), (A.8)

where Γ(r) is the Gamma function.


Expectation, variance, and variational coefficient of a random variable
N ∼ NegBinomial(r, p) are

E[N] = r(1 − p)/p, Var[N] = r(1 − p)/p², Vco[N] = 1/√(r(1 − p)). (A.9)

Remark A.2 If r is a positive integer, N is the number of failures in a sequence of independent trials until r successes, where p is the probability of a success in each trial.
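SciPy's `nbinom` uses the same "failures before the r-th success" convention and, like the pmf (A.7), allows non-integer r through the generalized Binomial coefficient. A sketch with illustrative parameters (our choice):

```python
import math
from scipy.stats import nbinom

r, p = 2.5, 0.4  # illustrative parameters; r need not be an integer
mean, var = nbinom.stats(r, p, moments="mv")

# moments as in (A.9)
assert abs(mean - r * (1 - p) / p) < 1e-10
assert abs(var - r * (1 - p) / p**2) < 1e-10

# pmf at k = 3 versus the explicit formula (A.7)-(A.8) via Gamma functions
k = 3
pmf = math.gamma(k + r) / (math.factorial(k) * math.gamma(r)) * p**r * (1 - p) ** k
assert abs(nbinom.pmf(k, r, p) - pmf) < 1e-12
```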


A.3.4 DOUBLY STOCHASTIC POISSON PROCESS (COX PROCESS)

Let (Ω, F, P) be a probability space with information structure (filtration) given by F = {F_t, t ∈ [0, T]}. Let N_t be a point process adapted to F. Let λ_t be a non-negative process adapted to F such that

∫₀^t λ_s ds < ∞ a.s. (A.10)

If for all 0 ≤ t1 ≤ t2 and u ∈ R one has

E[ e^(iu(N_t2 − N_t1)) | F_t2 ] = exp( (e^(iu) − 1) ∫_{t1}^{t2} λ_s ds ), (A.11)

then N_t is called an F_t-doubly stochastic Poisson process with intensity λ_t, where F_t = σ{λ_s; s ≤ t}. One has the following probabilities:

Pr[ N_t2 − N_t1 = k | λ_s; t1 ≤ s ≤ t2 ] = exp( −∫_{t1}^{t2} λ_s ds ) ( ∫_{t1}^{t2} λ_s ds )^k / k! (A.12)
and in addition one has for τ_k, the length of the time interval between the (k − 1)-th and the k-th point, the following distribution:

Pr[ τ_k > t | λ_s; t_k ≤ s ≤ t_k + t ] = exp( −∫_{t_k}^{t_k+t} λ_s ds ). (A.13)
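Conditionally on the intensity path, the increment N_t2 − N_t1 is simply Poisson with mean ∫ λ_s ds, so (A.12) and (A.13) can be checked numerically for a given path. The sketch below uses an illustrative deterministic intensity λ_s = 2 + sin(s) (our choice, purely for demonstration):

```python
import math
from scipy.integrate import quad
from scipy.stats import poisson

lam = lambda s: 2.0 + math.sin(s)   # illustrative intensity path
t1, t2 = 0.0, 3.0
Lam, _ = quad(lam, t1, t2)          # integrated intensity over (t1, t2]

# (A.12): the conditional pmf of the increment is Poisson(Lam)
k = 4
p_k = math.exp(-Lam) * Lam**k / math.factorial(k)
assert abs(p_k - poisson.pmf(k, Lam)) < 1e-12

# (A.13): conditional survival probability of the inter-arrival time
t_k, t = 1.0, 0.5
surv = math.exp(-quad(lam, t_k, t_k + t)[0])
assert 0.0 < surv < 1.0
```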

A.4 Continuous Distributions


A.4.1 UNIFORM DISTRIBUTION

A uniform distribution function is denoted as Uniform(a, b). The random variable X has a
uniform distribution, denoted X ∼ Uniform(a, b), if its probability density function is
f(x) = 1/(b − a), a < b (A.14)
for x ∈ [a, b]. Expectation, variance, and variational coefficient of a random variable
X ∼ Uniform(a, b) are
E[X] = (a + b)/2, Var[X] = (b − a)²/12, Vco[X] = (b − a)/(√3 (a + b)). (A.15)

A.4.2 NORMAL (GAUSSIAN) DISTRIBUTION

A Normal (Gaussian) distribution function is denoted as Normal(μ, σ²). The random variable X has a Normal distribution, denoted X ∼ Normal(μ, σ²), if its probability density function is

f(x) = ( 1/√(2πσ²) ) exp( −(x − μ)²/(2σ²) ), σ² > 0, μ ∈ R (A.16)

for all x ∈ R. The standard Normal distribution corresponds to Normal(0, 1) and is denoted as Φ(·). Expectation, variance, and variational coefficient of a random variable X ∼ Normal(μ, σ²) are

E[X] = μ, Var[X] = σ², Vco[X] = σ/μ. (A.17)

A.4.3 INVERSE GAUSSIAN DISTRIBUTION

An Inverse Gaussian distribution function is denoted as InverseGaussian(μ, γ). The random variable X has an Inverse Gaussian distribution, denoted X ∼ InverseGaussian(μ, γ), if its probability density function is

f(x) = ( γ/(2πx³) )^(1/2) exp( −γ(x − μ)²/(2μ²x) ), x > 0, (A.18)

where parameters μ > 0 and γ > 0. The corresponding distribution function is


        
F(x) = Φ( √(γ/x) (x/μ − 1) ) + exp(2γ/μ) Φ( −√(γ/x) (x/μ + 1) ), (A.19)

where Φ(·) is the standard Normal distribution. Expectation and variance of X ∼ InverseGaussian(μ, γ) are

E[X] = μ, Var[X] = μ³/γ.

If X1, . . . , Xn are independent and Xi ∼ InverseGaussian(μwi, γwi²), then

Sn = Σ_{i=1}^{n} Xi ∼ InverseGaussian( μ Σ_{i=1}^{n} wi, γ (Σ_{i=1}^{n} wi)² ). (A.20)
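The distribution function (A.19) and the moments E[X] = μ, Var[X] = μ³/γ can be verified against SciPy's `invgauss`, whose (mean = mu·scale, shape = scale) parameterization maps to ours via `invgauss(mu/gam, scale=gam)`. Parameter values below are illustrative:

```python
import math
from scipy.stats import norm, invgauss

mu, gam = 1.5, 2.0   # illustrative mean and shape parameters
x = 1.2

# distribution function (A.19), evaluated directly
F = (norm.cdf(math.sqrt(gam / x) * (x / mu - 1))
     + math.exp(2 * gam / mu) * norm.cdf(-math.sqrt(gam / x) * (x / mu + 1)))

# same value from scipy's parameterization
F_scipy = invgauss.cdf(x, mu / gam, scale=gam)
assert abs(F - F_scipy) < 1e-10

# mean and variance match mu and mu**3 / gam
m, v = invgauss.stats(mu / gam, scale=gam, moments="mv")
assert abs(m - mu) < 1e-10 and abs(v - mu**3 / gam) < 1e-10
```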

A.4.4 LOGNORMAL DISTRIBUTION

A LogNormal distribution function is denoted as LogNormal(μ, σ²). The random variable X has a LogNormal distribution, denoted X ∼ LogNormal(μ, σ²), if its probability density function is

f(x) = ( 1/(x√(2πσ²)) ) exp( −(ln x − μ)²/(2σ²) ), σ² > 0, μ ∈ R (A.21)

for x > 0. Expectation, variance, and variational coefficient of a random variable X ∼ LogNormal(μ, σ²) are

E[X] = e^(μ+σ²/2), Var[X] = e^(2μ+σ²)( e^(σ²) − 1 ), Vco[X] = √( e^(σ²) − 1 ). (A.22)
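The LogNormal moment formulas can be checked with `scipy.stats.lognorm`, which takes s = σ and scale = exp(μ). Parameter values are illustrative:

```python
import math
from scipy.stats import lognorm

mu, sig = 0.5, 0.8  # illustrative parameters
m, v = lognorm.stats(sig, scale=math.exp(mu), moments="mv")

assert abs(m - math.exp(mu + sig**2 / 2)) < 1e-10
assert abs(v - math.exp(2 * mu + sig**2) * (math.exp(sig**2) - 1)) < 1e-10
vco = math.sqrt(v) / m
assert abs(vco - math.sqrt(math.exp(sig**2) - 1)) < 1e-10
```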


A.4.5 STUDENT’S t DISTRIBUTION

A t distribution function is denoted as T(ν, μ, σ²). The random variable X has a t distribution, denoted X ∼ T(ν, μ, σ²), if its probability density function is

f(x) = ( Γ((ν + 1)/2) / (Γ(ν/2) √(νπσ²)) ) ( 1 + (x − μ)²/(νσ²) )^(−(ν+1)/2) (A.23)

for σ² > 0, μ ∈ R, ν = 1, 2, . . ., and all x ∈ R. Expectation, variance, and variational coefficient of a random variable X ∼ T(ν, μ, σ²) are

E[X] = μ if ν > 1,
Var[X] = σ² ν/(ν − 2) if ν > 2, (A.24)
Vco[X] = (σ/μ) √(ν/(ν − 2)) if ν > 2.

A.4.6 GAMMA DISTRIBUTION

A Gamma distribution function is denoted as Gamma(α, β). The random variable X has a
gamma distribution, denoted as X ∼ Gamma(α, β), if its probability density function is

f(x) = ( x^(α−1) / (Γ(α)β^α) ) exp(−x/β), α > 0, β > 0 (A.25)

for x > 0. Expectation, variance, and variational coefficient of a random variable


X ∼ Gamma(α, β) are

E[X] = αβ, Var[X] = αβ², Vco[X] = 1/√α. (A.26)
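The Gamma moments E[X] = αβ, Var[X] = αβ², and Vco[X] = 1/√α can be confirmed with `scipy.stats.gamma` (shape a = α, scale = β); the parameter values are our illustrative choice:

```python
import math
from scipy.stats import gamma

alpha, beta = 2.5, 1.5  # illustrative shape and scale, as in (A.25)
m, v = gamma.stats(alpha, scale=beta, moments="mv")

assert abs(m - alpha * beta) < 1e-12
assert abs(v - alpha * beta**2) < 1e-12
assert abs(math.sqrt(v) / m - 1 / math.sqrt(alpha)) < 1e-12
```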

A.4.7 WEIBULL DISTRIBUTION

A Weibull distribution function is denoted as Weibull(α, β). The random variable X has a
Weibull distribution, denoted as X ∼ Weibull(α, β), if its probability density function is

f(x) = (α/β^α) x^(α−1) exp( −(x/β)^α ), α > 0, β > 0 (A.27)

for x > 0. The corresponding distribution function is

F(x) = 1 − exp( −(x/β)^α ), α > 0, β > 0. (A.28)

Expectation and variance of a random variable X ∼ Weibull(α, β) are



E[X] = βΓ(1 + 1/α), Var[X] = β²( Γ(1 + 2/α) − (Γ(1 + 1/α))² ).
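A numerical check of the Weibull distribution function (A.28) and moments, via `scipy.stats.weibull_min` (shape c = α, scale = β); the parameters are illustrative:

```python
import math
from scipy.stats import weibull_min

alpha, beta = 1.8, 2.0  # illustrative shape and scale, as in (A.27)
m, v = weibull_min.stats(alpha, scale=beta, moments="mv")

assert abs(m - beta * math.gamma(1 + 1 / alpha)) < 1e-10
assert abs(v - beta**2 * (math.gamma(1 + 2 / alpha)
                          - math.gamma(1 + 1 / alpha) ** 2)) < 1e-10

# distribution function (A.28)
x = 1.3
assert abs(weibull_min.cdf(x, alpha, scale=beta)
           - (1 - math.exp(-((x / beta) ** alpha)))) < 1e-12
```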


A.4.8 INVERSE CHI-SQUARED DISTRIBUTION

An Inverse Chi-squared distribution is denoted as InvChiSq(ν, β). The random variable X has
an Inverse Chi-squared distribution, denoted as X ∼ InvChiSq(ν, β), if its probability density
function is
f(x) = ( (x/β)^(−1−ν/2) / (βΓ(ν/2)2^(ν/2)) ) exp( −β/(2x) ) (A.29)
for x > 0 and parameters ν > 0 and β > 0. Expectation and variance of
X ∼ InvChiSq(ν, β) are
E[X] = β/(ν − 2) for ν > 2,
Var[X] = 2β²/((ν − 2)²(ν − 4)) for ν > 4.

A.4.9 PARETO DISTRIBUTION (ONE-PARAMETER)

A one-parameter Pareto distribution function is denoted as Pareto(ξ, x0 ). The random variable


X has a Pareto distribution, denoted as X ∼ Pareto (ξ, x0 ), if its distribution function is
F(x) = 1 − (x/x0)^(−ξ), x ≥ x0, (A.30)
where x0 > 0 and ξ > 0. The support starts at x0, which is typically known and not considered as a parameter. Therefore, the distribution is referred to as a single-parameter Pareto. The corresponding probability density function is
f(x) = (ξ/x0) (x/x0)^(−ξ−1). (A.31)

Expectation, variance, and variational coefficient of X ∼ Pareto(ξ, x0 ) are

E[X] = x0 ξ/(ξ − 1) if ξ > 1,
Var[X] = x0² ξ/((ξ − 1)²(ξ − 2)) if ξ > 2,
Vco[X] = 1/√(ξ(ξ − 2)) if ξ > 2.
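The single-parameter Pareto moments can be checked with `scipy.stats.pareto`, which uses exactly the survival function (x/scale)^(−b) on [scale, ∞); the parameters below are illustrative, with ξ > 2 so that the variance exists:

```python
import math
from scipy.stats import pareto

xi, x0 = 3.0, 2.0   # illustrative tail index and threshold (xi > 2)
# scipy's pareto(b, scale) has F(x) = 1 - (x/scale)**(-b), x >= scale
m, v = pareto.stats(xi, scale=x0, moments="mv")

assert abs(m - x0 * xi / (xi - 1)) < 1e-10
assert abs(v - x0**2 * xi / ((xi - 1) ** 2 * (xi - 2))) < 1e-10
assert abs(math.sqrt(v) / m - 1 / math.sqrt(xi * (xi - 2))) < 1e-10
```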

A.4.10 PARETO DISTRIBUTION (TWO-PARAMETER)

A two-parameter Pareto distribution function is denoted as Pareto2 (α, β). The random variable
X has a Pareto distribution, denoted as X ∼ Pareto2 (α, β), if its distribution function is
F(x) = 1 − (1 + x/β)^(−α), x ≥ 0, (A.32)


where α > 0 and β > 0. The corresponding probability density function is

f(x) = αβ^α / (x + β)^(α+1). (A.33)

The moments of a random variable X ∼ Pareto2 (α, β) are

E[X^k] = β^k k! / ∏_{i=1}^{k} (α − i), α > k.

A.4.11 GENERALIZED PARETO DISTRIBUTION

A GPD distribution function is denoted as GPD(ξ, β). The random variable X has a GPD
distribution, denoted as X ∼ GPD(ξ, β), if its distribution function is

H_ξ,β(x) = 1 − (1 + ξx/β)^(−1/ξ), ξ ≠ 0;  1 − exp(−x/β), ξ = 0, (A.34)

where x ≥ 0 when ξ ≥ 0 and 0 ≤ x ≤ −β/ξ when ξ < 0. The corresponding probability density function is

h(x) = (1/β)(1 + ξx/β)^(−1/ξ−1), ξ ≠ 0;  (1/β) exp(−x/β), ξ = 0. (A.35)

Expectation, variance, and variational coefficient of X ∼ GPD(ξ, β), ξ ≥ 0, are

E[X^n] = β^n n! / ∏_{k=1}^{n} (1 − kξ), ξ < 1/n;  E[X] = β/(1 − ξ), ξ < 1;
Var[X] = β²/((1 − ξ)²(1 − 2ξ)), Vco[X] = 1/√(1 − 2ξ), ξ < 1/2. (A.36)
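The GPD moments, and the reduction to the Exponential distribution as ξ → 0, can be checked with `scipy.stats.genpareto` (shape c = ξ, scale = β); the parameters below are illustrative, with ξ < 1/2 so that the variance exists:

```python
import math
from scipy.stats import genpareto

xi, beta = 0.2, 1.5  # illustrative shape and scale (xi < 1/2)
m, v = genpareto.stats(xi, scale=beta, moments="mv")

assert abs(m - beta / (1 - xi)) < 1e-10
assert abs(v - beta**2 / ((1 - xi) ** 2 * (1 - 2 * xi))) < 1e-10
assert abs(math.sqrt(v) / m - 1 / math.sqrt(1 - 2 * xi)) < 1e-10

# the xi = 0 case reduces to the Exponential distribution function
x = 1.0
assert abs(genpareto.cdf(x, 0.0, scale=beta) - (1 - math.exp(-x / beta))) < 1e-12
```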

A.4.12 BETA DISTRIBUTION

A Beta distribution function is denoted as Beta(α, β). The random variable X has a Beta
distribution, denoted as X ∼ Beta(α, β), if its probability density function is

f(x) = ( Γ(α + β)/(Γ(α)Γ(β)) ) x^(α−1) (1 − x)^(β−1), 0 ≤ x ≤ 1, (A.37)

for α > 0 and β > 0. Expectation, variance, and variational coefficient of a random variable
X ∼ Beta(α, β) are

E[X] = α/(α + β), Var[X] = αβ/((α + β)²(1 + α + β)), Vco[X] = √( β/(α(1 + α + β)) ).


A.4.13 GENERALIZED INVERSE GAUSSIAN DISTRIBUTION

A GIG distribution function is denoted as GIG(ω, φ, ν). The random variable X has a GIG
distribution, denoted as X ∼ GIG(ω, φ, ν), if its probability density function is
f(x) = ( (ω/φ)^((ν+1)/2) / (2K_(ν+1)(2√(ωφ))) ) x^ν e^(−xω−φ/x), x > 0, (A.38)

where φ > 0, ω ≥ 0 if ν < −1; φ > 0, ω > 0 if ν = −1; φ ≥ 0, ω > 0 if ν > −1; and
K_(ν+1)(z) = (1/2) ∫₀^∞ u^ν e^(−z(u+1/u)/2) du.

Kν (z) is called a modified Bessel function of the third kind (see, e.g., Abramowitz and Stegun
1965, p. 375).
The moments of a random variable X ∼ GIG(ω, φ, ν) are not available in a closed form
through elementary functions but can be expressed in terms of Bessel functions:
E[X^α] = (φ/ω)^(α/2) K_(ν+1+α)(2√(ωφ)) / K_(ν+1)(2√(ωφ)), α ≥ 1, φ > 0, ω > 0.

Often, using the notation R_ν(z) = K_(ν+1)(z)/K_ν(z), it is written as

E[X^α] = (φ/ω)^(α/2) ∏_{k=1}^{α} R_(ν+k)(2√(ωφ)), α = 1, 2, . . .

The mode is easily calculated from (∂/∂x)( x^ν e^(−(ωx+φ/x)) ) = 0 as

mode(X) = ( ν + √(ν² + 4ωφ) ) / (2ω),

which differs only slightly from the expected value for large ν, that is, mode(X) → E[X] for ν → ∞.
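Since the GIG moments are given through Bessel-function ratios, they can be verified by direct numerical integration of the density (A.38), using `scipy.special.kv` for K_ν. The parameter values below are illustrative:

```python
import math
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the third kind

omega, phi, nu = 1.0, 2.0, 0.5  # illustrative parameters
z = 2 * math.sqrt(omega * phi)
const = (omega / phi) ** ((nu + 1) / 2) / (2 * kv(nu + 1, z))

def f(x):  # density (A.38)
    return const * x**nu * math.exp(-x * omega - phi / x)

# the density integrates to one, and E[X] matches the Bessel-ratio formula
total, _ = quad(f, 0, math.inf)
mean, _ = quad(lambda x: x * f(x), 0, math.inf)
assert abs(total - 1.0) < 1e-8
assert abs(mean - math.sqrt(phi / omega) * kv(nu + 2, z) / kv(nu + 1, z)) < 1e-8
```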

A.4.14 d -VARIATE NORMAL DISTRIBUTION

A d-variate Normal distribution function is denoted as Normal(μ, Σ), where μ = (μ1, . . . , μd)^T ∈ R^d and Σ is a positive definite (d × d) matrix. The corresponding probability density function is

f(x) = ( 1/((2π)^(d/2) √(det Σ)) ) exp( −(1/2)(x − μ)^T Σ^(−1) (x − μ) ), x ∈ R^d, (A.39)

where Σ−1 is the inverse of the matrix Σ. Expectations and covariances of a random vector
X = (X1 , . . . , Xd )T ∼ Normal(μ, Σ) are

E[Xi ] = μi , Cov[Xi , Xj ] = Σi,j , i, j = 1, . . . , d. (A.40)
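The density (A.39) can be evaluated directly and compared against `scipy.stats.multivariate_normal`; the 2-variate mean vector, covariance matrix, and evaluation point below are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal

# illustrative 2-variate example: evaluate (A.39) directly and via scipy
mu = np.array([0.5, -1.0])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
x = np.array([1.0, 0.0])

d = len(mu)
diff = x - mu
quad_form = diff @ np.linalg.inv(Sigma) @ diff
f = np.exp(-0.5 * quad_form) / ((2 * np.pi) ** (d / 2)
                                * np.sqrt(np.linalg.det(Sigma)))

assert abs(f - multivariate_normal.pdf(x, mean=mu, cov=Sigma)) < 1e-12
```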


A.4.15 d -VARIATE t-DISTRIBUTION

A d -variate t-distribution function with ν degrees of freedom is denoted as Td (ν, μ, Σ), where
ν > 0, μ = (μ1 , . . . , μd )T ∈ Rd is a location vector and Σ is a positive definite matrix
(d × d). The corresponding probability density function is

f(x) = ( Γ((ν + d)/2) / ((νπ)^(d/2) Γ(ν/2) √(det Σ)) ) ( 1 + (x − μ)^T Σ^(−1) (x − μ)/ν )^(−(ν+d)/2), (A.41)

where x ∈ R^d and Σ^(−1) is the inverse of the matrix Σ. Expectations and covariances of a random vector X = (X1, . . . , Xd)^T ∼ Td(ν, μ, Σ) are

E[Xi] = μi if ν > 1, i = 1, . . . , d;
Cov[Xi, Xj] = νΣi,j/(ν − 2) if ν > 2, i, j = 1, . . . , d. (A.42)
