Appendix A
$$\Gamma(\alpha) = \int_0^\infty t^{\alpha-1} e^{-t}\, dt, \qquad \alpha > 0. \tag{A.2}$$
A Poisson distribution function is denoted as Poisson(λ). The random variable N has a Poisson
distribution, denoted N ∼ Poisson(λ), if its probability mass function is
$$p(k) = \Pr[N = k] = \frac{\lambda^k}{k!}\, e^{-\lambda}, \qquad \lambda > 0 \tag{A.3}$$
Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk,
First Edition. Marcelo G. Cruz, Gareth W. Peters, and Pavel V. Shevchenko.
© 2015 John Wiley & Sons, Inc. Published 2015 by John Wiley & Sons, Inc.
for all k ∈ {0, 1, 2, . . .}. Expectation, variance, and variational coefficient of a random variable
N ∼ Poisson(λ) are
$$\mathrm{E}[N] = \lambda, \qquad \mathrm{Var}[N] = \lambda, \qquad \mathrm{Vco}[N] = \frac{1}{\sqrt{\lambda}}. \tag{A.4}$$
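The moments in (A.4) can be checked numerically; the sketch below assumes SciPy's `scipy.stats.poisson`, which takes the rate λ directly.

```python
from math import sqrt
from scipy.stats import poisson

lam = 3.7
d = poisson(lam)
mean, var = d.mean(), d.var()   # E[N] and Var[N]
vco = sqrt(var) / mean          # variational coefficient = std/mean, here 1/sqrt(lam)
```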
The Binomial distribution function is denoted as Binomial(n, p). The random variable N has
a Binomial distribution, denoted N ∼ Binomial(n, p), if its probability mass function is
$$p(k) = \Pr[N = k] = \binom{n}{k} p^k (1-p)^{n-k}, \qquad p \in (0,1),\ n \in \{1, 2, \ldots\} \tag{A.5}$$
for all k ∈ {0, 1, 2, . . . , n}. Expectation, variance, and variational coefficient of a random
variable N ∼ Binomial(n, p) are
$$\mathrm{E}[N] = np, \qquad \mathrm{Var}[N] = np(1-p), \qquad \mathrm{Vco}[N] = \sqrt{\frac{1-p}{np}}. \tag{A.6}$$
Remark A.1 N is the number of successes in n independent trials, where p is the probability of a
success in each trial.
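A quick numerical check of (A.6), assuming SciPy's `scipy.stats.binom(n, p)` parameterization:

```python
from math import sqrt
from scipy.stats import binom

n, p = 20, 0.3
d = binom(n, p)                 # number of successes in n independent trials
mean, var = d.mean(), d.var()
vco = sqrt(var) / mean          # should equal sqrt((1 - p) / (n * p))
```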
A Negative Binomial distribution function is denoted as NegBinomial(r, p). The random vari-
able N has a Negative Binomial distribution, denoted N ∼ NegBinomial(r, p), if its probability
mass function is
$$p(k) = \Pr[N = k] = \binom{r+k-1}{k} p^r (1-p)^k, \qquad p \in (0,1),\ r \in (0, \infty) \tag{A.7}$$
for all k ∈ {0, 1, 2, . . .}. Expectation, variance, and variational coefficient of a random variable N ∼ NegBinomial(r, p) are
$$\mathrm{E}[N] = \frac{r(1-p)}{p}, \qquad \mathrm{Var}[N] = \frac{r(1-p)}{p^2}, \qquad \mathrm{Vco}[N] = \frac{1}{\sqrt{r(1-p)}}. \tag{A.9}$$
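The moments in (A.9) can be verified with SciPy's `scipy.stats.nbinom`, whose `n` parameter plays the role of r in (A.7) (it may be non-integer) and whose pmf uses the same success-probability convention:

```python
from math import sqrt
from scipy.stats import nbinom

r, p = 4.5, 0.4
d = nbinom(r, p)                # scipy's n parameter is r; r need not be an integer
mean, var = d.mean(), d.var()
vco = sqrt(var) / mean          # should equal 1 / sqrt(r * (1 - p))
```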
A uniform distribution function is denoted as Uniform(a, b). The random variable X has a
uniform distribution, denoted X ∼ Uniform(a, b), if its probability density function is
$$f(x) = \frac{1}{b-a}, \qquad a < b \tag{A.14}$$
for x ∈ [a, b]. Expectation, variance, and variational coefficient of a random variable
X ∼ Uniform(a, b) are
$$\mathrm{E}[X] = \frac{a+b}{2}, \qquad \mathrm{Var}[X] = \frac{(b-a)^2}{12}, \qquad \mathrm{Vco}[X] = \frac{b-a}{\sqrt{3}\,(a+b)}. \tag{A.15}$$
An Inverse Gaussian distribution is denoted as IG(μ, γ). The random variable X has an Inverse Gaussian distribution, denoted as X ∼ IG(μ, γ), if its probability density function is
$$f(x) = \left(\frac{\gamma}{2\pi x^3}\right)^{1/2} \exp\left(-\frac{\gamma (x-\mu)^2}{2\mu^2 x}\right), \qquad x > 0, \tag{A.18}$$
with expectation and variance
$$\mathrm{E}[X] = \mu, \qquad \mathrm{Var}[X] = \frac{\mu^3}{\gamma}.$$
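These moments can be checked against SciPy's `scipy.stats.invgauss`; the mapping IG(μ, γ) ↔ `invgauss(mu/gamma, scale=gamma)` below is an inferred correspondence between the two parameterizations, not one stated in the text:

```python
from scipy.stats import invgauss

mu, gam = 2.0, 5.0
# scipy's invgauss(m, scale=s) has mean m*s and variance m**3 * s**2,
# so IG(mu, gamma) corresponds to m = mu/gamma, scale = gamma (inferred mapping)
d = invgauss(mu / gam, scale=gam)
mean, var = d.mean(), d.var()   # should be mu and mu**3 / gamma
```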
Expectation, variance, and variational coefficient of a random variable X ∼ LogNormal(μ, σ) are
$$\mathrm{E}[X] = e^{\mu + \sigma^2/2}, \qquad \mathrm{Var}[X] = e^{2\mu + \sigma^2}\left(e^{\sigma^2} - 1\right), \qquad \mathrm{Vco}[X] = \sqrt{e^{\sigma^2} - 1}. \tag{A.22}$$
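A numerical check of (A.22), assuming SciPy's `scipy.stats.lognorm` convention `s = σ`, `scale = exp(μ)`:

```python
from math import exp, sqrt
from scipy.stats import lognorm

mu, sigma = 0.5, 0.8
d = lognorm(s=sigma, scale=exp(mu))   # scipy: shape s = sigma, scale = exp(mu)
mean, var = d.mean(), d.var()
vco = sqrt(var) / mean                # should equal sqrt(exp(sigma**2) - 1)
```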
The Student's t density with ν degrees of freedom, location μ, and scale σ is
$$f(x) = \frac{\Gamma((\nu+1)/2)}{\Gamma(\nu/2)\sqrt{\nu\pi}\,\sigma} \left(1 + \frac{(x-\mu)^2}{\nu\sigma^2}\right)^{-(\nu+1)/2} \tag{A.23}$$
and its moments are
$$\mathrm{E}[X] = \mu \ \text{if } \nu > 1, \qquad \mathrm{Var}[X] = \sigma^2 \frac{\nu}{\nu-2} \ \text{if } \nu > 2, \qquad \mathrm{Vco}[X] = \frac{\sigma}{\mu}\sqrt{\frac{\nu}{\nu-2}} \ \text{if } \nu > 2. \tag{A.24}$$
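The moments in (A.24) agree with SciPy's location-scale Student's t, `scipy.stats.t(df, loc, scale)`:

```python
from scipy.stats import t

nu, mu, sigma = 5.0, 1.2, 0.7
d = t(df=nu, loc=mu, scale=sigma)
mean, var = d.mean(), d.var()   # should be mu and sigma**2 * nu / (nu - 2)
```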
A Gamma distribution function is denoted as Gamma(α, β). The random variable X has a
gamma distribution, denoted as X ∼ Gamma(α, β), if its probability density function is
$$f(x) = \frac{x^{\alpha-1}}{\Gamma(\alpha)\,\beta^\alpha} \exp(-x/\beta), \qquad \alpha > 0,\ \beta > 0 \tag{A.25}$$
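The density (A.25) matches SciPy's `scipy.stats.gamma` with shape `a = α` and `scale = β`, which gives a quick consistency check:

```python
from math import exp, gamma as gamma_fn
from scipy.stats import gamma as gamma_dist

alpha, beta = 2.5, 1.5
x = 2.0
pdf_formula = x ** (alpha - 1) * exp(-x / beta) / (gamma_fn(alpha) * beta ** alpha)
d = gamma_dist(a=alpha, scale=beta)   # scipy: a = alpha, scale = beta
pdf_scipy = d.pdf(x)
mean = d.mean()                       # should equal alpha * beta
```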
A Weibull distribution function is denoted as Weibull(α, β). The random variable X has a
Weibull distribution, denoted as X ∼ Weibull(α, β), if its probability density function is
$$f(x) = \frac{\alpha}{\beta^\alpha}\, x^{\alpha-1} \exp\left(-(x/\beta)^\alpha\right), \qquad \alpha > 0,\ \beta > 0 \tag{A.27}$$
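The density (A.27) coincides with SciPy's `scipy.stats.weibull_min` under the convention `c = α`, `scale = β`:

```python
from math import exp
from scipy.stats import weibull_min

alpha, beta = 2.5, 1.5
x = 1.0
pdf_formula = (alpha / beta ** alpha) * x ** (alpha - 1) * exp(-(x / beta) ** alpha)
pdf_scipy = weibull_min(c=alpha, scale=beta).pdf(x)   # scipy: c = alpha, scale = beta
```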
An Inverse Chi-squared distribution is denoted as InvChiSq(ν, β). The random variable X has
an Inverse Chi-squared distribution, denoted as X ∼ InvChiSq(ν, β), if its probability density
function is
$$f(x) = \frac{(x/\beta)^{-1-\nu/2}}{\beta\,\Gamma(\nu/2)\,2^{\nu/2}} \exp\left(-\frac{\beta}{2x}\right) \tag{A.29}$$
for x > 0 and parameters ν > 0 and β > 0. Expectation and variance of
X ∼ InvChiSq(ν, β) are
$$\mathrm{E}[X] = \frac{\beta}{\nu-2} \ \text{for } \nu > 2, \qquad \mathrm{Var}[X] = \frac{2\beta^2}{(\nu-2)^2(\nu-4)} \ \text{for } \nu > 4.$$
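The density (A.29) and the expectation can be verified by direct numerical integration with `scipy.integrate.quad`; the parameter values below are arbitrary illustrative choices:

```python
from math import exp, gamma
from scipy.integrate import quad

nu, beta = 6.0, 2.0

def pdf(x):
    # density (A.29); guard x <= 0 so the integrator never divides by zero
    if x <= 0:
        return 0.0
    return (x / beta) ** (-1 - nu / 2) * exp(-beta / (2 * x)) / (beta * gamma(nu / 2) * 2 ** (nu / 2))

total, _ = quad(pdf, 0, float("inf"))               # should integrate to 1
mean, _ = quad(lambda x: x * pdf(x), 0, float("inf"))  # should equal beta / (nu - 2)
```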
Expectation, variance, and variational coefficient of a random variable X ∼ Pareto(ξ, x₀) are
$$\mathrm{E}[X] = x_0 \frac{\xi}{\xi-1} \ \text{if } \xi > 1, \qquad \mathrm{Var}[X] = x_0^2 \frac{\xi}{(\xi-1)^2(\xi-2)} \ \text{if } \xi > 2, \qquad \mathrm{Vco}[X] = \frac{1}{\sqrt{\xi(\xi-2)}} \ \text{if } \xi > 2.$$
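These one-parameter Pareto moments agree with SciPy's `scipy.stats.pareto` under the convention `b = ξ`, `scale = x₀`:

```python
from math import sqrt
from scipy.stats import pareto

xi, x0 = 3.0, 2.0
d = pareto(b=xi, scale=x0)      # scipy: b = xi (tail index), scale = x0
mean, var = d.mean(), d.var()
vco = sqrt(var) / mean          # should equal 1 / sqrt(xi * (xi - 2))
```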
A two-parameter Pareto distribution function is denoted as Pareto2 (α, β). The random variable
X has a Pareto distribution, denoted as X ∼ Pareto2 (α, β), if its distribution function is
$$F(x) = 1 - \left(1 + \frac{x}{\beta}\right)^{-\alpha}, \qquad x \geq 0, \tag{A.32}$$
$$f(x) = \frac{\alpha\beta^\alpha}{(x+\beta)^{\alpha+1}}. \tag{A.33}$$
The moments of a random variable X ∼ Pareto₂(α, β) are
$$\mathrm{E}[X^k] = \frac{\beta^k\, k!}{\prod_{i=1}^{k}(\alpha-i)}, \qquad \alpha > k.$$
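The two-parameter Pareto is the Lomax distribution, so (A.33) and the first moment can be checked against SciPy's `scipy.stats.lomax` with `c = α`, `scale = β`:

```python
from scipy.stats import lomax

alpha, beta = 3.0, 2.0
d = lomax(c=alpha, scale=beta)   # Lomax = Pareto II; c = alpha, scale = beta
x = 1.5
pdf_formula = alpha * beta ** alpha / (x + beta) ** (alpha + 1)
pdf_scipy = d.pdf(x)
mean = d.mean()                  # should equal beta * 1! / (alpha - 1)
```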
A GPD distribution function is denoted as GPD(ξ, β). The random variable X has a GPD
distribution, denoted as X ∼ GPD(ξ, β), if its distribution function is
$$H_{\xi,\beta}(x) = \begin{cases} 1 - (1 + \xi x/\beta)^{-1/\xi}, & \xi \neq 0, \\ 1 - \exp(-x/\beta), & \xi = 0, \end{cases} \tag{A.34}$$
$$\mathrm{E}[X^n] = \frac{\beta^n\, n!}{\prod_{k=1}^{n}(1-k\xi)}, \quad \xi < \frac{1}{n}; \qquad \mathrm{E}[X] = \frac{\beta}{1-\xi}, \quad \xi < 1;$$
$$\mathrm{Var}[X] = \frac{\beta^2}{(1-\xi)^2(1-2\xi)}, \qquad \mathrm{Vco}[X] = \frac{1}{\sqrt{1-2\xi}}, \qquad \xi < \frac{1}{2}. \tag{A.36}$$
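The GPD moments in (A.36) can be checked with SciPy's `scipy.stats.genpareto`, where `c = ξ` and `scale = β`:

```python
from math import sqrt
from scipy.stats import genpareto

xi, beta = 0.2, 1.0
d = genpareto(c=xi, scale=beta)   # scipy: c = xi, scale = beta
mean, var = d.mean(), d.var()
vco = sqrt(var) / mean            # should equal 1 / sqrt(1 - 2 * xi)
```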
A Beta distribution function is denoted as Beta(α, β). The random variable X has a Beta
distribution, denoted as X ∼ Beta(α, β), if its probability density function is
$$f(x) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}, \qquad 0 \leq x \leq 1, \tag{A.37}$$
for α > 0 and β > 0. Expectation, variance, and variational coefficient of a random variable
X ∼ Beta(α, β) are
$$\mathrm{E}[X] = \frac{\alpha}{\alpha+\beta}, \qquad \mathrm{Var}[X] = \frac{\alpha\beta}{(\alpha+\beta)^2(1+\alpha+\beta)}, \qquad \mathrm{Vco}[X] = \sqrt{\frac{\beta}{\alpha(1+\alpha+\beta)}}.$$
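A numerical check of the Beta moments with `scipy.stats.beta`:

```python
from math import sqrt
from scipy.stats import beta as beta_dist

a, b = 2.0, 5.0
d = beta_dist(a, b)
mean, var = d.mean(), d.var()
vco = sqrt(var) / mean          # should equal sqrt(b / (a * (1 + a + b)))
```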
A GIG distribution function is denoted as GIG(ω, φ, ν). The random variable X has a GIG
distribution, denoted as X ∼ GIG(ω, φ, ν), if its probability density function is
$$f(x) = \frac{(\omega/\phi)^{(\nu+1)/2}}{2 K_{\nu+1}(2\sqrt{\omega\phi})}\, x^\nu e^{-x\omega - x^{-1}\phi}, \qquad x > 0, \tag{A.38}$$
where φ > 0, ω ≥ 0 if ν < −1; φ > 0, ω > 0 if ν = −1; φ ≥ 0, ω > 0 if ν > −1; and
$$K_{\nu+1}(z) = \frac{1}{2}\int_0^\infty u^\nu e^{-z(u+1/u)/2}\, du.$$
Kν (z) is called a modified Bessel function of the third kind (see, e.g., Abramowitz and Stegun
1965, p. 375).
The moments of a random variable X ∼ GIG(ω, φ, ν) are not available in a closed form
through elementary functions but can be expressed in terms of Bessel functions:
$$\mathrm{E}[X^\alpha] = \left(\frac{\phi}{\omega}\right)^{\alpha/2} \frac{K_{\nu+1+\alpha}(2\sqrt{\omega\phi})}{K_{\nu+1}(2\sqrt{\omega\phi})}, \qquad \alpha \geq 1,\ \phi > 0,\ \omega > 0.$$
The mode is easily calculated from \(\frac{\partial}{\partial x}\, x^\nu e^{-(\omega x + \phi/x)} = 0\) as
$$\mathrm{mode}(X) = \frac{1}{2\omega}\left(\nu + \sqrt{\nu^2 + 4\omega\phi}\right),$$
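Since the GIG moments involve Bessel functions, the density (A.38), the Bessel-ratio moment formula, and the mode can be cross-checked numerically with `scipy.special.kv` (SciPy's K_ν) and `scipy.integrate.quad`; the parameter values are arbitrary:

```python
from math import exp, sqrt
from scipy.integrate import quad
from scipy.special import kv   # modified Bessel function K_nu (second/third kind)

omega, phi, nu = 1.5, 2.0, 0.5
z = 2 * sqrt(omega * phi)
norm = (omega / phi) ** ((nu + 1) / 2) / (2 * kv(nu + 1, z))

def pdf(x):
    # density (A.38); guard x <= 0 for the integrator
    return norm * x ** nu * exp(-x * omega - phi / x) if x > 0 else 0.0

total, _ = quad(pdf, 0, float("inf"))                  # should be 1
mean, _ = quad(lambda x: x * pdf(x), 0, float("inf"))  # numeric E[X]
mean_bessel = sqrt(phi / omega) * kv(nu + 2, z) / kv(nu + 1, z)
mode = (nu + sqrt(nu ** 2 + 4 * omega * phi)) / (2 * omega)
```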
which differs only slightly from the expected value for large ν.
where Σ⁻¹ is the inverse of the matrix Σ. Expectations and covariances of a random vector X = (X₁, . . . , X_d)ᵀ ∼ Normal(μ, Σ) are
$$\mathrm{E}[X_i] = \mu_i, \qquad \mathrm{Cov}[X_i, X_j] = \Sigma_{i,j}, \qquad i, j = 1, \ldots, d.$$
A d -variate t-distribution function with ν degrees of freedom is denoted as Td (ν, μ, Σ), where
ν > 0, μ = (μ1 , . . . , μd )T ∈ Rd is a location vector and Σ is a positive definite matrix
(d × d). The corresponding probability density function is
$$f(x) = \frac{\Gamma\!\left(\frac{\nu+d}{2}\right)}{(\nu\pi)^{d/2}\,\Gamma\!\left(\frac{\nu}{2}\right)\sqrt{\det \Sigma}} \left(1 + \frac{(x-\mu)^T \Sigma^{-1}(x-\mu)}{\nu}\right)^{-\frac{\nu+d}{2}}, \tag{A.41}$$
where x ∈ R^d and Σ⁻¹ is the inverse of the matrix Σ. Expectations and covariances of a random vector X = (X₁, . . . , X_d)ᵀ ∼ T_d(ν, μ, Σ) are
$$\mathrm{E}[X_i] = \mu_i \ \text{if } \nu > 1, \quad i = 1, \ldots, d;$$
$$\mathrm{Cov}[X_i, X_j] = \frac{\nu\,\Sigma_{i,j}}{\nu - 2} \ \text{if } \nu > 2, \quad i, j = 1, \ldots, d. \tag{A.42}$$
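For d = 1 with Σ = (σ²), the density (A.41) reduces to the univariate location-scale Student's t, which gives a quick sanity check against `scipy.stats.t`:

```python
from math import gamma, pi, sqrt
from scipy.stats import t as t_dist

nu, mu, sigma2 = 4.0, 0.3, 1.5      # d = 1, Sigma = (sigma2)
x = 1.1
q = (x - mu) ** 2 / sigma2           # the quadratic form (x-mu)^T Sigma^{-1} (x-mu)
f_a41 = (gamma((nu + 1) / 2)
         / ((nu * pi) ** 0.5 * gamma(nu / 2) * sqrt(sigma2))
         * (1 + q / nu) ** (-(nu + 1) / 2))
f_scipy = t_dist(df=nu, loc=mu, scale=sqrt(sigma2)).pdf(x)
```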