
1 Introduction to credibility theory

Consider an insurer with a block of automobile insurance business. Under its current ratings system, automobile
insurance policies are classified according to the following criteria:
1. number of drivers
2. number of vehicles
3. gender of each driver
4. kilometers driven per year (approximately)
5. whether the vehicle(s) are driven to work
The insurer assumes that policies with identical answers to the above five questions belong to the same rating class
(i.e. represent similar risks based on its ratings system). Among the policies in a specific rating class, two policies have
been charged the so-called manual premium (i.e. the premium specified in the insurance manual for a policy with similar
characteristics) for the last three years, which for simplicity we assume to be $1500 per year.
You also have in your historical database the amount paid by the insurer for the three years of coverage for these
two policies:
Amount paid by the insurer

           Policy 1    Policy 2
  Year 1        0         500
  Year 2      200        4000
  Year 3        0        2500
The chief actuary has asked for your recommendation for the premium to be charged to these two policies for Year 4.
What is your recommendation?
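Before any theory, it is worth computing what the claims experience itself suggests. The following minimal Python sketch (the figures are simply those of the table above) compares each policy's average annual claims with the $1500 manual premium; how much weight to give this individual experience versus the manual rate is precisely the question credibility theory addresses.

```python
# Average annual amount paid per policy, versus the manual premium.
payments = {
    "Policy 1": [0, 200, 0],
    "Policy 2": [500, 4000, 2500],
}
manual_premium = 1500

for policy, amounts in payments.items():
    avg = sum(amounts) / len(amounts)
    print(f"{policy}: average claim = {avg:.2f} vs manual premium = {manual_premium}")
# Policy 1: average claim = 66.67  (well below 1500)
# Policy 2: average claim = 2333.33 (well above 1500)
```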
2 Review - conditional probability and conditional expectation
2.1 Conditional probability & distributions
We consider two random variables (rvs) $X$ and $Y$ with joint probability mass function (pmf) or joint density function $f_{X,Y}(x,y)$. It is known that

- the marginal pmf or density function of $X$ is
$$f_X(x) = \int_{\text{all } y} f_{X,Y}(x,y)\,dy, \quad \text{or} \quad f_X(x) = \sum_{\text{all } y} f_{X,Y}(x,y);$$
- the marginal pmf or density function of $Y$ is
$$f_Y(y) = \int_{\text{all } x} f_{X,Y}(x,y)\,dx, \quad \text{or} \quad f_Y(y) = \sum_{\text{all } x} f_{X,Y}(x,y);$$
- the conditional pmf or density function of $X \mid Y = y$ is
$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}.$$
Remark 1 If the rvs $X$ and $Y$ are independent,
$$f_{X,Y}(x,y) = f_X(x) f_Y(y).$$
As an immediate consequence, we have
$$f_{X|Y}(x|y) = \frac{f_X(x) f_Y(y)}{f_Y(y)} = f_X(x).$$
Using the conditional distribution of $X \mid Y$, the marginal pmf or density function of $X$ can be expressed as
$$f_X(x) = \int_{\text{all } y} f_{X|Y}(x|y) f_Y(y)\,dy, \quad \text{or} \quad f_X(x) = \sum_{\text{all } y} f_{X|Y}(x|y) f_Y(y). \tag{1}$$
Often $f_{X|Y}(x|y)$ is a known parametric probability distribution with a given set of parameters. Under the representation (1), the marginal distribution of $X$ is called a mixed distribution (or mixture).
Example 2 (n-point mixture) Assume that $Y$ is a discrete rv with support $\{y_i\}_{i=1}^n$ with $f_Y(y_i) = p_i$ ($p_1 + \cdots + p_n = 1$). Then,
$$f_X(x) = \sum_{i=1}^n f_{X|Y}(x|y_i) f_Y(y_i) = \sum_{i=1}^n p_i f_{X|Y}(x|y_i).$$
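Sampling from a mixture mirrors this two-stage structure: draw $Y$ from $\{y_i\}$ with probabilities $p_i$, then draw $X$ from the selected conditional distribution. Here is a minimal Python sketch; the weights and the exponential components are arbitrary illustrative choices, not taken from the notes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 2-point mixture: with probability p_1 draw X from an
# exponential with mean 1, with probability p_2 from one with mean 5.
p = np.array([0.7, 0.3])        # f_Y(y_i) = p_i
means = np.array([1.0, 5.0])    # mean of each component

n = 100_000
i = rng.choice(len(p), size=n, p=p)   # draw Y first...
x = rng.exponential(means[i])         # ...then X | Y = y_i

# The mixture mean should be 0.7*1 + 0.3*5 = 2.2.
print(x.mean())
```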
Example 3 (negative binomial) Suppose that

- $X \mid \Lambda = \lambda$ is Poisson distributed with mean $\lambda$;
- $\Lambda$ is gamma distributed with density function
$$f_\Lambda(\lambda) = \frac{(\beta\lambda)^{\alpha-1} e^{-\beta\lambda} \beta}{\Gamma(\alpha)}, \quad \lambda > 0,$$
where $\alpha, \beta > 0$ and
$$\Gamma(\alpha) = \int_0^\infty t^{\alpha-1} e^{-t}\,dt.$$

Aside: Using integration by parts,
$$\Gamma(\alpha + 1) = \alpha \Gamma(\alpha), \quad \text{for } \alpha > 0.$$
For $\alpha$ a positive integer,
$$\Gamma(\alpha + 1) = \alpha \Gamma(\alpha) = \alpha(\alpha - 1)\Gamma(\alpha - 1) = \alpha(\alpha - 1)\cdots(1)\Gamma(1) = \alpha!$$

Find the marginal distribution of $X$.
$$\begin{aligned}
f_X(x) &= \int_0^\infty \frac{e^{-\lambda} \lambda^x}{x!} \cdot \frac{(\beta\lambda)^{\alpha-1} e^{-\beta\lambda} \beta}{\Gamma(\alpha)}\,d\lambda \\
&= \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{1}{x!} \int_0^\infty \lambda^{x+\alpha-1} e^{-(\beta+1)\lambda}\,d\lambda \\
&= \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{1}{x!} \frac{\Gamma(x+\alpha)}{(\beta+1)^{x+\alpha}} \underbrace{\int_0^\infty \frac{(\beta+1)^{x+\alpha} \lambda^{x+\alpha-1} e^{-(\beta+1)\lambda}}{\Gamma(x+\alpha)}\,d\lambda}_{=1} \\
&= \frac{\Gamma(x+\alpha)}{\Gamma(\alpha)} \frac{1}{x!} \frac{\beta^\alpha}{(\beta+1)^{x+\alpha}} \\
&= \frac{\Gamma(x+\alpha)}{\Gamma(\alpha)} \frac{1}{x!} \left( \frac{\beta}{\beta+1} \right)^\alpha \left( \frac{1}{\beta+1} \right)^x \\
&= \binom{x+\alpha-1}{x} \left( \frac{\beta}{\beta+1} \right)^\alpha \left( \frac{1}{\beta+1} \right)^x,
\end{aligned}$$
where the underbraced integral equals 1 because the integrand is the density of a gamma rv. With this pmf, $X$ is known to be a negative binomial rv.
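This Poisson-gamma mixture is easy to check numerically. Below is a minimal Python sketch, assuming numpy and scipy are available; the values of $\alpha$ and $\beta$ are arbitrary. Note that scipy's nbinom(n, p) uses exactly the pmf derived above, with $n = \alpha$ and $p = \beta/(\beta+1)$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Arbitrary illustrative parameters.
alpha, beta = 2.0, 3.0
n = 200_000

# Two-stage sampling: Lambda ~ gamma(shape alpha, rate beta),
# then X | Lambda = lam ~ Poisson(lam).
lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=n)
x = rng.poisson(lam)

# Compare the empirical pmf with the derived negative binomial pmf.
nb = stats.nbinom(alpha, beta / (beta + 1))
for k in range(5):
    print(k, round((x == k).mean(), 4), round(nb.pmf(k), 4))
```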
2.2 Conditional expectation
In this section, we assume that $X \mid Y = y$ is a continuous rv with a valid density function $f_{X|Y}(\cdot|y)$ (if $X \mid Y$ is discrete, replace all the integral signs by summation signs). The conditional mean of $X \mid Y = y$ is given by
$$E[X \mid Y = y] = \int_{\text{all } x} x f_{X|Y}(x|y)\,dx.$$
Note that $E[X \mid Y]$ is a random variable (in $Y$). The mean of $E[X \mid Y]$ is
$$\begin{aligned}
E[E[X \mid Y]] &= \int_{\text{all } y} E[X \mid Y = y] f_Y(y)\,dy \\
&= \int_{\text{all } y} \int_{\text{all } x} x f_{X|Y}(x|y) f_Y(y)\,dx\,dy \\
&= \int_{\text{all } y} \int_{\text{all } x} x f_{X,Y}(x,y)\,dx\,dy \\
&= \int_{\text{all } x} x \left( \int_{\text{all } y} f_{X,Y}(x,y)\,dy \right) dx \\
&= \int_{\text{all } x} x f_X(x)\,dx \\
&= E[X].
\end{aligned}$$
Thus,
$$E[X] = E[E[X \mid Y]].$$
Example 4 (negative binomial) Suppose that

- $X \mid \Lambda = \lambda$ is Poisson distributed with mean $\lambda$;
- $\Lambda$ is gamma distributed with density function
$$f_\Lambda(\lambda) = \frac{(\beta\lambda)^{\alpha-1} e^{-\beta\lambda} \beta}{\Gamma(\alpha)}, \quad \lambda > 0.$$

Then,
$$E[X] = E[E[X \mid \Lambda]] = E[\Lambda] = \frac{\alpha}{\beta}.$$
Along the same lines, for a function $h(X, Y)$ that satisfies some mild integrability conditions,
$$E[h(X, Y)] = E[E[h(X, Y) \mid Y]].$$
For instance, we choose $h(X, Y) = (X - E[X \mid Y])^2$. Then,
$$E[h(X, Y) \mid Y] = E\left[ (X - E[X \mid Y])^2 \mid Y \right] = Var(X \mid Y) = E[X^2 \mid Y] - (E[X \mid Y])^2.$$
It follows that
$$\begin{aligned}
E[Var(X \mid Y)] &= E\left[ E[X^2 \mid Y] - (E[X \mid Y])^2 \right] \\
&= E[E[X^2 \mid Y]] - E\left[ (E[X \mid Y])^2 \right] \\
&= E[X^2] - E\left[ (E[X \mid Y])^2 \right].
\end{aligned} \tag{2}$$
Also, by definition, the variance of the rv $E[X \mid Y]$ is given by
$$Var(E[X \mid Y]) = E\left[ (E[X \mid Y])^2 \right] - (E[E[X \mid Y]])^2 = E\left[ (E[X \mid Y])^2 \right] - (E[X])^2. \tag{3}$$
Adding up (2) and (3), one finds that
$$E[Var(X \mid Y)] + Var(E[X \mid Y]) = E[X^2] - (E[X])^2 = Var(X).$$
Example 5 (negative binomial) Suppose that

- $X \mid \Lambda = \lambda$ is Poisson distributed with mean $\lambda$;
- $\Lambda$ is gamma distributed with density function
$$f_\Lambda(\lambda) = \frac{(\beta\lambda)^{\alpha-1} e^{-\beta\lambda} \beta}{\Gamma(\alpha)}, \quad \lambda > 0.$$

Then,
$$Var(X) = E[Var(X \mid \Lambda)] + Var(E[X \mid \Lambda]) = E[\Lambda] + Var(\Lambda) = \frac{\alpha}{\beta} + \frac{\alpha}{\beta^2} = \frac{\alpha(\beta + 1)}{\beta^2}.$$
Example 6 (compound Poisson) Suppose that
$$X = \begin{cases} \sum_{i=1}^N Y_i, & N > 0, \\ 0, & N = 0, \end{cases}$$
where $N$ is Poisson distributed with mean $\lambda$ and $Y_1, Y_2, \ldots$ is a sequence of iid rvs with mean $\mu$ and variance $\sigma^2$. All these rvs are independent. We say that $X$ is a compound Poisson rv. We have
$$E[X] = E[E[X \mid N]] = E[N\mu] = \lambda\mu,$$
and
$$\begin{aligned}
Var(X) &= E[Var(X \mid N)] + Var(E[X \mid N]) \\
&= E\left[ N\sigma^2 \right] + Var(N\mu) \\
&= \sigma^2 E[N] + \mu^2 Var(N) \\
&= \lambda \left( \sigma^2 + \mu^2 \right).
\end{aligned}$$
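These two formulas are straightforward to verify by simulation. A minimal Python sketch follows; the parameter values and the gamma choice for the claim sizes are arbitrary, since only the mean $\mu$ and variance $\sigma^2$ of the $Y_i$ matter for the result.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative parameters.
lam = 4.0                 # Poisson mean of N
mu, sigma2 = 2.0, 3.0     # mean and variance of each claim Y_i

# Gamma claim severities with the chosen mean and variance.
scale = sigma2 / mu       # gamma scale parameter
shape = mu / scale        # gamma shape parameter

n_sim = 100_000
counts = rng.poisson(lam, size=n_sim)
x = np.array([rng.gamma(shape, scale, size=k).sum() for k in counts])

print(x.mean(), lam * mu)                 # both ~ 8.0
print(x.var(), lam * (sigma2 + mu ** 2))  # both ~ 28.0
```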
Example 7 (normal-normal) Suppose that $X \mid \Theta = \theta \sim \text{Normal}(\theta, v)$ and $\Theta \sim \text{Normal}(\mu, a)$. Find the marginal distribution of $X$.
Solution: We identify the distribution of $X$ via its moment generating function. Indeed,
$$\begin{aligned}
M_X(t) = E\left[ e^{tX} \right] &= E\left[ E\left[ e^{tX} \mid \Theta \right] \right] \\
&= E\left[ e^{\Theta t + \frac{t^2}{2} v} \right] \\
&= e^{\frac{t^2}{2} v} E\left[ e^{t\Theta} \right] \\
&= e^{\frac{t^2}{2} v} e^{\mu t + \frac{t^2}{2} a} \\
&= e^{\mu t + \frac{t^2}{2}(a + v)}.
\end{aligned}$$
By the uniqueness of the moment generating function, one concludes that $X \sim \text{Normal}(\mu, a + v)$.
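A quick simulation check of this result, as a minimal Python sketch with arbitrary parameter values; note the notes parameterize the normal by its variance, while numpy expects a standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary illustrative parameters (variances, not standard deviations).
mu, a, v = 1.0, 2.0, 0.5

n = 200_000
theta = rng.normal(mu, np.sqrt(a), size=n)   # Theta ~ Normal(mu, a)
x = rng.normal(theta, np.sqrt(v))            # X | Theta ~ Normal(Theta, v)

print(x.mean(), mu)      # both ~ 1.0
print(x.var(), a + v)    # both ~ 2.5
```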
2.3 Concept of unbiased estimation
Suppose that one has selected a model, i.e. a density function or pmf $f(x; \theta)$ which depends on a parameter $\theta$. Suppose that $X_1, \ldots, X_n$ are $n$ iid rvs with density function or pmf $f(x; \theta)$, and define the random vector $\mathbf{X} = (X_1, \ldots, X_n)$. Our objective is to find an estimator of $\theta$ using the random sample $\mathbf{X}$.
We say that an estimator $\hat{\theta} = \hat{\theta}(\mathbf{X})$ is an unbiased estimator of $\theta$ if
$$E\left[ \hat{\theta} \right] = \theta. \tag{4}$$
Example 8 Let $X_1, \ldots, X_n$ be iid rvs with the exponential distribution
$$f(x; \theta) = \theta e^{-\theta x}, \quad x > 0.$$
Clearly, the sample mean $\bar{X} = (X_1 + \cdots + X_n)/n$ is an unbiased estimator of the mean $1/\theta$:
$$E\left[ \bar{X} \right] = E\left[ \frac{X_1 + \cdots + X_n}{n} \right] = \frac{1}{n} \left( n \cdot \frac{1}{\theta} \right) = \frac{1}{\theta}.$$
While (4) seems intuitively to be a reasonable property for an estimator $\hat{\theta}$ to have, there are some drawbacks.

- Unbiasedness is not preserved under parameter transformation: even if (4) is satisfied, a transformed estimator need not be unbiased for the transformed parameter. For instance, although $\bar{X}$ is unbiased for $1/\theta$ in Example 8, $1/\bar{X}$ is not an unbiased estimator of $\theta$:
$$E\left[ \frac{1}{\bar{X}} \right] \neq \theta$$
(in fact, $E[1/\bar{X}] = n\theta/(n-1)$ in the exponential model).
- In some situations, (4) is satisfied but results in a silly estimator, as the next example shows (a numerical sketch of both drawbacks follows Example 9).
Example 9 Let $X$ be Poisson distributed with mean $\theta$. Then, using the Poisson probability generating function $E[z^X] = e^{\theta(z-1)}$ at $z = -1$,
$$E\left[ (-1)^X \right] = e^{\theta((-1) - 1)} = e^{-2\theta},$$
which implies that $(-1)^X$ is an unbiased estimator of $e^{-2\theta}$. However, the estimator $(-1)^X$ is such that
$$(-1)^x = \begin{cases} 1, & x \text{ even}, \\ -1, & x \text{ odd}, \end{cases}$$
which is nowhere close to $e^{-2\theta}$. A better estimator would be $e^{-2X}$ (even if biased).
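As promised above, both drawbacks can be seen numerically. A minimal Python sketch; the value of $\theta$ and the sample sizes are arbitrary, and the exact bias $E[1/\bar{X}] = n\theta/(n-1)$ holds for the exponential model of Example 8.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 1.5                  # arbitrary illustrative value

# Drawback 1: mean(X) is unbiased for 1/theta, but 1/mean(X) is
# biased for theta (its expectation is n*theta/(n-1)).
n, reps = 5, 200_000
xbar = rng.exponential(1 / theta, size=(reps, n)).mean(axis=1)
print((1 / xbar).mean(), n * theta / (n - 1), theta)   # ~1.875, 1.875, 1.5

# Drawback 2: (-1)^X is unbiased for exp(-2*theta)...
x = rng.poisson(theta, size=reps)
print(((-1.0) ** x).mean(), np.exp(-2 * theta))        # both ~ 0.0498
# ...yet every single estimate is +1 or -1, never near exp(-2*theta).
```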
Despite some shortcomings, unbiasedness is generally a reasonable property for an estimator to have. Thus, in credibility theory, we will construct nonparametric unbiased estimators of some quantities of interest. For instance, suppose that $X_1, \ldots, X_n$ are all iid rvs with mean $\mu$ and variance $\sigma^2$. Define
$$\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i.$$
In this context, it can be shown that $\bar{X}$ is an unbiased estimator of the mean $\mu$ and $\sum_{i=1}^n (X_i - \bar{X})^2 / (n - 1)$ is an unbiased estimator of the variance $\sigma^2$.
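Both unbiasedness claims are easy to check by simulation. A minimal Python sketch with arbitrary distributional choices (the results depend only on the mean and variance):

```python
import numpy as np

rng = np.random.default_rng(4)

# Arbitrary illustrative setup: normal samples with mean 2, variance 9.
mu, sigma = 2.0, 3.0
n, reps = 10, 200_000

samples = rng.normal(mu, sigma, size=(reps, n))
print(samples.mean(axis=1).mean(), mu)               # ~ 2.0
print(samples.var(axis=1, ddof=1).mean(), sigma**2)  # ~ 9.0 (divisor n-1)
print(samples.var(axis=1, ddof=0).mean())            # ~ 8.1: divisor n is biased
```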