
Credibility Theory

Valuation Actuary Symposium


TS 28
Stuart Klugman
Drake University and Society of Actuaries
September 18, 2007


Table of Contents

Credibility for Valuation Actuaries
A History of Credibility
Types of Credibility
  Limited Fluctuation Credibility
  Greatest Accuracy Credibility
  Credibility Example
Credibility for Mortality Ratios
  A Limited Fluctuation Approach
  An Example
  Conclusions
  A Non-credibility Approach

Credibility for Valuation Actuaries


Why do you care?

The NAIC proposed AG for VACARVM uses "credibility" in several places:

The deterministic assumptions to be used for projections are to be the actuary's Prudent Best Estimate. This means that they are to be set at the conservative end of the actuary's confidence interval as to the true underlying probabilities for the parameter(s) in question, based on the availability of relevant experience and its degree of credibility.

Document the mathematics used to adjust mortality based on credibility and summarize the result of applying credibility to the mortality segments.

Why do you care?

More from the NAIC AG


The report section shall show any experience data
used to develop the assumptions and describe the
source, relevance and credibility of that data. If
relevant and credible data was not used, the report
section should discuss how the assumption is
consistent with the requirement that the assumption
is to be on the conservative end of the plausible
range of expected experience. The expected
mortality curves are then adjusted based on the
credibility of the experience used to determine the
expected mortality curve.

Why are you here?

None of the above statements defines credibility.


None of the above statements advocates a
particular credibility method.


Isn't there an ASOP on credibility?

There is ASOP 25 - Credibility Procedures Applicable to Accident and Health, Group Term Life, and Property/Casualty Coverages. It does offer some definitions:

Definition
Credibility is a measure of the predictive value in a given application that the actuary attaches to a particular body of data.

Definition
Full credibility is the level at which the subject experience is assigned full predictive value based on a selected confidence interval.

ASOP

Some more from the ASOP:


The purpose of credibility is to blend information
from the subject experience and related experience.
There are various methods of credibility and several
should be considered. A good method is reasonable, is not materially biased (more from me later), is practical, and balances responsiveness and stability.


ASOP

Credibility requires informed judgment and is not a precise mathematical process.
Classical credibility (sometimes called limited fluctuation) is acceptable.
Empirical credibility (use the data but no underlying model) is acceptable.


ASOP

Bayesian credibility (sometimes called greatest accuracy; uses a prior distribution and least squares (more from me later)) is acceptable.
Partial credibility can use the square root or n/(n+k) rules.
Once again, observe that there are no specific guidelines and no formulas.

The purpose of this session is to fill these gaps and leave you with something you can use.

A History of Credibility


Introduction

Credibility as we know it today dates at least to 1914 (Mowbray, PCAS), and by 1918 (Whitney, PCAS) both methods used today were developed in the context of workers' compensation insurance. Both begin with an exposure measure (such as lives or dollars) that is a proxy for the volume of data. Then:
Limited fluctuation - Somehow, determine when the exposure is enough to let the data speak for itself. If not, weight the data using the square root of the ratio of actual exposure to the exposure needed for full credibility.

Introduction

Greatest accuracy - The weight is n/(n + k) where n is the exposure and k is somehow determined. Full credibility is never achieved.
In both cases, the complement of the weight is applied to what you would use if you had no data.


Little progress until 1968

Both methods were used, but attempts to provide statistical justification were lacking. In 1950, Arthur Bailey wrote (paraphrased by me):
While there seems to be some hazy logic behind the method, it is too obscure to understand. The trained statistician cries "absurd!" Actuaries admit they have gone beyond anything proven mathematically. The only thing they can do is demonstrate that in actual practice, it works.

History has shown that those who use credibility, even if they don't understand it very well, do better than those who don't.

What does "do better" mean?

In any statistical estimation problem, doing well, better, or best refers to the quality of the process employed, not a particular number obtained at a particular time. A typical measure of quality is mean squared error. This is the squared difference between the estimated and true values, averaged over all possible outcomes.
Reminder:

Mean squared error = Variance + Bias^2


Isn't your speaker a trained statistician?

Yes, and my first encounter with credibility caused me to cry "absurd!"
You and I learned that when estimating the mean, use the sample mean. It is unbiased, consistent, sometimes efficient. The credibility estimator is clearly biased (despite the ASOP). So how come it works?
By accepting some bias, variance is reduced and hence it is possible to reduce mean squared error. This can only work if we do this more than once so that overall the biases will cancel.


Modern results

In 1968 Hans Bühlmann (ASTIN Bulletin) used a Bayesian least-squares argument to derive the n/(n + k) version of greatest accuracy credibility.
There has never been a legitimate derivation of limited fluctuation credibility (but remember, it works).


Types of Credibility


A List of Definitions/Types

Credibility is the act of assigning the degree to which you can rely on a particular data set.
  The source of limited fluctuation credibility
  Usually involves tolerance and degree of confidence
  Ad-hoc procedure for partial credibility

Credibility is a statistical estimation procedure featuring
  Either multiple estimations or a reliance on external information (or both)
  An objective function
  A possible restriction on allowable solutions


Types of Credibility
Limited Fluctuation


Full Credibility

Definition
Full Credibility is assigned to a data set when the probability that the relative error in the estimate is less than r is at least p.

This is a confidence interval approach in that it assigns full credibility when a 100p% confidence interval for the relative error has a width that is less than 2r. For example, we may want to be 90% confident that the relative error is less than 5%. Then p = 0.9 and r = 0.05.
Note - the exposure method (dollars, lives, etc.) is not relevant. What you use for an estimator is relevant.

An Example

Example
We want to estimate the mortality rate, q, for a group of lives ages 50-59. We have one year of experience on each of 1,000 lives. For each life, j, we have recorded the amount of insurance, $b_j$. We have also recorded the outcome: $d_j = 1$ if the life died, $d_j = 0$ if the life lived. We choose the dollar-weighted estimate,
$$\hat{q} = \sum_{j=1}^{1000} b_j d_j \Big/ \sum_{j=1}^{1000} b_j.$$
Is this fully credible with p = 0.9 and r = 0.05?

We need to calculate $\Pr(|(\hat{q} - q)/q| < 0.05)$ and see if it is greater than 0.9.


Three assumptions

1. There are enough terms in the sum for the Central Limit Theorem to hold,
2. The amounts of insurance are not random, and
3. The lives are independent and have the same value of q.


Example, continued

Then $\hat{q}$ has a normal distribution with mean and variance:
$$E(\hat{q}) = \frac{\sum_{j=1}^{1000} b_j E(d_j)}{\sum_{j=1}^{1000} b_j} = \frac{\sum_{j=1}^{1000} b_j q}{\sum_{j=1}^{1000} b_j} = q$$
$$Var(\hat{q}) = \frac{\sum_{j=1}^{1000} b_j^2 Var(d_j)}{\left(\sum_{j=1}^{1000} b_j\right)^2} = \frac{\sum_{j=1}^{1000} b_j^2}{\left(\sum_{j=1}^{1000} b_j\right)^2}\, q(1-q).$$


Example, continued

Let $\sigma^2$ be the variance and then,
$$\Pr\left(\left|\frac{\hat{q}-q}{q}\right| < 0.05\right) = \Pr(-0.05q < \hat{q} - q < 0.05q) = \Pr\left(\frac{-0.05q}{\sigma} < Z < \frac{0.05q}{\sigma}\right)$$
where Z has the standard normal distribution. We then look up this probability and see if it exceeds 0.9.


Example, concluded

To calculate the probability, we need the value of q. It is customary to use $\hat{q}$. Suppose the outcome was as follows:
There were 200 policies with b = 10,000 and 3 died, 300 with b = 25,000 and 7 died, 400 with b = 50,000 and 8 died, and 100 with b = 100,000 and 3 died.
Then we have $\hat{q} = 905{,}000/39{,}500{,}000 = 0.022911$ and $\sigma^2 = 2.2075\times 10^{12}\,(0.022911)(0.977089)/(3.95\times 10^{7})^2 = 3.1673\times 10^{-5}$. The probability statement becomes $\Pr(-0.20355 < Z < 0.20355) = 0.1613 < 0.9$ and the data are not fully credible.
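A minimal sketch of this full-credibility check in Python (the grouped outcomes are the ones above; the variable names and the use of statistics.NormalDist are my own choices, not from the talk):

```python
import math
from statistics import NormalDist

# Grouped outcomes from the slide: (number of policies, amount b, deaths)
groups = [(200, 10_000, 3), (300, 25_000, 7), (400, 50_000, 8), (100, 100_000, 3)]

sum_b  = sum(n * b for n, b, _ in groups)          # sum of b_j    = 39,500,000
sum_b2 = sum(n * b ** 2 for n, b, _ in groups)     # sum of b_j^2  = 2.2075e12
claims = sum(d * b for _, b, d in groups)          # sum of b_j d_j = 905,000

q_hat = claims / sum_b                             # 0.022911
var_q = sum_b2 * q_hat * (1 - q_hat) / sum_b ** 2  # 3.1673e-5

r = 0.05
z_limit = r * q_hat / math.sqrt(var_q)             # about 0.2036
prob = 2 * NormalDist().cdf(z_limit) - 1           # about 0.1613 < 0.9: not fully credible
print(q_hat, var_q, z_limit, prob)
```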

Partial credibility

The weight, Z, is conventionally determined as follows. There are several justifications for this result, but all are unsatisfactory.
1. Determine the minimum exposure needed for full credibility.
2. The weight is the square root of the ratio of the actual exposure to the exposure from step 1.


Example

Example
Determine the weight for the previous example.
The goal is to get the right-hand term in the probability statement to be 1.645. The equation to solve is:
$$1.645 = \frac{0.05q}{\sigma} = \frac{0.05\,q \sum_{j=1}^{n} b_j}{\left[q(1-q)\sum_{j=1}^{n} b_j^2\right]^{1/2}}.$$


Example, continued

The sums go to n because this does not represent our sample; it represents a sample that would deserve full credibility. We cannot work with this because the b-values are not known for a general sample. Assume they are proportional to those in our sample. Dividing by 1,000 and substituting the sample value of q,
$$1.645 = \frac{0.05(0.022911)(39{,}500\,n)}{\left[0.022911(0.977089)(2.2075\times 10^{9})\,n\right]^{1/2}} = 0.0064365\,n^{1/2}$$
$$n = 65{,}318.$$


Example, continued

This leads to $Z = (1000/65{,}318)^{1/2} = 0.1237$. There is an easier way to do this. The answer is simply 0.20355/1.645. Also note that had we measured exposure as dollars, the result would also be the same.
Our estimate now has a weight. What about the complement? Limited fluctuation credibility says to give it to the quantity you would use if you had no data. Maybe that is an industry table, maybe it is your company experience on a similar group. The method offers no advice.
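A sketch of the partial-credibility weight, assuming (as the slide does) that a full-credibility sample would have b-values proportional to ours; both the rearranged equation for n and the 0.20355/1.645 shortcut are shown, and small rounding differences from the 65,318 above are expected (the variable names are mine):

```python
import math

q_hat, var_q = 0.022911, 3.1673e-5    # from the previous slides
r, z90 = 0.05, 1.645                  # 5% tolerance, 90% confidence

# Long way: solve for the exposure n that earns full credibility, assuming
# the b-values of that sample are proportional to ours (per-life averages:
# sum b / 1000 = 39,500 and sum b^2 / 1000 = 2.2075e9).
n_full = (z90 * math.sqrt(q_hat * (1 - q_hat) * 2.2075e9) / (r * q_hat * 39_500)) ** 2
Z_long = math.sqrt(1000 / n_full)     # about 0.1237

# Shortcut: achieved standardized value divided by the full-credibility value
Z_short = (r * q_hat / math.sqrt(var_q)) / z90   # 0.20355 / 1.645 = 0.1237
print(round(n_full), round(Z_long, 4), round(Z_short, 4))
```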


Limited fluctuation credibility

PROS:
Good for experience rating, where there is a default premium.
Simple to implement and understand.
CONS:
Reflects only reliability of data, not of base rate.
May not have an obvious base rate.
No sound statistical justification.


Types of Credibility
Greatest Accuracy


Statistical model

1. Each person, group, block (whichever applies) has a distribution that is governed by a parameter $\theta$.
2. The parameter varies randomly from group to group.
3. Based on n observations from a group, let $\hat{m}$ be an estimator of the mean.
4. Pick $\hat{m}$ to minimize $E[\hat{m} - m(\theta)]^2$, where $m(\theta)$ is the true mean when the parameter is $\theta$ and the expectation is taken over the data and $\theta$.


Myths

Myth #1 - This is a Bayesian analysis.
No, the distribution of $\theta$ is not a prior distribution. It is not an opinion. It is a true (though unobservable) distribution.
Myth #2 - This is a linear analysis where the answer must be $Z\bar{x} + (1-Z)\mu$, where $\mu$ is the external (no-data) estimate. The form of the answer depends on the distributions over which the expectation in step 4 is taken.


Bühlmann credibility

This is the linear version. Start by assuming the answer must be of the form in Myth #2. Then the value of Z that minimizes squared error is
$$Z = \frac{n}{n+k}, \qquad k = \frac{E[Var(X\,|\,\theta)]}{Var[E(X\,|\,\theta)]}.$$
Notes: As n increases, Z increases. As the numerator of k increases, there is less credibility because the sample data is less reliable. As the denominator of k increases, there is more credibility because the overall mean is less likely to be useful.
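A minimal sketch of the Bühlmann weight, taking the two variance components as given (estimating them is the hard part; the function and the illustrative numbers are mine, not from the talk):

```python
def buhlmann_z(n: float, process_variance: float, variance_of_means: float) -> float:
    """Z = n / (n + k) with k = E[Var(X|theta)] / Var[E(X|theta)]."""
    k = process_variance / variance_of_means
    return n / (n + k)

# Illustration: with the baseball value k = 653 quoted later in the deck,
# a player with 200 at-bats gets weight 200/853, about 0.23.
print(buhlmann_z(200, 653.0, 1.0))  # passing k directly via a unit denominator
```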

Bühlmann credibility

The numerator of k is called the process variance. It is essentially the same as the variance used in limited fluctuation credibility and plays the same role.
The denominator is the new part. It measures how one person/group/block differs from others. When data comes from only one group, there may be no way to estimate this quantity.


Types of Credibility
An Example


A baseball example

Greatest accuracy credibility cannot be done in the context of the previous example. The reason is that we have no information about how q varies from group to group. So, I will provide a non-actuarial example to illustrate this method.

Example
As of May 30, 2006, 77 National League batters had 175
or more plate appearances. Their batting averages
ranged from Miguel Cabrera (.346) to Clint Barmes
(.191). Estimate the 77 season-ending averages and
compare the answers to the end-of-season actual
averages.

Results using no credibility

The traditional estimate is to use the sample mean (current batting average) to estimate the final average.
Use a weighted squared error measure to see how we did. The weights are the final number of at-bats.
For this estimate, the result is 31.418.
Of 40 who were above average to start, 30 had their
averages drop. Of 37 who were below average to start,
27 had their averages increase.


Results using limited fluctuation credibility

Using the sample mean and variance, 700 at-bats are needed for full credibility.
The weighted squared error is 16.887.
Gaming the system, we could identify the standard for full credibility that would give the best result. It is 1,016 at-bats and the squared error is 16.546.


Results using greatest accuracy credibility

A beta distribution was used to model how batting averages vary from player to player. Method of moments estimation set k as 653. For that value the weighted squared error is 18.492. There are other (including nonparametric) methods for estimating k.
The optimal (post-results) value of 16.564 is achieved at k = 245.
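One way such a k can be produced is sketched below: under a beta-binomial model (per-at-bat success probability drawn from a Beta(a, b)), k works out to a + b, and a and b can be fit by matching the mean and variance of the observed averages. This simplified moment match ignores the binomial noise inside each player's average, so it is an assumption of mine rather than the exact calculation behind k = 653, and the sample rates below are hypothetical.

```python
def beta_mom_k(rates):
    """Fit Beta(a, b) to a list of rates by matching mean and variance,
    then return k = a + b (under a beta-binomial model, E[Var]/Var[E] = a + b)."""
    n = len(rates)
    mean = sum(rates) / n
    var = sum((x - mean) ** 2 for x in rates) / (n - 1)
    common = mean * (1 - mean) / var - 1   # equals a + b under the moment match
    a, b = mean * common, (1 - mean) * common
    return a + b

# Hypothetical batting averages (not the 2006 data used in the talk):
print(beta_mom_k([0.346, 0.301, 0.275, 0.262, 0.249, 0.230, 0.191]))
```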


Conclusions

Using credibility helped considerably


The two methods performed about equally well
We could use greatest accuracy credibility because
we had information about how batting averages vary
from player to player.


Credibility for Mortality Ratios


The problem

For each of i = 1, . . . , g ages or groups, you have collected $n_i$ observations. For the jth observation from the ith group you have $b_{ij}$, the amount insured, $f_{ij}$, the fraction of the year the person could have been observed, and $d_{ij} = 1$ if the person died (0 otherwise). You also have $q_i^s$, the standard table rate for group i. You do not have $q_i$, the true rate.


The estimator

Let
$$\hat{m} = \frac{\sum_{i=1}^{g}\sum_{j=1}^{n_i} b_{ij} f_{ij} d_{ij}}{\sum_{i=1}^{g}\sum_{j=1}^{n_i} b_{ij} f_{ij} q_i^s} = \frac{\sum_{i=1}^{g}\sum_{j=1}^{n_i} b_{ij} f_{ij} d_{ij}}{e}$$
be the estimated mortality ratio. The denominator, e, is the known expected number of deaths. With no data we would set $\hat{m} = 1$ and use the standard table. How credible is $\hat{m}$?
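A short sketch of this estimator, with each record supplied as a (b, f, d, q_s) tuple; the data layout and the tiny illustration are my own, not from the talk:

```python
def mortality_ratio(records):
    """records: iterable of (b, f, d, qs) tuples = (amount insured,
    fraction of year exposed, death indicator, standard table rate).
    Returns (m_hat, e), where e is the expected amount-weighted deaths."""
    actual = sum(b * f * d for b, f, d, qs in records)
    e = sum(b * f * qs for b, f, d, qs in records)
    return actual / e, e

# Tiny hypothetical block: two insureds, one death.
m_hat, e = mortality_ratio([(100_000, 1.0, 0, 0.01), (50_000, 0.5, 1, 0.02)])
print(m_hat, e)
```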


Credibility for Mortality Ratios


A Limited Fluctuation Approach


First two moments

We first need the moments of the estimator:
$$\mu = E(\hat{m}) = E\left(\frac{\sum_{i=1}^{g}\sum_{j=1}^{n_i} b_{ij} f_{ij} d_{ij}}{e}\right) = \frac{\sum_{i=1}^{g}\sum_{j=1}^{n_i} b_{ij} f_{ij} q_i}{e}$$
$$\sigma^2 = Var(\hat{m}) = Var\left(\frac{\sum_{i=1}^{g}\sum_{j=1}^{n_i} b_{ij} f_{ij} d_{ij}}{e}\right) = \frac{\sum_{i=1}^{g}\sum_{j=1}^{n_i} q_i(1-q_i)(b_{ij} f_{ij})^2}{e^2}.$$


Estimating the moments

It is less volatile to use the standard table values, adjusted by the mortality ratio, to estimate each $q_i$ (that is, $\hat{q}_i = \hat{m} q_i^s$):
$$\hat{\mu} = \frac{\hat{m}\sum_{i=1}^{g}\sum_{j=1}^{n_i} q_i^s b_{ij} f_{ij}}{e} = \hat{m}$$
$$\hat{\sigma}^2 = \frac{\sum_{i=1}^{g}\sum_{j=1}^{n_i} \hat{m} q_i^s (1 - \hat{m} q_i^s)(b_{ij} f_{ij})^2}{e^2}.$$
Alternatives would be to use the sample $q_i$ values or the standard table $q_i^s$ values without adjustment.
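A sketch of these moment estimates using the same record layout as the earlier sketch, with each q_i replaced by m-hat times q_i^s as on the slide (the function name is mine):

```python
def moment_estimates(records):
    """Return (m_hat, sigma2_hat), estimating each q_i by m_hat * q_i^s."""
    actual = sum(b * f * d for b, f, d, qs in records)
    e = sum(b * f * qs for b, f, d, qs in records)
    m_hat = actual / e
    sigma2_hat = sum(m_hat * qs * (1 - m_hat * qs) * (b * f) ** 2
                     for b, f, d, qs in records) / e ** 2
    return m_hat, sigma2_hat
```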


Obtaining Z

As noted earlier, the key is to standardize the variable and thus look at (recalling that r is the error tolerance)
$$z = \frac{r\hat{m}}{\hat{\sigma}}.$$
If z is below 1.645 (for 90% confidence), then $Z = z/1.645$; else it is 1. Then the final value for the mortality ratio is
$$Z\hat{m} + (1 - Z).$$
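Putting the pieces together, a sketch of the limited-fluctuation blend toward a ratio of 1 (the 1.645 default corresponds to the 90% confidence level used here and 0.05 to the tolerance r; the function name is mine):

```python
import math

def credibility_weighted_ratio(m_hat, sigma2, r=0.05, z_full=1.645):
    """Limited-fluctuation blend of the observed mortality ratio toward 1."""
    z = r * m_hat / math.sqrt(sigma2)   # standardized value
    Z = min(z / z_full, 1.0)            # partial credibility, capped at full
    return Z * m_hat + (1 - Z) * 1.0    # complement of Z applied to a ratio of 1
```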


Credibility for Mortality Ratios


An Example


The data

11,370 males from ages 70 through 100 who had purchased charitable trust annuities were studied (thanks to Don Behan for supplying the data; some approximations were made for simplification). The data supplied counted life-years and number of deaths at each age. Because individual data was not available, values of b and f in the formulas were both set to 1. There were 782.67 expected deaths (US 2000 annuity table) and 744 actual deaths, for a mortality ratio of 0.9506.


Limited fluctuation calculation

The variance is 0.0011. Set the full credibility standard as being within 5% of the true ratio 95% of the time. The standardized variable is
$$z = 0.05(0.9506)/\sqrt{0.0011} = 1.432.$$
This is below 1.96 and therefore full credibility cannot be granted.
The partial credibility factor is $Z = 1.432/1.96 = 0.7306$ and then the credibility estimate for the mortality ratio is
$$0.7306(0.9506) + (1 - 0.7306) = 0.9638.$$
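For a check, plugging these numbers through the same steps (with the 95% standard, so 1.96 in place of 1.645) reproduces the figures above up to rounding:

```python
import math

m_hat, sigma2, r, z_full = 0.9506, 0.0011, 0.05, 1.96
z = r * m_hat / math.sqrt(sigma2)        # about 1.43, below 1.96
Z = min(z / z_full, 1.0)                 # about 0.73
print(round(z, 3), round(Z, 4), round(Z * m_hat + (1 - Z), 4))  # ~1.433, ~0.731, ~0.964
```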


Credibility for Mortality Ratios


Conclusions


The Good

This method is easy to use. It is likely the data required are available and, if not, a reasonable approximation may be.
It is not terribly arbitrary (as long as bounds on the tolerance and probability are agreed upon).
It is credibility and thus "it works."


The Bad

It is not as scientific as greatest accuracy. Some purists may object.
It is somewhat arbitrary.
Why does 1 − Z get to multiply 1?


The Ugly

Maybe this is not a credibility problem after all.


We are trying to estimate only one thing. We do
not care if over all companies the error is reduced,
only the error for our company.
Won't there be times when 1 is not the right starting point? Maybe our block is better (or worse) for reasons that are independent of the data (marketing, underwriting distinctions, etc.).


Can greatest accuracy be used?

Maybe - I am told that a standard table is being prepared based on experience from about 40 companies. That may make it possible to estimate the variance needed to place in the denominator of k.
Stay tuned.


Credibility for Mortality Ratios


A Non-credibility Approach


What is the problem we really want to solve?

Problem
What mortality ratio should we use when data are limited?

How about a two-step process (sketched in code below)?
1. If $\hat{m} \pm 1.96\hat{\sigma}$ includes 1, use the standard table.
2. If the interval is below 1, use $\hat{m} + 1.96\hat{\sigma}$; if above, use $\hat{m} - 1.96\hat{\sigma}$.

Note - this is a nod to credibility - if there is evidence that the true ratio is not 1, do not move all the way to the observed ratio.
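A minimal sketch of this two-step rule; the 1.96 default reflects the 95% interval used on these slides, and the function name is mine:

```python
import math

def two_step_ratio(m_hat, sigma2, z=1.96):
    """Use 1 unless the interval m_hat +/- z*sigma excludes 1; then move
    only to the endpoint of the interval nearest to 1."""
    half_width = z * math.sqrt(sigma2)
    low, high = m_hat - half_width, m_hat + half_width
    if low <= 1.0 <= high:
        return 1.0                      # no evidence the true ratio differs from 1
    return high if high < 1.0 else low  # nearest endpoint

# The example revisited: the interval (0.8856, 1.0156) contains 1, so use the table.
print(two_step_ratio(0.9506, 0.0011))   # 1.0
```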

More on this idea

Step 1 is simply a standard test of the null hypothesis that the true ratio is 1. Regardless of the amount of data, if there is no evidence that the true ratio is not 1, it makes sense to use the standard table.
Step 2 notes that if either there is a lot of data (in which case $\hat{\sigma}$ will be small) or the observed ratio is a long way from 1, then an adjustment is appropriate.


Example re-visited

The upper bound of the confidence interval is
$$0.9506 + 1.96\sqrt{0.0011} = 1.0156$$
and therefore there is no reason to deviate from the standard table.


The bottom line

You know more about credibility than when you came in.
You know that for setting mortality assumptions for principles-based reserves, it is likely you are not faced with a credibility problem.
But you are faced with a reliability-of-data problem.
