April 1999
Editor's Note
CreditMetrics News
Risk-return Reporting

Risk-return reports are planned for the next version of CreditManager. Currently, it is possible to extract return information and create these reports in a separate application.

Co-sponsors:
Bank of America
Bank of Montreal
Barclays Capital
Deutsche Bank
KMV Corporation
UBS AG
Bank of Tokyo-Mitsubishi
MBIA Inc.
CIBC World Markets
Moody's Investors Service
Arthur Andersen
Deloitte Touche Tohmatsu International
Ernst & Young
KPMG
Oliver, Wyman & Co. LLC
PricewaterhouseCoopers LLP
CreditMetrics
April 1999
Monitor
page 2
Editor's Note
Christopher C. Finger
RiskMetrics Group
chris.finger@riskmetrics.com
With this issue, we present our first CreditMetrics Monitor as The RiskMetrics Group. While it is a
first issue in a sense, it is also a last, as we will no longer be publishing our research in CreditMetrics
or RiskMetrics Monitors. Our next research publication will be the inaugural issue of the RMG Journal. The RMG Journal will encompass both market and credit risk research, and continue the mix of
short, practical articles with longer research pieces. While it is likely that articles on credit risk and
CreditMetrics will appear in most issues, we plan to have occasional special issues devoted solely to
these themes.
Our Users' Corner touches on a number of features that CreditManager users often request, and that are already possible with the current version but have not been explicitly incorporated into the software. The first article demonstrates how users might create risk-return reports using
CreditManager outputs along with a simple spreadsheet program, and provides an outline for how
this type of analysis will be implemented in the next version. The second article describes a common
synthetic Collateralized Loan Obligation (CLO) and utilizes CreditManager to perform a simple
worst case analysis for investors in the various tranches of this security. Finally, the third article discusses the use of multiple databases in CreditManager, a feature which may be exploited to improve
the performance of the application, and to help organize and share portfolio data.
All three of our longer articles investigate the structure of correlations in credit portfolio models. In
the first, we examine the CreditMetrics Monte Carlo approach. We observe that once the indices that
drive the portfolio have been determined, the movements of the individual obligors are conditionally
independent. Since there is a wealth of assumptions and techniques applicable only to portfolios of
independent assets, we are able to divide our Monte Carlo process. We first simulate the indices, then
rely on other analytical techniques to obtain the portfolio distribution conditional on the index values.
This has the effect of significantly reducing the dimensionality of the portfolio problem. We show
that this framework allows for closed form solutions in simple cases, and significantly improves simulation performance in more complex cases.
In the second article, Li of the RiskMetrics Group considers basket credit derivatives, a class of structures which includes the CLO treated in the Users Corner. The structures Li considers all depend on
the value of a portfolio of credit instruments; in this way, they differ from the simple credit derivatives we have treated previously, the value of which depended only on the credit standing of two
names. The author demonstrates that simple basket structures can be evaluated using the current
CreditMetrics framework. He points out that to value more complex structures, it is necessary to
model the timing of default events, and to extend the model to multiple horizons, while preserving
the existing correlation structure. Using modeling techniques from the insurance literature, he builds
the needed extensions, calibrates them to the current CreditMetrics framework, and illustrates how
to value the more complex basket structures.
In our last article, Nagpal and Bahar of Standard & Poor's take a new approach to constructing the
distribution of a portfolio of defaultable instruments. Rather than proposing a model for how defaults
occur, they begin with the available default data - default probabilities and joint default probabilities
(or default correlations) for groups of credits - and define arguably the simplest possible model that
is consistent with this data. This leads them to a family of decompositions which provide a relatively
inexpensive (in computational terms) way to obtain the entire portfolio distribution. Since a unique
decomposition is not specified, the authors present an example to illustrate the method's sensitivity
to the choice of decompositions; the example also shows that the CreditMetrics model consistent
with the same data gives results within this range of sensitivities. The authors finish with a rigorous
proof of sufficient conditions for their decomposition to exist.
CreditMetrics News
Sarah Jun Xie
RiskMetrics Group
sarah.xie@riskmetrics.com
In the fall of 1998, the RiskMetrics Group (RMG) was spun off from J.P. Morgan. J.P. Morgan and
Reuters hold minority shares of RMG. RMG, known as the Risk Management Products and Research
Group while at J.P. Morgan, is responsible for the creation and development of benchmark risk management products including RiskMetrics, CreditMetrics, and DataMetrics. Ownership of the CreditMetrics, CreditManager, RiskMetrics, and FourFifteen brands has been transferred to RMG, and all
future enhancements to the methodology, data, and software will be made by the new venture. All of the
CreditMetrics co-sponsor agreements are now also with RMG.
Risk-return Reporting
Christopher C. Finger
RiskMetrics Group
chris.finger@riskmetrics.com
One of the most common and most natural requests for future releases of the CreditManager product
is for the facility to compare the current risk outputs with an expected return measure. While this is
planned for future releases, it is possible to perform some risk-return analysis in the current version.
This article describes how a user might obtain return information using the current software to complement the existing risk outputs.¹
Risk-return analysis at the exposure level takes its most classical form in mean-variance portfolio
theory. This theory, as expressed originally by Markowitz in the 1950s, investigates the portfolio
preferences that all investors should have, given only that they would prefer higher returns and lower
risk. For a given set of investable assets, Markowitz presents an optimal set of relative holdings, or
the efficient portfolio. Generally speaking, all investors should prefer such a portfolio, differing
only in how much or how little of this efficient portfolio they wish to hold.²
However tempting it is to apply the Markowitz theory to credit, and to insist that any portfolio should
be rebalanced to be efficient, to do so would be highly impractical. First, to arrive at an efficient asset
allocation, it is necessary to have unlimited flexibility to buy and sell exposures. Such flexibility is
rarely allowed with credit portfolios, as market illiquidity and client relationships put significant constraints on the portfolio manager. Second, while an application of the Markowitz theory might offer
an optimal trade-off between return and portfolio standard deviation, it is not likely to offer an optimal trade-off between return and capital. Despite these caveats, a loose application of the theory to
credit is helpful. Though the efficient portfolio may be unattainable, it serves as an ideal reference
point from which the portfolio manager can measure a need to shed exposure or to invest more. Furthermore, although optimizing standard deviation will not lead to an optimal capital level, for most
large portfolios, any reduction in standard deviation will produce a reduction in capital.
To analyze the risk-return profile of a portfolio, the user must begin with the inputs, namely, the risk
and return of the components of the portfolio. As CreditManager automatically produces the risk
measures, it remains only to specify the expected return for each exposure in the portfolio. While this
can be a difficult and subjective task for equity portfolios, there is a simple and objective definition
of return which can be used within the CreditMetrics model. We know intuitively that the expected
return on bonds and other fixed income exposures should account for the interest received; the cost
of funding the exposure; and the expected loss due to potential credit events. CreditManager defines
the mean value of an exposure at the risk horizon to be equal to the exposure's current value, plus
cash (interest) received, plus changes due to rolling down the yield curve, plus changes due to rating
changes and default. Thus, to obtain expected returns, a user need only export the exposure's current
value and mean value, and then account for the funding cost and compute in a separate application:
expected return = (mean value − current value) / current value − cost of funding .
Alternately, to account only for default losses, a user might define the expected return as
¹ This article is partly based upon discussions with John Veidis at Fuji Bank, New York.
² An accessible reference to this theory is Elton and Gruber, Modern Portfolio Theory and Investment Analysis, 4th ed., John Wiley and Sons, Inc., 1991.
expected return = (cash received − expected loss due to default) / current value − cost of funding .
In either case, CreditManager makes the requisite information to compute expected returns available
for export. To export this information, the user runs a non-standard scatter report, and selects the
statistics discussed above. A sample definition screen for such a report is displayed below. Note that
at the bottom of the screen, the statistics needed to compute expected return have been selected, along
with the risk statistic to which the return will be compared.
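Once the statistics are exported, either return definition above is a one-line computation in a spreadsheet or a short script. A minimal Python sketch (the function and argument names are ours, not CreditManager's export format; values are as decimal fractions):

```python
def expected_return(mean_value, current_value, funding_cost):
    """First definition: change from current to mean horizon value,
    relative to current value, less the cost of funding."""
    return (mean_value - current_value) / current_value - funding_cost

def default_only_return(cash_received, expected_default_loss,
                        current_value, funding_cost):
    """Alternative definition accounting only for default losses."""
    return (cash_received - expected_default_loss) / current_value - funding_cost
```

For example, an exposure with current value 1,000,000, mean value 1,070,000, and a 5% funding cost has an expected return of 2% under the first definition.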
The following serves as an example of the process if we were to run this report on the CreditManager
sample portfolio:
Assuming a flat funding cost of 5% for each exposure, we compute each exposure's expected return.
In turn, we plot the expected returns against each exposure's contribution to the risk of the portfolio
(that is, the exposure's marginal standard deviation relative to its size), as shown in Chart 1. Notably,
the returns are tightly clustered, while the risk contributions are more spread out. In addition, the
chart illustrates little relation between the size of exposures and their risk-return trade-off; there are
large exposures near the bottom right of the chart (worse risk-return performance) as well as small
exposures closer to the top left (better risk-return performance). Both of these observations suggest
that there exists room for further reduction of risk and that, by reallocating our investments within
the existing exposures, risk would be further reduced without severely impacting the portfolio return.
Chart 1
Risk-return for CreditMetrics sample portfolio.
Exposures grouped by current value (<200k, 200k-500k, 500k-2m, >2m). Expected return (0% to 1.25%) is plotted against marginal standard deviation (0% to 2.5%).
For the purposes of contrasting the sample portfolio, we construct an improved portfolio using the
same exposures, but setting the relative holdings in these exposures proportionally to their ratio of
expected return to marginal standard deviation.³ Intuitively, the idea is simply to invest more heavily
in instruments with greater return on risk. Performing the same analysis as above produces the results
in Chart 2. In general, the risk-return trade-off for the individual exposures is similar to that in the
sample portfolio, but the large investments in the improved portfolio are concentrated in those exposures with the best risk-return trade-off.
³ This does not produce the optimal portfolio in the Markowitz sense, but it does, in general, produce a better one, that is, one with a higher ratio of return to risk.
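The reweighting rule just described can be sketched in a few lines; this is the simple heuristic of the text, not a full Markowitz optimization:

```python
def improved_weights(returns, marginal_sds):
    """Set relative holdings proportional to each exposure's ratio of
    expected return to marginal standard deviation, normalized to sum to one."""
    ratios = [r / s for r, s in zip(returns, marginal_sds)]
    total = sum(ratios)
    return [x / total for x in ratios]
```

Exposures with a higher return per unit of risk contribution simply receive proportionally larger holdings.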
Chart 2
Risk-return for improved portfolio.
Exposures grouped by current value (<200k, 200k-500k, 500k-2m, >2m); axes as in Chart 1.
While we can be reasonably sure that the improved portfolio has a lower standard deviation than
the original one, it is not certain that we have reduced any of the percentile losses, or capital, of the
original portfolio. We would expect that the capital amounts are closely associated with standard deviations, and that by reducing the portfolio standard deviation, we have reduced the worst case loss
comparably. This is in fact the case; the portfolio statistics are presented in Table 1.
Table 1
Statistics for sample and improved portfolios.
Results expressed as percentages of current portfolio value.

                        Sample    Improved
Return less funding      0.82      0.84
Standard deviation       0.46      0.25
                         0.63      0.29
                         2.06      0.99
                         4.73      3.10
This paper illustrates how CreditManager is applied to perform a Worst Loss analysis on the underlying reference portfolio of J.P. Morgan's BISTRO. The BISTRO is a synthetic Collateralized Loan
Obligation (CLO), one of a growing number of structured credit risk products being developed by
banks to address Loan Portfolio Management issues. Traditionally, once transactions are originated
for the credit portfolio, banks have adopted a buy and hold approach due to the illiquid secondary
markets for such positions. More recently, however, banks have recognized that holding credit positions to maturity results in risk/return inefficiencies from burdensome regulatory capital requirements
and relationship constraints. Solutions for eliminating these inefficiencies have come in the form of
products such as the BISTRO.
In a standard CLO, the originating bank assigns its drawn/funded loans to a Special Purpose Vehicle
(SPV), which, in turn, issues several classes of credit-tranched notes to capital market investors. Losses realized on loan transactions are passed to investors via the tranches, which represent ownership
of the transactions. Synthetic CLOs, on the other hand, make use of credit derivatives contracts to
transfer the credit risk of a loan portfolio, rather than through the sale of transactions (as the standard
CLO structure). In this way, only the risk, but not the ownership of the underlying exposures is transferred.
Arguably, J.P. Morgan's BISTRO has been the most active synthetic CLO issue. It is aimed at institutional spread investors. The BISTRO SPV (Trust) offers two levels of credit-tranched notes in addition to having an equity reserve account. Investors' proceeds are used to purchase Treasury Notes
paying a fixed coupon and maturing on the BISTRO maturity date. At the same time, the BISTRO
SPV enters into a credit default swap with Morgan Guaranty Trust (MGT) referencing the underlying
credits in a pool of companies, each with a specified notional amount. Under the terms of the swap,
MGT pays the BISTRO Trust a fixed semi-annual payment comprising the spread between the Treasury coupons and the coupons promised on the issued BISTRO Notes. In return, at the Notes' maturity the trust compensates MGT for losses experienced as a result of credit events. Investors are not
in a first-loss position with respect to this portfolio of credit risk. Payments are made by the BISTRO
Trust only after, and to the extent that, losses due to credit events have exceeded the first-loss threshold (the equity reserve account). Credit events are based on ISDA credit swap definitions, including
bankruptcy, failure to pay, cross acceleration, restructuring, and repudiation. Losses are computed either by a computation of final work-out value, for companies emerging from bankruptcy prior to maturity, or by soliciting bids from the market for senior unsecured obligations, and are allocated to the
two tranches according to seniority. To date, there have been three outstanding issuances of BISTROs. This analysis focuses on the BISTRO Trust 1997-1000. Table 1, Table 2, and Chart 1 provide
summary information on this issue.
Table 1.
BISTRO Trust 1997-1000 Tranches.

Description              Amount (US$M)   Coupon   Rating
Super-Senior Tranche     8,993                    Not Securitized
Senior Notes             460             6.35%    Aaa (Moody's)
Subordinated Notes       237             9.50%    Ba2 (Moody's)
Equity Reserve Account   32                       Not Securitized
Table 2.
BISTRO Trust 1997-1000 Summary.

Notional             US$9.72B
Reference Credits    307 Senior Unsecured Obligations of US, European, and Canadian Companies
Maturity
Collateral
Chart 1
BISTRO Structure and Cash Flow.
The BISTRO SPV stands between the reference pool (307 companies, US$9.72B notional) and the investors. Senior Tranche investors receive a 6.35% coupon and Subordinated Tranche investors a 9.50% coupon; at maturity, each receives face amount less realized losses. Investors' proceeds are used to buy Treasury Notes (the collateral, paying a 5.625% coupon) maturing with the BISTRO, and a first-loss equity reserve (0.33%) absorbs initial losses.
An institutional investor in the BISTRO Notes has exposure to the portfolio of underlying reference
credits. The exposure over the term (4 years remaining) of the BISTRO is examined using existing
functionality in CreditManager. The methodology to do this involves a few simple steps:
1. Data on the underlying credits is obtained from the Credit Derivatives Group at J.P. Morgan.
The data set includes reference names, notional amounts, credit ratings, and country and
industry classifications. Credit ratings, and country and industry classifications are necessary inputs for creating the Obligors import file.
2. A 4-year default/transition probability matrix is created manually in CreditManager using
data from Moody's credit research reports (or S&P's).
3. In preparing the Exposures file for import, the following inputs are used:
(a) Asset Type is set to Credit Def. Swap since the BISTRO can be viewed as a basket of
default swaps on the underlying reference credits.
(b) Maturity of the underlying reference asset is set equal to the maturity of the BISTRO.
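Step 2 above enters the 4-year matrix manually from rating-agency data. A common alternative, sketched below under a time-homogeneous Markov assumption (and with a purely illustrative two-state matrix, not Moody's data), is to raise a 1-year transition matrix to the fourth power:

```python
def matmul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def multi_year_matrix(one_year, years):
    """n-year transition matrix as the n-th power of the 1-year matrix,
    assuming a time-homogeneous Markov chain for rating migrations."""
    result = one_year
    for _ in range(years - 1):
        result = matmul(result, one_year)
    return result
```

With a two-state (survive/default) matrix whose annual survival probability is 0.9, the 4-year survival probability is simply 0.9 to the fourth power, and each row of the result still sums to one.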
$9.72B   $24M   $32M
Percentile Loss   10   0.1s
$42M   $92M   $267M   $648M
$34M   $84M   $259M   $640M
See for example Carty, Lea and Dana Lieberman, Corporate Bond Defaults and Default Rates 1938-1995, Moody's Investors Service, 1996.
Some CreditManager 1.0 users found that monolithic databases containing an unwieldy number of
records of obligors and exposures limited the flexibility and speed of the application. Unlike its predecessor, Version 2.0 gives the user the ability to divide a large database into several components. Thus,
the speed of the obligor and exposure editors is increased dramatically and the organization, management, and sharing of portfolio data is vastly improved.
When using multiple databases in CreditManager 2.0, the user can analyze individual databases by
using a standard configuration tool available on the CreditManager installation CD-ROM. Thus, it is
possible to share exposure and obligor data among multiple portfolios without needing to repeatedly
export and import large amounts of data from one portfolio to another. A shared copy of a central
master database can be accessed by all users on an organization's network, so that everyone works
with the same base portfolios. Version 2.0 also allows users to keep saved copies of an existing
database, to which the user can revert to undo changes.
The above is a screenshot of the first pop-up screen. The location of the new database is to be entered in the
Database Path field, as shown. When the user clicks OK, a confirmation message will pop up,
asking the user to confirm the change to the CreditManager Registry entries. Once the user has confirmed the change, she can start CreditManager 2.0 and it will use the new database.¹
The user can empty this new database, import obligors, exposures, and data, and, at any time,
revert to the default database by closing CreditManager 2.0 and then using the Reconfigure Tool. This
procedure will work for database files located on both a network server and a user's local hard disk.
¹ A problem can occur if the files within the new database are marked read-only. CreditManager 2.0 will detect such files upon launch and produce an error message. This can happen when a database is archived to a CD-ROM and then copied onto a Windows NT file system. Changing the file access permissions to read-write will solve the problem.
It is well known that the CreditMetrics model relies on Monte Carlo simulation to calculate the full
distribution of portfolio value. Taking this approach, independent scenarios are generated in which
the future credit rating of each obligor in the portfolio is known and correlations are reflected so that
highly correlated obligors, for example, default in the same scenario more frequently than less correlated obligors. In each scenario, the credit rating of the obligors determines the value of the portfolio; accumulating the value of the portfolio in each scenario allows us to estimate descriptive
statistics for the portfolio, or even to examine the shape of the distribution itself.
While the CreditMetrics Monte Carlo approach is attractive for its flexibility, it suffers from relatively slow convergence. Any statistic obtained through Monte Carlo is subject to simulation noise, but
this noise is slower to disappear in our model than it is in the case of models such as RiskMetrics,
where the distributions of individual assets are continuous. We shall see that by performing simulations as we do currently, we fail to take full advantage of the model's structure. Specifically, we will
see that once we condition on the industry factors that drive the model, all defaults and rating changes
are independent. Though prior studies (Koyluoglu and Hickman (1998), Finger (1998), and Gordy
(1998)) have addressed this fact, they have focused more on using this to facilitate comparisons between CreditMetrics and other models. Intuitively, conditional independence is a useful feature,
since there is a wealth of machinery upon which we may call to aggregate independent risks. In this
article, we will illustrate how to view the CreditMetrics model in a conditional setting, and use conditional independence to exploit a variety of established results. This will provide us with a toolbox
of techniques to improve on the existing Monte Carlo simulations.
We begin by illustrating the conditional approach with a simple example. Next, we apply three different techniques to either approximate or compute directly the conditional portfolio distribution, and
show how these techniques may be used to enhance our Monte Carlo procedure. Finally, we relax the
assumptions of the simple example, and show how these techniques may be applied in the general
case.
[1]     Z_i = wZ + √(1 − w²) ε_i ,

where Z is the (normalized) return on a common market index, ε_i is the idiosyncratic movement for
this obligor, and w is the common weight of each obligor on the market index. We assume, as always,
that Z, ε_1, ε_2, …, ε_N are independent, normally distributed random variables, with mean zero and
variance one.

¹ Thus, regardless of the number of loans N, the total size of the portfolio is one. The size of the loans is arbitrary, but this formulation will aid our exposition later.
² Throughout, we will use Φ to denote the cumulative distribution function (CDF), and φ the density function, for the standard normal distribution.
[2]     ε_i < (Φ⁻¹(p) − wZ) / √(1 − w²) .

Since ε_i follows the standard normal distribution, the probability, given Z, that Eq. [2] occurs is given by

[3]     p(Z) = Φ( (Φ⁻¹(p) − wZ) / √(1 − w²) ) .
Equivalently, we could build the correlation matrix for the Z_i and generate scenarios using the Cholesky decomposition of this matrix. We choose the other method here mostly for ease of exposition.
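The one-factor setup and the conditional default probability of Eq. [3] can be sketched directly; the parameter values in the test below are illustrative, and the default threshold is Φ⁻¹(p):

```python
import math
import random
from statistics import NormalDist

_N = NormalDist()  # standard normal: cdf, inv_cdf, pdf

def conditional_default_prob(p, w, z):
    """Eq. [3]: probability of default given the market factor Z = z."""
    return _N.cdf((_N.inv_cdf(p) - w * z) / math.sqrt(1.0 - w * w))

def simulate_default_count(p, w, n_obligors, rng):
    """One scenario: draw Z, then independent idiosyncratic terms, and
    count obligors whose asset value falls below the threshold."""
    z = rng.gauss(0.0, 1.0)
    alpha = _N.inv_cdf(p)  # default threshold
    count = 0
    for _ in range(n_obligors):
        z_i = w * z + math.sqrt(1.0 - w * w) * rng.gauss(0.0, 1.0)
        if z_i < alpha:
            count += 1
    return z, count
```

With w = 0 the conditional default probability reduces to p for any z, and a bad market draw (z well below zero) pushes it above p.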
(Chart: the unconditional density and the conditional densities for Z = −2 and Z = 2, plotted against the market factor.)
The strength of the dependence on Z is a function of the index weight w. When w is close to one
(meaning asset correlations are high), the conditional default probability is most affected by the market; when w is lower, more of the obligor randomness is due to the idiosyncratic term, and the conditional default probability is less affected. At the extreme, when w is zero, there is no dependence
on the market, and the conditional default probability is always just p, regardless of the value of Z.
See Chart 2.
Before moving on to our new portfolio approaches, we point out that the conditional framework provides us with a convenient way to decompose and compute the variance of the portfolio. In general,
we may decompose the variance of any random variable as

[4]     Var(V) = Var( E[V | Z] ) + E[ Var(V | Z) ] .

For our case, the conditional mean of the portfolio value V (recall that each loan is either worth zero
or 1/N) is (1 − p(Z)), the variance of which is just the variance of p(Z). Since the expectation of
p(Z) is p, the variance is E[p(Z)²] − p². To compute the first term, we evaluate the integral⁴

⁴ See Vasicek (1997) for this and other details of the distribution of p(Z). In that article, this distribution is referred to as the normal inverse.
[5]     E[p(Z)²] = ∫ φ(z) p(z)² dz = ∫ φ(z) Φ( (Φ⁻¹(p) − wz) / √(1 − w²) )² dz = Φ₂( Φ⁻¹(p), Φ⁻¹(p); w² ) ,

where Φ₂(·, ·; w²) is the bivariate normal CDF with correlation w². Thus, the first term in Eq. [4],
the variance of the conditional portfolio mean, is equal to Φ₂(Φ⁻¹(p), Φ⁻¹(p); w²) − p². We may think of this
as the portfolio variance that is due to moves in the market factor.
Chart 2
Conditional default probability as a function of the market factor.
Curves for w = 0%, 10%, 20%, and 30%; the conditional default probability (1% to 7%) is plotted against the market factor.
To compute the second term in Eq. [4], we utilize the conditional independence of the individual obligors. In particular, given Z, defaults are independent, and occur with probability p(Z). Thus, the
conditional variance of the value of an individual loan is p(Z)(1 − p(Z))/N². Since the loan values
are conditionally independent, the conditional portfolio variance is the sum of the individual conditional variances, or p(Z)(1 − p(Z))/N. Taking expectations and using Eq. [5] again, we see that the
mean of the portfolio conditional variance is (p − Φ₂(Φ⁻¹(p), Φ⁻¹(p); w²))/N. Putting everything together, we
see that the portfolio variance is given by

[6]     Var(V) = ( Φ₂(Φ⁻¹(p), Φ⁻¹(p); w²) − p² ) + ( p − Φ₂(Φ⁻¹(p), Φ⁻¹(p); w²) ) / N ,

where the first term is the systematic variance and the second the idiosyncratic variance.
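Eq. [6] can be checked numerically without a bivariate normal routine, since E[p(Z)²] in Eq. [5] is a one-dimensional integral over the market factor. A midpoint-rule sketch (the grid parameters are ours):

```python
import math
from statistics import NormalDist

_N = NormalDist()

def _p_given_z(p, w, z):
    """Eq. [3]: conditional default probability given the market factor."""
    return _N.cdf((_N.inv_cdf(p) - w * z) / math.sqrt(1.0 - w * w))

def portfolio_variance(p, w, n, steps=4000, lim=8.0):
    """Eq. [6]: systematic plus idiosyncratic variance, with E[p(Z)^2]
    (Eq. [5]) computed by midpoint-rule integration over the factor."""
    h = 2.0 * lim / steps
    ep2 = 0.0
    for i in range(steps):
        z = -lim + (i + 0.5) * h
        ep2 += _N.pdf(z) * _p_given_z(p, w, z) ** 2
    ep2 *= h
    systematic = ep2 - p * p        # variance of the conditional mean
    idiosyncratic = (p - ep2) / n   # mean of the conditional variance
    return systematic + idiosyncratic
```

With w = 0 this collapses to the independent-obligor variance p(1 − p)/N, and the systematic term grows with w.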
(Chart: results for N = 100, 200, 500, and 1000, plotted against the index weight w from 0% to 60%.)
This subsection draws significantly from Vasicek (1997), which provides a more rigorous treatment of the limit arguments
than is presented here.
[7]     Pr{ V < v } = Pr{ 1 − p(Z) < v } = Φ( (Φ⁻¹(p) − √(1 − w²) Φ⁻¹(1 − v)) / w ) .

Differentiating the expression above with respect to v, we obtain the probability density for the portfolio:

[8]     f(v) = (√(1 − w²) / w) · φ( (Φ⁻¹(p) − √(1 − w²) Φ⁻¹(1 − v)) / w ) / φ( Φ⁻¹(1 − v) ) .
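Eqs. [7] and [8] are closed form, needing only the normal CDF and its inverse. A sketch, together with the analytic inverse of Eq. [7] for percentile levels:

```python
import math
from statistics import NormalDist

_N = NormalDist()

def lln_cdf(v, p, w):
    """Eq. [7]: Pr{V < v} in the large-portfolio limit V = 1 - p(Z)."""
    num = _N.inv_cdf(p) - math.sqrt(1.0 - w * w) * _N.inv_cdf(1.0 - v)
    return _N.cdf(num / w)

def lln_percentile(q, p, w):
    """Portfolio value v such that Pr{V < v} = q (Eq. [7] inverted)."""
    x = (_N.inv_cdf(p) - w * _N.inv_cdf(q)) / math.sqrt(1.0 - w * w)
    return 1.0 - _N.cdf(x)
```

For p = 5% and w = 50%, this gives a median of about 97.1% and a 1st percentile of about 71.1% of current value.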
(Chart: the resulting portfolio density, plotted for portfolio values from 90% to 100%.)
(Chart: the corresponding cumulative probability, plotted for portfolio values from 80% to 100%.)
Returning to the two drawbacks of the standard Monte Carlo framework, we see that this method certainly addresses the first (computational complexity). On the other hand, while the avoidance of any
Monte Carlo techniques guarantees that the method is not subject to simulation error, there is a significant chance of error due to the method's strong assumptions. This model error is certainly enough
to suggest not using this method for capital calculations, yet it is difficult to ignore the model's simplicity.
One potential application of the LLN method is for sensitivity testing. Since the computation of percentiles is so straightforward, it is simple to investigate the effect of increasing correlation levels or
default likelihoods. Furthermore, even with more complicated portfolio distributions, it is possible
to use the LLN method for quick sensitivity testing by first calibrating the two parameters, p and w
(we have assumed away any dependence on N), such that the mean and variance of the LLN method's distribution match the mean and variance of the more complicated distribution. It is then reasonable to treat w as an effective correlation parameter, and investigate the sensitivity of percentile
levels to changes in this parameter. In this way, we use the distribution of 1 − p(Z) as a simple two-parameter family which is representative of a broad class of credit portfolios. In principle, this is similar to Moody's use of the diversity score in their ratings model for collateralized bond and loan
obligations (see Gluck (1998)).
We have seen that the conditional approach can provide useful, quick, but rough approximations of
the credit portfolio distribution. While this is helpful, we would like to exploit the technique for
something more advanced than back-of-the-envelope calculations, and in the next section, move to
less rigid assumptions and a more involved technique.
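The calibration just described reduces to a one-dimensional search: match the variance of 1 − p(Z), namely E[p(Z)²] − p², to the target portfolio variance by bisection on w. A sketch (grid and tolerance choices are ours; the variance is increasing in w):

```python
import math
from statistics import NormalDist

_N = NormalDist()

def lln_variance(p, w, steps=2000, lim=8.0):
    """Variance of V = 1 - p(Z): E[p(Z)^2] - p^2, by midpoint integration."""
    a = _N.inv_cdf(p)
    s = math.sqrt(1.0 - w * w)
    h = 2.0 * lim / steps
    ep2 = 0.0
    for i in range(steps):
        z = -lim + (i + 0.5) * h
        ep2 += _N.pdf(z) * _N.cdf((a - w * z) / s) ** 2
    return ep2 * h - p * p

def effective_w(p, target_variance, tol=1e-8):
    """Bisect for the effective correlation parameter w whose LLN
    variance matches the target."""
    lo, hi = 1e-6, 1.0 - 1e-6
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lln_variance(p, mid) < target_variance:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Percentile sensitivities can then be examined by perturbing the calibrated w.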
(Chart: the conditional portfolio mean with standard deviation bands for N = 100 and N = 1000, plotted against the market factor.)
The advantage of the CLT assumption is that conditional on the market factor, the portfolio distribution is tractable. Specifically, given Z, we know the portfolio's conditional mean, 1 − p(Z), and conditional variance, p(Z)(1 − p(Z))/N, and can then write the conditional probability that the
portfolio value V is less than some level v by
[9]     Pr_Z{ V < v } = Φ( (v − (1 − p(Z))) / √( p(Z)(1 − p(Z)) / N ) ) ,

where Pr_Z denotes the conditional probability given Z. To compute the unconditional probability
that the portfolio value falls below v, we take the expectation of the expression in Eq. [9]:

[10]    Pr{ V < v } = E[ Pr_Z{ V < v } ] = ∫ φ(z) Φ( (v − (1 − p(z))) / √( p(z)(1 − p(z)) / N ) ) dz .
Although this integral is intractable analytically, it can be evaluated numerically by standard techniques. Thus, we can compute the entire portfolio distribution, which we present along with the LLN
results in Chart 7 and Chart 8. We also present selected percentile values in Table 1. In both the chart
and the table, we present CLT results for two choices of portfolio size N; recall that in the LLN method, we do not account for N, and therefore our results will be the same for all portfolio sizes.
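The numerical evaluation of Eq. [10] can be sketched with a simple midpoint rule over the market factor (the grid size is ours; far fewer points suffice in practice):

```python
import math
from statistics import NormalDist

_N = NormalDist()

def clt_cdf(v, p, w, n, steps=2000, lim=8.0):
    """Eq. [10]: Pr{V < v} under the CLT approximation, integrating the
    conditional normal probability (Eq. [9]) over the market factor."""
    a = _N.inv_cdf(p)
    s = math.sqrt(1.0 - w * w)
    h = 2.0 * lim / steps
    total = 0.0
    for i in range(steps):
        z = -lim + (i + 0.5) * h
        pz = _N.cdf((a - w * z) / s)               # conditional default prob.
        cond_sd = math.sqrt(pz * (1.0 - pz) / n)   # conditional st. dev.
        total += _N.pdf(z) * _N.cdf((v - (1.0 - pz)) / cond_sd)
    return total * h
```

As N grows, the conditional variance vanishes and the result converges to the LLN distribution.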
Chart 7
Portfolio density function for CLT and LLN methods.
p=5%, w=50%. Curves for CLT with N=50, CLT with N=200, and LLN, plotted for portfolio values from 90% to 100%.
Chart 8
Cumulative probability in the lower tail for CLT and LLN methods.
p=5%, w=50%. Curves for CLT with N=50, CLT with N=200, and LLN, plotted for portfolio values from 60% to 80%.
Table 1
Percentile values for CLT and LLN methods.
p=5%, w=50%.

Percentile   CLT, N=50   CLT, N=200   LLN
50%          97.2%       97.2%        97.1%
10%          86.7%       87.4%        87.7%
5%           81.5%       82.5%        82.9%
1%           68.9%       70.5%        71.1%
10bp         51.4%       53.8%        54.6%
4bp          45.1%       47.6%        48.5%
We make two general observations. First, for larger values of N, the CLT and LLN methods give
more similar results; this is sensible, since it is at these values that the LLN assumption of zero conditional variance is more accurate. Second, the discrepancies between the two methods tend to be
greater at more extreme percentile levels. This is also to be expected, as the more extreme percentiles
correspond to market factor realizations where the conditional default probability, and therefore also
the conditional portfolio variance, is greater than at less extreme percentiles.
In the end, the CLT approach is a step up in complexity from the LLN case, in that we need to evaluate the integral in Eq. [10] to obtain the cumulative probability distribution for the portfolio. However, the complexity in the computation of this integral is still far less than that of the standard Monte Carlo approach. Assuming we used the same number of sample points (say 1,000) in our evaluation of Eq. [10] as we use scenarios in the Monte Carlo procedure,⁸ the number of points necessary for Monte Carlo is still N times greater than that for the CLT integration. Thus, where the Monte Carlo
[11]
E_Z e^{tV} = E_Z \exp[\, t (V_1 + \cdots + V_N) \,] = E_Z ( e^{tV_1} \cdots e^{tV_N} ).
⁸ This actually is an extreme case, as we can obtain the same precision in the evaluation of Eq. [10] with a hundred sample points as we would for the Monte Carlo procedure with several thousand sample points.

⁹ The author wishes to thank Andrew Ulmer for help with the computations for this method.

¹⁰ It is only necessary that there exist an interval containing 0 such that the equality holds for all t in this interval.

¹² For integer-valued random variables (for example, the number of obligors which default in a given period), it is more common in practice to use the probability generating function, E[t^X], from which the probability distribution is simpler to obtain. This is the approach used by Nagpal and Bahar (1999). It would be reasonable to use this approach here, but we use the MGF approach to ease our generalizations later.
[12]
E_Z e^{tV} = E_Z e^{tV_1} \, E_Z e^{tV_2} \cdots E_Z e^{tV_N} = ( E_Z e^{tV_1} )^N.
To complete the calculation of the conditional portfolio MGF, we recall that, conditional on Z, V_1 takes on the value 1/N with probability 1 - p(Z) and 0 with probability p(Z). Thus the conditional MGF for V_1 is
[13]
E_Z e^{tV_1} = (1 - p(Z)) e^{t/N} + p(Z) e^{t \cdot 0} = e^{t/N} [\, 1 - p(Z)(1 - e^{-t/N}) \,],

and raising this to the Nth power, as in Eq. [12], gives

[14]
E_Z e^{tV} = e^{t} [\, 1 - p(Z)(1 - e^{-t/N}) \,]^N.
Finally, to obtain the unconditional MGF, we take the expectation of Eq. [14] and obtain
[15]
E e^{tV} = e^{t} \int_{-\infty}^{\infty} \phi(z) \, [\, 1 - p(z)(1 - e^{-t/N}) \,]^N \, dz.
As was the case with Eq. [10], we must rely on numerical techniques to evaluate this integral. Note that in this setting, the CreditMetrics model looks similar to the CreditRisk+ model, with the only notable difference being the shape of the distribution of the conditional default probability p(Z).¹³

To obtain the portfolio distribution from Eq. [15], we apply the standard Fast Fourier Transform (FFT) inversion (see Press et al. (1992)). This technique requires sampling Eq. [15] at values of t spaced equally around the unit circle in the complex plane, that is, at t = 2πik/m, for k = 0, 1, ..., m - 1, where for fastest results, m is an integer power of 2.¹⁴ Applying this technique requires more operations than does the CLT approach, but there are no approximation errors other than the possible error in evaluating Eq. [15] and the similar error resulting from using a finite number of sample points for the FFT.
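The sampling-and-inversion step can also be sketched in pure Python for the two-state example. Rather than the portfolio value directly, the sketch works with the default count D, which, conditionally on Z, is binomial, so its conditional generating function at the k-th unit-circle point is (1 − p(Z)(1 − e^{−2πik/m}))^N; a direct O(m²) DFT stands in for the FFT purely for brevity. The one-factor form of p(Z) and the parameters p = 5%, w = 50%, N = 200 are carried over from the example; this is an illustration, not the article's implementation.

```python
import cmath
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def cond_default_prob(z, alpha, w=0.5):
    return Phi((alpha - w * z) / math.sqrt(1.0 - w * w))

def default_count_pmf(N=200, p=0.05, w=0.5, m=256, z_lim=8.0, steps=800):
    """Invert the unconditional characteristic function of the default count D."""
    alpha = -1.6448536269514722  # Phi^{-1}(0.05), hard-coded to keep the sketch short
    dz = 2.0 * z_lim / steps
    zs = [-z_lim + k * dz for k in range(steps + 1)]
    pz = [cond_default_prob(z, alpha, w) for z in zs]
    wgt = [phi(z) * dz for z in zs]
    # g[k] = E[omega^{-kD}] = integral of phi(z) (1 - p(z)(1 - omega^{-k}))^N dz
    g = []
    for k in range(m):
        om = cmath.exp(-2j * math.pi * k / m)
        g.append(sum(wg * (1.0 - pj * (1.0 - om)) ** N for pj, wg in zip(pz, wgt)))
    # Inverse DFT recovers Pr{D = d} for d = 0, ..., m-1 (m > N avoids aliasing).
    pmf = [sum(g[k] * cmath.exp(2j * math.pi * k * d / m) for k in range(m)).real / m
           for d in range(m)]
    return pmf

pmf = default_count_pmf()
mean_value = sum(q * (200 - d) / 200.0 for d, q in enumerate(pmf))
print(round(sum(pmf), 4), round(mean_value, 4))
```

The portfolio value is then (N − D)/N, and summing the recovered probabilities over D reproduces the distribution whose percentiles appear in Table 2.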
We present the results for all three techniques in Chart 9 and Table 2. There is very little discrepancy
between the three methods, and at more extreme percentiles, the MGF approach seems to agree more
closely with the LLN. Checking our results with 10,000 Monte Carlo simulations, we see that it is
impossible to conclude that any of the three methods gives incorrect results. Thus, even if the choice
between the methods is still unclear, we can be certain that all three provide significant improvements
over the slower Monte Carlo approach.
Chart 9
[Figure: relative frequency of portfolio value from 90% to 100% for the MGF, CLT, and LLN methods; p=5%, w=50%, N=200.]
Table 2
Percentile levels using MGF, CLT, and LLN methods.
p=5%, w=50%, N=200.

Percentile   MGF     CLT     LLN
50%          97.0%   97.2%   97.1%
10%          87.4%   87.4%   87.7%
5%           82.5%   82.5%   82.9%
1%           70.6%   70.5%   71.1%
0.10%        54.4%   53.8%   54.6%
0.04%        48.6%   47.6%   48.5%
To this point, we have been able to avoid Monte Carlo simulations entirely, and to exploit the structure of the CreditMetrics model in a simple case to analytically estimate the portfolio distribution.
Unfortunately, as we stray from our simple case (in particular, as we allow more market factors), the
analytical techniques are no longer practical. However, we may still exploit the conditional framework by using the techniques we describe above, and sampling the factors through a Monte Carlo
procedure. This is the subject of the next section.
[16]
Z_i = w_{i,1} Z_1 + w_{i,2} Z_2 + \sqrt{1 - w_{i,1}^2 - w_{i,2}^2} \, \epsilon_i,

with migration thresholds defined by

[17]
\alpha_i^r = \Phi^{-1}\left( \sum_{s=r}^{m} p_i^s \right), \quad r = 2, \ldots, m.
Analogously to the previous section, we observe that given the values of Z_1 and Z_2, the obligor transitions are conditionally independent.¹⁵ Let p_i^r(Z_1, Z_2) denote the conditional probability that the i-th obligor migrates to rating r, given a realization of Z_1 and Z_2. Noting that the condition that Z_i is less than a threshold \alpha_i^r becomes

[18]
\epsilon_i < \frac{\alpha_i^r - w_{i,1} Z_1 - w_{i,2} Z_2}{\sqrt{1 - w_{i,1}^2 - w_{i,2}^2}},

¹⁵ Both Gordy (1998) and Koyluoglu and Hickman (1998) point out that the assumption of independence is not restrictive, since two correlated market factors may be expressed as linear combinations of two independent factors.
we can write the conditional migration probabilities as

[19]
p_i^m(Z_1, Z_2) = \Phi\left( \frac{\alpha_i^m - w_{i,1} Z_1 - w_{i,2} Z_2}{\sqrt{1 - w_{i,1}^2 - w_{i,2}^2}} \right),

and

[20]
p_i^r(Z_1, Z_2) = \Phi\left( \frac{\alpha_i^r - w_{i,1} Z_1 - w_{i,2} Z_2}{\sqrt{1 - w_{i,1}^2 - w_{i,2}^2}} \right) - \Phi\left( \frac{\alpha_i^{r+1} - w_{i,1} Z_1 - w_{i,2} Z_2}{\sqrt{1 - w_{i,1}^2 - w_{i,2}^2}} \right), \quad r = 1, \ldots, m - 1.
With the conditional transition probabilities in hand, we have all of the information necessary to describe the conditional portfolio distribution. For instance, the conditional mean is simply

[21]
\mu(Z_1, Z_2) = \sum_{i=1}^{N} \sum_{r=1}^{m} p_i^r(Z_1, Z_2) \, v_i^r,

where v_i^r denotes the value of the i-th exposure in rating r.
The conditional variance and conditional MGF calculations are also straightforward:

[22]
\sigma^2(Z_1, Z_2) = \sum_{i=1}^{N} \left[ \sum_{r=1}^{m} p_i^r(Z_1, Z_2) (v_i^r)^2 - \left( \sum_{r=1}^{m} p_i^r(Z_1, Z_2) \, v_i^r \right)^2 \right],

[23]
E_{Z_1, Z_2} e^{tV} = \prod_{i=1}^{N} E_{Z_1, Z_2} e^{t V_i} = \prod_{i=1}^{N} \sum_{r=1}^{m} p_i^r(Z_1, Z_2) \, e^{t v_i^r}.
We are now ready to discuss conditional Monte Carlo methods for the general case.¹⁶

¹⁶ This procedure was discussed for the one-factor case by Belkin, Suchower, and Forest (1998).
[24]
\Pr\{ V < v \} = E[\, \Pr_{Z_1, Z_2}\{ V < v \} \,] = \iint \phi(z_1) \, \phi(z_2) \, \Phi\left( \frac{v - \mu(z_1, z_2)}{\sigma(z_1, z_2)} \right) dz_1 \, dz_2.
With only two market factors, it may still be practical to evaluate this integral numerically; however, as the number of factors increases, Monte Carlo methods become more attractive. The Monte Carlo procedure is as follows:

1. Generate n pairs (z_{11}, z_{12}), (z_{21}, z_{22}), ..., (z_{n1}, z_{n2}) of independent, normally distributed random variates.

2. For each pair (z_{j1}, z_{j2}), compute the conditional mean \mu_j = \mu(z_{j1}, z_{j2}) and variance \sigma_j^2 = \sigma^2(z_{j1}, z_{j2}).

3. Estimate the portfolio mean by \hat{\mu} = \frac{1}{n} \sum_{j=1}^{n} \mu_j and the portfolio variance by

[25]
\hat{\sigma}^2 = \frac{1}{n} \sum_{j=1}^{n} ( \mu_j - \hat{\mu} )^2 + \frac{1}{n} \sum_{j=1}^{n} \sigma_j^2.

4. Estimate the probability that the portfolio value falls below v by

[26]
\Pr\{ V < v \} = \frac{1}{n} \sum_{j=1}^{n} \Phi\left( \frac{v - \mu_j}{\sigma_j} \right).

Note that in step 3, we do not rely on any distributional assumptions, and admit only the error from our Monte Carlo sampling. In step 4, we utilize the CLT assumption, and our estimation is subject to error based on this as well as our sampling.
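A minimal sketch of steps 1 through 4, applied for concreteness to the one-factor, two-state portfolio of the previous section (so that the conditional mean and variance reduce to 1 − p(z) and p(z)(1 − p(z))/N); the seed and sample size are arbitrary choices, not the article's:

```python
import math
import random

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cond_default_prob(z, p=0.05, w=0.5):
    alpha = -1.6448536269514722  # Phi^{-1}(0.05)
    return Phi((alpha - w * z) / math.sqrt(1.0 - w * w))

def conditional_mc(v, N=200, n=100000, seed=7):
    """Steps 1-4: sample the factor, then apply the conditional normal (CLT) tail."""
    rng = random.Random(seed)
    var_sum, prob_sum = 0.0, 0.0
    mus = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)                          # step 1
        pz = cond_default_prob(z)
        mu_j = 1.0 - pz                                  # step 2: conditional mean
        sig2_j = pz * (1.0 - pz) / N                     # step 2: conditional variance
        mus.append(mu_j)
        var_sum += sig2_j
        prob_sum += Phi((v - mu_j) / math.sqrt(sig2_j))  # step 4 summand
    mu_hat = sum(mus) / n                                # step 3
    var_hat = sum((m - mu_hat) ** 2 for m in mus) / n + var_sum / n
    return mu_hat, var_hat, prob_sum / n

mu_hat, var_hat, tail = conditional_mc(v=0.705)
print(round(mu_hat, 3), round(tail, 4))
```

Note that each scenario contributes a smooth conditional tail probability rather than a 0/1 indicator, which is why far fewer factor samples are needed than in standard Monte Carlo.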
[27]
E e^{tV} = \iint \phi(z_1) \, \phi(z_2) \prod_{i=1}^{N} \left( \sum_{r=1}^{m} p_i^r(z_1, z_2) \, e^{t v_i^r} \right) dz_1 \, dz_2.
We implement a Monte Carlo approach to sample pairs ( zj1, z j2 ) as before, evaluate the conditional
MGF (the term in parentheses) in Eq. [27] for each pair, and aggregate the results to obtain an estimate of E e t V . Obtaining the MGF for an adequate set of t values allows us to apply the FFT inversion technique described earlier. This technique gives us the entire portfolio distribution, with
errors due only to sampling, but not to any assumptions.
Example
To illustrate the performance of our three proposed methods, we analyze an example portfolio. The portfolio comprises exposures to 500 obligors, whose ratings are distributed as in Table 3. The exposures are well diversified by size (no single exposure accounts for more than 1% of the total) and depend on two market factors; the dependence on the factors is such that the average asset correlation between obligors is about 30%. The portfolio has a mean value of 248.9 and a standard deviation of 2.56.
Table 3
Distribution of obligors across ratings.
Example portfolio.

Rating   Percent of obligors   Default prob. (%)
AAA      10                    0.02
AA       20                    0.04
A        30                    0.05
BBB      30                    0.17
BB       6                     0.98
B        4                     4.92
We apply the three methods using 625 trials in each case; for the MGF method, we use 256 values of t.¹⁷ In addition, we perform a standard Monte Carlo simulation using 50,000 scenarios. The results are presented in Charts 10 and 11 and Table 4.
Chart 10
[Figure: density of portfolio value from 240 to 254 for the example portfolio.]
Chart 11
CDF for example portfolio.
[Figure: cumulative probability, up to 6%, of portfolio value from 220 to 245 for the CLT, MGF, and Monte Carlo methods.]
Table 4
Computation times for the four methods.

Method        Time
LLN           36
CLT           46
MGF           263
Monte Carlo   1,793
Our first observation is that all of the results are quite similar. In particular, there appears to be very little difference between the CLT and MGF results, suggesting that the assumption of conditional normality is appropriate, at least for this example. This is an encouraging result and would provide significant memory and computational savings if it holds generally. Even more encouraging is that with only 625 samples of the market factors, we obtain (with CLT and MGF) a good estimate of even the four basis point (one in 2,500) percentile. Precision in the more extreme percentiles is a drawback for the LLN method (and standard Monte Carlo), as we use order statistics as estimates.¹⁸ Thus, to capture the four basis point percentile, we would need more than 2,500 trials; to capture the one basis point percentile, we would need over 10,000. If we still need to use as many trials as in the standard Monte Carlo case, there is little reason to use conditional simulations. For this reason, LLN in the general case is of little use.
With regard to the cost of the computations, the times in Table 4, while not representing optimized
implementations, serve as good indicators of the relative benefits of performing conditional simulations. Using CLT or MGF, we obtain the same precision as with Monte Carlo, with up to a factor of
forty improvement in speed. However, as the number of market factors increases, so does the sensitivity of the MGF and CLT results to the sample factor points. Further investigation is necessary into
how many market factors can be accommodated using these methods, and whether sophisticated
sampling techniques might allow these methods to be extended to more factors.
In addition, further investigation into other sensitivities of the conditional methods is necessary. In particular, we have assumed that the portfolios do not have significant obligor concentrations (such as the case where one obligor in a portfolio of 200 accounts for 10% of the total exposure). The LLN and CLT methods especially would be subject to more error in the presence of significant concentrations. Nevertheless, it is likely that for all but the most pathological cases, the error in our simulation methods is significantly less than the error due to uncertainty in the model parameters.
18
For example, the estimate for the 1st percentile using Monte Carlo (and 50,000 trials) is the 500th smallest value.
References

Belkin, Barry, Stephan Suchower, and Lawrence Forest. "A One-Parameter Representation of Credit Risk and Transition Matrices," CreditMetrics Monitor, Third Quarter, 1998.

DeGroot, Morris H. Probability and Statistics, 2nd ed. (Reading, MA: Addison-Wesley Publishing Company, 1986).

Finger, Christopher C. "Sticks and Stones," The RiskMetrics Group, LLC, 1998.

Gluck, Jeremy. "Moody's Ratings of Collateralized Bond and Loan Obligations," Conference on Credit Risk Modelling and the Regulatory Implications, Bank of England, 1998.

Gordy, Michael B. "A Comparative Anatomy of Credit Risk Models," Federal Reserve Board FEDS 1998-47, December, 1998.

Koyluoglu, Ugur, and Andrew Hickman. "Reconcilable Differences," Risk, October 1998.

Nagpal, Krishan M., and Reza Bahar. "An Analytical Approach for Credit Risk Analysis Under Correlated Defaults," CreditMetrics Monitor, April, 1999.

Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in C, 2nd ed. (Cambridge: Cambridge University Press, 1992).

Vasicek, Oldrich. "The Loan Loss Distribution," KMV Corporation, 1997.
The rapidly growing credit derivative market has created a new set of financial instruments which can be used to manage the most important dimension of financial risk: credit risk. In addition to the standard credit derivative products, such as credit default swaps and total return swaps based upon a single underlying credit risk, many new products are now associated with a portfolio of credit risks. A typical example is a product whose payment is contingent upon the time and identity of the first or second-to-default in a given credit risk portfolio. Variations include instruments whose payment is contingent upon the cumulative loss before a given time in the future. The equity tranche of a CBO/CLO is yet another variation, where the holder of the equity tranche incurs the first loss. Deductibles and stop-loss features in insurance products could also be incorporated into the basket credit derivative structure. As the CBO/CLO market continues to expand, the demand for basket credit derivative products will most likely continue to grow. For simplicity's sake, we refer to all of these products as basket credit derivatives.
In the last issue of the CreditMetrics Monitor, Finger (1998) studied how to incorporate the building block of credit derivatives, the credit default swap, into the CreditMetrics model. This article further explores the possibility of incorporating more complicated basket credit derivatives into the CreditMetrics framework.

The actual practice of basket product valuation presents a challenge. To arrive at an accurate valuation, we must study the credit portfolio thoroughly. CreditMetrics provides a framework to study the credit portfolio over one period. In this paper, we extend the framework to cover multi-period problems using hazard rate functions. The hazard rate function based framework allows us to model the default time accurately, a critical component in the valuation of some basket products, such as first or second-to-default credit derivatives.
To properly evaluate a credit portfolio, it is critical to incorporate default correlation. Surprisingly, the existing finance literature fails to adequately define or discuss default correlation, defining it only in terms of discrete events which dichotomize according to survival or non-survival at a critical period (one year, for example). We extend the framework by defining default correlation as the correlation between two survival times.
Additionally, we will explicitly introduce the concept of a copula function, which has only been used implicitly in the current CreditMetrics framework. We will give some basic properties and common copula functions. Using a copula function and the marginal distribution of the survival time of each credit in a credit portfolio, we will build a multivariate distribution of the survival times. Using this distribution, we can value any basket credit derivative structure.
The remainder of this article is organized as follows: First, we present a description of some typical
basket credit derivative instruments. Second, we explore the characterization of default using hazard
rate functions. Third, we correlate credit risks using copula functions. Finally, we present numerical
examples of basket valuation.
Instrument Description
There are a variety of basket type credit derivative products, which we classify into the following
three broad categories.
[1]
P = \begin{cases} 0, & \text{if } L_t \le D, \\ L_t - D, & \text{if } L_t > D. \end{cases}
We could also have a stop-loss type payoff function, in which the total loss up to a given amount is paid by the contract purchaser. In this case, as in the basic case, the payment is made at the end of the period and depends only upon the cumulative loss at period end. Thus, this type of product is similar to a European option.
Type II. Payment is associated with the cumulative loss across time

For this type of contract, the payment is associated with the evolution of the loss function across time. For example, the contract could be structured in such a way that the payment starts if the cumulative loss of a given credit portfolio becomes larger than a lower limit L, and the payment continues after this point whenever a new loss occurs, until the cumulative loss reaches an upper limit H. Suppose we have a portfolio which has losses of 10, 13, 8, 0, 19 (in millions) in its first 5 years, and a lower limit of $15 million and an upper limit of $45 million. The payments for the loss protection would then be 0, 8, 8, 0, 14.
Table 1
An Example of Loss and Payment of a Type II Product.

Year              1    2    3    4    5
Loss              10   13   8    0    19
Cumulative loss   10   23   31   31   50
Payment           0    8    8    0    14
The loss and payment of this product are shown in Table 1, which illustrates the necessity of tracking the loss function L_t at different times in the future for this product type.
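The payment rule above is easy to mechanize: each period's payment is the part of the new cumulative loss that falls inside the layer between the lower limit L and the upper limit H. A short sketch reproducing the numbers in Table 1:

```python
def layer_payments(losses, lower, upper):
    """Yearly payments of a loss layer attaching at `lower` and exhausting at `upper`."""
    payments = []
    cum_prev = 0
    for loss in losses:
        cum = cum_prev + loss
        # Payment = overlap of (cum_prev, cum] with the layer (lower, upper].
        pay = max(0, min(cum, upper) - max(cum_prev, lower))
        payments.append(pay)
        cum_prev = cum
    return payments

print(layer_payments([10, 13, 8, 0, 19], lower=15, upper=45))
# -> [0, 8, 8, 0, 14] (in millions), matching Table 1
```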
Type III. Payment is associated with the time and identity of the default

The payment of this product depends upon the time and identity of the first few defaults of a given credit portfolio. For example, a first-to-default contract pays an amount either prefixed or associated with the identity of the first defaulted asset in a given credit portfolio. Suppose a portfolio manager chooses a portfolio of three credit risks with face values of $100, $150 and $200 million, respectively, and buys credit protection against the first-to-default of the portfolio. If one of the credits defaults, she receives par less the recovery amount of the defaulted asset.
[Figure: a first-to-default structure. The credit protection seller pays the protection buyer par less the recovery amount following the first default among the reference assets AA, BB, CC.]
It is generally best to use credit risks with similar credit ratings and notional amounts. Otherwise, a very weak credit risk or a credit with a large notional amount would dominate the basket pricing.
We choose the current time as the time origin to allow use of current market information to build
credit curves. The time scale is defined in terms of years for continuous models, or number of periods
for discrete models. For defaults, we must select from the various definitions provided by the rating
agencies and the International Swap Dealers Association.
Survival Function
Consider a credit A, the survival time of which is T_A. For notational simplicity, we omit the subscript A. The probability distribution of the survival time T can be specified by the distribution function

[2]
F(t) = \Pr\{ T \le t \},

which gives the probability that default occurs before time t. The corresponding probability density function is f(t) = dF(t)/dt. Either specification can be used, whichever proves more convenient. In studying survival data, it is useful to define the survival function
[3]
S(t) = 1 - F(t) = \Pr\{ T > t \},

giving the upper tail area of the distribution, that is, the probability that the credit survives to time t. Usually F(0) = 0, which implies S(0) = 1, since the survival time is a positive random variable. The distribution function F(t) and the survival function S(t) provide two mathematically equivalent ways of specifying the distribution of the survival time. There are, of course, other alternative ways of characterizing the distribution of T. The hazard rate function, for example, gives the credit's default probability over the time interval [x, x + t], given that it has survived to time x:
[4]
\Pr\{ x < T \le x + t \mid T > x \} = \frac{F(x + t) - F(x)}{1 - F(x)} \approx \frac{f(x)\, t}{1 - F(x)}.

This motivates the definition of the hazard rate function,

[5]
h(x) = \frac{f(x)}{1 - F(x)},
which can also be expressed in terms of the survival function as

[6]
h(x) = \frac{f(x)}{1 - F(x)} = -\frac{S'(x)}{S(x)}.
The survival function can then be expressed in terms of the hazard rate function as

[7]
S(t) = \exp\left( -\int_0^t h(s) \, ds \right).
Using the hazard rate function, we can calculate various default and survival probabilities. For example, we can calculate the conditional probability that a credit defaults during the following t years, given that it has survived to year x:

[8]
{}_t q_x = 1 - \exp\left( -\int_0^t h(x + s) \, ds \right).
If t = 1, the series {}_t q_x for x = 0, 1, ..., n gives the conditional default probability of the credit in the next year, given that it survives to year x.³ The probability density function of the survival time of a credit can also be expressed using the hazard rate function as follows:

[9]
f(t) = h(t) \exp\left( -\int_0^t h(s) \, ds \right).
A typical assumption is that the hazard rate is a constant, h, over a certain period, such as [x, x + 1]. In this case, the density function is

[10]
f(t) = h e^{-ht},

which shows that the survival time follows an exponential distribution with parameter h. Under this assumption, the survival probability over the time interval [x, x + t], for 0 < t \le 1, is

[11]
{}_t p_x = \exp\left( -\int_0^t h(s) \, ds \right) = e^{-ht} = ( p_x )^t,

where p_x is the probability of survival over a one-year period. This assumption can be used to scale down the one-year default probability to a default probability over a time interval of less than one year.
³ The aforementioned result can be found in survival analysis books, such as Bowers (1986).
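Under the constant-hazard assumption, Eq. [11] is a one-line computation. The sketch below uses a hypothetical one-year default probability of 5% (the value is arbitrary, chosen only for illustration):

```python
import math

def hazard_from_annual_default_prob(q1):
    """Constant hazard h implied by a one-year default probability q1."""
    return -math.log(1.0 - q1)

def default_prob_over(t, q1):
    """Eq. [11]: survival over t years is (p_x)^t, so default prob is 1 - (1 - q1)^t."""
    return 1.0 - (1.0 - q1) ** t

q1 = 0.05                       # hypothetical one-year default probability
h = hazard_from_annual_default_prob(q1)
half_year = default_prob_over(0.5, q1)
print(round(h, 6), round(half_year, 6))
```

Note that the six-month default probability is slightly more than half the annual one, since survival probabilities, not default probabilities, scale exponentially.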
[12]
{}_{n+1} q_x = {}_n q_x + {}_n p_x \cdot q_{x+n},

which simply states that the probability of default over the time interval [0, n+1] is the sum of the probability of default over the time interval [0, n] and the probability of surviving to the end of the n-th year and then defaulting during the following year. Using Eq. [12], we have the marginal conditional default probability

[13]
q_{n+1} = \frac{ {}_{n+1} q_x - {}_n q_x }{ 1 - {}_n q_x },

which results in marginal conditional default probabilities in years 2, 3, 4, 5 of 7.12%, 7.05%, 6.36% and 5.90%. If we assume a piecewise constant hazard rate function over each year, then we can obtain the hazard rate function using Eq. [8]. The hazard rate function is given in Chart 2.
Chart 2
[Figure: the hazard rate function, declining from about 7.5% toward 6% over the period.]
This hazard rate function is a decreasing function of time, which implies that the further into the future, the lower the marginal default probability. Intuitively, a B-rated credit tends to be upgraded rather than downgraded, as grade B is the lowest grade in Carty and Lieberman's (1997) cumulative default rate table.
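Eqs. [12] and [13] amount to a simple bootstrap: from cumulative default probabilities we recover the marginal conditional probabilities, and from those the piecewise constant hazard rates via h_n = −ln(1 − q_n). The cumulative probabilities below are hypothetical inputs chosen only to show the mechanics; they are not the Carty and Lieberman figures.

```python
import math

def marginal_conditional(cum_probs):
    """Eq. [13]: q_{n+1} = (cum_{n+1} - cum_n) / (1 - cum_n)."""
    out = []
    prev = 0.0
    for c in cum_probs:
        out.append((c - prev) / (1.0 - prev))
        prev = c
    return out

def piecewise_hazards(cond_probs):
    """Piecewise constant hazard over each year: h_n = -ln(1 - q_n)."""
    return [-math.log(1.0 - q) for q in cond_probs]

cum = [0.070, 0.136, 0.197, 0.248, 0.292]   # hypothetical cumulative default probs
q = marginal_conditional(cum)
h = piecewise_hazards(q)

# The hazards recompose the cumulative curve exactly (telescoping product):
surv = 1.0
for hn in h:
    surv *= math.exp(-hn)
print(round(1.0 - surv, 6))
```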
Merton's option-based structural models could also produce a term structure of default rates, from which we can derive the hazard rate function.
Alternatively, we can take the implicit approach of using market observable information, such as asset swap spreads or risky corporate bond prices. This is the approach used by most credit derivative trading desks. The extracted default probabilities reflect the market-agreed perception today about the future default tendencies of the underlying credit. For details about this approach, we refer to Li (1998).⁵

⁵ The hazard rate function is usually called a credit curve due to the analogy between the yield curve and the hazard rate function.
We see that a copula function is just a cumulative multivariate uniform distribution function. For given univariate marginal distribution functions F_1(x_1), F_2(x_2), ..., F_m(x_m), the function

[15]
F(x_1, x_2, \ldots, x_m) = C( F_1(x_1), F_2(x_2), \ldots, F_m(x_m) ),

which is defined using a copula function C, results in a multivariate distribution function with univariate marginal distributions as specified.
This property can be shown as follows:

[16]
C( F_1(x_1), F_2(x_2), \ldots, F_m(x_m) )
= \Pr\{ U_1 \le F_1(x_1), U_2 \le F_2(x_2), \ldots, U_m \le F_m(x_m) \}
= \Pr\{ F_1^{-1}(U_1) \le x_1, F_2^{-1}(U_2) \le x_2, \ldots, F_m^{-1}(U_m) \le x_m \}
= \Pr\{ X_1 \le x_1, X_2 \le x_2, \ldots, X_m \le x_m \}.

The marginal distribution of X_i follows from setting the other arguments to their upper limits:

C( F_1(\infty), F_2(\infty), \ldots, F_i(x_i), \ldots, F_m(\infty) )
= \Pr\{ X_1 \le \infty, \ldots, X_i \le x_i, \ldots, X_m \le \infty \}
= \Pr\{ X_i \le x_i \}.
Sklar (1959) established the converse. He showed that any multivariate distribution function F can be written in the form of a copula function. He proved the following: if F(x_1, x_2, \ldots, x_m) is a joint multivariate distribution function with univariate marginal distribution functions F_1(x_1), F_2(x_2), \ldots, F_m(x_m), then there exists a copula function C(u_1, u_2, \ldots, u_m) such that
The function also contains correlation information which we do not explicitly express here for simplicitys sake.
[17]
F(x_1, x_2, \ldots, x_m) = C( F_1(x_1), F_2(x_2), \ldots, F_m(x_m) ).

If each F_i is continuous, then C is unique.⁷ Thus, copula functions provide a unifying and flexible way to study multivariate distributions.
For simplicity's sake, we discuss only the properties of the bivariate copula function C(u, v, \rho) for uniform random variables U and V defined over the area \{ (u, v) \mid 0 < u \le 1, 0 < v \le 1 \}, where \rho is a correlation parameter. We call \rho simply a correlation parameter since it does not necessarily equal the usual correlation coefficient defined by Pearson, nor Spearman's Rho, nor Kendall's Tau.⁸ The bivariate copula function has the following properties:

Since U and V are positive random variables, C(0, v, \rho) = C(u, 0, \rho) = 0.

Since U and V are bounded above by 1, the marginal distributions can be obtained as follows: C(1, v, \rho) = v, C(u, 1, \rho) = u.

For independent random variables U and V, C(u, v, \rho) = uv.
Frechet (1951) showed that there exist upper and lower bounds for a copula function:

[19]
\max\{ 0, u + v - 1 \} \le C(u, v) \le \min\{ u, v \}.
One well-known example is the Frank copula function,

[20]
C(u, v) = \frac{1}{\alpha} \ln\left[ 1 + \frac{(e^{\alpha u} - 1)(e^{\alpha v} - 1)}{e^{\alpha} - 1} \right], \quad -\infty < \alpha < \infty.
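The marginal conditions and the Frechet bounds are easy to verify numerically for a concrete family. The sketch below checks them for the copula of Eq. [20] on a grid, with an arbitrary parameter value of 3:

```python
import math

def frank(u, v, a=3.0):
    """Copula of Eq. [20]; a is the (arbitrary) dependence parameter."""
    num = (math.exp(a * u) - 1.0) * (math.exp(a * v) - 1.0)
    return math.log(1.0 + num / (math.exp(a) - 1.0)) / a

grid = [i / 20.0 for i in range(1, 21)]          # u, v in (0, 1]
for u in grid:
    for v in grid:
        c = frank(u, v)
        # Frechet bounds of Eq. [19], with a little floating-point slack
        assert max(0.0, u + v - 1.0) - 1e-12 <= c <= min(u, v) + 1e-12
    assert abs(frank(u, 1.0) - u) < 1e-12        # marginal property C(u, 1) = u
    assert abs(frank(1.0, u) - u) < 1e-12        # marginal property C(1, v) = v
print("checks passed")
```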
Another example is the bivariate normal copula function,

[21]
C(u, v) = \Phi_2( \Phi^{-1}(u), \Phi^{-1}(v), \rho ),

where \Phi_2 is the bivariate normal distribution function with correlation coefficient \rho, and \Phi^{-1} is the inverse of a univariate normal distribution function. As we shall see later, this is the copula function used in CreditMetrics.
If \rho \le 0 we have

[23]
C(u, v) = (1 + \rho) uv - \rho (u - 1 + v) \, \Theta(u - 1 + v), \quad \rho \le 0,

where

[24]
\Theta(x) = \begin{cases} 1, & \text{if } x > 0, \\ 0, & \text{if } x \le 0, \end{cases}

is an indicator function.
In terms of the copula function, Spearman's Rho and Kendall's Tau can be written as

[25]
\rho_s = 12 \iint [\, C(u, v) - uv \,] \, du \, dv,

and

[26]
\tau = 4 \iint C(u, v) \, dC(u, v) - 1.

Comparisons between results using different copula functions should be based on either a common Spearman's Rho or a common Kendall's Tau.
Further examination of copula functions can be found in the survey paper by Frees and Valdez (1998).
[27]
\Pr\{ Z_1 < Z_A, Z_2 < Z_B \} = \int_{-\infty}^{Z_A} \int_{-\infty}^{Z_B} \phi_2(x, y, \gamma) \, dx \, dy = \Phi_2( Z_A, Z_B, \gamma ),

where \phi_2(x, y, \gamma) is the standard bivariate normal density function with correlation coefficient \gamma, and \Phi_2 is the bivariate cumulative normal distribution function.
If we use a bivariate normal copula function with a correlation parameter \rho, and denote the survival times for A and B as T_A and T_B, the joint default probability of A and B, which is the probability that T_A < 1 and T_B < 1, can be calculated using

[28]
\Pr\{ T_A < 1, T_B < 1 \} = \Phi_2( \Phi^{-1}(F_A(1)), \Phi^{-1}(F_B(1)), \rho ).

We see that Eq. [27] and Eq. [28] give the same joint default probability over a one-year period if \rho = \gamma.
We can conclude that CreditMetrics uses a bivariate normal copula function with the asset correlation as the correlation parameter in the copula function. Thus, to generate the survival times of two credit risks, we use a bivariate normal copula function with correlation parameter equal to the CreditMetrics asset correlation. We note that this correlation parameter is not the correlation coefficient between the two survival times T_A and T_B; the correlation coefficient between T_A and T_B is much smaller than the asset correlation. Conveniently, the marginal distribution of any subset of an n-dimensional normal distribution is still a normal distribution. Using asset correlations, we can construct high-dimensional normal copula functions to model a credit portfolio.
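To evaluate Eq. [28] numerically, the bivariate normal distribution function can be reduced to a one-dimensional integral, Φ₂(a, b, ρ) = ∫ φ(x) Φ((b − ρx)/√(1 − ρ²)) dx over x up to a. The sketch below computes a joint one-year default probability for two credits with hypothetical one-year default probabilities of 2% and 5% and a correlation parameter of 30%:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi_inv(u):
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Phi2(a, b, rho, steps=4000, lim=10.0):
    """Bivariate normal CDF via one-dimensional midpoint integration."""
    if rho == 0.0:
        return Phi(a) * Phi(b)
    s = math.sqrt(1.0 - rho * rho)
    dx = (a + lim) / steps
    total = 0.0
    for k in range(steps):
        x = -lim + (k + 0.5) * dx
        total += phi(x) * Phi((b - rho * x) / s) * dx
    return total

def joint_default_prob(qa, qb, rho):
    """Eq. [28] with F_A(1) = qa and F_B(1) = qb."""
    return Phi2(Phi_inv(qa), Phi_inv(qb), rho)

print(round(joint_default_prob(0.02, 0.05, 0.3), 6))
```

With positive correlation, the joint default probability exceeds the 0.1% obtained under independence, but stays well below either marginal probability.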
Valuation Method
Suppose now that we are to study an n-credit portfolio problem. For each credit i in the portfolio, we have constructed a credit curve or hazard rate function h_i for its survival time T_i, based either on historical default experience or on the implicit approach. We assume that the distribution function of T_i is F_i(t). Using a copula function C, we also obtain the joint distribution of the survival times T_1, T_2, \ldots, T_n as follows:

[30]
F(t_1, t_2, \ldots, t_n) = C( F_1(t_1), F_2(t_2), \ldots, F_n(t_n) ).
If we use the multivariate normal copula function, we have

[31]
F(t_1, t_2, \ldots, t_n) = \Phi_n( \Phi^{-1}(F_1(t_1)), \Phi^{-1}(F_2(t_2)), \ldots, \Phi^{-1}(F_n(t_n)) ),

which corresponds to the transformed variables

[32]
Y_1 = \Phi^{-1}(F_1(T_1)), \; Y_2 = \Phi^{-1}(F_2(T_2)), \; \ldots, \; Y_n = \Phi^{-1}(F_n(T_n)).

To simulate the survival times, we simulate Y_1, Y_2, \ldots, Y_n from an n-dimensional normal distribution with the asset correlation matrix, and then obtain T_1, T_2, \ldots, T_n using T_i = F_i^{-1}(\Phi(Y_i)), i = 1, 2, \ldots, n.

Given full information on the default times and identities, we can track any loss function, such as the cumulative loss over a certain period. We can also price first or second-to-default structures, since the default times T_1, T_2, \ldots, T_n can easily be sorted and ranked.
In the special case of independence among T_1, T_2, \ldots, T_n, the first-to-default can be valued analytically. Let us denote the survival time for the first-to-default of the n credits as T, i.e.

[33]
T = \min\{ T_1, T_2, \ldots, T_n \}.

With constant hazard rates, the hazard rate for T is the sum of the individual hazard rates,

[34]
h_T = h_1 + h_2 + \cdots + h_n.

Consider a contract which pays one dollar when the first default of the n credits occurs within 2 years, and assume the yield is a constant r, so that the present value of the payment is Z = e^{-rT}. The survival time for the first-to-default has the density function f(t) = h_T e^{-h_T t}, so the value of the contract can be calculated as

[35]
V = \int_0^2 e^{-rt} f(t) \, dt = \int_0^2 e^{-rt} h_T e^{-h_T t} \, dt = \frac{h_T}{r + h_T} \left( 1 - \exp[\, -2 (r + h_T) \,] \right).
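Under the independence and constant-hazard assumptions, Eq. [35] is a closed form, and it can be cross-checked by integrating the density numerically. The hazard rates and yield below are hypothetical:

```python
import math

def first_to_default_value(hazards, r, horizon=2.0):
    """Eq. [35]: value of $1 paid at the first default within the horizon."""
    h_t = sum(hazards)                   # Eq. [34]: hazard of the minimum
    return h_t / (r + h_t) * (1.0 - math.exp(-horizon * (r + h_t)))

def first_to_default_value_numeric(hazards, r, horizon=2.0, steps=100000):
    """Integrate e^{-rt} h_T e^{-h_T t} over [0, horizon] as a check."""
    h_t = sum(hazards)
    dt = horizon / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * dt               # midpoint rule
        total += math.exp(-r * t) * h_t * math.exp(-h_t * t) * dt
    return total

hz = [0.02, 0.03, 0.01, 0.04, 0.02]      # hypothetical constant hazard rates
v_closed = first_to_default_value(hz, r=0.05)
v_numeric = first_to_default_value_numeric(hz, r=0.05)
print(round(v_closed, 6), round(v_numeric, 6))
```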
However, Eq. [34] will not necessarily hold if T_1, T_2, \ldots, T_n are not independent. For example, consider the bivariate exponential distribution, the joint survival function of which is given by

[36]
S(t_1, t_2) = \Pr\{ T_1 > t_1, T_2 > t_2 \} = \exp[\, -\lambda_1 t_1 - \lambda_2 t_2 - \lambda_{12} \max(t_1, t_2) \,],

where \lambda_1, \lambda_2, \lambda_{12} > 0 are three parameters. The marginal survival functions for T_1, T_2 are S_1(t_1) = \exp[\, -(\lambda_1 + \lambda_{12}) t_1 \,] and S_2(t_2) = \exp[\, -(\lambda_2 + \lambda_{12}) t_2 \,]. The covariance of the survival times is

\mathrm{Cov}[\, T_1, T_2 \,] = \frac{\lambda_{12}}{(\lambda_1 + \lambda_2 + \lambda_{12})(\lambda_1 + \lambda_{12})(\lambda_2 + \lambda_{12})},

which implies that T_1, T_2 are not independent if \lambda_{12} \ne 0. It can also be shown that the hazard rate for the minimum of T_1, T_2 is \lambda_1 + \lambda_2 + \lambda_{12}, instead of \lambda_1 + \lambda_2 as in the case of independence.

Hence, for a credit portfolio with a large number of correlated credit risks, we still resort to the Monte Carlo simulation scheme we outlined in this section.
Numerical Examples
The first example illustrates how to value a Type (I) first loss credit derivatives using the CreditManager application. The second shows how to value a first-to-default structure on a portfolio of five
credits.
Example 1
CreditManager uses a simulation approach to obtain the distribution of a credit portfolio's value at the end of a time horizon, such as one year. Using this distribution, we can assess the possible values of the portfolio in the future. Using more detailed reports, we can also express credit risk broken down by country, industry, maturity, rating, product type, or any other category of credit exposure. Thus, managers can identify different pockets, or concentrations, of risk within a single portfolio, or across an entire firm, and take appropriate action to actively manage the portfolio. As the credit derivative market develops, portfolio managers can use the new credit derivative instruments to accomplish their goals.
As an example, we run a simulation on the CreditManager sample portfolio. The simulation summary is given in Chart 3. We see that the sample portfolio is a good credit portfolio in the sense that the distribution of the portfolio value is concentrated around the mean and has a relatively small standard deviation. Suppose the portfolio manager nevertheless wants to buy a credit derivative to protect the portfolio from declining to a level below C; how much should be paid for the protection?
The payoff function of the structure is

[39]    P = 0        if X > C,
            C - X    if X <= C,

where X is the portfolio value at the end of one year. The expected value of P is
[40]    E[P] = Integral_0^C (C - x) f(x) dx,

where f(x) is the probability density function of the sample portfolio value at the end of 1 year.
Chart 3
Simulated distribution of the CreditManager sample portfolio.
To calculate the premium, we need only evaluate Eq. [40] and discount the result to the present. For simplicity's sake, we assume the discount factor is 1.0. CreditManager can export the density function of the simulated portfolio value into a spreadsheet, from which we can evaluate Eq. [40].
If the portfolio manager wants to protect the portfolio value from falling below the 10th percentile loss from the mean value of $128.41 million, the premium she needs to pay is approximately $62,547. The premium to protect against other percentile losses can be calculated similarly; the results are depicted in Chart 4.
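As a sketch of this calculation, the snippet below evaluates the discrete analogue of Eq. [40] on an exported density. The bucket values, probabilities, and protection level C here are hypothetical stand-ins for an actual CreditManager export:

```python
# Hypothetical exported density: portfolio values x_k (in $MM) and
# their simulated probabilities f_k. These are illustrative only.
values = [120.0, 122.0, 124.0, 126.0, 128.0, 130.0, 132.0]
probs  = [0.01, 0.03, 0.08, 0.18, 0.40, 0.25, 0.05]

C = 125.0   # hypothetical protection level, in $MM

# Discrete version of Eq. [40]: E[P] = sum over buckets at or below C
# of (C - x_k) * f_k; discount factor assumed to be 1.0 as in the text.
premium_mm = sum((C - x) * f for x, f in zip(values, probs) if x <= C)
premium = premium_mm * 1e6   # convert $MM to dollars
print(premium)               # 220000.0 for these hypothetical inputs
```

In practice the exported density has many more buckets; the calculation is otherwise identical to the spreadsheet one described above.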
Example 2
The second example shows how to value a first-to-default contract. We assume we have a portfolio of n credits. For simplicity's sake, we assume each credit has a constant hazard rate of h for 0 < t < infinity. From Eq. [10] we know the density function for the survival time is f(t) = h exp[-ht]. This shows that the survival time is exponentially distributed with mean E[T] = 1/h. We also assume that the n credits have a constant pairwise asset correlation rho.9

9 To have a positive definite correlation matrix, the constant correlation coefficient has to satisfy the condition rho > -1/(n - 1).
Chart 4
Premium required to protect against various percentile losses (2% through 10%).
The contract is a two-year transaction which pays one dollar if the first default occurs during the first two years. We also assume a constant interest rate of 10%. If all the credits in the portfolio are independent, the hazard rate of the minimum survival time is h_T = nh and the contract value is given by Eq. [35].

We choose the following basic set of parameters: n = 5, h = 0.1, rho = 0.25, r = 0.1.
First, we examine the impact of the number of assets on the value of the first-to-default contract. If there is only one asset, the value of the contract should be 0.1648. As the number of assets increases, the chance that there is one default within the two years also grows, as does the value of the first-to-default contract. Chart 5 shows how the value of the first-to-default changes with the number of assets in the portfolio. We see that the value of the first-to-default contract increases at a decreasing rate. When the number of assets increases to 15, the value of the contract becomes 0.7533. From Chart 5 we also see that the impact of the number of assets on the value of the first-to-default decreases as the default correlation increases.
Second, we examine the impact of correlation on the value of the first-to-default contract of 5 assets. If rho = 0, the expected payoff function, based on Eq. [35], should give a value of 0.5823. Our simulation of 50,000 runs gives a value of 0.5830. If all 5 assets are perfectly correlated, then the first-to-default of 5 assets should be the same as the first-to-default of 1 asset, since any one default induces all the others to default. In this case the contract should be worth 0.1648. Our simulation of 50,000 runs produces a result of 0.1638. Chart 6 shows the relationship between the value of the contract and the constant correlation coefficient. We see that the value of the contract decreases as the correlation increases. We also examine the impact of correlation on the value of the first-to-default of 20 assets in Chart 6. As expected, the first-to-default of 5 assets has the same value as the first-to-default of 20 assets when the correlation approaches 1.
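The two limiting cases above follow from Eq. [35]. The intermediate, correlated case can be sketched with a one-factor normal (asset) correlation structure in the spirit of the simulation described in this article; the sketch below is an illustrative reconstruction rather than the actual CreditManager implementation, and the seed and run count are arbitrary:

```python
import math, random

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ftd_mc(n=5, h=0.1, rho=0.25, r=0.1, maturity=2.0, runs=20000, seed=7):
    # One-factor structure: Z_i = sqrt(rho)*M + sqrt(1-rho)*e_i, so that
    # corr(Z_i, Z_j) = rho; then T_i = -ln(Phi(Z_i))/h is exponential
    # with rate h, and the Z's impose the dependence between the T's.
    rng = random.Random(seed)
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    total = 0.0
    for _ in range(runs):
        m = rng.gauss(0.0, 1.0)
        tau = min(-math.log(max(phi(a * m + b * rng.gauss(0.0, 1.0)), 1e-300)) / h
                  for _ in range(n))
        if tau < maturity:
            total += math.exp(-r * tau)   # $1 discounted from the default time
    return total / runs

print(ftd_mc())   # lies between the two limits 0.1648 (rho=1) and 0.5823 (rho=0)
```

As the text observes, the estimate should fall between the perfectly-correlated value (0.1648) and the independent value (0.5823), and should decrease as rho increases.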
Chart 5
Value of first-to-default as a function of the number of assets, for correlation levels of 25% and 50%.
Chart 6
Value of first-to-default as a function of correlation level. 5 and 20 asset baskets.
References
Gupton, Greg M., Christopher C. Finger, and Mickey Bhatia. CreditMetrics -- Technical Document, New York: Morgan Guaranty Trust Co., 1997.

Bowers, Newton et al. Actuarial Mathematics, The Society of Actuaries, 1986.

Carty, Lea and Dana Lieberman. Historical Default Rates of Corporate Bond Issuers, 1920-1996, Moody's Investors Service, January 1997.

Finger, Christopher. Credit Derivatives in CreditMetrics, CreditMetrics Monitor, 3rd Quarter 1998.

Sklar, A. Fonctions de Repartition a n Dimensions et Leurs Marges, Publications de l'Institut de Statistique de l'Universite de Paris, 8:229-231, 1959.

Dall'Aglio, G. Frechet Classes and Compatibility of Distribution Functions, Symp. Math., 9:131-150, 1972.

Frees, Edward W. and Emiliano A. Valdez. Understanding Relationships Using Copulas, North American Actuarial Journal, Vol. 2, Num. 1, 1-25, 1998.

Lehmann, E. L. Some Concepts of Dependence, Annals of Mathematical Statistics, 37, 1137-1153, 1966.

Li, David X. Constructing a Credit Curve, Credit Risk, A Risk Special Report, 40-44, 1998.
Reza Bahar
Standard and Poors
rbahar@mcgraw-hill.com
Accurate and efficient ways of modeling credit risk are extremely important for capital allocation and portfolio management. Credit events often exhibit significant correlations, and ignoring them in analyses such as VaR could produce misleading answers. In this paper we develop a framework in which correlated credit events such as defaults can be studied and analyzed in closed form, without the need for simulations. Under some mild assumptions, the proposed approach allows us to obtain explicitly the loss distribution of a portfolio of assets using the default probabilities of each asset and the default correlation between every pair of assets. Since there are usually an infinite number of loss distributions consistent with the given default rates and correlations, we also provide a computationally simple approach to obtain a family of possible loss distributions consistent with the given default data. Such a parameterization may be useful for obtaining the range of possible losses a portfolio could experience under the given assumptions on default rates and correlations. From a theoretical perspective, the proposed approach provides a relatively simple way to obtain the probabilities of all outcomes involving discrete random variables from knowledge of only the first- and second-order statistics.
Introduction
In a portfolio approach to modeling credit risk, it is important to take into account the correlated nature of credit events. Correlations between credit events are difficult to obtain empirically due to the events' infrequency. Even when reliable information about the correlations is available, obtaining a multi-asset-type portfolio's loss distribution (i.e., the probability of each possible loss amount) can be quite daunting.
There are a number of credit risk management approaches. Two are publicly available: CreditMetrics, developed by J.P. Morgan, and CreditRisk+, developed by Credit Suisse. The CreditMetrics methodology is a simulation-based approach where the loss distribution is obtained using default rates and default correlations. A change in each asset's rating (credit quality) is modeled using a (Gaussian) random variable. A default takes place if this variable's value falls below a probability-contingent threshold. If the default events between two assets are correlated, then an appropriate level of correlation is incorporated in the distribution of the two random variables representing the given assets. Thus, the a priori information on default rates and all pairwise correlations is used in determining the covariance matrix of the random variables representing credit migrations. The overall loss distribution is obtained from Monte Carlo simulation of random variables with this covariance. All this can become computationally demanding as the number of credits increases. In using CreditRisk+, we assume that the default rates are themselves random, with the volatilities of the default rates adjusted in a manner that captures the effect of correlations and the background factors. Thus, when using CreditRisk+, we try to capture the effect of default correlations by using suitable default rate volatilities applied in a sector analysis approach.
In this paper, we suggest an approach to arriving at a multi-asset portfolio's loss distribution where default probabilities and correlations between pairs of credits are directly incorporated into the analysis without approximations. The exposures are assumed to be equal to the face amount. If there is a default, then the full face amount is assumed to be lost, and the dependence of defaults upon interest rates or other market-observable parameters is not addressed. The advantages of the proposed approach are (i) an explicit solution to the problem, with no need for simulations, and (ii) low computational complexity, where the overall loss distribution is obtained by combining loss distributions from multiple scenarios with independent defaults. The number of scenarios with
[1]    F(z) = Product_{i=1}^{n} (1 - p_i + p_i z^{e_i}) = alpha_0 + alpha_1 z^{m_1} + ... + alpha_k z^{m_k}.
Then for the given portfolio, under the assumption of independence of default events, the probability of losing 0 is alpha_0 and the probability of losing m_i is alpha_i for i = 1, ..., k. The above, stated as a polynomial multiplication problem, is a standard convolution problem and provides a closed-form expression for the loss distribution of a portfolio of exposures, provided all default events are independent. The CreditRisk+ model is based upon similar convolution ideas, yet it allows for greater generality by allowing default rates to be themselves stochastic.
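The product in Eq. [1] can be evaluated as a sequence of polynomial convolutions. A minimal sketch, assuming $1 face amounts for simplicity:

```python
# Eq. [1]: the loss PGF under independent defaults is a product of
# per-exposure polynomials, i.e. a convolution of their coefficients.
def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def loss_pgf(defaults):
    """defaults: list of (p_i, e_i) pairs -> coefficients of F(z),
    where index m holds the probability of losing exactly m."""
    coeffs = [1.0]
    for p, e in defaults:
        poly = [1.0 - p] + [0.0] * (e - 1) + [p]   # 1 - p_i + p_i z^{e_i}
        coeffs = convolve(coeffs, poly)
    return coeffs

# Scenario 1 of the example below: five $1 bonds, each with p = 0.01.
f1 = loss_pgf([(0.01, 1)] * 5)
print(f1[0], f1[1])   # ~0.951 and ~0.048, as in Eq. [5]
```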
[3]    Pr{default} = Sum_{i=1}^{2} alpha_i Pr{default | Scenario i},

[4]    Pr{joint default} = Sum_{i=1}^{2} alpha_i Pr{default | Scenario i}^2,

where in the last equation we have assumed that default events are independent under each scenario. Thus the two scenarios viewed together result in precisely the given probability of defaults and probability of pairwise defaults.
Since the default events are independent under each scenario, the PGF (defined above) for a portfolio of 5 bonds of $1 for Scenario 1 is

[5]    F_1(z) = (1 - 0.01 + 0.01z)^5
             = 9.51x10^-1 + 4.8x10^-2 z + 9.7x10^-4 z^2 + 9.8x10^-6 z^3 + 4.95x10^-8 z^4 + 10^-10 z^5.

The PGF for Scenario 1 describes the probability of each possible loss amount - for example, the probability of losing $0 is 95.1% while the probability of losing $2 is 9.7x10^-2 %. Similarly, for the same portfolio, the PGF under Scenario 2 is
[6]    F_2(z) = (1 - 0.03 + 0.03z)^5
             = 8.59x10^-1 + 1.33x10^-1 z + 8.21x10^-3 z^2 + 2.54x10^-4 z^3 + 3.93x10^-6 z^4 + 2.43x10^-8 z^5.
Now noting that the probabilities of Scenarios 1 and 2 are 0.8 and 0.2 respectively, the PGF for the portfolio is

[7]    F(z) = 0.8 F_1(z) + 0.2 F_2(z)
            = 9.325x10^-1 + 6.5x10^-2 z + 2.42x10^-3 z^2 + 5.86x10^-5 z^3 + 8.25x10^-7 z^4 + 4.94x10^-9 z^5.
Thus, for the example portfolio, the probabilities of losing 0, 1, 2, 3, 4 and 5 are 93.25%, 6.5%, 0.242%, 5.86x10^-3 %, 8.25x10^-5 %, and 4.94x10^-7 % respectively. If the default events were independent, the probabilities of losing 0, 1, 2, 3, 4, and 5 would be 93.2%, 6.62%, 0.188%, 2.67x10^-3 %, 1.89x10^-5 %, and 5.38x10^-8 %.
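The mixture in Eq. [7] can be sketched directly; the binomial coefficients below restate Eq. [5] and Eq. [6], and the weights 0.8 and 0.2 are the scenario probabilities:

```python
from math import comb

def binom_losses(n, p):
    # Loss distribution of n independent $1 exposures with default prob p.
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

f1 = binom_losses(5, 0.01)            # Scenario 1 (Eq. [5])
f2 = binom_losses(5, 0.03)            # Scenario 2 (Eq. [6])
mix = [0.8 * a + 0.2 * b for a, b in zip(f1, f2)]   # Eq. [7]
print([round(x, 6) for x in mix])     # P(loss = 0..5); mix[0] ~ 0.9325
```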
REMARK 1: The number of scenarios to be considered does not depend on the number of exposures. If there were 10 exposures instead of the assumed 5, the only difference would be that in the PGFs, the exponent would change from 5 to 10. Thus, even though the proposed approach takes into account the correlation between every pair of exposures, there is no exponential growth with respect to the number of exposures (in terms of outcomes to be considered). From an implementation point of view, the same tools used for the convolution in the independent case can be used for addressing the effect of correlations.
REMARK 2: Specifying the default probabilities of each exposure and the correlation of defaults between every pair does not uniquely specify the probabilities of all events. We observe that, if only default probabilities (first moment information) and correlations between default events (second moment information) are known, the probabilities of possible outcomes are not unique when the number of exposures is greater than two.1 Indeed, even in the proposed approach we can construct any number of scenarios which, when viewed as a whole, result in the desired default rates and default correlation. For example, let alpha, p_1 and p_2 be positive real numbers between 0 and 1 that satisfy the following:

[8]    alpha p_1 + (1 - alpha) p_2 = p,

[9]    alpha p_1^2 + (1 - alpha) p_2^2 = q.

Assuming that such a solution exists, consider the following mutually exclusive scenarios - (i) Scenario 1 occurs with probability alpha, default events are independent and have probability p_1; (ii) Scenario 2 occurs with probability 1 - alpha, default events are independent and have probability p_2. The pair of scenarios matches the given default probabilities and joint default probabilities (or, equivalently, default correlations) by virtue of the fact that alpha, p_1 and p_2 satisfy Eq. [8] and Eq. [9]. Non-uniqueness follows from the fact that we have three unknowns for two equations. And, apart from the values chosen in the example, alpha = 0.754, p_1 = 0.0186, and p_2 = 0 also satisfy the equations above and are between 0 and 1. alpha, p_1, and p_2 are chosen to be between 0 and 1 because they are probabilities.

1 For a small finite set of extreme cases, defining the first two moments uniquely specifies the probability density function.
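Assuming Eq. [8] and Eq. [9] are the first- and second-moment matching conditions as written above, the snippet below verifies that both parameter triples quoted in the Remark reproduce the same moments up to the rounding of the quoted values:

```python
def moments(alpha, p1, p2):
    p = alpha * p1 + (1 - alpha) * p2          # Eq. [8]
    q = alpha * p1**2 + (1 - alpha) * p2**2    # Eq. [9]
    return p, q

pa, qa = moments(0.8, 0.01, 0.03)      # the triple used in the example
pb, qb = moments(0.754, 0.0186, 0.0)   # the alternate triple from the Remark
print(pa, qa)   # 0.014 and 0.00026
print(pb, qb)   # approximately the same, up to rounding of the quoted values
```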
q - p^2 = alpha p_1^2 + (1 - alpha) p_2^2 - [alpha p_1 + (1 - alpha) p_2]^2 = alpha (1 - alpha)(p_1 - p_2)^2 >= 0 for all alpha in [0, 1].
Our overall objective is to obtain the loss distribution for the portfolio based upon the data described above, i.e., to obtain the probability associated with every possible loss amount. We propose considering several different scenarios, each involving independent defaults under possibly different default rates, which, when viewed together, match the given default probabilities and correlations. This is summarized in the following Lemma, which describes the conditions that scenarios must satisfy so that they generate default rates and correlations consistent with the given data.
Lemma (The General Decomposition): Suppose there exist:

1. An integer M > 0,
2. Real numbers p_ij in [0, 1] for i = 1, ..., M and j = 1, ..., N, and
3. Real numbers alpha_i in [0, 1] for i = 1, ..., M
such that

[11]    Sum_{i=1}^{M} alpha_i p_ij = p_j for all j = 1, ..., N,

[12]    Sum_{i=1}^{M} alpha_i p_ij p_ik = q_jk for all j, k = 1, ..., N,

[13]    Sum_{i=1}^{M} alpha_i = 1.

Then the loss distribution of the portfolio is given by

[14]    Pr{loss = x} = Sum_{i=1}^{M} alpha_i Pr{loss = x in Scenario i}.
The rationale for imposing this constraint comes from the following interpretation of the decomposition idea presented in the Lemma: The M scenarios in the decomposition can be thought of as M possible states of the market. The probability of the market being in state i is alpha_i, and the default probabilities conditioned on the market being in state i are the default probabilities under the i-th scenario. Thus, even though the M scenarios in the Lemma are mathematical constructs, the decomposition idea of the Lemma can be viewed in terms of M possible states of the market. If these scenarios are considered to be linked to market fluctuations, then they should also represent other properties of the states of the market that are not captured by the first and second moments of the default rates. For example, the default rates should be in their historically observed range, which is precisely the constraint described in Eq. [15].
Later, we present results that give sufficient conditions for the existence of a solution to the
decomposition idea presented in the Lemma, where all the scenarios also satisfy the constraint
described in Eq. [15].
Other criteria could make the default rates under different scenarios be more representative of the
historical experience. We will not discuss the issue of non-uniqueness further, other than to mention
that it is an important issue which needs to be addressed in greater detail. We would need to
incorporate additional criteria beyond constraints such as Eq. [15] to capture additional properties of
the default events.
An Example
In this section, we use an example to show how the decomposition idea presented in the Lemma is
applied to obtain the loss distribution for a portfolio composed of several asset types. The default
probabilities under different scenarios are obtained using Theorems 1 and 2. The intermediate steps
of the example presented here are given in the Appendix.
Consider a portfolio of 100 assets, each worth $1. We will ignore the effect of discounting and
assume no recovery so that in any default, $1 is lost. The portfolio is made up of assets of ratings
[16]    p = [ p_BBB ]   [ 0.01 ]
            [ p_BB  ] = [ 0.05 ].
            [ p_B   ]   [ 0.10 ]

[17]    C = [ c_BBB,BBB  c_BBB,BB  c_BBB,B ]
            [ c_BB,BBB   c_BB,BB   c_BB,B  ]
            [ c_B,BBB    c_B,BB    c_B,B   ]
In Eq. [17], for example, c B, BBB denotes the default correlation between a B and a BBB asset. We can
verify that the matrix is of rank two. Later, we show that the number of independent scenarios needed
for the proposed approach is twice the rank of the matrix of default correlations. Thus in this case,
four scenarios with independent defaults would be enough to generate the given default rates and
correlations.
Based on the default probabilities and correlations, we can obtain the matrix of pairwise default probabilities

[18]    Q = [ q_BBB,BBB  q_BBB,BB  q_BBB,B ]
            [ q_BB,BBB   q_BB,BB   q_BB,B  ]
            [ q_B,BBB    q_B,BB    q_B,B   ]

where, for example, q_B,BBB denotes the probability of a pairwise default of a B and a BBB asset.
In general, the asset types may depend not only on the rating but on industry and geographical region as well.
All the numbers chosen in this example are for illustrative purposes only and do not necessarily correspond to historically observed values.
[19]    p_min = [ 0.002 ]           [ 0.035 ]
                [ 0.020 ],  p_max = [ 0.125 ].
                [ 0.040 ]           [ 0.250 ]

Here the vectors p_min and p_max represent the minimum and maximum desired default probabilities. Eq. [19], for example, implies that we would like the default probabilities for BB assets in all scenarios to be between 0.02 and 0.125.
Applying the steps outlined in Theorem 1, the details of which are in the Appendix, we obtain the
four scenarios which when viewed as mutually exclusive outcomes, match the given default data. Let
us denote this decomposition as Decomposition 1. The probabilities and the default rates under the
four scenarios of Decomposition 1 are given in Table 1.
Table 1
Scenario probabilities for Decomposition 1.

                          Default Probability
Scenario   Probability    BBB       BB        B
1          0.3765         0.0020    0.0410    0.0880
2          0.1235         0.0344    0.0769    0.1370
3          0.3513         0.0100    0.0200    0.0448
4          0.1487         0.0100    0.1210    0.2300
We can now verify that the default and scenario probabilities in Table 1 satisfy Eq. [11], Eq. [12], and Eq. [13]. Moreover, the default probabilities are always within each asset type's prescribed minimum-maximum range for all scenarios.
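The verification of Eq. [11] and Eq. [13] for Table 1 amounts to a few weighted sums; a minimal sketch:

```python
# Scenario probabilities and per-scenario default probabilities (Table 1).
weights = [0.3765, 0.1235, 0.3513, 0.1487]
p_bbb   = [0.0020, 0.0344, 0.0100, 0.0100]
p_bb    = [0.0410, 0.0769, 0.0200, 0.1210]
p_b     = [0.0880, 0.1370, 0.0448, 0.2300]

def mix(w, p):
    # Left-hand side of Eq. [11]: probability-weighted default rate.
    return sum(wi * pi for wi, pi in zip(w, p))

print(round(mix(weights, p_bbb), 4))   # target 0.01 (Eq. [16])
print(round(mix(weights, p_bb), 4))    # target 0.05
print(round(mix(weights, p_b), 4))     # target 0.10
```

The scenario probabilities also sum to 1, as required by Eq. [13]; a check of Eq. [12] proceeds the same way using the products p_ij p_ik.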
We can now obtain the loss distribution under each scenario, assuming independent defaults, and
then use Eq. [14] to obtain the desired portfolio loss distribution. Chart 1 provides the resulting loss
distribution as well as the distribution under independent defaults. For comparison, the chart also
includes the loss distribution obtained using the simulation approach proposed in CreditMetrics.
Chart 2 contains a comparison of cumulative probability functions. Note that the charts reflect the
commonly observed property of the loss distribution when defaults are positively correlated - the
positive correlations increase both the probability of no loss and the probability of large losses.
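The mixing step of Eq. [14] can be sketched as follows. Each scenario's loss distribution is a convolution of binomials (independent defaults within a scenario), and the results are weighted by the scenario probabilities. The per-rating asset counts below are hypothetical; the actual composition of the 100-asset portfolio is the one given in this example:

```python
from math import comb

def binom(n, p):
    # Loss distribution of n independent $1 exposures with default prob p.
    return [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

weights   = [0.3765, 0.1235, 0.3513, 0.1487]          # Table 1
scenarios = [(0.0020, 0.0410, 0.0880), (0.0344, 0.0769, 0.1370),
             (0.0100, 0.0200, 0.0448), (0.0100, 0.1210, 0.2300)]
counts    = (40, 35, 25)   # hypothetical BBB/BB/B asset counts

loss = [0.0] * (sum(counts) + 1)
for w, probs in zip(weights, scenarios):
    dist = [1.0]
    for n, p in zip(counts, probs):
        dist = convolve(dist, binom(n, p))   # independent within a scenario
    for k, pk in enumerate(dist):
        loss[k] += w * pk                    # Eq. [14]: weight by scenario
print(sum(loss))   # ~1.0
```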
Chart 1
Portfolio loss distribution, comparing Decomposition 1, the CreditMetrics simulation, and independent defaults.
Chart 2
Portfolio loss cumulative probability function, comparing Decomposition 1, the CreditMetrics simulation, and independent defaults.
Table 2
Scenario probabilities for Decomposition 2.

                          Default Probability
Scenario   Probability    BBB       BB        B
1          0.3311         0.0000    0.0390    0.0849
2          0.1689         0.0296    0.0716    0.1297
3          0.2304         0.0100    0.0000    0.0082
4          0.2696         0.0100    0.0930    0.1789
Note that the default probabilities of asset types BBB and BB in Scenarios 1 and 3, respectively, are 0 - an extreme value for the default probability. In this sense, this decomposition is an extreme case of all possible decompositions.
Using the algorithm outlined in Theorem 2, we can obtain the default probabilities for another set of
four scenarios that match the given default data. The intermediate steps are again in the Appendix.
These results are presented in Table 3.
Table 3
Scenario Properties for Decomposition 3.
                          Default Probability
Scenario   Probability    BBB       BB        B
1          2.78x10^-4     0.6040    0.7040    1.0000
2          0.4994         0.0097    0.0496    0.0995
3          0.0044         0.0100    0.5405    1.0000
4          0.4956         0.0100    0.0456    0.0920
Note that in odd scenarios (Scenarios 1 and 3), B assets have default probability of 1 - an extreme
value. Thus, as with Decomposition 2, Decomposition 3 represents an extreme case of acceptable decompositions.
Chart: Portfolio loss cumulative probability function, comparing Decomposition 2, Decomposition 3, and the CreditMetrics simulation.
[20]    C = [ c_11 ... c_1N ]
            [  :         :  ]  >= 0,
            [ c_1N ... c_NN ]

i.e., the matrix of default correlations is positive semidefinite.
[21]    Q = [ q_11 ... q_1N ]   [ p_1 ]
            [  :         :  ] = [  :  ] (p_1 ... p_N)
            [ q_1N ... q_NN ]   [ p_N ]

              [ sigma_1 ...    0    ] [ c_11 ... c_1N ] [ sigma_1 ...    0    ]
            + [    :            :   ] [  :         :  ] [    :            :   ]
              [    0    ... sigma_N ] [ c_1N ... c_NN ] [    0    ... sigma_N ]

          = p p' + Sum_{i=1}^{N} U_i U_i',

where
[22]    U_i = [ u_i1 ]        [ p_1 ]
              [  :   ] ,  p = [  :  ] ,  and  sigma_j = sqrt( p_j (1 - p_j) ).
              [ u_iN ]        [ p_N ]
Note that in this decomposition, the vectors U_i are not unique for N >= 2. They can be obtained from approaches such as the Cholesky decomposition or the singular value decomposition. Additionally, the off-diagonal correlation terms need not be positive for the matrix of correlations to be positive semidefinite.
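One way to obtain the U_i, as noted above, is a Cholesky factorization of the matrix D C D (D = diag(sigma_j)) that appears in Eq. [21]. The sketch below uses hypothetical correlation numbers and a hand-rolled Cholesky so that it is self-contained:

```python
import math

# Hypothetical inputs: target default probabilities and a (positive
# definite, diagonally dominant) matrix of default correlations.
p = [0.01, 0.05, 0.10]
C = [[0.030, 0.010, 0.008],
     [0.010, 0.025, 0.009],
     [0.008, 0.009, 0.020]]

sigma = [math.sqrt(pj * (1.0 - pj)) for pj in p]
# A = D*C*D is the part of Q left after subtracting p p' (Eq. [21]).
A = [[C[i][j] * sigma[i] * sigma[j] for j in range(3)] for i in range(3)]

# Plain Cholesky factorization A = L L'.
n = len(A)
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        s = sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]

# The vectors U_k of Eq. [21] are the columns of L.
U = [[L[i][k] for i in range(n)] for k in range(n)]
print(U[0])
```

Since the factorization is not unique, an SVD-based choice of U_i would match Eq. [21] equally well.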
The matrix of correlations is positive semidefinite if: (i) the diagonal elements are positive, and (ii) the off-diagonal elements are sufficiently small compared to the diagonal elements. Due to the effect of common economic conditions, the default correlations for assets within the same type are usually positive, implying that the diagonal elements are usually positive. Moreover, the correlation between two different asset types is usually smaller than the correlation within those asset types - for example, the performances of two banks are likely to be more correlated with each other than the performances of a bank and an aerospace company. This implies that the off-diagonal elements of the matrix of correlations are smaller than the corresponding diagonal elements. These observations suggest that positive semidefiniteness of the matrix of correlations may not be a restrictive assumption in most practical situations.
In every case, we are interested in finding only those decompositions in which the default probabilities for assets of type j are in the interval [p_jmin, p_jmax] for all scenarios. Let us define vectors p_min and p_max as follows:

[23]    p_min = [ p_1min ]          [ p_1max ]
                [   :    ], p_max = [   :    ], where 0 <= p_jmin < p_j < p_jmax <= 1 for j = 1, ..., N.
                [ p_Nmin ]          [ p_Nmax ]

To obtain the sufficient conditions for the existence of decompositions as stated in the Lemma, all elements of the vectors p_min and p_max should be set to 0 and 1, respectively.
Assumption 2 below depends on the particular choice of U_i chosen above. The following definitions of Delta_i and Nabla_i for i = 1, ..., m, are used in Assumption 2:4

[24]    Delta_i = max { d > 0 : p_min <= p - d U_i <= p_max },

[25]    Nabla_i = max { d > 0 : p_min <= p + d U_i <= p_max }.

4 In Eq. [24] and Eq. [25], the relationship x >= y for vectors x and y means that all elements of the vector x - y are nonnegative.
[26]    Sum_{i=1}^{m} 1 / (Delta_i Nabla_i) <= 1.
Assumption 2, loosely speaking, restricts the level of correlation present in the pairwise default probabilities. The correlations are small if the vectors U_i are small. If the vectors U_i are small in comparison to p, then from Eq. [24] and Eq. [25] we would expect Delta_i, Nabla_i >> 1 and thus Assumption 2 to be satisfied. In the authors' experience, this assumption holds for most typical default data if we take p_min and p_max as vectors of 0s and 1s, respectively.
An immediate consequence of Assumption 2 is that there exist lambda_1, ..., lambda_m in (0, 1) such that

[27]    1 / (lambda_i Delta_i) <= Nabla_i for i = 1, ..., m, and

[28]    Sum_{i=1}^{m} lambda_i = 1, where lambda_i in (0, 1).
We now present the main results of this section. Theorem 1 shows that if the two assumptions
described above hold, there exist 2m scenarios of independent defaults, which when viewed
together, produce the desired default data and the default probabilities in each scenario are in the
desired range. Moreover, as shall be evident later, the decomposition described in the following
Theorem is an extreme decomposition. This means that in some of the scenarios, the default
probabilities of some asset types are at their extreme permissible values.
Theorem 1 (Sufficient Conditions for Decomposition & Extreme Decomposition I): Let Assumptions 1 and 2 hold. Let lambda_1, ..., lambda_m be chosen as in Eq. [27] and Eq. [28], and let gamma_i in (0, 1) be defined as below:

[29]    gamma_i = 1 / (1 + lambda_i Delta_i^2) for i = 1, ..., m.

Define the scenario probabilities

[30]    alpha_i = lambda_s gamma_s         if i = 2s - 1, where s = 1, ..., m,
                  lambda_s (1 - gamma_s)   if i = 2s,     where s = 1, ..., m,
and the scenario default probabilities

[31]    p_ij = p_j - Delta_s u_sj                    if i = 2s - 1, where s = 1, ..., m,
               p_j + u_sj / (lambda_s Delta_s)       if i = 2s,     where s = 1, ..., m.
Then the 2m scenarios defined above satisfy:

[32]    Sum_{i=1}^{2m} alpha_i = 1,

[33]    Sum_{i=1}^{2m} alpha_i p_ij = p_j for j = 1, ..., N,

[34]    Sum_{i=1}^{2m} alpha_i p_ij p_ik = q_jk for j, k = 1, ..., N,

and the loss distribution of the portfolio is given by

[35]    Pr{loss = x} = Sum_{i=1}^{2m} alpha_i Pr{loss = x in Scenario i}.
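The moment-matching mechanics of this pairing of scenarios, as reconstructed here (the names gamma, theta and lam below are this sketch's own notation, and the inputs are hypothetical), can be verified numerically for a single factor:

```python
# Hypothetical inputs: target means p, a single vector u (so m = 1), a
# weight lam (the single-factor weights must sum to 1), and a free
# positive scale theta for the odd scenario.
p = [0.01, 0.05, 0.10]
u = [0.004, 0.010, 0.015]
lam, theta = 1.0, 2.0
gamma = 1.0 / (1.0 + lam * theta ** 2)        # here gamma = 0.2

# The pair of scenarios generated by this factor, with their weights.
alphas = [lam * gamma, lam * (1.0 - gamma)]
scen = [[pj - theta * uj for pj, uj in zip(p, u)],
        [pj + uj / (lam * theta) for pj, uj in zip(p, u)]]

# Weighted first and second moments across the two scenarios.
mean = [sum(a * s[j] for a, s in zip(alphas, scen)) for j in range(3)]
cross = [[sum(a * s[j] * s[k] for a, s in zip(alphas, scen)) for k in range(3)]
         for j in range(3)]
print(mean)    # recovers p; cross[j][k] recovers p_j p_k + u_j u_k
```

The same cancellation holds for any theta > 0, which is what leaves room for the parameterized family of solutions discussed later.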
[36]    [ c_11 ... c_1N ]       [ *  0  ...  0 ]
        [  :         :  ]   =   [ 0  *  ...  0 ]
        [ c_1N ... c_NN ]       [ :  :       : ]
                                [ 0  0  ...  * ]
where * indicates a non-zero submatrix. In such a case, the loss distribution of the overall portfolio can be obtained by convolving the loss distributions of several smaller portfolios that are independent of each other. For such a problem, there may be no decomposition if we consider the entire portfolio of various asset types, even though there may be admissible decompositions for each of the groups considered separately. This is because Assumption 2 may not hold if all asset types are considered together but would be satisfied if we consider each group separately. In these situations, it may be computationally simpler to partition the portfolio into smaller independent groups. The loss distribution of each of the smaller groups can be obtained using the approach outlined in this paper, while the overall loss distribution can be obtained by convolving the loss distributions of the independent smaller portfolios.
REMARK 3: The scenarios described in Theorem 1 are, in a sense, an extreme solution to the decomposition problem. This is because for all odd scenarios there is at least one asset type for which the default probability p_ij is at its extreme value of p_jmin or p_jmax. This follows from the definitions of Delta_i in Eq. [24] and the default probabilities in part (c) of Theorem 1, based upon which we note that for all odd scenarios (when i = 2s - 1) the default probability of at least one asset type must be at its extreme value. More specifically, given the definition of Delta_i in Eq. [24], there must be at least one j = 1, ..., N for which p_j - Delta_i u_ij is at its extreme value of p_jmin or p_jmax.
REMARK 4: The scenario probabilities described in Theorem 1 involve Delta_i but not Nabla_i. Analogously, there is another extreme solution for scenarios where some of the default probabilities are at an extreme value of their permissible range, but where the scenario probabilities are described in terms of Nabla_i (instead of Delta_i) and the extreme values of default rates arise from Eq. [25] (instead of Eq. [24]). This extreme solution to the decomposition problem does not provide any new results on the existence of the proposed decompositions - indeed it can be thought of as another way of showing that Assumptions 1 and 2 are sufficient for the existence of the desired decompositions. Notably, the scenarios described in Theorems 1 and 2 together form two of the extreme cases of decompositions with the desired properties. The loss distributions obtained from the scenarios described in Theorems 1 and 2 may be useful for the following reasons: (i) they provide a measure of the range of possible loss distributions for the given portfolio and default assumptions, and (ii) they show the sensitivity of the loss distribution to the choice of parameters in obtaining the scenario probabilities.
Theorem 2 (Extreme Decomposition - II): Let Assumptions 1 and 2 hold. Let lambda_1, ..., lambda_m be chosen as in Eq. [27] and Eq. [28], and let k_i in (0, 1) be defined as:

[37]    k_i = 1 / (1 + lambda_i Nabla_i^2) for i = 1, ..., m.
[38]    alpha_i = lambda_s k_s         if i = 2s - 1, where s = 1, ..., m,
                  lambda_s (1 - k_s)   if i = 2s,     where s = 1, ..., m.

[39]    p_ij = p_j + Nabla_s u_sj                    if i = 2s - 1, where s = 1, ..., m,
               p_j - u_sj / (lambda_s Nabla_s)       if i = 2s,     where s = 1, ..., m.
For the 2m scenarios defined above, the conclusions (1) through (3) of Theorem 1 hold.
Proof is given in the Appendix.
We note that the scenario weights are defined differently in the two Theorems 1 and 2. We can view Theorem 2 as a dual of Theorem 1, as it provides a solution using Nabla_i instead of Delta_i. Theorems 1 and 2 not only show that Assumptions 1 and 2 are sufficient for the existence of the required decompositions but also provide two of the extreme decompositions. Previously, we showed that in the odd scenarios described in Theorem 1, the default probabilities of at least one asset type are at an extreme value. Similarly, from the definitions of Nabla_i in Eq. [25] and the default probabilities in part (c) of Theorem 2, we note that for all odd scenarios (when i = 2s - 1), the default probability of at least one asset type must be at its extreme value.
one asset type must be at its extreme value. As illustrated in the example previously, the extreme
values of the default probabilities in scenarios generated using decompositions in Theorems 1 and 2
are usually different values. For example, in Decomposition 2 presented in the example (generated
using Theorem 1), the default probabilities are 0 in some scenarios for some asset types, while in
Decomposition 3 (generated using Theorem 2), the default probabilities are 1 in some scenarios for
some asset types.
The results above specify precisely the default probabilities associated with all the scenarios. As discussed before, the given problem may have many solutions. Thus, in order to incorporate additional properties of the default events, we should have a simple way of generating other solutions. Theorem 3 gives a simple parameterization of solutions. The parameterization may not cover all admissible solutions, but it is sufficiently broad to cover a wide range. The main advantage of the parameterization is that it transforms the N + N(N+1)/2 nonlinear constraints present in the general formulation (Eq. [11] and Eq. [12]) into one linear constraint (Eq. [40] below) - but with some additional inequality constraints. The Theorem shows that if the 2m positive real numbers theta_1, ..., theta_m, mu_1, ..., mu_m are chosen to satisfy certain constraints, then the scenarios as chosen in
[40]    Sum_{i=1}^{m} theta_i = 1,

[41]    p_jmax >= p_j - mu_i u_ij >= p_jmin for j = 1, ..., N, and

[42]    p_jmax >= p_j + u_ij / (theta_i mu_i) >= p_jmin for j = 1, ..., N.

Define gamma'_1, ..., gamma'_m in (0, 1) by

[43]    gamma'_i = 1 / (1 + theta_i mu_i^2) for i = 1, ..., m.

Let us now define 2m scenarios as in parts (a) to (c) of Theorem 1, where lambda_s, Delta_s and gamma_s are replaced by theta_s, mu_s and gamma'_s respectively. Then the new 2m scenarios satisfy items 1 to 3 of Theorem 1. Equivalently, all statements of Theorem 1 hold if lambda_s, Delta_s and gamma_s are replaced by theta_s, mu_s and gamma'_s respectively, provided these new variables satisfy Eq. [40], Eq. [41], and Eq. [42].
Proof is given in the Appendix.
REMARK 1: It can be shown that Assumption 2 provides the necessary and sufficient condition for the
existence of positive real numbers α̃_1, …, α̃_m, γ̃_1, …, γ̃_m that satisfy the constraints Eq. [40], Eq. [41]
and Eq. [42]. Thus the verification of the existence of the desired real numbers α̃_i, γ̃_i can be carried
out using Eq. [26].
REMARK 2: Theorem 3 provides a parameterization of scenarios that match the given default information. The parameterization involves the 2m variables α̃_1, …, α̃_m, γ̃_1, …, γ̃_m, but since one of the
constraints (Eq. [40]) is an equality constraint, there are 2m − 1 degrees of freedom. These
2m − 1 extra degrees of freedom can be used to incorporate additional properties of default events
while remaining consistent with the given default information.
REMARK 3: The scenario default probabilities described in the above result have the same structural
form as the corresponding results in Theorem 1. Since Theorems 1 and 2 are in a sense duals of each
other, we can obtain another parameterization of admissible decompositions which bears structural
similarity to the decomposition described in Theorem 2.
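To make the parameterization concrete, the sketch below builds the 2m scenarios in Python. It assumes the scenario construction as reconstructed above (scenario 2s − 1 with probability γ_s λ_s and default probabilities p_j − α_s u_sj; scenario 2s with probability γ_s(1 − λ_s) and default probabilities p_j + u_sj/(α_s γ_s)); all function names and numerical inputs are illustrative, not taken from the paper.

```python
# Sketch of the scenario construction of Theorems 1 and 3 (as reconstructed
# above). Given base default probabilities p, vectors u_s with
# q_jk = p_j p_k + sum_s u_sj u_sk, and parameters alpha_s, gamma_s with
# sum(gamma) = 1, build 2m scenarios and check the matching conditions.
def build_scenarios(p, u, alpha, gamma):
    scenarios = []  # list of (scenario probability, default prob per asset type)
    for a, g, us in zip(alpha, gamma, u):
        lam = 1.0 / (1.0 + a * a * g)  # Eq. [43]
        odd = [pj - a * uj for pj, uj in zip(p, us)]         # scenario 2s - 1
        even = [pj + uj / (a * g) for pj, uj in zip(p, us)]  # scenario 2s
        scenarios.append((g * lam, odd))
        scenarios.append((g * (1.0 - lam), even))
    return scenarios

# Illustrative inputs: N = 2 asset types, m = 2 vectors (toy numbers).
p = [0.05, 0.10]
u = [[0.01, 0.02], [0.015, -0.01]]
alpha, gamma = [1.0, 1.0], [0.5, 0.5]  # sum(gamma) = 1, as in Eq. [40]

sc = build_scenarios(p, u, alpha, gamma)
total = sum(w for w, _ in sc)                        # probabilities add to 1
mean_0 = sum(w * dp[0] for w, dp in sc)              # recovers p[0]
q_01 = p[0] * p[1] + sum(us[0] * us[1] for us in u)  # target joint default prob
cross = sum(w * dp[0] * dp[1] for w, dp in sc)       # recovers q_01
```

Note that the only equality constraint on the parameters is the linear one, sum(gamma) = 1; the inequality constraints of Eqs. [41] and [42] simply keep each scenario default probability inside [p_jmin, p_jmax].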
Summary
In this paper, we have developed a quick, analytical approach to obtaining the loss distribution
of a portfolio based upon the default probability of each asset and the default correlation between
each pair of assets. The main result shows that if the portfolio is made up of N asset types, then
the loss distribution can be obtained from a small number of scenarios: twice the rank of the matrix
of default correlations.
References
Gupton, Greg M., Christopher C. Finger, and Mickey Bhatia, CreditMetrics Technical Document, New York:
Morgan Guaranty Trust Co., 1997.
Credit Suisse Financial Products, CreditRisk+: A Credit Risk Management Framework, Technical Document,
London, 1997.
Appendix
Proof of the Lemma
Here we show that if there exist λ_i and p_ij with the attributes defined in the Lemma, then the M scenarios, viewed as M mutually exclusive outcomes, produce the desired default rates. Since the probability of Scenario i is λ_i, Eq. [13] guarantees that the M scenarios cover all possible outcomes if they
are mutually exclusive. Eq. [11] ensures that the probability of default of any asset matches its a priori given value, while Eq. [12] ensures that the pairwise default probability of any two assets matches
its given value, provided that default events within each scenario are independent. Finally, Eq. [14] follows
from the observation that all scenarios are mutually exclusive and the probability of Scenario i is λ_i.
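The bookkeeping in the Lemma can be illustrated with a toy mixture. Within each scenario defaults are independent, so the unconditional individual and pairwise default probabilities are probability-weighted sums; all numbers below are invented for illustration, not from the paper.

```python
# Toy illustration of Eqs. [11]-[13]: two mutually exclusive scenarios with
# independent defaults inside each scenario (all numbers invented).
lam = [0.25, 0.75]                  # scenario probabilities; sum to 1 (Eq. [13])
pij = [[0.20, 0.10], [0.04, 0.02]]  # default probs of assets j and k per scenario

p_j = sum(l * row[0] for l, row in zip(lam, pij))            # Eq. [11]
p_k = sum(l * row[1] for l, row in zip(lam, pij))
q_jk = sum(l * row[0] * row[1] for l, row in zip(lam, pij))  # Eq. [12]
# q_jk exceeds p_j * p_k, i.e. the mixture induces positive default correlation
# even though defaults are conditionally independent.
```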
Proof of Theorem 1
Here we have to show that for the 2m scenarios defined in steps (a), (b), and (c), Items 1 to 3 hold.
Item 3 is straightforward after Item 2 has been shown since the 2m scenarios are mutually exclusive
and cover all possible outcomes. We now show that Items 1 and 2 hold.
Item 1: To show p_ij ∈ [p_jmin, p_jmax].
From the definition of α_s in Eq. [24] it follows that
[A.1]   p_jmax ≥ p_j − α_s u_sj ≥ p_jmin.
From the default probabilities defined in part (c) of the Theorem and Eq. [A.1], we observe that
p_ij ∈ [p_jmin, p_jmax] for odd i. The definition of β_s in Eq. [25] implies that p_j + β u_sj ∈ [p_jmin, p_jmax]
for all 0 ≤ β ≤ β_s. This implies that
[A.2]   p_j + (1/(α_s γ_s)) u_sj ∈ [p_jmin, p_jmax], since 0 < 1/(α_s γ_s) ≤ β_s (from Eq. [27]).

This establishes p_ij ∈ [p_jmin, p_jmax] for even i as well.

Item 2: To show Σ_i λ_i p_ij = p_j and Σ_i λ_i p_ij p_ik = q_jk. From Eq. [29],

[A.3]   λ_s = 1/(1 + α_s² γ_s), implying λ_s α_s² γ_s = 1 − λ_s.
Then

[A.4]   Σ_{i=1}^{2m} λ_i p_ij = Σ_{s=1}^{m} [ γ_s λ_s (p_j − α_s u_sj) + γ_s (1 − λ_s) (p_j + u_sj/(α_s γ_s)) ]
            = Σ_{s=1}^{m} γ_s p_j
            = p_j ,

since the u_sj terms cancel by Eq. [A.3] and Σ_s γ_s = 1 (Eq. [28]). Moreover,

[A.5]   Σ_{i=1}^{2m} λ_i = Σ_{s=1}^{m} [ γ_s λ_s + γ_s (1 − λ_s) ] = Σ_{s=1}^{m} γ_s = 1.
[A.6]   Σ_{i=1}^{2m} λ_i p_ij p_ik
            = Σ_{s=1}^{m} [ γ_s λ_s (p_j − α_s u_sj)(p_k − α_s u_sk) + γ_s (1 − λ_s)(p_j + u_sj/(α_s γ_s))(p_k + u_sk/(α_s γ_s)) ]
            = Σ_{s=1}^{m} [ γ_s p_j p_k + u_sj u_sk ]
            = p_j p_k + Σ_{s=1}^{m} u_sj u_sk
            = q_jk ,

where the cross terms cancel by Eq. [A.3] and the coefficient of u_sj u_sk reduces to λ_s (1 + α_s² γ_s) = 1.
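The two cancellations used in [A.4] and [A.6] can be checked with exact rational arithmetic. This is only a sanity check of the identities as reconstructed above, for an arbitrary positive choice of α_s and γ_s:

```python
# Exact check of the identities behind Eqs. [A.4] and [A.6]: with
# lam = 1/(1 + a^2 g), the u_sj coefficient in [A.4] vanishes and the
# u_sj u_sk coefficient in [A.6] equals 1 (a, g arbitrary positive rationals).
from fractions import Fraction

a = Fraction(7, 5)         # alpha_s (arbitrary choice)
g = Fraction(3, 10)        # gamma_s (arbitrary choice)
lam = 1 / (1 + a * a * g)  # Eq. [29]

drift = -g * lam * a + g * (1 - lam) / (a * g)          # u_sj coefficient in [A.4]
spread = g * lam * a**2 + g * (1 - lam) / (a * g) ** 2  # u_sj u_sk coefficient in [A.6]
```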
Proof of Theorem 2
The proof is very similar to the proof of Theorem 1.
Item 1: To show p_ij ∈ [p_jmin, p_jmax].
From the definition of β_s in Eq. [25] it follows that

[A.7]   p_jmax ≥ p_j + β_s u_sj ≥ p_jmin.
From the default probabilities defined in part (c) of Theorem 2 and Eq. [A.7], we observe that
p_ij ∈ [p_jmin, p_jmax] for odd i. The definition of α_s in Eq. [24] implies that p_j − α u_sj ∈ [p_jmin, p_jmax]
for all 0 ≤ α ≤ α_s. This implies that
[A.8]   p_j − (1/(β_s γ_s)) u_sj ∈ [p_jmin, p_jmax], since 0 < 1/(β_s γ_s) ≤ α_s (from Eq. [27]).

This establishes p_ij ∈ [p_jmin, p_jmax] for even i as well.

Item 2: From Eq. [37],

[A.9]   λ_s = 1/(1 + β_s² γ_s), implying λ_s β_s² γ_s = 1 − λ_s.
Then

[A.10]   Σ_{i=1}^{2m} λ_i p_ij = Σ_{s=1}^{m} [ γ_s λ_s (p_j + β_s u_sj) + γ_s (1 − λ_s)(p_j − u_sj/(β_s γ_s)) ]
             = Σ_{s=1}^{m} γ_s p_j
             = p_j ,

and

[A.11]   Σ_{i=1}^{2m} λ_i = Σ_{s=1}^{m} [ γ_s λ_s + γ_s (1 − λ_s) ] = Σ_{s=1}^{m} γ_s = 1.
[A.12]   Σ_{i=1}^{2m} λ_i p_ij p_ik
             = Σ_{s=1}^{m} [ γ_s λ_s (p_j + β_s u_sj)(p_k + β_s u_sk) + γ_s (1 − λ_s)(p_j − u_sj/(β_s γ_s))(p_k − u_sk/(β_s γ_s)) ]
             = p_j p_k + Σ_{s=1}^{m} u_sj u_sk
             = q_jk ,

which completes the proof.
Proof of Theorem 3
A comparison of inequalities Eq. [41] and Eq. [42] with the definition of the default probabilities of assets
under each scenario (item (c) of Theorem 1) reveals that the default probabilities for the jth asset type
under each scenario are between p_jmin and p_jmax. Verification of Eq. [32], Eq. [33] and Eq. [34]
follows exactly along the same lines as in the proof of Theorem 1.
[A.13]   Q = pp′ + U_1 U_1′ + U_2 U_2′, where U_1 = 10⁻² (0.99, 1.09, 1.5)′ and U_2 = 10⁻³ (0, 3.27, 6)′.
In the notation of Eq. [21], m = 2 in the above decomposition. This is a consequence of the
fact that the matrix of correlations has rank 2. As shown previously, the number of scenarios
required by the proposed approach is twice the rank of the matrix of correlations, or four in this
example.
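The rank argument can be checked mechanically. The sketch below uses plain Gaussian elimination on an illustrative rank-2 matrix built from two outer products; the vectors are placeholders, not the paper's data:

```python
# Number of scenarios = 2 * rank of the correlation part of the decomposition.
# Minimal rank computation by Gaussian elimination (illustrative vectors only).
def rank(mat, tol=1e-12):
    m, r = [row[:] for row in mat], 0
    for col in range(len(mat[0])):
        # find a pivot row at or below row r in this column
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > tol), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > tol:
                f = m[i][col] / m[r][col]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

u1, u2 = [0.99, 1.09, 1.5], [0.0, 3.27, 6.0]  # placeholder vectors
C = [[a * b + c * d for b, d in zip(u1, u2)] for a, c in zip(u1, u2)]
n_scenarios = 2 * rank(C)  # rank-2 structure, so four scenarios
```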
[A.14]   1/(α_1 β_1) + 1/(α_2 β_2) = 0.92 < 1,

so Assumption 2 is satisfied.
Other values of γ_1 and γ_2 would also have been acceptable as long as Eq. [27] and Eq. [28] were not violated. Using the
values γ_1 = 0.5 and γ_2 = 0.5, we obtain from Eq. [29]
[A.15]   λ_1 = 1/(1 + α_1² γ_1) = 0.753,   λ_2 = 1/(1 + α_2² γ_2) = 0.7026.
As in Theorem 1, we can now obtain the default probabilities for the four scenarios, producing the
results in Table 1.
Notice that in Scenario 1 the BBB assets, and in Scenario 3 the BB assets have default probabilities
that are at the minimum of their prescribed range (0.002 and 0.02 respectively). This is consistent
with Remark 3 described after Theorem 1.
[A.16]   λ_1 = 1/(1 + α_1² γ_1) = 0.6622,   λ_2 = 1/(1 + α_2² γ_2) = 0.4607.
As in Theorem 1, we can now obtain the scenario default probabilities for the four scenarios in
Table 2. Notice that in odd scenarios (scenarios 1 and 3) one asset type has default probability at its
extreme value of 0.
Substituting γ_1 = 0.5 and γ_2 = 0.5 in Eq. [37], we obtain
[A.17]   λ_1 = 1/(1 + β_1² γ_1) = 5.55 × 10⁻⁴,   λ_2 = 1/(1 + β_2² γ_2) = 0.0088.
As in Theorem 2, we can now obtain the scenario default probabilities for the four scenarios in
Table 3. Notice that in odd scenarios (scenarios 1 and 3) one asset type has default probability at its
extreme value of 1.
Americas
Sarah Jun Xie (1-212) 981-7424
sarah.xie@riskmetrics.com
Europe/Asia
Rob Fraser (44-171) 842-0262
rob.fraser@riskmetrics.com
Co-sponsors
Arthur Andersen
Jitendra Sharma (1-212) 708-4536
jitendra.d.sharma@arthurandersen.com
Bank of America
David E. Gibbs (1-415) 953-1352
david.e.gibbs@bankamerica.com
PricewaterhouseCoopers LLP
Charles A. Andrews (1-212) 520-2306
charles_andrews@notes.pw.com
Bank of Montreal
Stuart Brannan (1-416) 867-4092
stuart.brannan@bmo.com
IBM
Bank of Tokyo-Mitsubishi
George S. Lee (1-212) 782-6971
glee@btmny.com
J.P. Morgan
Barclays Capital
Kalpana Telikepali (1-212) 412-1167
kalpana.telikepali@barcap.com
CATS Software, Inc.
Eduard Harabetian (1-310) 789-2000
eduard@cats.com
CIBC World Markets
John T. H. Cook (1-212) 856-6057
jay_cook@fp.cibc.com
Deloitte Touche Tohmatsu International
A. Scott Baret (1-212) 436-5456
sbaret@dttus.com
Deutsche Bank
James Glover (61-2)9258-2411
james.glover@db.com
Ernst & Young
Hank Prybylski (1-212) 773-2823
lawrence.prybylski@ey.com
CreditMetrics Products
Introduction to CreditMetrics: A broad overview of the CreditMetrics methodology and
practical applications.
CreditMetrics Technical Document: A comprehensive reference on the CreditMetrics methodology. The document begins with an overview and simple examples. Later
chapters include details on parameter estimation, the model's assumptions, and the simulation
framework. The final chapters include a more complete example and a discussion of the application of the CreditMetrics measures of portfolio credit risk.
CreditMetrics Monitor: A semiannual publication which discusses a variety of credit
risk management issues, ranging from practical implementations to modeling and statistical
questions.
CreditMetrics data sets: Current market data (foreign exchange rates, yield curves, and
spread curves by industry and rating category), as well as derived data (industry correlations
and transition matrices). Current market data and industry correlations are updated weekly.
All of the above can be downloaded from the Internet at www.creditmetrics.com.
CreditManager PC application: A desktop software tool which implements the CreditMetrics methodology. Outputs include portfolio value distributions, marginal analyses, sector breakdowns, and stress tests. CreditManager can be purchased from the RiskMetrics
Group.
Trouble accessing the Internet? If you encounter any difficulties in either accessing the
CreditMetrics home page at www.creditmetrics.com or downloading the CreditMetrics data
files, you can call 1-212-981-7475.