
CreditMetrics Monitor

April 1999
Contents

Editor's Note   2

CreditMetrics News   3
The RiskMetrics Group Spins Off From J.P. Morgan.
Planned Enhancements for the Next Release of CreditManager.

CreditManager Users' Corner

Risk-return Reporting   4
Risk-return reports are planned for the next version of CreditManager. Currently, it is possible to extract return information and create these reports in a separate application.

Worst Loss Analysis of BISTRO Reference Portfolio   8
The BISTRO structure is an active synthetic Collateralized Loan Obligation. Using CreditManager, it is possible to examine worst case losses on the structure's various tranches.

Using Multiple Databases   12
CreditManager 2.0 allows for the use of multiple databases, accelerating the obligor and exposure editors, and improving the organization of portfolio data.

Methods and Applications

Conditional Approaches for CreditMetrics Portfolio Distributions   14
In some cases, the standard Monte Carlo procedure used in CreditMetrics shows relatively slow convergence. Here, the structure of the model is explored in detail, and three methods are presented which take advantage of this structure to improve upon the current Monte Carlo performance.

The Valuation of Basket Credit Derivatives   34
Many new credit derivative products, including first-to-default structures and equity tranches in CLOs, are associated with a portfolio of credit risks. This article discusses a number of these structures, illustrates how some can be evaluated with the CreditMetrics model, and provides an extension which permits valuation of more complex products.

An Analytical Approach for Credit Risk Analysis Under Correlated Defaults   51
There are a number of ways to build correlations in a credit portfolio model. Taking default probabilities and joint default probabilities as given, this article proposes a method whereby the portfolio distribution can be obtained analytically, and whose computational burden is comparable to the independent default case. Sensitivities to the model assumptions and comparisons to CreditMetrics results are also provided.

Co-sponsors:
J.P. Morgan
Bank of America
Bank of Montreal
Barclays Capital
Deutsche Bank
KMV Corporation
UBS AG
Bank of Tokyo-Mitsubishi
MBIA Inc.
CIBC World Markets
Moody's Investors Service
Royal Bank of Canada
Standard & Poor's
Arthur Andersen
Deloitte Touche Tohmatsu International
Ernst & Young
KPMG
Oliver, Wyman & Co. LLC
PricewaterhouseCoopers LLP
CATS Software, Inc.
FITCH IBCA, Inc.
The Fuji Bank, Limited
IBM
The Nomura Securities Co., Ltd.
Prudential Insurance Company of America
Reuters, Ltd.


Editor's Note
Christopher C. Finger
RiskMetrics Group
chris.finger@riskmetrics.com

With this issue, we present our first CreditMetrics Monitor as The RiskMetrics Group. While it is a
first issue in a sense, it is also a last, as we will no longer be publishing our research in CreditMetrics
or RiskMetrics Monitors. Our next research publication will be the inaugural issue of the RMG Journal. The RMG Journal will encompass both market and credit risk research, and continue the mix of
short, practical articles with longer research pieces. While it is likely that articles on credit risk and
CreditMetrics will appear in most issues, we plan to have occasional special issues devoted solely to
these themes.
Our Users' Corner touches on a number of features which are often requested by CreditManager
users, and are actually possible with the current version, but which have not been explicitly incorporated into the software. The first article demonstrates how users might create risk-return reports using
CreditManager outputs along with a simple spreadsheet program, and provides an outline for how
this type of analysis will be implemented in the next version. The second article describes a common
synthetic Collateralized Loan Obligation (CLO) and utilizes CreditManager to perform a simple
worst case analysis for investors in the various tranches of this security. Finally, the third article discusses the use of multiple databases in CreditManager, a feature which may be exploited to improve
the performance of the application, and to help organize and share portfolio data.
All three of our longer articles investigate the structure of correlations in credit portfolio models. In
the first, we examine the CreditMetrics Monte Carlo approach. We observe that once the indices that
drive the portfolio have been determined, the movements of the individual obligors are conditionally
independent. Since there is a wealth of assumptions and techniques applicable only to portfolios of
independent assets, we are able to divide our Monte Carlo process. We first simulate the indices, then
rely on other analytical techniques to obtain the portfolio distribution conditional on the index values.
This has the effect of significantly reducing the dimensionality of the portfolio problem. We show
that this framework allows for closed form solutions in simple cases, and significantly improves simulation performance in more complex cases.
In the second article, Li of the RiskMetrics Group considers basket credit derivatives, a class of structures which includes the CLO treated in the Users' Corner. The structures Li considers all depend on
the value of a portfolio of credit instruments; in this way, they differ from the simple credit derivatives we have treated previously, the value of which depended only on the credit standing of two
names. The author demonstrates that simple basket structures can be evaluated using the current
CreditMetrics framework. He points out that to value more complex structures, it is necessary to
model the timing of default events, and to extend the model to multiple horizons, while preserving
the existing correlation structure. Using modeling techniques from the insurance literature, he builds
the needed extensions, calibrates them to the current CreditMetrics framework, and illustrates how
to value the more complex basket structures.
In our last article, Nagpal and Bahar of Standard & Poor's take a new approach to constructing the
distribution of a portfolio of defaultable instruments. Rather than proposing a model for how defaults
occur, they begin with the available default data, namely default probabilities and joint default probabilities
(or default correlations) for groups of credits, and define arguably the simplest possible model that
is consistent with this data. This leads them to a family of decompositions which provide a relatively
inexpensive (in computational terms) way to obtain the entire portfolio distribution. Since a unique
decomposition is not specified, the authors present an example to illustrate the method's sensitivity
to the choice of decompositions; the example also shows that the CreditMetrics model consistent
with the same data gives results within this range of sensitivities. The authors finish with a rigorous
proof of sufficient conditions for their decomposition to exist.


CreditMetrics News
Sarah Jun Xie
RiskMetrics Group

The RiskMetrics Group Spins Off From J.P. Morgan

sarah.xie@riskmetrics.com

In the fall of 1998, the RiskMetrics Group (RMG) was spun off from J.P. Morgan. J.P. Morgan and
Reuters hold minority shares of RMG. RMG, known as the Risk Management Products and Research
Group while at J.P. Morgan, is responsible for the creation and development of benchmark risk management products including RiskMetrics, CreditMetrics, and DataMetrics. Ownership of the CreditMetrics, CreditManager, RiskMetrics, and FourFifteen brands has been transferred to RMG, and all
future enhancements to the methodology, data, and software will be made by the new venture. All of the
CreditMetrics co-sponsor agreements are now also with RMG.

Planned Enhancements for CreditManager's Next Release


The next release of CreditManager, version 2.5, will be launched this summer. In this version, we
plan to expand the asset type coverage, add risk/return and benchmark analysis, and continue to work
on collecting and providing more credit data to our clients.
Asset coverage: we are broadening the coverage of traditional loan and bond products (e.g., amortizing structures, non-regular coupon structures, and cash flow streams) so that clients can handle various types of instruments in their portfolios.
Analysis: an expected return calculation will be added to provide clients with risk/return analysis at the
portfolio level as well as the individual exposure level. Credit Index portfolios will be built into the system to allow comparisons with benchmarks. The treatment of Market Driven Instruments will be
enhanced, with the application considering an entire credit exposure profile rather than a single average exposure input. Users will have the flexibility to define the percentiles for their VaR analyses, and
to stress the obligor-specific risk.
Data: we will expand the range of credit spreads we currently supply on our website. We are also working on conditional transition matrices driven by macroeconomic factors.


Risk-return Reporting
Christopher C. Finger
RiskMetrics Group
chris.finger@riskmetrics.com

One of the most common and most natural requests for future releases of the CreditManager product
is for the facility to compare the current risk outputs with an expected return measure. While this is
planned for future releases, it is possible to perform some risk-return analysis in the current version.
This article describes how a user might obtain return information using the current software to complement the existing risk outputs.1
Risk-return analysis at the exposure level takes its most classical form in mean-variance portfolio
theory. This theory, as expressed originally by Markowitz in the 1950s, investigates the portfolio
preferences that all investors should have, given only that they would prefer higher returns and lower
risk. For a given set of investable assets, Markowitz presents an optimal set of relative holdings, or
the "efficient portfolio." Generally speaking, all investors should prefer such a portfolio, differing
only in how much or how little of this efficient portfolio they wish to hold.2
However tempting it is to apply the Markowitz theory to credit, and to insist that any portfolio should
be rebalanced to be efficient, to do so would be highly impractical. First, to arrive at an efficient asset
allocation, it is necessary to have unlimited flexibility to buy and sell exposures. Such flexibility is
rarely allowed with credit portfolios, as market illiquidity and client relationships put significant constraints on the portfolio manager. Second, while an application of the Markowitz theory might offer
an optimal trade-off between return and portfolio standard deviation, it is not likely to offer an optimal trade-off between return and capital. Despite these caveats, a loose application of the theory to
credit is helpful. Though the efficient portfolio may be unattainable, it serves as an ideal reference
point from which the portfolio manager can measure a need to shed exposure or to invest more. Furthermore, although optimizing standard deviation will not lead to an optimal capital level, for most
large portfolios, any reduction in standard deviation will produce a reduction in capital.
To analyze the risk-return profile of a portfolio, the user must begin with the inputs, namely, the risk
and return of the components of the portfolio. As CreditManager automatically produces the risk
measures, it remains only to specify the expected return for each exposure in the portfolio. While this
can be a difficult and subjective task for equity portfolios, there is a simple and objective definition
of return which can be used within the CreditMetrics model. We know intuitively that the expected
return on bonds and other fixed income exposures should account for the interest received; the cost
of funding the exposure; and the expected loss due to potential credit events. CreditManager defines
the mean value of an exposure at the risk horizon to be equal to the exposures current value, plus
cash (interest) received, plus changes due to rolling down the yield curve, plus changes due to rating
changes and default. Thus, to obtain expected returns, a user need only export the exposure's current
value and mean value, and then account for the funding cost and compute in a separate application:
(mean value − current value) / current value − cost of funding.

Alternatively, to account only for default losses, a user might define the expected return as

(cash received − expected loss due to default) / current value − cost of funding.

1 This article is partly based upon discussions with John Veidis at Fuji Bank, New York.

2 An accessible reference to this theory is Elton and Gruber, Modern Portfolio Theory and Investment Analysis, 4th ed., John Wiley and Sons, Inc., 1991.

In either case, CreditManager makes the requisite information to compute expected returns available
for export. To export this information, the user runs a non-standard scatter report, and selects the
statistics discussed above. A sample definition screen for such a report is displayed below. Note that
at the bottom of the screen, the statistics needed to compute expected return have been selected, along
with the risk statistic to which the return will be compared.

The following serves as an example of the process if we were to run this report on the CreditManager
sample portfolio:
Assuming a flat funding cost of 5% for each exposure, we compute each exposure's expected return.
In turn, we plot the expected returns against each exposure's contribution to the risk of the portfolio
(that is, the exposure's marginal standard deviation relative to its size), as shown in Chart 1. Notably,
the returns are tightly clustered, while the risk contributions are more spread out. In addition, the
chart illustrates little relation between the size of exposures and their risk-return trade-off; there are
large exposures near the bottom right of the chart (worse risk-return performance) as well as small
exposures closer to the top left (better risk-return performance). Both of these observations suggest
that there exists room for further reduction of risk and that, by reallocating our investments within
the existing exposures, risk would be further reduced without severely impacting the portfolio return.
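As an illustration of how this might be scripted outside CreditManager, the sketch below computes the expected return and the risk contribution from an exported report; the CSV layout, column names, and figures are hypothetical stand-ins, not actual CreditManager output.

```python
# Sketch: compute expected returns from an exported CreditManager scatter report.
# The CSV text and column names below are hypothetical stand-ins for a real export.
import csv, io

FUNDING_COST = 0.05  # flat 5% funding cost assumed for every exposure

sample_export = io.StringIO(
    "Name,CurrentValue,MeanValue,MarginalStdDev\n"
    "Obligor A,1000000,1062000,4000\n"
    "Obligor B,2500000,2640000,45000\n"
)

exposures = []
for row in csv.DictReader(sample_export):
    current = float(row["CurrentValue"])
    mean = float(row["MeanValue"])
    # Expected return as defined in the text:
    # (mean value - current value) / current value - cost of funding
    exp_return = (mean - current) / current - FUNDING_COST
    exposures.append({
        "name": row["Name"],
        "exp_return": exp_return,
        # risk contribution: marginal standard deviation relative to size
        "risk_contrib": float(row["MarginalStdDev"]) / current,
    })

for e in exposures:
    print(f'{e["name"]:10s} return {e["exp_return"]:6.2%}  marginal risk {e["risk_contrib"]:6.2%}')
```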
Chart 1
Risk-return for CreditMetrics sample portfolio; exposures grouped by current value (<200k, 200k-500k, 500k-2m, >2m). Vertical axis: expected return (0% to 1.25%); horizontal axis: marginal standard deviation (0% to 2.5%).

For the purposes of comparison with the sample portfolio, we construct an improved portfolio using the
same exposures, but setting the relative holdings in these exposures in proportion to their ratio of
expected return to marginal standard deviation.3 Intuitively, the idea is simply to invest more heavily
in instruments with greater return on risk. Performing the same analysis as above produces the results
in Chart 2. In general, the risk-return trade-off for the individual exposures is similar to that in the
sample portfolio, but the large investments in the improved portfolio are concentrated in those exposures with the best risk-return trade-off.

3 This does not produce the optimal portfolio in the Markowitz sense, but it does, in general, produce a better one, that is, one with a higher ratio of return to risk.
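The reweighting rule described above can be sketched as follows; the three exposures are illustrative placeholders for the exported statistics, and the rescaling simply preserves the total portfolio size.

```python
# Sketch: build an "improved" portfolio by setting relative holdings proportional
# to expected return / marginal standard deviation. The exposures below are
# purely illustrative placeholders for exported CreditManager statistics.
exposures = [
    {"name": "Obligor A", "current": 1_000_000.0, "exp_return": 0.0090, "risk_contrib": 0.004},
    {"name": "Obligor B", "current": 2_500_000.0, "exp_return": 0.0075, "risk_contrib": 0.018},
    {"name": "Obligor C", "current":   400_000.0, "exp_return": 0.0110, "risk_contrib": 0.007},
]

total_value = sum(e["current"] for e in exposures)

# Raw weights: return-on-risk ratio; rescale so the total portfolio size is unchanged.
raw = [max(e["exp_return"], 0.0) / max(e["risk_contrib"], 1e-9) for e in exposures]
scale = total_value / sum(raw)

for e, r in zip(exposures, raw):
    e["improved_holding"] = r * scale
    print(f'{e["name"]:10s} current {e["current"]:>12,.0f}  improved {e["improved_holding"]:>12,.0f}')
```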

Chart 2
Risk-return for improved portfolio; exposures grouped by current value (<200k, 200k-500k, 500k-2m, >2m). Vertical axis: expected return (0% to 1.25%); horizontal axis: marginal standard deviation (0% to 2.5%).

While we can be reasonably sure that the improved portfolio has a lower standard deviation than
the original one, it is not certain that we have reduced any of the percentile losses, or capital, of the
original portfolio. We would expect that the capital amounts are closely associated with standard deviations, and that by reducing the portfolio standard deviation, we have reduced the worst case loss
comparably. This is in fact the case; the portfolio statistics are presented in Table 1.
Table 1
Statistics for sample and improved portfolios.
Results expressed as percentages of current portfolio value.

                        Sample   Improved
Return less funding     0.82     0.84
Standard deviation      0.46     0.25
5th percentile loss     0.63     0.29
1st percentile loss     2.06     0.99
0.1th percentile loss   4.73     3.10

In summary, although risk-return analysis is not a core CreditManager functionality, it is possible for a user to generate risk-return reports using statistics the application does provide. As a result, CreditManager's analysis capabilities can be expanded, allowing the user to identify opportunities both for diversification and for higher levels of compensation for risk.


Worst Loss Analysis of BISTRO Reference Portfolio


Toru Tanaka, VP
Risk Management
Fuji Bank
Sheikh Pancham PhD, AVP
Risk Management
Fuji Bank
spancham@bloomberg.net

Tamunoye Alazigha, AVP


Risk Monitoring and Control
Fuji Bank

This paper illustrates how CreditManager is applied to perform a Worst Loss analysis on the underlying reference portfolio of J.P. Morgan's BISTRO. The BISTRO is a synthetic Collateralized Loan
Obligation (CLO), one of a growing number of structured credit risk products being developed by
banks to address Loan Portfolio Management issues. Traditionally, once transactions are originated
for the credit portfolio, banks have adopted a buy and hold approach due to the illiquid secondary
markets for such positions. More recently, however, banks have recognized that holding credit positions to maturity results in risk/return inefficiencies from burdensome regulatory capital requirements
and relationship constraints. Solutions for eliminating these inefficiencies have come in the form of
products such as the BISTRO.
In a standard CLO, the originating bank assigns its drawn/funded loans to a Special Purpose Vehicle
(SPV), which, in turn, issues several classes of credit-tranched notes to capital market investors. Losses realized on loan transactions are passed to investors via the tranches, which represent ownership
of the transactions. Synthetic CLOs, on the other hand, make use of credit derivative contracts to
transfer the credit risk of a loan portfolio, rather than transferring it through the sale of the transactions (as in the standard
CLO structure). In this way, only the risk, and not the ownership, of the underlying exposures is transferred.
Arguably, J.P. Morgan's BISTRO has been the most active synthetic CLO issue. It is aimed at institutional spread investors. The BISTRO SPV (Trust) offers two levels of credit-tranched notes in addition to having an equity reserve account. Investors' proceeds are used to purchase Treasury Notes
paying a fixed coupon and maturing on the BISTRO maturity date. At the same time, the BISTRO
SPV enters into a credit default swap with Morgan Guaranty Trust (MGT) referencing the underlying
credits in a pool of companies, each with a specified notional amount. Under the terms of the swap,
MGT pays the BISTRO Trust a fixed semi-annual payment comprising the spread between the Treasury coupons and the coupons promised on the issued BISTRO Notes. In return, at the Notes' maturity the trust compensates MGT for losses experienced as a result of credit events. Investors are not
in a first-loss position with respect to this portfolio of credit risk. Payments are made by the BISTRO
Trust only after, and to the extent that, losses due to credit events have exceeded the first-loss threshold (the equity reserve account). Credit events are based on ISDA credit swap definitions, including
bankruptcy, failure to pay, cross acceleration, restructuring, or repudiation. Losses are computed either by a computation of final work-out value, for companies emerging from bankruptcy prior to maturity, or by soliciting bids from the market for senior unsecured obligations, and are allocated to the
two tranches according to seniority. To date, there have been three outstanding issuances of BISTROs. This analysis focuses on the BISTRO Trust 1997-1000. Table 1, Table 2, and Chart 1 provide
summary information on this issue.
Table 1
BISTRO Trust 1997-1000 Tranches.

Description              Amount (US$M)   Coupon   Rating
Super-Senior Tranche     8,993                    Not Securitized
Senior Notes             460             6.35%    Aaa (Moody's)
Subordinated Notes       237             9.50%    Ba2 (Moody's)
Equity Reserve Account   32                       Not Securitized



Table 2
BISTRO Trust 1997-1000 Underlying Portfolio.

Face Amount         US$9.72B
Reference Credits   307 Senior Unsecured Obligations of US, European, and Canadian Companies
Maturity            December 31, 2002
Collateral          5.625% US Treasury Notes

Chart 1
BISTRO Structure and Cash Flow. (Diagram.) Investors' proceeds from the Senior Tranche (6.35% coupon) and the Subordinated Tranche (9.50% coupon) are used by the BISTRO SPV to buy 5.625% Treasury Notes (collateral) maturing with the BISTRO; investors receive the face amount less realized losses at maturity. Morgan Guaranty Trust pays the SPV the spread over the Treasury coupon (0.725% of the senior notional and 3.875% of the subordinated notional), which is passed on to investors, and receives the realized losses due to credit events on the reference pool (307 companies, US$9.72B notional) in excess of the first-loss equity reserve (0.33%), paid at BISTRO maturity.

An institutional investor in the BISTRO Notes has exposure to the portfolio of underlying reference
credits. The exposure over the term (4 years remaining) of the BISTRO is examined using existing
functionality in CreditManager. The methodology to do this involves a few simple steps:
1. Data on the underlying credits is obtained from the Credit Derivatives Group at J.P. Morgan.
The data set includes reference names, notional amounts, credit ratings, and country and
industry classifications. Credit ratings, and country and industry classifications are necessary inputs for creating the Obligors import file.
2. A 4-year default/transition probability matrix is created manually in CreditManager using
data from Moody's credit research reports (or S&P).
3. In preparing the Exposures file for import, the following inputs are used:
(a) Asset Type is set to Credit Def. Swap since the BISTRO can be viewed as a basket of
default swaps on the underlying reference credits.
(b) Maturity of the underlying reference asset is set equal to the maturity of the BISTRO.


(c) Seniority Class is set to SU (Senior Unsecured).


(d) Information on Coupon Rate or Spread of the underlying reference is not disclosed and is
therefore proxied by yields obtained from credit spread curves for senior unsecured obligations.
(e) The input for the Swap Spread Premium is obtained from Quantserv's CreditPricer, a single-asset credit default swap pricing model (based on Jarrow-Turnbull methodology). The Swap Spread Premium can also be proxied by market quotes on asset swap
spreads from sources such as Bloomberg.
4. The Simulation Summary Report is run in CreditManager with Preferences set to 4-year
horizon. The important output statistics for our purposes in this report are the Change due to
Default and the Losses from the Mean at the 5th, 1st, and 0.1th Percentile levels.
The Change due to Default is basically the Expected Default Loss experienced by the portfolio given
the exposure amounts, credit ratings, default probabilities, recovery rates, and correlations among the
underlying reference credits. The Expected Default Loss is expected to accumulate up to the horizon.
The 10th Percentile Loss from the Mean describes the loss from the mean that will be exceeded no more than 10
percent of the time, and so forth for the losses from the mean at the 5th, 1st, and 0.1th percentile levels.
Because the analysis horizon is set to 4 years (the remaining maturity of the BISTRO), the sum of
the Expected Default Loss and each Percentile Loss from Mean captures an increasing portion of the
loss that can accumulate after 4 years under worst case scenarios. Investors are indeed interested in
examining the accumulated loss after 4 years because contingent payments by investors to the BISTRO counterparty are due only at maturity regardless of the timing of default losses or other credit
events affecting underlying reference credits in the portfolio.
The insight to be gained from examining the sum of the Expected Default Loss and the Percentile
Losses from Mean statistics comes from the analysis presented in Table 3.
Table 3
Analysis of BISTRO 1997-1000.

BISTRO Reference Portfolio Face Amount (USD):               $9.72B
Expected Default Loss (EDL) at BISTRO maturity:             $24M
First-Loss Equity Reserve Account (0.33% of face amount):   $32M

Percentile Loss                       10th    5th     1st     0.1th
Loss from Mean                        $42M    $92M    $267M   $648M
EDL plus Loss from Mean less Equity   $34M    $84M    $259M   $640M



Investors are accountable for portfolio losses incurred less the First-Loss Equity Reserve Account.
At the 10th percentile level from the mean, it is likely that a portfolio loss of $42M will not be exceeded. Summing this amount with the Expected Default Loss and reducing by the First-Loss Equity
Reserve (42+24-32 = 34) gives $34M. There is a 90% chance that a loss of such magnitude will not
occur. First-Loss Equity Reserve aside, investors in the Subordinated Tranche must bear contingent
payments up to $237M, with subsequent additional payments up to $460M coming from Senior
Tranche investors. Thus, given current market conditions, there is a 90% likelihood that 85% (that is, all but 34/237) of the Subordinated Notes' Principal will be returned to investors, along with all of the Senior Notes' Principal.
Similarly, considering the results at the 5th, 1st, and 0.1th percentile levels from the mean: only 5%
of the time is more than 35% (84/237) of the Subordinated Principal expected to be lost. There is only a 1% chance
that the entire Subordinated Principal will be wiped out (a loss of $259M at the 99th percentile); thus, there
is almost a 99% chance that the Senior Principal will remain intact through to maturity. For comparison, the historical four-year default rate for Ba-rated bonds is about 9%.1 In a 1-in-1000 worst case,
or at the 0.1th percentile level, 88% of the Senior Principal ((640-237)/460) will be lost along with all
of the Subordinated Principal. This is roughly consistent with the historical four-year default experience for Aaa-rated bonds.
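The arithmetic of the preceding two paragraphs can be collected into a short sketch. The figures come from Table 3, and the allocation rule (equity reserve first, then the Subordinated Notes, then the Senior Notes) follows the tranche structure described earlier; this is an illustration of the worst loss calculation, not part of CreditManager itself.

```python
# Sketch: allocate worst-case losses from Table 3 across the BISTRO tranches.
EDL = 24.0             # Expected Default Loss at maturity, $M
EQUITY_RESERVE = 32.0  # First-loss equity reserve, $M
SUBORDINATED = 237.0   # Subordinated Notes principal, $M
SENIOR = 460.0         # Senior Notes principal, $M

loss_from_mean = {"10th": 42.0, "5th": 92.0, "1st": 267.0, "0.1th": 648.0}

for pct, loss in loss_from_mean.items():
    net = EDL + loss - EQUITY_RESERVE          # loss borne by note holders, $M
    sub_loss = min(net, SUBORDINATED)          # subordinated tranche absorbs losses first
    sen_loss = min(max(net - SUBORDINATED, 0.0), SENIOR)  # then the senior tranche
    print(f"{pct:>5} percentile: net loss ${net:.0f}M, "
          f"subordinated {sub_loss / SUBORDINATED:5.1%}, "
          f"senior {sen_loss / SENIOR:5.1%}")
```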
The upshot from the results of this Worst Loss Analysis of the BISTRO reference portfolio is that the
tranches have a great likelihood of enduring until maturity and thus investors will receive their full
coupon (or yield to maturity). Investors in the BISTRO can perform this type of worst loss analysis
periodically using updated credit ratings for the underlying names and market data from the CreditMetrics web site. Further research on the BISTRO can, perhaps, attempt to relate the coupons earned
by the tranches to the losses incurred for various scenarios.

1 See, for example, Carty, Lea and Dana Lieberman, Corporate Bond Defaults and Default Rates 1938-1995, Moody's Investors Service, 1996.


Using Multiple Databases


Rob Fraser
RiskMetrics Group Europe
rob.fraser@riskmetrics.com

Some CreditManager 1.0 users found that monolithic databases containing an unwieldy number of
records of obligors and exposures limited the flexibility and speed of the application. Unlike its predecessor, Version 2.0 gives the user the ability to divide a large database into several components. Thus,
the speed of the obligor and exposure editors is increased dramatically, and the organization, management, and sharing of portfolio data are vastly improved.
When using multiple databases in CreditManager 2.0, the user can analyze individual databases by
using a standard configuration tool available on the CreditManager installation CD-ROM. Thus, it is
possible to share exposure and obligor data among multiple portfolios without needing to repeatedly
export and import large amounts of data from one portfolio to another. A shared copy of a central
master database can be accessed by all users within an organization's network to work with the same
base portfolios. Version 2.0 also allows users to keep saved copies of an existing database, to which
the user can revert to undo changes.

The Default Database


CreditManager 2.0 uses a flat file Paradox database which contains records of all market data, reports,
obligors, and exposures. After installation is complete, the default database will be located in the PC
file system in a folder in the CreditManager 2.0 directory called Data2. The Data2 folder contains all
the market data and sample portfolio information generated when CreditManager 2.0 is first opened.
If the user imports new obligors or exposures, or revises market data, the default Data2 database is
automatically updated. Again, while it is possible to hold relatively large amounts of data in any single
database, as was the case with Version 1.0, Version 2.0 allows for the creation and use of multiple
databases, thereby adding overall flexibility as well as the option to provide multi-user access to a
centralized database.
Creating additional databases
The easiest way to create a new database is
to copy an existing database folder, such as the
default Data2 folder. The following screen
shot shows the file system in which resides the
new, duplicated database, NewDataBase. To
select this NewDataBase for use with CreditManager 2.0, the user will need to use the Reconfigure Tool, which is installed with
CreditManager 2.0 from the CD. After closing
the CreditManager 2.0 application, the user can
run the Reconfigure program by selecting from
the CreditManager 2.0 items on the Start Menu.


The above is a shot of the first pop-up screen. The location of the new database is to be entered in the
Database Path field, as shown. When the user clicks OK, a confirmation message will pop up,
asking the user to confirm the change to the CreditManager Registry entries. Once the user has confirmed the change, she can start CreditManager 2.0 and it will use the new database1.
The user can empty this new database, load obligors, exposures, and data into it, and, at any time,
revert to the default database by closing CreditManager 2.0 and then using the Reconfigure Tool. This
procedure will work for database files located on a network server as well as on a user's local hard
disk.

1 A problem can occur if the files within the new database are marked read-only. CreditManager 2.0 will detect such files
upon launch and produce an error message. This problem can occur when a database is archived to a CD-ROM and then copied onto a Windows NT file system. Changing the file access permissions to read-write will solve the problem.


Conditional Approaches for CreditMetrics Portfolio Distributions


Christopher C. Finger
RiskMetrics Group
chris.finger@riskmetrics.com

It is well known that the CreditMetrics model relies on Monte Carlo simulation to calculate the full
distribution of portfolio value. Taking this approach, independent scenarios are generated in which
the future credit rating of each obligor in the portfolio is known and correlations are reflected so that
highly correlated obligors, for example, default in the same scenario more frequently than less correlated obligors. In each scenario, the credit rating of the obligors determines the value of the portfolio; accumulating the value of the portfolio in each scenario allows us to estimate descriptive
statistics for the portfolio, or even to examine the shape of the distribution itself.
While the CreditMetrics Monte Carlo approach is attractive for its flexibility, it suffers from relatively slow convergence. Any statistic obtained through Monte Carlo is subject to simulation noise, but
this noise is slower to disappear in our model than it is in the case of models such as RiskMetrics,
where the distributions of individual assets are continuous. We shall see that by performing simulations as we do currently, we fail to take full advantage of the model's structure. Specifically, we will
see that once we condition on the industry factors that drive the model, all defaults and rating changes
are independent. Though prior studies (Koyluoglu and Hickman (1998), Finger (1998), and Gordy
(1998)) have addressed this fact, they have focused more on using this to facilitate comparisons between CreditMetrics and other models. Intuitively, conditional independence is a useful feature,
since there is a wealth of machinery upon which we may call to aggregate independent risks. In this
article, we will illustrate how to view the CreditMetrics model in a conditional setting, and use conditional independence to exploit a variety of established results. This will provide us with a toolbox
of techniques to improve on the existing Monte Carlo simulations.
We begin by illustrating the conditional approach with a simple example. Next, we apply three different techniques to either approximate or compute directly the conditional portfolio distribution, and
show how these techniques may be used to enhance our Monte Carlo procedure. Finally, we relax the
assumptions of the simple example, and show how these techniques may be applied in the general
case.

The simplest case


To begin, let us assume the simplest possible credit portfolio. We assume a portfolio of N loans of
equal size, which are worth zero in the case of default and 1/N otherwise.1 Assume each loan has a
probability p of defaulting, and let Z_i be the normalized asset value change for the i-th obligor. Then
the model is simply that the i-th obligor defaults if Z_i < α, where α = Φ⁻¹(p) is the default threshold.2
Suppose that each normalized asset value change can be expressed by

[1]   Z_i = w·Z + √(1 − w²)·ε_i,

where Z is the (normalized) return on a common market index, ε_i is the idiosyncratic movement for
this obligor, and w is the common weight of each obligor on the market index. We assume, as always,
that Z, ε_1, ε_2, …, ε_N are independent, normally distributed random variables, with mean zero and
variance one.

1 Thus, regardless of the number of loans N, the total size of the portfolio is one. The size of the loans is arbitrary, but this formulation will aid our exposition later.

2 Throughout, we will use Φ to denote the cumulative distribution function (CDF), and φ the density function, for the standard normal distribution.



Readers will recognize this as the constant correlation case, with all pairwise asset correlations set to w².
Under the standard approach, we observe that it is impractical to compute all of the joint default probabilities (for instance, the probability that two obligors default together, or that a given four obligors
default while seven others do not) directly by integrating the multivariate distribution of asset values.
For this reason, we obtain the portfolio distribution through Monte Carlo simulation. We generate a
large number of independent draws of Z and an equal number of independent draws of ε_1, ε_2, …, ε_N,
and then produce the scenarios for the Z_i through Eq. [1].3 Given the draws of Z_i, we may identify
which obligors default in each scenario, tabulate the portfolio value in each scenario, and produce
the portfolio distribution.
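For reference, a minimal sketch of this standard Monte Carlo procedure for the homogeneous portfolio of Eq. [1], with illustrative parameter values; the conditional methods below aim to improve on its convergence.

```python
# Sketch: standard Monte Carlo for the homogeneous one-factor portfolio of Eq. [1].
import numpy as np
from scipy.stats import norm

p, w, N, trials = 0.05, 0.50, 200, 10_000   # illustrative parameters
alpha = norm.ppf(p)                         # default threshold, alpha = Phi^{-1}(p)

rng = np.random.default_rng(0)
Z = rng.standard_normal(trials)             # market factor, one draw per scenario
eps = rng.standard_normal((trials, N))      # idiosyncratic terms
asset = w * Z[:, None] + np.sqrt(1 - w**2) * eps   # Eq. [1]

defaults = asset < alpha                    # True where an obligor defaults
V = 1.0 - defaults.mean(axis=1)             # portfolio value: surviving loans / N

print("mean value   ", V.mean())
print("1st pct value", np.percentile(V, 1))
```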
While this is a natural approach, it suffers from a number of drawbacks. First, it is necessary to draw
a large number of random numbers (1,000 × (N + 1) for 1,000 Monte Carlo trials). This is costly in
terms of the number of operations and the memory required, and potentially taxing on a random number generator if the number of obligors is very large. Second, as mentioned before, convergence (that
is, the rate estimated statistics approach their true value as a function of the number of scenarios used)
is slow. One reason for this, intuitively, is that there is little information in each trial. That is, if p
were equal to 1% (and thus α equal to −2.33), one trial where some Z_i is equal to −2 and another
where Z_i is equal to 0 are the same (both produce a "no default" for the i-th obligor), and yet the first
trial is in a sense closer to a default. We would like to exploit the idea that not all non-default trials
are alike to make our simulations more efficient. Another aspect of the slow convergence is that relative risk rankings may be spurious. For instance, in our simple model clearly no loan is more risky
than any other, yet it is certainly plausible that in a set of Monte Carlo trials, some obligors will be
identified as defaulting more frequently than others, which produces the misleading conclusion that
one obligor, on a marginal basis, contributes more risk to the portfolio.
In order to improve our simulation approach, we first observe that once we have fixed the market
factor Z , everything else that happens to the obligors is independent. More formally, the obligors are
conditionally independent given Z . The conditional independence will prove crucial, as it transforms
the complex problem of aggregating correlated exposures into the well understood problem of convolution, or the aggregation of independent exposures. Returning to Eq. [1], we see that if we fix Z ,
the condition that a given obligor defaults becomes
[2]   ε_i < (α − wZ) / √(1 − w²).

Since ε_i follows the standard normal distribution, the probability, given Z, that Eq. [2] occurs is given by

[3]   p(Z) = Φ( (α − wZ) / √(1 − w²) ).

3 Equivalently, we could build the correlation matrix for the Z_i and generate scenarios using the Cholesky decomposition of this matrix. We choose the other method here mostly for ease of exposition.



We refer to p ( Z ) as the conditional default probability. Chart 1 illustrates the effect of different Z
values on the conditional asset distribution, and on the conditional default probability. Observe that
when Z is negative (the market factor decreases, indicating a bad year), the obligors are closer to
defaulting, and their conditional default probability is greater; when Z is positive, the opposite is
true.
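A small sketch of the conditional default probability of Eq. [3], evaluated at the Z values shown in Chart 1 (the parameter values p = 5% and w = 50% are the illustrative ones used later in the article).

```python
# Sketch: conditional default probability p(Z) = Phi((alpha - w*Z) / sqrt(1 - w^2)).
import numpy as np
from scipy.stats import norm

def cond_default_prob(Z, p=0.05, w=0.50):
    alpha = norm.ppf(p)                       # default threshold
    return norm.cdf((alpha - w * Z) / np.sqrt(1 - w**2))

for Z in (-2.0, 0.0, 2.0):
    print(f"Z = {Z:+.0f}: p(Z) = {cond_default_prob(Z):.3f}")
# A negative market factor (a "bad year") raises the conditional default
# probability; a positive one lowers it.
```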
Chart 1
Unconditional asset distribution, and conditional distributions with positive and negative Z (Z = −2 and Z = 2). Vertical axis: relative frequency; horizontal axis: asset value change.

The strength of the dependence on Z is a function of the index weight w . When w is close to one
(meaning asset correlations are high), the conditional default probability is most affected by the market; when w is lower, more of the obligor randomness is due to the idiosyncratic term, and the conditional default probability is less affected. At the extreme, when w is zero, there is no dependence
on the market, and the conditional default probability is always just p , regardless of the value of Z .
See Chart 2.
Before moving on to our new portfolio approaches, we point out that the conditional framework provides us with a convenient way to decompose and compute the variance of the portfolio. In general,
we may decompose the variance of any random variable as
[4]

Variance = ( Variance of conditional mean ) + ( Mean of conditional variance ) .

For our case, the conditional mean of the portfolio value V (recall that each loan is either worth zero
or 1/N) is 1 − p(Z), the variance of which is just the variance of p(Z). Since the expectation of
p(Z) is p, this variance is E[p(Z)²] − p². To compute the first term, we evaluate the integral4

4 See Vasicek (1997) for this and other details of the distribution of p(Z). In that article, this distribution is referred to as the normal inverse.


[5]   E[p(Z)²] = ∫ φ(z) p(z)² dz = ∫ φ(z) Φ( (α − wz) / √(1 − w²) )² dz = Φ₂(α, α; w²),

where Φ₂(·, ·; w²) is the bivariate normal CDF with correlation w². Thus, the first term in Eq. [4],
the variance of the conditional portfolio mean, is equal to Φ₂(α, α; w²) − p². We may think of this
as the portfolio variance that is due to moves in the market factor.
Chart 2
Conditional default probability as a function of the market factor, for index weights w = 0%, 10%, 20%, and 30%. Vertical axis: conditional default probability (0% to 7%); horizontal axis: market factor.

To compute the second term in Eq. [4], we utilize the conditional independence of the individual obligors. In particular, given Z, defaults are independent and occur with probability p(Z). Thus, the
conditional variance of the value of an individual loan is p(Z)(1 − p(Z))/N².5 Since the loan values
are conditionally independent, the conditional portfolio variance is the sum of the individual conditional variances, or p(Z)(1 − p(Z))/N. Taking expectations and using Eq. [5] again, we see that the
mean of the portfolio conditional variance is (p − Φ₂(α, α; w²))/N. Putting everything together, we
see that the portfolio variance is given by

[6]   Var(V) = (Φ₂(α, α; w²) − p²) + (p − Φ₂(α, α; w²))/N,

where the first term is the variance based on the market and the second is the idiosyncratic variance.

5 The N² term in the denominator is due to the size of the individual loans.
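The decomposition in Eq. [6] is straightforward to evaluate numerically; the sketch below uses the bivariate normal CDF from a standard library, with the article's illustrative parameters.

```python
# Sketch: portfolio variance decomposition of Eq. [6]:
#   Var(V) = (Phi2(alpha, alpha; w^2) - p^2) + (p - Phi2(alpha, alpha; w^2)) / N
import numpy as np
from scipy.stats import norm, multivariate_normal

p, w, N = 0.05, 0.50, 200
alpha = norm.ppf(p)
rho = w**2                       # pairwise asset correlation
phi2 = multivariate_normal(mean=[0.0, 0.0],
                           cov=[[1.0, rho], [rho, 1.0]]).cdf([alpha, alpha])

market_var = phi2 - p**2                 # variance of the conditional mean
idio_var = (p - phi2) / N                # mean of the conditional variance
total_var = market_var + idio_var

print("portfolio std dev        ", np.sqrt(total_var))
print("% of variance from market", market_var / total_var)
```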



In Chart 3, we observe the percentage of portfolio variance that is due to the market, as a function of
correlation level and portfolio size. As we would expect, for higher correlations and larger portfolios,
more of the portfolio variance is explained by the market.
Chart 3
Percentage of portfolio variance due to market moves, as a function of the index weight w (0% to 60%), for portfolio sizes N = 100, 200, 500, and 1000.

Three methods to obtain the portfolio distribution


In this section, we use our simple example and the conditional setting described above to illustrate
three alternatives to the current Monte Carlo approach. We begin with a method that relies on a rough
approximation, but that can be implemented in a simple spreadsheet. The next two methods rely on
successively weaker assumptions, but require more involved numerical techniques.
The Law of Large Numbers (LLN) method6
The Law of Large Numbers is a fundamental result in probability that states, loosely, that the sum of
a large number of independent random variables will converge to its expectation. For instance, this
is the result that guarantees that, as we continue to flip the coin more and more times, the proportion
of flips which result in heads will converge to one half. Likewise, we may apply this result to our
portfolio once we have conditioned on Z. That is, we suppose that our portfolio is large enough so
that once we have fixed the market variable, the proportion of obligors that actually defaults is exactly p(Z), and that the portfolio value is equal to its conditional mean, 1 − p(Z). In effect, we assume that the portfolio distribution is just the distribution of 1 − p(Z), and that the second term in
Eq. [6] is negligible. Note that the plots in Chart 3 serve as a guide to how valid this assumption might be.

6 This subsection draws significantly from Vasicek (1997), which provides a more rigorous treatment of the limit arguments than is presented here.



Intuitively, this assumption is that of perfect diversification in size; the portfolio only has
risk due to its sensitivity to the market factor, not due to having large exposures to any single obligor.
The most desirable feature of the LLN method is that it is trivial to compute percentiles of the portfolio distribution. Suppose we wish to find the 1st percentile portfolio value (that is, the 99% worst
case value). Since we have assumed that the portfolio value depends only on Z, and this dependence
is monotonic, Z's percentiles map to the portfolio percentiles in a one-to-one fashion. Thus, since the
1st percentile of Z is −2.33, the 1st percentile portfolio value is 1 − p(−2.33). Using this same logic,
we see that for a particular value v, the probability that the portfolio is worth less than v is given by

[7]   Pr{V < v} = Pr{1 − p(Z) < v} = Φ( (α − √(1 − w²)·Φ⁻¹(1 − v)) / w ).

Differentiating the expression above with respect to v , we obtain the probability density for the portfolio:

[8]   f(v) = ( √(1 − w²) / w ) · φ( (α − √(1 − w²)·Φ⁻¹(1 − v)) / w ) / φ( Φ⁻¹(1 − v) ).
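A sketch of the LLN formulas: the closed-form CDF of Eq. [7] and the percentile mapping described above, with the illustrative parameters p = 5% and w = 50%.

```python
# Sketch: LLN method -- closed-form CDF (Eq. [7]) and percentiles of V = 1 - p(Z).
import numpy as np
from scipy.stats import norm

p, w = 0.05, 0.50
alpha = norm.ppf(p)

def lln_cdf(v):
    """Eq. [7]: Pr{V < v} = Phi((alpha - sqrt(1 - w^2) * Phi^{-1}(1 - v)) / w)."""
    return norm.cdf((alpha - np.sqrt(1 - w**2) * norm.ppf(1 - v)) / w)

def lln_percentile(q):
    """q-th percentile of V: since V = 1 - p(Z) is increasing in Z,
    it is simply 1 - p(z_q) with z_q = Phi^{-1}(q)."""
    z_q = norm.ppf(q)
    return 1 - norm.cdf((alpha - w * z_q) / np.sqrt(1 - w**2))

# These should reproduce the LLN column of Table 1 up to rounding.
for q in (0.50, 0.10, 0.05, 0.01, 0.001, 0.0004):
    print(f"{q:7.2%} percentile value: {lln_percentile(q):6.1%}")
```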

We present plots of the portfolio density and CDF in Charts 4 and 5.


Chart 4
Portfolio density function using the LLN method (p = 5%, w = 50%). Vertical axis: relative frequency; horizontal axis: portfolio value (90% to 100%).



Chart 5
Portfolio CDF using the LLN method (p = 5%, w = 50%). Vertical axis: cumulative probability; horizontal axis: portfolio value (80% to 100%).

Returning to the two drawbacks of the standard Monte Carlo framework, we see that this method certainly addresses the first (computational complexity). On the other hand, while the avoidance of any
Monte Carlo techniques guarantees that the method is not subject to simulation error, there is a significant chance of error due to the method's strong assumptions. This model error is certainly enough
to suggest not using this method for capital calculations, yet it is difficult to ignore the model's simplicity.
One potential application of the LLN method is for sensitivity testing. Since the computation of percentiles is so straightforward, it is simple to investigate the effect of increasing correlation levels or
default likelihoods. Furthermore, even with more complicated portfolio distributions, it is possible
to use the LLN method for quick sensitivity testing by first calibrating the two parameters, p and w
(we have assumed away any dependence on N), such that the mean and variance of the LLN method's distribution match the mean and variance of the more complicated distribution. It is then reasonable to treat w as an effective correlation parameter, and investigate the sensitivity of percentile
levels to changes in this parameter. In this way, we use the distribution of 1 − p(Z) as a simple two-parameter
family which is representative of a broad class of credit portfolios. In principle, this is similar to Moody's use of the diversity score in their ratings model for collateralized bond and loan
obligations (see Gluck (1998)).
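As a sketch of this calibration: the target mean and variance below are purely illustrative; p is set from the mean, and w is found by matching the first term of Eq. [6] (the LLN family ignores the idiosyncratic term).

```python
# Sketch: calibrate the LLN two-parameter family (p, w) to a target mean and
# variance, then treat w as an "effective correlation" for sensitivity tests.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

target_mean, target_var = 0.971, 0.0046**2   # illustrative target moments

p = 1 - target_mean                          # mean of 1 - p(Z) is 1 - p
alpha = norm.ppf(p)

def var_of_conditional_mean(w):
    """Variance of 1 - p(Z): Phi2(alpha, alpha; w^2) - p^2 (first term of Eq. [6])."""
    rho = w**2
    phi2 = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([alpha, alpha])
    return phi2 - p**2

# The variance is increasing in w, so a simple root search recovers it.
w = brentq(lambda w: var_of_conditional_mean(w) - target_var, 1e-4, 0.999)
print(f"calibrated p = {p:.3%}, effective w = {w:.1%}")
```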
We have seen that the conditional approach can provide useful, quick, but rough approximations of
the credit portfolio distribution. While this is helpful, we would like to exploit the technique for
something more advanced than back-of-the-envelope calculations, and in the next section we move to
less rigid assumptions and a more involved technique.



The Central Limit Theorem (CLT) method
Loosely again, the Central Limit Theorem establishes that (appropriately normalized) sums of large
numbers of independent random variables are normally distributed. It is said that because so many
of the quantities we encounter (stock price changes over a year, heights of people, etc.) are the result
of sums of independent random variables, the CLT is the explanation for why the normal distribution
is observed so frequently.
In our case, we use the CLT to justify the assumption that given Z , the conditional portfolio distribution is normally distributed. Thus, in contrast to the LLN method, where we assumed that given
Z , there was no variance, and therefore no randomness in the portfolio, here, we assume that the conditional portfolio variance is as stated in Eq. [6], but that the shape of the conditional portfolio distribution is normal.7 To contrast the LLN and CLT approaches, consider Chart 6. In the LLN
approach, we draw Z, which uniquely determines the portfolio value 1 − p(Z), indicated by the
heavy line in the chart. Note that there is no dependence on portfolio size. In the CLT approach, we
draw Z , and then assume that the portfolio is normally distributed, with mean given by the heavy
line, and standard deviation bands given by the appropriate lighter lines. Here, there is a dependence
on N , and further, we match two moments of the model distribution, whereas in the LLN approach,
we only match the expectation.
Chart 6
Conditional portfolio distribution as a function of Z: conditional mean, with standard deviation bands for N = 100 and N = 1000. Vertical axis: portfolio value (75% to 100%); horizontal axis: market factor.

The advantage of the CLT assumption is that conditional on the market factor, the portfolio distribution is tractable. Specifically, given Z, we know the portfolio's conditional mean, 1 − p(Z), and conditional variance, p(Z)(1 − p(Z))/N, and can then write the conditional probability that the
portfolio value V is less than some level v by

7 The true shape of the conditional portfolio distribution is binomial.



[9]   Pr_Z{V < v} = Φ( (v − (1 − p(Z))) / √( p(Z)(1 − p(Z)) / N ) ),

where Pr_Z denotes the conditional probability given Z. To compute the unconditional probability
that the portfolio value falls below v , we take the expectation of the expression in Eq. [9]:

[10]   Pr{V < v} = E[ Pr_Z{V < v} ] = ∫ φ(z) Φ( (v − (1 − p(z))) / √( p(z)(1 − p(z)) / N ) ) dz.

Although this integral is intractable analytically, it can be evaluated numerically by standard techniques. Thus, we can compute the entire portfolio distribution, which we present along with the LLN
results in Chart 7 and Chart 8. We also present selected percentile values in Table 1. In both the chart
and the table, we present CLT results for two choices of portfolio size N ; recall that in the LLN method, we do not account for N , and therefore our results will be the same for all portfolio sizes.
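A sketch of the CLT calculation: Eq. [10] is evaluated on a quadrature grid in the market factor, and percentiles are read off the resulting CDF; the parameters are those of Table 1.

```python
# Sketch: CLT method -- evaluate the integral in Eq. [10] on a grid of market
# factor values and invert the resulting CDF for percentiles.
import numpy as np
from scipy.stats import norm

p, w, N = 0.05, 0.50, 200                     # illustrative parameters
alpha = norm.ppf(p)

z = np.linspace(-8, 8, 4001)                  # quadrature grid for the market factor
dz = z[1] - z[0]
weights = norm.pdf(z) * dz                    # phi(z) dz
pz = norm.cdf((alpha - w * z) / np.sqrt(1 - w**2))   # conditional default probability

def clt_cdf(v):
    """Eq. [10]: integral of phi(z) * Phi((v - (1 - p(z))) / sqrt(p(z)(1 - p(z)) / N)) dz."""
    cond = norm.cdf((v - (1 - pz)) / np.sqrt(pz * (1 - pz) / N))
    return np.sum(weights * cond)

values = np.linspace(0.3, 1.0, 1401)          # candidate portfolio values
cdf = np.array([clt_cdf(v) for v in values])
for q in (0.50, 0.10, 0.05, 0.01, 0.001, 0.0004):
    print(f"{q:7.2%} percentile value: {np.interp(q, cdf, values):6.1%}")
```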
Chart 7
Portfolio density function for the CLT (N = 50, N = 200) and LLN methods (p = 5%, w = 50%). Vertical axis: relative frequency; horizontal axis: portfolio value (90% to 100%).



Chart 8
Portfolio CDF (lower tail) for the CLT (N = 50, N = 200) and LLN methods (p = 5%, w = 50%). Vertical axis: cumulative probability (0% to 4%); horizontal axis: portfolio value (60% to 80%).

Table 1
Percentile values for CLT and LLN methods.
p=5%, w=50%.

Percentile   CLT, N=50   CLT, N=200   LLN
50%          97.2%       97.2%        97.1%
10%          86.7%       87.4%        87.7%
5%           81.5%       82.5%        82.9%
1%           68.9%       70.5%        71.1%
10bp         51.4%       53.8%        54.6%
4bp          45.1%       47.6%        48.5%

We make two general observations. First, for larger values of N , the CLT and LLN methods give
more similar results; this is sensible, since it is at these values that the LLN assumption of zero conditional variance is more accurate. Second, the discrepancies between the two methods tend to be
greater at more extreme percentile levels. This is also to be expected, as the more extreme percentiles
correspond to market factor realizations where the conditional default probability, and therefore also
the conditional portfolio variance, is greater than at less extreme percentiles.
In the end, the CLT approach is a step up in complexity from the LLN case, in that we need to evaluate the integral in Eq. [10] to obtain the cumulative probability distribution for the portfolio. However, the complexity in the computation of this integral is still far less than that of the standard Monte
Carlo approach. Assuming we used the same number of sample points (say 1,000) in our evaluation
of Eq. [10] as we use scenarios in the Monte Carlo procedure8, the number of points necessary for
Monte Carlo is still N times greater than that for the CLT integration. Thus, where the Monte Carlo



approach treated the portfolio problem as having dimensionality N (that is, as having N degrees of
freedom), the CLT approach (like the LLN approach) treats the problem as having dimensionality 1.
This is the key advantage of this and all conditional approaches; we deal explicitly only with the factors that the obligors have in common, and use approximations or closed form techniques to handle
the factors that are idiosyncratic to the individual obligors.
On the other hand, the CLT approach is still subject to approximation errors, although less so than
the LLN approach. In the LLN case, we guaranteed that one moment of the portfolio distribution
would be exact, and assumed away the idiosyncratic variance; in the CLT case, we guarantee that the
first two moments of the portfolio distribution are exact, and make our assumptions about the shape
of the distribution given its variance. The price we pay (other than increased complexity) for this better approximation is the possibility of error in the numerical evaluation of the integral in Eq. [10].
The Moment Generating Function (MGF) method9
The last step in our hierarchy is to admit slightly more complexity, while modeling the entire portfolio distribution exactly. We will rely on the fact that a distribution is uniquely characterized by its
moment generating function (MGF). That is, if E e^{tX} = E e^{tY} for two random variables X and Y
and all values of t,10 then X and Y must have the same probability distribution. Intuitively, all of
the information about a random variable X is contained in the MGF E e^{tX}, considered as a function of t. This is plausible
if we observe that the mean of X is equal to the first derivative of the MGF evaluated at t = 0, the expected
value of X² is equal to the second derivative evaluated at t = 0, and in general, the n-th moment of X is equal to the n-th derivative evaluated at t = 0.11 Therefore, if we can obtain the
MGF of our portfolio value V, then in principle, we have obtained the portfolio distribution. In practice, obtaining the distribution from the MGF can be involved numerically, yet in cases (such as ours,
as we will see) where the MGF is a straightforward computation, the approach is very appealing.12
To compute the MGF for our portfolio distribution, we again rely on the conditional independence
of the individual obligors. Letting V_1, V_2, …, V_N denote the values of the individual loans, and E_Z
the expectation conditional on Z, we have that

[11]   E_Z e^{tV} = E_Z exp[ t(V_1 + … + V_N) ] = E_Z ( e^{tV_1} · … · e^{tV_N} ).

8 This actually is an extreme case, as we can obtain the same precision in the evaluation of Eq. [10] with a hundred sample points as we would for the Monte Carlo procedure with several thousand sample points.

9 The author wishes to thank Andrew Ulmer for help with the computations for this method.

10 It is only necessary that there exists an interval containing 0 such that the equality holds for all t in this interval.

11 See DeGroot (1986) or any standard probability text.

12 For integer-valued random variables (for example, the number of obligors which default in a given period), it is more common in practice to use the probability generating function, E[s^X], from which the probability distribution is simpler to obtain. This is the approach used by Nagpal and Bahar (1999). It would be reasonable to use this approach here, but we use the MGF approach to ease our generalizations later.



Now since the V_i are conditionally independent and have the same stand-alone distributions, we may write the right-hand side of Eq. [11] as the product of individual moment generating functions, and obtain

[12]   E_Z e^{tV} = E_Z e^{tV_1}\, E_Z e^{tV_2} \cdots E_Z e^{tV_N} = \left(E_Z e^{tV_1}\right)^N.

To complete the calculation of the conditional portfolio MGF, we recall that, conditional on Z, V_1 takes on the value 1/N with probability 1 - p(Z) and 0 with probability p(Z). Thus the conditional MGF for V_1 is

[13]   E_Z e^{tV_1} = (1 - p(Z))\,e^{t/N} + p(Z)\,e^{t\cdot 0} = e^{t/N}\left[1 - p(Z)\left(1 - e^{-t/N}\right)\right],

and the conditional portfolio MGF is

[14]   E_Z e^{tV} = e^{t}\left[1 - p(Z)\left(1 - e^{-t/N}\right)\right]^N.

Finally, to obtain the unconditional MGF, we take the expectation of Eq. [14] and obtain

[15]   E e^{tV} = e^{t}\int \phi(z)\left[1 - p(z)\left(1 - e^{-t/N}\right)\right]^N dz.

As was the case with Eq. [10], we must rely on numerical techniques to evaluate this integral. Note that in this setting, the CreditMetrics model looks similar to the CreditRisk+ model, with the only notable difference being the shape of the distribution of the conditional default probability p(Z).13

To obtain the portfolio distribution from Eq. [15], we apply the standard Fast Fourier Transform (FFT) inversion (see Press et al (1992)). This technique requires sampling Eq. [15] at values of t spaced equally around the unit circle in the complex plane, that is, at t = 2\pi i k/m, for k = 0, 1, \ldots, m-1, where for fastest results, m is an integer power of 2.14 Applying this technique requires more operations than does the CLT approach, but there are no approximation errors other than the possible error in evaluating Eq. [15] and the similar error resulting from using a finite number of sample points for the FFT.
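To make the inversion concrete, the following is a rough Python sketch of the idea; it is our own illustration and not code from the article. For a clean inversion it works with the generating function of the number of defaults D = N(1 - V), in the spirit of footnote 12, and it assumes the one-factor analogue of Eq. [19] for the conditional default probability. The parameters p = 5%, w = 50%, N = 200 mirror Chart 9, and m = 256 is chosen here so that all N + 1 possible outcomes are resolved.

```python
# A minimal sketch (not from the article) of the MGF/FFT approach behind Eq. [15].
import numpy as np
from scipy.stats import norm

p, w, N = 0.05, 0.5, 200
alpha = norm.ppf(p)                                   # default threshold

def p_cond(z):
    # one-factor analogue of Eq. [19]: conditional default probability given Z = z
    return norm.cdf((alpha - w * z) / np.sqrt(1.0 - w ** 2))

z, dz = np.linspace(-8.0, 8.0, 2001, retstep=True)    # quadrature grid for the factor Z
phi_w = norm.pdf(z) * dz

m = 256
s = np.exp(2j * np.pi * np.arange(m) / m)             # sample points on the unit circle

# generating function of the number of defaults D: integral of phi(z) [1 - p(z)(1 - s)]^N dz
pz = p_cond(z)[None, :]
gf = np.sum(phi_w * (1.0 - pz * (1.0 - s[:, None])) ** N, axis=1)

probs = np.real(np.fft.fft(gf)) / m                   # Pr{D = j}, j = 0, ..., m-1
port_value = 1.0 - np.arange(m) / N                   # portfolio value V = 1 - D/N

cdf_defaults = np.cumsum(probs)                       # Pr{D <= j}
print(port_value[np.searchsorted(cdf_defaults, 0.99)])  # approximately the 1% percentile of V
```

The tail levels produced this way should line up with the MGF column of Table 2 below.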
We present the results for all three techniques in Chart 9 and Table 2. There is very little discrepancy
between the three methods, and at more extreme percentiles, the MGF approach seems to agree more
closely with the LLN. Checking our results with 10,000 Monte Carlo simulations, we see that it is
impossible to conclude that any of the three methods gives incorrect results. Thus, even if the choice
between the methods is still unclear, we can be certain that all three provide significant improvements
over the slower Monte Carlo approach.

13. Gordy (1998) provides further details.

14. In our examples, we typically set m = 128.


Chart 9
Portfolio density function using MGF, CLT, and LLN methods.
p=5%, w=50%, N=200.
[Figure: relative frequency versus portfolio value (90% to 100%) for the MGF, CLT, and LLN density estimates.]

Table 2
Percentile levels using MGF, CLT, and LLN methods.
p=5%, w=50%, N=200.

Percentile    MGF      CLT      LLN
50%           97.0%    97.2%    97.1%
10%           87.4%    87.4%    87.7%
5%            82.5%    82.5%    82.9%
1%            70.6%    70.5%    71.1%
0.10%         54.4%    53.8%    54.6%
0.04%         48.6%    47.6%    48.5%

To this point, we have been able to avoid Monte Carlo simulations entirely, and to exploit the structure of the CreditMetrics model in a simple case to analytically estimate the portfolio distribution.
Unfortunately, as we stray from our simple case (in particular, as we allow more market factors), the
analytical techniques are no longer practical. However, we may still exploit the conditional framework by using the techniques we describe above, and sampling the factors through a Monte Carlo
procedure. This is the subject of the next section.

Extensions of the basic case


The three strong assumptions we used in the previous section were:

- only one market factor drove all of the asset value processes,
- there were only two possible credit states, and
- all of our exposures were identical; that is, every exposure had the same size, default probability, recovery rate, and weight on the market factor.
In this section, we will first present the framework and notation for a more general case, then discuss
how our three earlier methods can be extended into a Monte Carlo setting, and finally present results
for an example portfolio.

Framework and notation


For our more general case, we assume that there are two market factors (the extension to any arbitrary number will be clear) and that these two factors are independent.15 Denote the two market factors Z_1 and Z_2. Let r = 1, ..., m denote the possible rating states, with r = 1 corresponding to the highest rating and r = m corresponding to default. For i = 1, ..., N, let the i-th obligor be characterized by:

- weights w_{i,k} on the k-th factor, meaning that the obligor's asset value change is expressed as

[16]   Z_i = w_{i,1} Z_1 + w_{i,2} Z_2 + \sqrt{1 - w_{i,1}^2 - w_{i,2}^2}\,\epsilon_i,

- probabilities p_i^r of being in the r-th rating state at the horizon, and

- values v_i^r, which our exposure to the i-th obligor will be worth if the obligor is in rating state r at the horizon.
Rather than the single default threshold from the previous section, we now have multiple thresholds that are specific to the individual obligors. Let \alpha_i^r, r = 1, ..., m denote the rating thresholds for the i-th obligor, such that the obligor will be in (non-default) rating state r if Z_i < \alpha_i^r and Z_i \geq \alpha_i^{r+1}, and will default if Z_i < \alpha_i^m. We calibrate the thresholds as in Chapter 8 of the Technical Document, such that the probabilities that Z_i falls between them correspond to the obligor's transition probabilities. Thus, we set \alpha_i^1 = \infty, and

[17]   \alpha_i^r = \Phi^{-1}\left(\sum_{s=r}^{m} p_i^s\right), \quad r = 2, \ldots, m.
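For concreteness, a small sketch of this calibration (our own illustration; the transition probabilities below are hypothetical):

```python
# Sketch of the calibration in Eq. [17] for a single obligor, using a hypothetical
# vector of transition probabilities p_i^r (r = 1 is the best state, r = m is default).
import numpy as np
from scipy.stats import norm

p = np.array([0.02, 0.90, 0.05, 0.03])          # hypothetical; must sum to 1
tail = np.cumsum(p[::-1])[::-1]                 # tail[r-1] = sum of p_i^s for s >= r
alpha = norm.ppf(tail)                          # alpha[0] = Phi^{-1}(1) = +infinity
# In 0-based indexing: the obligor is in (non-default) state r if alpha[r] <= Z_i < alpha[r-1],
# and defaults if Z_i < alpha[-1].
print(alpha)
```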

Analogously to the previous section, we observe that given the values of Z_1 and Z_2, the obligor transitions are conditionally independent. Let p_i^r(Z_1, Z_2) denote the conditional probability that the i-th obligor migrates to rating r, given a realization of Z_1 and Z_2. Noting that the condition that Z_i is less than a threshold \alpha_i^r becomes

[18]   \epsilon_i < \frac{\alpha_i^r - w_{i,1} Z_1 - w_{i,2} Z_2}{\sqrt{1 - w_{i,1}^2 - w_{i,2}^2}},

15. Both Gordy (1998) and Koyluoglu and Hickman (1998) point out that the assumption of independence is not restrictive, since two correlated market factors may be expressed as linear combinations of two independent factors.


we obtain the conditional default probability as before:

[19]   p_i^m(Z_1, Z_2) = \Phi\left(\frac{\alpha_i^m - w_{i,1} Z_1 - w_{i,2} Z_2}{\sqrt{1 - w_{i,1}^2 - w_{i,2}^2}}\right),

and the conditional transition probabilities similarly:16

[20]   p_i^r(Z_1, Z_2) = \Phi\left(\frac{\alpha_i^r - w_{i,1} Z_1 - w_{i,2} Z_2}{\sqrt{1 - w_{i,1}^2 - w_{i,2}^2}}\right) - \Phi\left(\frac{\alpha_i^{r+1} - w_{i,1} Z_1 - w_{i,2} Z_2}{\sqrt{1 - w_{i,1}^2 - w_{i,2}^2}}\right), \quad r = 1, \ldots, m-1.
With the conditional transition probabilities in hand, we have all of the information necessary to describe the conditional portfolio distribution. For instance, the conditional mean is simply

[21]   m(Z_1, Z_2) = \sum_{i=1}^{N}\sum_{r=1}^{m} p_i^r(Z_1, Z_2)\, v_i^r.

The conditional variance and conditional MGF calculations are also straightforward:

[22]   \sigma^2(Z_1, Z_2) = \sum_{i=1}^{N}\sum_{r=1}^{m} p_i^r(Z_1, Z_2)\left(v_i^r - m_i(Z_1, Z_2)\right)^2,

where m_i(Z_1, Z_2) = \sum_{r=1}^{m} p_i^r(Z_1, Z_2)\, v_i^r denotes the conditional mean of our exposure to the i-th obligor, and

[23]   E_{Z_1, Z_2}\, e^{tV} = \prod_{i=1}^{N} E_{Z_1, Z_2}\, e^{tV_i} = \prod_{i=1}^{N} \sum_{r=1}^{m} p_i^r(Z_1, Z_2)\, e^{t v_i^r}.
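A sketch of these conditional calculations follows; it is our own illustration, and all of the obligor data (transition probabilities, factor weights, and state values) are hypothetical placeholders.

```python
# Illustrative sketch of Eqs. [19]-[22]: conditional transition probabilities and
# conditional portfolio moments given a realization (z1, z2) of the two market factors.
import numpy as np
from scipy.stats import norm

# hypothetical example data: 3 obligors, 3 rating states (state 3 = default)
probs = np.array([[0.90, 0.07, 0.03],
                  [0.85, 0.10, 0.05],
                  [0.95, 0.04, 0.01]])
alphas = norm.ppf(np.cumsum(probs[:, ::-1], axis=1)[:, ::-1])   # thresholds, as in Eq. [17]
weights = np.array([[0.4, 0.3], [0.5, 0.2], [0.3, 0.3]])        # factor weights (w_i1, w_i2)
values = np.array([[1.00, 0.95, 0.40],                          # v_i^r: value in each state
                   [1.00, 0.90, 0.35],
                   [1.00, 0.97, 0.50]])

def conditional_probs(alpha_i, w_i, z1, z2):
    # Eq. [19] (default state) and Eq. [20] (non-default states) for one obligor
    idio = np.sqrt(1.0 - w_i[0] ** 2 - w_i[1] ** 2)
    u = norm.cdf((alpha_i - w_i[0] * z1 - w_i[1] * z2) / idio)
    return np.append(u[:-1] - u[1:], u[-1])

def conditional_moments(alphas, weights, values, z1, z2):
    # Eq. [21] and Eq. [22]: conditional mean and variance of the portfolio value
    mean, var = 0.0, 0.0
    for alpha_i, w_i, v_i in zip(alphas, weights, values):
        p_i = conditional_probs(alpha_i, w_i, z1, z2)
        m_i = np.dot(p_i, v_i)                    # conditional mean of exposure i
        mean += m_i
        var += np.dot(p_i, (v_i - m_i) ** 2)      # conditional variance of exposure i
    return mean, var

print(conditional_moments(alphas, weights, values, 0.0, 0.0))
```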

We are now ready to discuss conditional Monte Carlo methods for the general case.

Conditional Monte Carlo using the LLN method


Recall from before that the key assumption for the LLN technique was that the conditional variance
is negligible, and that given the market factors, the portfolio value is exactly equal to its conditional
mean. Then since the conditional mean is a strictly increasing function of the market factor, we could
obtain percentiles for the portfolio by calculating the percentile for the market factor, and calculating
the conditional mean given this value.
In the general case, our assumption is the same: given the market factors, the portfolio value is exactly m(Z_1, Z_2). However, since we now have multiple market factors, we cannot simply carry over factor percentiles to obtain portfolio percentiles. Rather, to arrive at the distribution of m(Z_1, Z_2), we apply a Monte Carlo approach, generating a number of realizations of the two market factors, and evaluating the conditional mean in each case. We then estimate percentiles of the portfolio through the simulated percentiles of the conditional mean.

16. This procedure was discussed for the one factor case by Belkin, Suchower, and Forest (1998).
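A minimal sketch of this procedure, reusing the hypothetical data and the illustrative conditional_moments() helper from the previous sketch:

```python
# Sketch of the conditional LLN simulation: sample the market factors, evaluate the
# conditional mean (Eq. [21]) for each draw, and read percentiles off the draws.
import numpy as np

rng = np.random.default_rng(0)
draws = rng.standard_normal((625, 2))                        # realizations of (Z1, Z2)
cond_means = np.array([conditional_moments(alphas, weights, values, z1, z2)[0]
                       for z1, z2 in draws])
print(np.percentile(cond_means, 1))                          # LLN estimate of the 1% level
```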

Conditional Monte Carlo using the CLT method


For the CLT technique, our assumption in the general case does not change either. We assume that given a realization of the market factors, the portfolio distribution is conditionally normal, with mean m(Z_1, Z_2) and variance \sigma^2(Z_1, Z_2) given by Eq. [21] and Eq. [22], respectively. Given this assumption, the probability that the portfolio value falls below a particular level v is given by a generalization of the integral in Eq. [10]:

[24]   \Pr\{V < v\} = E\left[\Pr_{Z_1, Z_2}\{V < v\}\right] = \int\!\!\int \phi(z_1)\,\phi(z_2)\,\Phi\left(\frac{v - m(z_1, z_2)}{\sigma(z_1, z_2)}\right) dz_1\, dz_2.

With only two market factors, it may still be practical to evaluate this integral numerically; however, as the number of factors increases, Monte Carlo methods become more attractive. The Monte Carlo procedure is as follows:

1. Generate n pairs (z_{11}, z_{12}), (z_{21}, z_{22}), \ldots, (z_{n1}, z_{n2}) of independent, normally distributed random variates.

2. For each pair (z_{j1}, z_{j2}), compute the conditional mean m_j = m(z_{j1}, z_{j2}) and variance \sigma_j^2 = \sigma^2(z_{j1}, z_{j2}).

3. Estimate the portfolio mean by \hat{m} = \frac{1}{n}\sum_j m_j and the portfolio variance by

[25]   \hat{\sigma}^2 = \frac{1}{n}\sum_{j=1}^{n}\left(m_j - \hat{m}\right)^2 + \frac{1}{n}\sum_{j=1}^{n}\sigma_j^2.

4. Estimate the cumulative probability in Eq. [24] by the summation:

[26]   \Pr\{V < v\} = \frac{1}{n}\sum_j \Phi\left(\frac{v - m_j}{\sigma_j}\right).

Note that in step 3, we do not rely on any distributional assumptions, and admit only the error from
our Monte Carlo sampling. In step 4, we utilize the CLT assumption, and our estimation is subject to
error based on this as well as our sampling.
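The four steps can be sketched as follows (our own illustration, again reusing the hypothetical conditional_moments() helper from above):

```python
# Sketch of steps 1-4 of the conditional CLT procedure (Eqs. [25] and [26]).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
draws = rng.standard_normal((625, 2))                                  # step 1
mv = np.array([conditional_moments(alphas, weights, values, z1, z2)
               for z1, z2 in draws])
m_j, var_j = mv[:, 0], mv[:, 1]                                        # step 2

m_hat = m_j.mean()                                                     # step 3
var_hat = ((m_j - m_hat) ** 2).mean() + var_j.mean()                   # Eq. [25]

def cdf_estimate(v):
    return norm.cdf((v - m_j) / np.sqrt(var_j)).mean()                 # step 4, Eq. [26]

print(m_hat, var_hat, cdf_estimate(m_hat - 2.33 * np.sqrt(var_hat)))
```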

Conditional Monte Carlo using the MGF method


Our application of the MGF technique to the general case is similar. The generalization of Eq. [15],
using the result in Eq. [23], is

[27]   E e^{tV} = \int\!\!\int \phi(z_1)\,\phi(z_2)\,\prod_{i=1}^{N}\left(\sum_{r=1}^{m} p_i^r(z_1, z_2)\, e^{t v_i^r}\right) dz_1\, dz_2.

We implement a Monte Carlo approach to sample pairs (z_{j1}, z_{j2}) as before, evaluate the conditional MGF (the term in parentheses) in Eq. [27] for each pair, and aggregate the results to obtain an estimate of E e^{tV}. Obtaining the MGF for an adequate set of t values allows us to apply the FFT inversion technique described earlier. This technique gives us the entire portfolio distribution, with errors due only to sampling, but not to any assumptions.
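The aggregation step might be sketched as follows (our own illustration, reusing the hypothetical conditional_probs() helper above); the appropriate t grid and the subsequent inversion depend on how the portfolio values v_i^r are discretized, which we leave aside here and which the earlier single-factor sketch illustrates in a simple setting.

```python
# Sketch of the aggregation in Eq. [27]: average the conditional MGF of Eq. [23]
# over the sampled factor pairs.
import numpy as np

rng = np.random.default_rng(2)
n_draws, m_fft = 625, 256
t = 2j * np.pi * np.arange(m_fft) / m_fft
mgf = np.zeros(m_fft, dtype=complex)
for z1, z2 in rng.standard_normal((n_draws, 2)):
    cond = np.ones(m_fft, dtype=complex)
    for alpha_i, w_i, v_i in zip(alphas, weights, values):
        p_i = conditional_probs(alpha_i, w_i, z1, z2)
        cond *= (p_i[None, :] * np.exp(t[:, None] * v_i[None, :])).sum(axis=1)
    mgf += cond / n_draws
```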

Example
To illustrate the performance of our three proposed methods, we analyze an example portfolio. The
portfolio is comprised of exposures to 500 obligors, the ratings of which are distributed as in Table 3.
The exposures are well diversified by size (no single exposure accounts for more than 1% of the total)
and depend on two market factors; the dependence on the factors is such that the average asset correlation between obligors is about 30%. The portfolio has a mean value of 248.9 and a standard deviation of 2.56.
Table 3
Distribution of obligors across ratings.
Example portfolio.

Rating    Percent of obligors    Default prob. (%)
AAA       10                     0.02
AA        20                     0.04
A         30                     0.05
BBB       30                     0.17
BB        6                      0.98
B         4                      4.92

We apply the three methods using 625 trials in each case; for the MGF method, we use 256 values of t.17 In addition, we perform a standard Monte Carlo simulation using 50,000 scenarios. The results are presented in Charts 10 and 11 and Table 4.

17. This corresponds to an inverse FFT with order eight.


Chart 10
PDF for example portfolio.
[Figure: relative frequency versus portfolio value (240 to 254) for the CLT, MGF, and Monte Carlo estimates.]

Chart 11
CDF for example portfolio.
[Figure: cumulative probability (up to 6%) versus portfolio value (220 to 245) for the CLT, MGF, and Monte Carlo estimates.]


Table 4
Percentile levels for example portfolio.
Monte Carlo results represent upper and lower confidence estimates.

                                               Monte Carlo
Percentile    LLN      CLT      MGF      Lower        Upper
50%           249.3    249.3    249.3    249.4        249.4
10%           246.4    246.4    246.4    246.4        246.5
5%            244.8    244.7    244.7    244.3        244.5
1%            238.0    238.9    239.1    237.6        238.4
10bp          220.6    221.1    222.0    223.3        225.5
4bp           220.6    219.5    220.0    217.8        222.2
Time (sec)    36       46       263           1793

Our first observation is that all of the results are quite similar. In particular, there appears to be very little difference between the CLT and MGF results, suggesting that the assumption of conditional normality is appropriate, at least for this example. This is an encouraging result and would provide significant memory and computational savings if it holds generally. Even more encouraging is that with only 625 samples of the market factors, we obtain (with CLT and MGF) a good estimate of even the four basis point (one in 2,500) percentile. Precision in the more extreme percentiles is a drawback for the LLN method (and standard Monte Carlo), as we use order statistics as estimates.18 Thus, to capture the four basis point percentile, we would need more than 2,500 trials; to capture the 1 basis point percentile, we would need over 10,000. If we still need to use as many trials as in the standard Monte Carlo case, there is little reason to use conditional simulations. For this reason, LLN in the general case is of little use.

With regard to the cost of the computations, the times in Table 4, while not representing optimized implementations, serve as good indicators of the relative benefits of performing conditional simulations. Using CLT or MGF, we obtain the same precision as with Monte Carlo, with up to a factor of forty improvement in speed. However, as the number of market factors increases, so does the sensitivity of the MGF and CLT results to the sampled factor points. Further investigation is necessary into how many market factors can be accommodated using these methods, and whether sophisticated sampling techniques might allow these methods to be extended to more factors.

In addition, further investigation into other sensitivities of the conditional methods is necessary. In particular, we have assumed that the portfolios do not have significant obligor concentrations (such as the case where one obligor in a portfolio of 200 accounts for 10% of the total exposure). The LLN and CLT methods especially would be subject to more error in the presence of significant concentrations. Nevertheless, it is likely that for all but the most pathological cases, the error in our simulation methods is significantly less than the errors due to uncertainty in the model parameters.

18. For example, the estimate for the 1st percentile using Monte Carlo (and 50,000 trials) is the 500th smallest value.


Conclusion
We have discussed a number of possibilities for improving upon the standard Monte Carlo approach
used currently in CreditMetrics. All of the possibilities are based on the observation that given the
moves in the market factors that drive our obligors, the individual obligor credit moves are conditionally independent. This observation opens the door for us to utilize the many techniques and assumptions that are only applicable to the independent case. Further, this observation illustrates that
the real dimensionality (that is, the number of independent variables that we must treat) of the CreditMetrics problem is not the number of obligors, but the number of market factors. Utilizing these
facts, we obtain preliminary results that suggest that our new approaches provide the same accuracy
as standard Monte Carlo in a fraction of the time. Continued investigation is planned.

References
Belkin, Barry, Stephan Suchower, and Lawrence Forest. "A One-Parameter Representation of Credit Risk and Transition Matrices," CreditMetrics Monitor, Third Quarter, 1998.

DeGroot, Morris H. Probability and Statistics, 2nd ed. (Reading, MA: Addison-Wesley Publishing Company, 1986).

Finger, Christopher C. "Sticks and Stones," The RiskMetrics Group, LLC, 1998.

Gluck, Jeremy. "Moody's Ratings of Collateralized Bond and Loan Obligations," Conference on Credit Risk Modelling and the Regulatory Implications, Bank of England, 1998.

Gordy, Michael B. "A Comparative Anatomy of Credit Risk Models," Federal Reserve Board FEDS 1998-47, December 1998.

Koyluoglu, Ugur, and Andrew Hickman. "Reconcilable Differences," Risk, October 1998.

Nagpal, Krishan M. and Reza Bahar. "An Analytical Approach for Credit Risk Analysis Under Correlated Defaults," CreditMetrics Monitor, April 1999.

Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in C, 2nd ed. (Cambridge: Cambridge University Press, 1992).

Vasicek, Oldrich. "The Loan Loss Distribution," KMV Corporation, 1997.


The Valuation of Basket Credit Derivatives


David X. Li
RiskMetrics Group
david.li@riskmetrics.com

The rapidly growing credit derivative market has created a new set of financial instruments which can be used to manage the most important dimension of financial risk: credit risk. In addition to the standard credit derivative products, such as credit default swaps and total return swaps based upon a single underlying credit risk, many new products are now associated with a portfolio of credit risks. A typical example is the product with payment contingent upon the time and identity of the first or second-to-default in a given credit risk portfolio. Variations include instruments with payment contingent upon the cumulative loss before a given time in the future. The equity tranche of a CBO/CLO is yet another variation, where the holder of the equity tranche incurs the first loss. Deductibles and stop-loss features in insurance products could also be incorporated into the basket credit derivative structure. As the CBO/CLO market continues to expand, the demand for basket credit derivative products will most likely continue to grow. For simplicity's sake, we refer to all of these products as basket credit derivatives.
In the last issue of the CreditMetrics Monitor, Finger (1998) studies how to incorporate the building
block of credit derivatives, credit default swaps, into the CreditMetrics model. This article further
explores the possibility of incorporating more complicated basket credit derivatives into the CreditMetrics framework.
The actual practice of basket product valuation presents a challenge. To arrive at an accurate valuation, we must study the credit portfolio thoroughly. CreditMetrics provides a framework to study the
credit portfolio over one period. In this paper, we extend the framework to cover multi-period problems using hazard rate functions. The hazard rate function based framework allows us to model the
default time accurately - a critical component in the valuation of some basket products, such as first
or second-to-default credit derivatives.
To properly evaluate a credit portfolio, it is critical to incorporate default correlation. Surprisingly, the existing finance literature fails to adequately define or discuss default correlation, defining it only in terms of discrete events which dichotomize according to survival or non-survival at a critical period (one year, for example). We extend the framework by defining the correlation as the correlation between two survival times.

Additionally, we will explicitly introduce the concept of a copula function which, in the current CreditMetrics framework, has only been used implicitly. We will give some basic properties and common copula functions. Using a copula function and the marginal distribution of the survival time of each credit in a credit portfolio, we will build a multivariate distribution of the survival times. Using this distribution, we can value any basket credit derivative structure.
The remainder of this article is organized as follows: First, we present a description of some typical
basket credit derivative instruments. Second, we explore the characterization of default using hazard
rate functions. Third, we correlate credit risks using copula functions. Finally, we present numerical
examples of basket valuation.

Instrument Description
There are a variety of basket type credit derivative products, which we classify into the following
three broad categories.


Type I. Payment associated with the total loss over a fixed period of time
This product has a contingent payment based upon the total loss of a given credit portfolio over a fixed time horizon, such as one year. An investor who worries about her investment in an equity tranche of a CBO or CLO can buy a contract which pays the loss over a certain amount, or deductible. Suppose L_t is the cumulative total loss of a given credit portfolio at a prefixed time t in the future, such as 1 year, and D is the deductible. The payoff function for this instrument at time t is

[1]   P = \begin{cases} 0, & \text{if } L_t \le D, \\ L_t - D, & \text{if } L_t > D. \end{cases}

We could also have a stop-loss type payoff function, in which the total loss up to a given amount is paid by the contract purchaser. In this or the basic case, the payment is made at the end of the period and the payment only depends upon the cumulative loss at period end. Thus, this type of product is similar to European options.

Type II. Payment is associated with the cumulative loss across time
For this type of contract the payment is associated with the evolution of the loss function across time. For example, the contract could be structured in such a way that the payment starts if the cumulative loss of a given credit portfolio becomes larger than a lower limit L, and the payment continues after this point whenever a new loss occurs, until the cumulative loss reaches an upper limit H. Suppose we have a portfolio which has losses of 10, 13, 8, 0, 19 (in millions) in its first 5 years, and a lower limit of $15 million and an upper limit of $45 million. The payments under the loss protection would then be 0, 8, 8, 0, 14.
Table 1
An Example of Loss and Payment of a Type II Product.

Year               1     2     3     4     5
Loss              10    13     8     0    19
Cumulative loss   10    23    31    31    50
Payment            0     8     8     0    14
The loss and payment of this product are shown in Table 1, which illustrates the necessity of tracking the loss function L_t at different times in the future for this product type.
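As an illustration (our own sketch, not from the article), the payment rule can be computed from a loss path as follows; the call at the end reproduces the numbers in Table 1.

```python
def type2_payments(losses, lower, upper):
    # Payments begin once the cumulative loss exceeds `lower` and stop once it reaches `upper`.
    payments, cum, paid = [], 0, 0
    for loss in losses:
        cum += loss
        covered = min(max(cum - lower, 0), upper - lower)   # total amount payable to date
        payments.append(covered - paid)
        paid = covered
    return payments

print(type2_payments([10, 13, 8, 0, 19], lower=15, upper=45))   # [0, 8, 8, 0, 14]
```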
Type III. Payment is associated with the time and identity of the default
The payment of this product depends upon the time and identity of the first few defaults of a given credit portfolio. For example, the first-to-default contract pays an amount, either prefixed or associated with the identity of the first defaulted asset, in a given credit portfolio. A portfolio manager chooses a portfolio of three credit risks of face values $100, $150 and $200 million respectively, and buys credit protection against the first-to-default of the portfolio. If one of the credits defaults, she
receives the face amount of the defaulted asset, delivers the defaulted asset to the first-to-default protection seller, and the credit default swap terminates.
The cash flows for a first-to-default transaction are represented in Chart 1. If we suppose a portfolio manager, a North American insurance company, for example, thinks the three underlying assets are uncorrelated and believes that it is highly unlikely that more than one will default, the company can buy first-to-default protection from a European investor who seeks diversified credit exposure in North America.1
Chart 1
Cashflows for a first-to-default structure.
[Diagram: the portfolio manager (protection buyer) pays x basis points per annum to the credit protection seller, and receives par less the recovery amount following the first default among the reference assets AA, BB, CC.]

A Hazard Rate Function Approach to Default Studies


CreditMetrics provides a one-period framework in which we can obtain the profile of a credit portfolio at the end of one period, such as one year. This framework is sufficient to value the Type I basket
product and any derivative instruments with payment contingent upon the credit portfolio value at
period end.
However, for Type II and III basket products, we need to model either the cumulative loss function across time or the default times of a given credit portfolio. To this end, we introduce some standard techniques of survival analysis in statistics to extend our one-period framework to a multi-period one. Survival analysis has been applied to other areas, such as life contingencies in actuarial science and industrial life testing in reliability studies, which share many similarities with the credit problems we encounter here. We introduce a random variable to denote each credit's survival time (the time from today until the time of default2), which plays an important role in the valuation of any credit structure. The characterization of this variable is usually by a hazard rate function. The next section provides some basic hazard rate function concepts.

1. It is generally best to use credit risks with similar credit ratings and notional amounts. Otherwise, a very weak credit risk or a credit with a large notional amount would dominate the basket pricing.

2. The survival time is infinity if default never occurs.


The Characterization of Default
In the study of default, interest centers on a group of individual companies for each of which there is defined a point event, often called default (or survival), occurring after a length of time. We introduce a random variable called the time-until-default, or simply survival time, T_A for a security A, to denote this length of time. This random variable is the basic building block for the valuation of cashflows subject to default. To precisely determine time-until-default we need:

- an unambiguously defined time origin;
- a time scale for measuring the passage of time;
- a clear definition of default.

We choose the current time as the time origin to allow use of current market information to build credit curves. The time scale is defined in terms of years for continuous models, or number of periods for discrete models. For defaults, we must select from the various definitions provided by the rating agencies and the International Swap Dealers Association.

Survival Function
Consider a credit A the survival time of which is T_A. For notational simplicity we omit the use of the subscript A. The probability distribution function of the survival time T can be specified by the distribution function

[2]   F(t) = \Pr\{T \le t\},

which gives the probability that default occurs before t. The corresponding probability density function is f(t) = dF(t)/dt. Either specification can be used, whichever proves more convenient. In studying survival data it is useful to define the survival function

[3]   S(t) = 1 - F(t) = \Pr\{T > t\},

giving the upper tail area of the distribution, that is, the probability that the credit survives to time t. Usually, F(0) = 0, which implies S(0) = 1 since survival time is a positive random variable. The distribution function F(t) and the survival function S(t) provide two mathematically equivalent ways of specifying the distribution of the survival time. There are, of course, other alternative ways of characterizing the distribution of T. The hazard rate function, for example, gives the credit's default probability over the time interval [x, x+t] if it has survived to time x:

[4]   \Pr\{x < T \le x + t \mid T > x\} = \frac{F(x+t) - F(x)}{1 - F(x)} \approx \frac{f(x)\,t}{1 - F(x)}.

The hazard rate function

[5]   h(x) = \frac{f(x)}{1 - F(x)}


has a conditional probability density interpretation as the probability density function of default at exact age x, given survival to that time. The relationship between the hazard rate function, the distribution function, and the survival function is

[6]   h(x) = \frac{f(x)}{1 - F(x)} = -\frac{S'(x)}{S(x)}.

The survival function can then be expressed in terms of the hazard rate function using

[7]   S(t) = \exp\left(-\int_0^t h(s)\,ds\right).

Using the hazard rate function, we can calculate different default and survival probabilities. For example, we can calculate the conditional probability that a credit survives to year x and then defaults during the following t years as follows:

[8]   {}_t q_x = 1 - \exp\left(-\int_0^t h(x+s)\,ds\right).

If t = 1, the series {}_t q_x for x = 0, 1, ..., n gives the conditional default probability of a credit in the next year if the credit survives to the year x.3 The probability density function of the survival time of a credit can also be expressed using the hazard rate function as follows:

[9]   f(t) = h(t)\exp\left(-\int_0^t h(s)\,ds\right).

A typical assumption is that the hazard rate is a constant, h, over a certain period, such as [x, x+1]. In this case, the density function is

[10]   f(t) = h\,e^{-ht},

which shows that the survival time follows an exponential distribution with parameter h. Under this assumption, the survival probability over the time interval [x, x+t], for 0 < t \le 1, is

[11]   {}_t p_x = \exp\left(-\int_0^t h(s)\,ds\right) = e^{-ht} = (p_x)^t,

where p_x is the probability of survival over a one-year period. This assumption can be used to scale down the default probability over one year to a default probability over a time interval of less than one year. The aforementioned results can be found in survival analysis books, such as Bowers (1986).

3. Following the actuarial convention, we simply omit the subscript t when t = 1.


Building Survival Functions in Practice
In the last section we introduced a new random variable to denote the survival time for each credit. We also presented various equivalent ways to describe this new random variable. As in traditional survival analysis, we use the hazard rate function to describe a credit's default properties. If we know the hazard rate function, we can calculate any default or survival probability, as well as the expected default time. In this section we discuss how to obtain the hazard rate function in practice.

There exist three methods to obtain the term structure of default rates:

- obtaining historical default information from rating agencies;
- taking the Merton option-theoretic approach;
- taking the implied approach using market prices of defaultable bonds or asset swap spreads.
Rating agencies like Moody's publish historical default rate studies regularly. In addition to the commonly cited one-year default rates, they also present multi-year default rates. From these rates we can obtain the hazard rate function. For example, Moody's publishes weighted average cumulative default rates from 1 to 20 years. For the B rating, the first 5 years' cumulative default rates, in percentage, are 7.27, 13.87, 19.94, 25.03 and 29.45.4 From these rates we obtain the marginal conditional default probabilities: the first marginal conditional default probability in year one is simply the one-year default probability, 7.27%. The other marginal conditional default probabilities can be obtained using the following formula:

[12]   {}_{n+1}q_x = {}_n q_x + {}_n p_x \cdot q_{x+n},

which simply states that the probability of default over the time interval [0, n+1] is the sum of the probability of default over the time interval [0, n], plus the probability of survival to the end of the n-th year followed by default during the following year. Using Eq. [12] we have the marginal conditional default probability:

[13]   q_{x+n} = \frac{{}_{n+1}q_x - {}_n q_x}{1 - {}_n q_x},

which results in the marginal conditional default probabilities in years 2, 3, 4, 5 of 7.12%, 7.05%, 6.36% and 5.90%. If we assume a piecewise constant hazard rate function over each year, then we can obtain the hazard rate function using Eq. [8]. The hazard rate function is given in Chart 2.

4. These numbers are taken from Carty and Lieberman (1997).
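The calculation behind Eq. [13] and Chart 2 can be sketched as follows (our own illustration, using the B-rated cumulative default rates quoted above):

```python
# Marginal conditional default probabilities and piecewise-constant hazard rates
# from the 1- to 5-year cumulative default rates for the B rating.
import numpy as np

cum = np.array([0.0727, 0.1387, 0.1994, 0.2503, 0.2945])

# Eq. [13]: q_{x+n} = ( {n+1}q_x - {n}q_x ) / ( 1 - {n}q_x )
marginal = np.diff(np.concatenate(([0.0], cum))) / (1.0 - np.concatenate(([0.0], cum[:-1])))

# piecewise-constant hazard over each year: q = 1 - exp(-h), so h = -ln(1 - q) (cf. Eq. [8])
hazard = -np.log(1.0 - marginal)
print(marginal)   # approximately [0.0727, 0.0712, 0.0705, 0.0636, 0.0590]
print(hazard)
```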


Chart 2
Hazard rate function for B rating.
[Figure: hazard rate (6% to 8%) versus years, decreasing over the five-year horizon.]

This hazard rate function is a decreasing function of time, which implies that the further into the future we look, the lower the marginal default probability. Intuitively, a B rated credit tends to be upgraded rather than downgraded, as grade B is the lowest grade in Carty and Lieberman's (1997) cumulative default rate table.

Merton's option-based structural models could also produce a term structure of default rates. Using the above, we can derive the hazard rate function.

Alternatively, we can take the implicit approach by using market observable information, such as asset swap spreads or risky corporate bond prices. This is the approach used by most credit derivative trading desks. The extracted default probabilities reflect the market-agreed perception today about the future default tendencies of the underlying credit. For details about this approach, we refer to Li (1998).5

Correlating Survival Times Using Copula Functions


Let us study some problems of an n credit portfolio. Using either the historical approach or the implicit approach, we can construct the marginal distribution of the survival time for each of the n credit risks in the portfolio. If we assume mutual independence among the n credit risks, we can study any problem associated with the portfolio. However, the independence assumption for the n credit risks is obviously not realistic; in reality, a group of credits' default rates tend to be higher in a recession and lower when the economy is booming. This implies that each credit is subject to the same macroeconomic environment, and that there exists some form of positive dependence among the credits. In order to introduce a correlation structure into the portfolio, we must determine how to specify a joint distribution of survival times with given marginal distributions.

5. The hazard rate function is usually called a credit curve due to the analogy between the yield curve and the hazard rate function.
Obviously, this problem has no unique solution. Generally speaking, knowing the joint distribution
of n random variables allows us to derive the marginal distributions and the correlation structure
among the n random variables, but not vice versa. There are many different techniques in statistics
which allow us to specify a joint distribution function with given marginal distributions and a correlation structure. Among them, a copula function is a simple and convenient approach. We give a brief
introduction to the concept of a copula function in the next section.

Definition and Basic Properties of a Copula Function


A copula function (the function C below) links or marries univariate marginals to their full multivariate distribution. For m uniform random variables U_1, U_2, ..., U_m, the joint distribution function is expressed as6

[14]   C(u_1, u_2, \ldots, u_m) = \Pr\{U_1 < u_1, U_2 < u_2, \ldots, U_m < u_m\}.

We see that a copula function is just a cumulative multivariate uniform distribution function. For given univariate marginal distribution functions F_1(x_1), F_2(x_2), \ldots, F_m(x_m), the function

[15]   F(x_1, x_2, \ldots, x_m) = C(F_1(x_1), F_2(x_2), \ldots, F_m(x_m)),

which is defined using a copula function C, results in a multivariate distribution function with univariate marginal distributions as specified.

6. The function also contains correlation information which we do not explicitly express here for simplicity's sake.
This property can be shown as follows:

[16]   C(F_1(x_1), F_2(x_2), \ldots, F_m(x_m)) = \Pr\{U_1 < F_1(x_1), U_2 < F_2(x_2), \ldots, U_m < F_m(x_m)\}
       = \Pr\{F_1^{-1}(U_1) \le x_1, F_2^{-1}(U_2) \le x_2, \ldots, F_m^{-1}(U_m) \le x_m\}
       = \Pr\{X_1 < x_1, X_2 < x_2, \ldots, X_m < x_m\}.

The marginal distribution of X_i is

[17]   C(F_1(\infty), F_2(\infty), \ldots, F_i(x_i), \ldots, F_m(\infty))
       = \Pr\{X_1 < \infty, X_2 < \infty, \ldots, X_i < x_i, \ldots, X_m < \infty\}
       = \Pr\{X_i < x_i\}.

Sklar (1959) established the converse. He showed that any multivariate distribution function F can be written in the form of a copula function. He proved the following: if F(x_1, x_2, \ldots, x_m) is a joint multivariate distribution function with univariate marginal distribution functions F_1(x_1), F_2(x_2), \ldots, F_m(x_m), then there exists a copula function C(u_1, u_2, \ldots, u_m) such that


[18]   F(x_1, x_2, \ldots, x_m) = C(F_1(x_1), F_2(x_2), \ldots, F_m(x_m)).

If each F_i is continuous, then C is unique.7 Thus, copula functions provide a unifying and flexible way to study multivariate distributions.

For simplicity's sake, we discuss only the properties of the bivariate copula function C(u, v, \gamma) for uniform random variables U and V defined over the area \{(u, v) \mid 0 < u \le 1, 0 < v \le 1\}, where \gamma is a correlation parameter. We call \gamma simply a correlation parameter since it does not necessarily equal the usual correlation coefficient defined by Pearson, nor Spearman's Rho, nor Kendall's Tau.8 The bivariate copula function has the following properties:

- Since U and V are positive random variables, C(0, v, \gamma) = C(u, 0, \gamma) = 0.
- Since U and V are bounded above by 1, the marginal distributions can be obtained as follows: C(1, v, \gamma) = v and C(u, 1, \gamma) = u.
- For independent random variables U and V, C(u, v, \gamma) = uv.

Frechet (1951) showed there exist upper and lower bounds for a copula function:

[19]   \max\{0, u + v - 1\} \le C(u, v) \le \min\{u, v\}.

The multivariate extension of the Frechet bounds is given by Dall'Aglio (1972).

Some Common Copula Functions


We present a few copula functions commonly used in biostatistics and actuarial science.

The Frank copula function is defined as:

[20]   C(u, v) = \frac{1}{\gamma}\ln\left[1 + \frac{(e^{\gamma u} - 1)(e^{\gamma v} - 1)}{e^{\gamma} - 1}\right], \quad -\infty < \gamma < \infty.

The bivariate normal copula function is:

[21]   C(u, v) = \Phi_2\left(\Phi^{-1}(u), \Phi^{-1}(v), \gamma\right),

where \Phi_2 is the bivariate normal distribution function with correlation coefficient \gamma, and \Phi^{-1} is the inverse of a univariate normal distribution function. As we shall see later, this is the copula function used in CreditMetrics.

7. For the proof of this theorem we refer to Sklar (1959).

8. We define Spearman's Rho and Kendall's Tau later.


The bivariate mixture copula function is formed using existing copula functions. If the two uniform random variables u and v are independent, we have the copula function C(u, v) = uv. If the two random variables are perfectly correlated, we have the copula function C(u, v) = \min(u, v). Mixing the two copula functions by a mixing coefficient \gamma > 0, we obtain a new copula function as follows:

[22]   C(u, v) = (1 - \gamma)uv + \gamma\min\{u, v\}, \quad \text{if } \gamma > 0.

If \gamma \le 0 we have

[23]   C(u, v) = (1 + \gamma)uv - \gamma(u - 1 + v)\,\Theta(u - 1 + v), \quad \gamma \le 0,

where

[24]   \Theta(x) = \begin{cases} 1, & \text{if } x > 0, \\ 0, & \text{if } x \le 0, \end{cases}

is an indicator function.

Copula Function and Correlation Measurement


To compare different copula functions, we need a correlation measurement independent of the marginal distributions. The usual Pearson's correlation coefficient, however, depends on the marginal distributions (see Lehmann (1966)). Both Spearman's Rho and Kendall's Tau can be defined using a copula function only, as follows:

[25]   \rho_s = 12\int\!\!\int\left[C(u, v) - uv\right] du\, dv,

[26]   \tau = 4\int\!\!\int C(u, v)\, dC(u, v) - 1.

Comparisons between results using different copula functions should be based on either a common Spearman's Rho or a common Kendall's Tau.

Further examination of copula functions can be found in a survey paper by Frees and Valdez (1998).

The Calibration of Default Correlation of Survival Times


Having chosen a copula function, we need the pairwise correlations of all survival times. Using the CreditMetrics asset correlation approach, we can obtain the default correlation of two discrete events over a one-year period. As it happens, CreditMetrics uses the normal copula function in its default correlation formula, even though it does not use the concept of copulas explicitly.

First, let us summarize how CreditMetrics calculates the joint default probability of two credits, A and B. Suppose the one-year default probabilities for A and B are q_A and q_B. CreditMetrics uses the following steps:


- Obtain Z_A and Z_B such that q_A = \Pr\{Z < Z_A\} and q_B = \Pr\{Z < Z_B\}, where Z is a standard normal random variable.
- If \rho is the asset correlation, the joint default probability for credits A and B is calculated as follows:

[27]   \Pr\{Z_1 < Z_A, Z_2 < Z_B\} = \int_{-\infty}^{Z_A}\int_{-\infty}^{Z_B}\phi_2(x, y, \rho)\, dx\, dy = \Phi_2(Z_A, Z_B, \rho),

where \phi_2(x, y, \rho) is the standard bivariate normal density function with correlation coefficient \rho, and \Phi_2 is the bivariate cumulative normal distribution function.

If we use a bivariate normal copula function with a correlation parameter \gamma, and denote the survival times for A and B as T_A and T_B, the joint default probability of A and B, which is the probability that T_A < 1 and T_B < 1, can be calculated using

[28]   \Pr\{T_A < 1, T_B < 1\} = \Phi_2\left(\Phi^{-1}(F_A(1)), \Phi^{-1}(F_B(1)), \gamma\right),

where F_A and F_B are the distribution functions for T_A and T_B. Noting that

[29]   q_j = \Pr\{T_j < 1\} = F_j(1) and Z_j = \Phi^{-1}(q_j) for j = A, B,

we see that Eq. [27] and Eq. [28] give the same joint default probability over a one-year period if \gamma = \rho.
We can conclude that CreditMetrics uses a bivariate normal copula function with the asset correlation as the correlation parameter in the copula function. Thus, to generate survival times for two credit risks, we use a bivariate normal copula function with correlation parameter equal to the CreditMetrics asset correlation. We note that this correlation parameter is not the correlation coefficient between the two survival times T_A and T_B; the correlation coefficient between T_A and T_B is much smaller than the asset correlation. Conveniently, the marginal distribution of any subset of an n-dimensional normal distribution is still a normal distribution. Using asset correlations, we can construct high-dimensional normal copula functions to model the credit portfolio.
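A small sketch of the one-year joint default probability implied by Eqs. [27]-[29] (our own illustration; the default probabilities and asset correlation below are hypothetical inputs):

```python
# Joint one-year default probability of two credits under the normal copula.
from scipy.stats import multivariate_normal, norm

def joint_default_prob(q_a, q_b, rho):
    z_a, z_b = norm.ppf(q_a), norm.ppf(q_b)          # default thresholds
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([z_a, z_b])

print(joint_default_prob(0.05, 0.02, 0.30))          # hypothetical example
```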

Valuation Method
Suppose now that we are to study an n credit portfolio problem. For each credit i in the portfolio, we have constructed a credit curve, or hazard rate function, h_i for its survival time T_i, based either on historical default experience or on the implicit approach. We assume that the distribution function of T_i is F_i(t). Using a copula function C, we also obtain the joint distribution of the survival times T_1, T_2, ..., T_n as follows:

[30]   F(t_1, t_2, \ldots, t_n) = C(F_1(t_1), F_2(t_2), \ldots, F_n(t_n)).

If we use the normal copula function we have:


[31]   F(t_1, t_2, \ldots, t_n) = \Phi_n\left(\Phi^{-1}(F_1(t_1)), \Phi^{-1}(F_2(t_2)), \ldots, \Phi^{-1}(F_n(t_n))\right),

where \Phi_n is the n-dimensional cumulative normal distribution function with correlation matrix \Sigma.

To simulate correlated survival times, we introduce another series of n random variables, Y_1, Y_2, ..., Y_n, such that

[32]   Y_1 = \Phi^{-1}(F_1(T_1)),\; Y_2 = \Phi^{-1}(F_2(T_2)),\; \ldots,\; Y_n = \Phi^{-1}(F_n(T_n)).

There is a one-to-one mapping between T_1, T_2, ..., T_n and Y_1, Y_2, ..., Y_n. Thus, simulating T_1, T_2, ..., T_n is equivalent to simulating Y_1, Y_2, ..., Y_n. As shown in the last section, the correlation between the Y's is the asset correlation of the underlying credits. We have the following simulation scheme:

- Simulate Y_1, Y_2, ..., Y_n from a multivariate normal distribution with correlation matrix \Sigma.
- Obtain T_1, T_2, ..., T_n using T_i = F_i^{-1}(\Phi(Y_i)), i = 1, 2, ..., n.
Given full information on the default times and identities, we can track any loss function, such as the cumulative loss over a certain period. We can also price first or second-to-default structures, since the default times T_1, T_2, ..., T_n can easily be sorted and ranked.
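A minimal sketch of this simulation scheme, assuming (for illustration only) exponential marginals with hazard rates h_i, so that F_i^{-1}(u) = -ln(1 - u)/h_i; the correlation matrix is a hypothetical input:

```python
import numpy as np
from scipy.stats import norm

def simulate_default_times(hazards, corr, n_scenarios, seed=0):
    # Simulate Y ~ N(0, corr), map to uniforms with Phi, then invert the exponential
    # survival-time distributions: T_i = F_i^{-1}(Phi(Y_i)) = -ln(1 - Phi(Y_i)) / h_i.
    rng = np.random.default_rng(seed)
    y = rng.multivariate_normal(np.zeros(len(hazards)), corr, size=n_scenarios)
    u = norm.cdf(y)
    return -np.log(1.0 - u) / np.asarray(hazards)
```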
In the special case of independence among T_1, T_2, ..., T_n, the first-to-default can be valued analytically. Let us denote the survival time for the first-to-default of the n credits as T, i.e.

[33]   T = \min\{T_1, T_2, \ldots, T_n\}.

Under the independence assumption, the hazard rate function of T is

[34]   h_T = h_1 + h_2 + \cdots + h_n.

Assume a contract which pays one dollar if the first default of the n credits occurs within 2 years, and that the yield is a constant r. The present value of the contract payoff is Z = 1\cdot e^{-rT}. The survival time for the first-to-default has density function f(t) = h_T\, e^{-h_T t}, so the value of the contract can be calculated as
[35]   V = \int_0^2 1\cdot e^{-rt} f(t)\, dt = \int_0^2 e^{-rt}\, h_T\, e^{-h_T t}\, dt = \frac{h_T}{r + h_T}\left(1 - \exp\left[-2(r + h_T)\right]\right).
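A quick numerical check of this closed form (our own illustration), which reproduces the independence-case values quoted later in Example 2:

```python
# Two-year first-to-default value under independence, per Eqs. [34] and [35].
import math

def first_to_default_value(n, h, r, maturity=2.0):
    h_t = n * h                                   # Eq. [34] with identical hazard rates
    return h_t / (r + h_t) * (1.0 - math.exp(-maturity * (r + h_t)))

print(first_to_default_value(1, 0.1, 0.1))        # ~0.1648, as quoted in Example 2
print(first_to_default_value(5, 0.1, 0.1))        # ~0.5823, as quoted in Example 2
```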

However, Eq. [34] will not necessarily hold if T_1, T_2, ..., T_n are not independent. For example, consider the bivariate exponential distribution, the joint survival function of which is given by

[36]   S(t_1, t_2) = \Pr\{T_1 > t_1, T_2 > t_2\} = \exp\left[-\lambda_1 t_1 - \lambda_2 t_2 - \lambda_{12}\max\{t_1, t_2\}\right],

where \lambda_1, \lambda_2, \lambda_{12} > 0 are three parameters. The marginal survival functions for T_1 and T_2 are


[37]   S(t_1) = \exp\left[-(\lambda_1 + \lambda_{12})t_1\right] and S(t_2) = \exp\left[-(\lambda_2 + \lambda_{12})t_2\right].

It is easy to show that the covariance between T_1 and T_2 is given by

[38]   \mathrm{Cov}[T_1, T_2] = \frac{\lambda_{12}}{(\lambda_1 + \lambda_2 + \lambda_{12})(\lambda_1 + \lambda_{12})(\lambda_2 + \lambda_{12})},

which implies that T_1 and T_2 are not independent if \lambda_{12} \neq 0. It can also be shown that the hazard rate for the minimum of T_1 and T_2 is \lambda_1 + \lambda_2 + \lambda_{12}, instead of \lambda_1 + \lambda_2 as in the case of independence.
Hence for a credit portfolio with a large number of correlated credit risks, we still resort to the Monte
Carlo simulation scheme we outlined in this section.

Numerical Examples
The first example illustrates how to value a Type I first-loss credit derivative using the CreditManager application. The second shows how to value a first-to-default structure on a portfolio of five credits.

Example 1
CreditManager uses a simulation approach to obtain the distribution of a credit portfolio value at the
end of a time horizon, such as one year. Using this distribution, we can assess the possible values of
the portfolio in the future. Using more detailed reports we can also express credit risk broken down
by country, industry, maturity, rating, product type, or any other category of credit exposure. Thus,
managers can identify different pockets, or concentration of risk within a single portfolio, or across
an entire firm and take appropriate action to actively manage the portfolio. As the credit derivative
market develops, portfolio managers can use the new credit derivative instruments to accomplish
their goals.
As an example, we run a simulation on the CreditManager sample portfolio. The simulation summary is given in Chart 3. We see that the sample portfolio is a good credit portfolio, in the sense that the distribution of the portfolio value is concentrated around the mean and has a relatively small standard deviation. Regardless, if the portfolio manager still wants to buy a credit derivative to protect her portfolio from declining to a level below C, how much should be paid for the protection?
The payoff function of the structure is

[39]   P = \begin{cases} 0, & \text{if } X > C, \\ C - X, & \text{if } X \le C, \end{cases}

where X is the portfolio value at the end of one year. The expected value of P is


[40]   E[P] = \int_0^C (C - x)\, f(x)\, dx,

where f(x) is the probability density function of the sample portfolio value at the end of 1 year.
Chart 3
Simulated distribution of the CreditManager sample portfolio.

To calculate the premium, we need only evaluate Eq. [40] and discount the result to the present. For simplicity's sake, we assume the discount factor is 1.0. CreditManager can export the density function of the simulated portfolio value into a spreadsheet, from which we can evaluate Eq. [40].
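The evaluation itself is straightforward; the sketch below is our own illustration, where `values` and `probs` stand in for the exported histogram of simulated one-year portfolio values and are purely hypothetical placeholders.

```python
# Evaluate Eq. [40] from an exported discrete distribution of the portfolio value.
import numpy as np

values = np.array([120.0, 124.0, 126.0, 128.0, 130.0])   # hypothetical values ($MM)
probs = np.array([0.01, 0.04, 0.15, 0.55, 0.25])          # hypothetical probabilities

def first_loss_premium(C):
    # E[P] = sum over outcomes of max(C - x, 0) * Pr{X = x}; discount factor taken as 1.0
    shortfall = np.clip(C - values, 0.0, None)
    return np.dot(shortfall, probs)

print(first_loss_premium(125.0))
```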
If the portfolio manager wants to protect the portfolio value from falling below the 10th percentile loss from the mean value of $128.41 million, the premium she needs to pay is approximately $62,547. The premiums to protect other percentile losses can be calculated similarly; the results are depicted in Chart 4.

Example 2
The second example shows how to value a first-to-default contract. We assume we have a portfolio of n credits. For simplicity's sake, we assume each credit has a constant hazard rate of h for 0 < t < \infty. From Eq. [10] we know the density function for the survival time is f(t) = h\exp[-ht]. This shows that the survival time is exponentially distributed with mean E[T] = 1/h. We also assume that the n credits have a constant pairwise asset correlation \rho.9

9. To have a positive definite correlation matrix, the constant correlation coefficient has to satisfy the condition \rho > -1/(n-1).


Chart 4
Premium as a function of percentile level protected.
[Figure: premium ($000s, roughly 10 to 70) versus percentile level protected (0% to 10%).]

The contract is a two-year transaction which pays one dollar if the first default occurs during the first two years. We also assume a constant interest rate of 10%. If all the credits in the portfolio are independent, the hazard rate of the minimum survival time is h_T = nh and the contract value is given by Eq. [35].

We choose the following basic set of parameters: n = 5, h = 0.1, \rho = 0.25, r = 0.1.
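This valuation can be sketched with the copula sampler given earlier (our own illustration, reusing the simulate_default_times() helper); with \rho set to zero the estimate should be close to the analytical value 0.5823 quoted below.

```python
# Sketch of Example 2: simulate the two-year first-to-default value for n = 5,
# h = 0.1, rho = 0.25, r = 0.1, using the normal copula sampler sketched above.
import numpy as np

n, h, rho, r, maturity = 5, 0.1, 0.25, 0.1, 2.0
sigma = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)       # constant pairwise correlation

t = simulate_default_times([h] * n, sigma, n_scenarios=50_000)
first = t.min(axis=1)                                        # first-to-default time per scenario
payoff = np.where(first <= maturity, np.exp(-r * first), 0.0)
print(payoff.mean())                                         # simulated contract value
```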
First, we examine the impact of the number of assets on the value of the first-to-default contract. If
there is only one asset, the value of the contract should be 0.1648. As the number of assets increases,
the chance that there is one default within the two years also grows, as does the value of the first-to-default contract. Chart 5 shows how the value of the first-to-default changes along with the number
of the assets in the portfolio. We see that the value of the first-to-default contract increases at a decreasing rate. When the number of assets increases to 15, the value of the contract becomes 0.7533.
From Chart 5 we also see that the impact of the number of assets on the value of the first-to-default
decreases as the default correlation increases.
Second, we examine the impact of correlation on the value of the first-to-default contract on 5 assets. If \rho = 0, the expected payoff, based on Eq. [35], should give a value of 0.5823. Our simulation of 50,000 runs gives a value of 0.5830. If all 5 assets are perfectly correlated, then the first-to-default of 5 assets should be the same as the first-to-default of 1 asset, since any one default induces all others to default. In this case the contract should be worth 0.1648. Our simulation of 50,000 runs produces a result of 0.1638. Chart 6 shows the relationship between the value of the contract and the constant correlation coefficient. We see that the value of the contract decreases as the correlation increases. We also examine the impact of correlation on the value of the first-to-default of 20 assets in Chart 6. As expected, the first-to-default of 5 assets has the same value as the first-to-default of 20 assets when the correlation approaches 1.


Chart 5
Value of first-to-default as a function of number of assets.
Correlation levels of 25% and 50%.
[Figure: contract value (up to about 0.8) versus number of assets (0 to 20), for correlations of 25% and 50%.]

Chart 6
Value of first-to-default as a function of correlation level.
5 and 20 asset baskets.
[Figure: contract value versus correlation (0% to 100%) for 5-asset and 20-asset baskets.]


Conclusion
We have shown how to value some basket-type credit derivative contracts with payoffs contingent upon the default properties of a portfolio of credits. If the payoff function depends only on the value of the portfolio at a given time in the future, we can use CreditManager to price the transaction. For other products, with payoffs contingent upon either the cumulative loss across time or the order and identities of defaults, we introduce a hazard rate function based approach. The hazard rate function based approach attempts to model the default time directly, which characterizes default more precisely than does the discrete approach over a given period. We also explicitly introduce the concept of a copula function, and provide the basic concepts, properties, and examples, thus demonstrating how the CreditMetrics framework facilitates the valuation of any credit derivative basket.

References
Gupton, Greg M., Christopher C. Finger, and Mickey Bhatia. CreditMetrics -- Technical Document, New York: Morgan Guaranty Trust Co., 1997.

Bowers, Newton et al. Actuarial Mathematics, The Society of Actuaries, 1986.

Carty, Lea and Dana Lieberman. "Historical Default Rates of Corporate Bond Issuers, 1920-1996," Moody's Investors Service, January 1997.

Finger, Christopher. "Credit Derivatives in CreditMetrics," CreditMetrics Monitor, 3rd Quarter 1998.

Sklar, A. "Fonctions de Repartition a n Dimensions et Leurs Marges," Publications de l'Institut de Statistique de l'Universite de Paris, 8:229-231, 1959.

Dall'Aglio, G. "Frechet Classes and Compatibility of Distribution Functions," Symp. Math., 9:131-150, 1972.

Frees, W. Edward and Emiliano A. Valdez. "Understanding Relationships Using Copulas," North American Actuarial Journal, Vol. 2, Num. 1, 1-25, 1998.

Lehmann, E. L. "Some Concepts of Dependence," Annals of Mathematical Statistics, 37, 1137-1153, 1966.

Li, David X. "Constructing a Credit Curve," Credit Risk, A Risk Special Report, 40-44, 1998.


An Analytical Approach for Credit Risk Analysis Under Correlated Defaults


Krishan M. Nagpal
Standard and Poor's
krishan_nagpal@mcgraw-hill.com

Reza Bahar
Standard and Poor's
rbahar@mcgraw-hill.com

Accurate and efficient ways of modeling credit risk are extremely important for capital allocation and portfolio management. Credit events often exhibit significant correlations, and ignoring them in analyses such as VaR could produce misleading answers. In this paper we develop a framework in which correlated credit events such as defaults can be studied and analyzed in closed form, without the need for simulations. Under some mild assumptions, the proposed approach allows us to obtain explicitly the loss distribution of a portfolio of assets using the default probabilities of each asset and the default correlation between every pair of assets. Since there are usually an infinite number of loss distributions which are consistent with the given default rates and correlations, we also provide a computationally simple approach to obtain a family of possible loss distributions which are consistent with the given default data. Such a parameterization may be useful to obtain the range of possible losses a portfolio could experience under the given assumptions on default rates and correlations. From a theoretical perspective, the proposed approach provides a relatively simple way to obtain the probabilities of all outcomes involving discrete random variables from knowledge of only the first and second order statistics.

Introduction
In a portfolio approach to modeling credit risk, it is important to take into account the correlated nature of credit events. Correlations between credit events are difficult to obtain empirically due to the events' infrequency. Even when reliable information about the correlations is available, obtaining a multi-asset portfolio's loss distribution (i.e., the probability of each possible loss amount) can be quite daunting.

There are a number of credit risk management approaches. Two are publicly available: CreditMetrics, developed by J.P. Morgan, and CreditRisk+, developed by Credit Suisse. The CreditMetrics methodology is a simulation based approach where the loss distribution is obtained using default rates and default correlations. A change in each asset's rating (credit quality) is modeled using a (Gaussian) random variable. A default takes place if this variable's value falls below a probability-contingent threshold. If the default events of two assets are correlated, then an appropriate level of correlation is incorporated in the distribution of the two random variables representing the given assets. Thus, the a priori information on default rates and all pairwise correlations is used in determining the covariance matrix of the random variables representing credit migrations. The overall loss distribution is obtained from Monte Carlo simulation of random variables with this covariance. All this can become computationally demanding as the number of credits increases. In using CreditRisk+, we assume that the default rates are themselves random, with the volatilities of the default rates adjusted in a manner that captures the effect of correlations and the background factors. Thus, when using CreditRisk+, we try to capture the effect of default correlations by using suitable default rate volatilities applied in a sector analysis approach.
In this paper, we suggest an approach to arriving at a multi-asset portfolio's loss distribution where
default probabilities and correlations between pairs of credits are directly incorporated into the
analysis without approximations. The exposures are assumed to be equal to the face amount. If there
is a default, then the full face amount is assumed to be lost, and the dependence of defaults upon
interest rates or other market observable parameters is not addressed. The advantages of the proposed
approach are (i) an explicit solution to the problem, with no need for simulations, and (ii) low
computational complexity, as the overall loss distribution is obtained by combining loss
distributions from multiple scenarios with independent defaults. The number of scenarios with


independent defaults we need to consider is never more than twice the number of asset types. Thus,
the computational complexity of the proposed approach is proportional to the complexity associated
with the task of obtaining the loss distribution under the assumption of independence - a problem
which can be solved explicitly without simulations.
Usually, there are an infinite number of distributions consistent with a given set of default
probabilities and correlations. Thus, it is desirable to capture the range of possible loss
distributions a portfolio could experience under the given assumptions on defaults. To address this
issue, we provide a computationally simple parameterization of solutions that can be used to obtain
such a family of loss distributions. We aim to provide portfolio managers with useful tools with which
to quickly and easily compare the loss distributions of a portfolio of assets under different
assumptions on default rates and correlations.
The organization of the paper is as follows: first, we review some background material
regarding the independent default case and provide a simple example to illustrate the main idea of the
paper. Second, we describe the paper's main decomposition idea and provide a detailed example.
Third, we present the paper's main mathematical results and algorithms, and then conclude with a
summary. All the proofs and the intermediate steps of the example are relegated to the Appendix.

Background and the Main Decomposition Idea


Review of Independent Defaults
First, let us review how to obtain the loss distribution of a portfolio in which all the default events are
independent. Let e_i and p_i denote the exposure amount and probability of default of the i th
exposure. Let us also assume that the recovery has already been factored into the determination of the
exposure amount, so that if the i th counterparty defaults, the amount e_i is lost. It is convenient to
work in units such that all exposure amounts e_i are positive integers. For a portfolio of N exposures,
define the probability generating function (PGF) in terms of an auxiliary variable z as

[1]   F(z) = \prod_{i=1}^{N} \left( 1 - p_i + p_i z^{e_i} \right) = a_0 + a_1 z^{m_1} + \cdots + a_k z^{m_k} .

Then for the given portfolio, under the assumption of independence of default events, the probability
of losing 0 is a_0 and the probability of losing m_i is a_i for i = 1, ..., k. The above, stated as a
polynomial multiplication problem, is a standard convolution problem and provides a closed form
expression for the loss distribution of a portfolio of exposures, provided all default events are
independent. The CreditRisk+ model is based upon similar convolution ideas, yet it allows for
greater generality by allowing default rates to be themselves stochastic.
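
As an illustration (not part of the original article), the convolution behind Eq. [1] can be coded in a few lines; the function and variable names below are our own, and unit recovery-adjusted exposures are assumed.

    import numpy as np

    def loss_distribution_independent(exposures, pds):
        """Coefficients of the PGF in Eq. [1]: dist[x] = Pr{loss = x} when
        defaults are independent, exposures are positive integers and the
        full exposure is lost on default."""
        dist = np.zeros(sum(exposures) + 1)
        dist[0] = 1.0                          # an empty portfolio loses nothing
        for e, p in zip(exposures, pds):
            survived = (1.0 - p) * dist        # obligor does not default
            survived[e:] += p * dist[:-e]      # obligor defaults: shift the loss by e
            dist = survived
        return dist

    # Five $1 bonds with a 1% default probability each (Scenario 1 in the example below)
    print(loss_distribution_independent([1] * 5, [0.01] * 5))
    # -> approximately [9.51e-1, 4.80e-2, 9.70e-4, 9.80e-6, 4.95e-8, 1.0e-10]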

An Illustrative Example for the Proposed Approach


When default events are correlated, the aforementioned approach is no longer valid. However, the
approach described in this paper allows us to include the correlations by considering multiple
scenarios with different default probabilities where under each scenario default events are
independent. To illustrate the idea, we use the following example.


Consider a portfolio of five bonds of $1 each from five different obligors. The default probability of
each obligor is 1.4%. The defaults between obligors are correlated, with a pairwise default correlation
of 0.464%. Let p denote the probability of default (here 0.014) and q the probability of pairwise default.
Had the defaults been independent, we would have q = p^2 = 0.000196, but because of the correlation
the probability that any given pair defaults is

[2]   q = 0.014^2 + 0.00464 \times 0.014 \times (1 - 0.014) = 0.00026 ,

which is more than had the defaults been independent.


Now consider two mutually exclusive scenarios: (i) in Scenario 1 the defaults are independent with a
default probability of 1%, and (ii) in Scenario 2 the default events are again independent but with a
default probability of 3%. The probability of Scenario 1 is 80% while that of Scenario 2 is 20%.
Then the default probability for any asset under the two scenarios just described is

[3]   \Pr\{\text{default}\} = \sum_{i=1}^{2} \Pr\{\text{Scenario } i\} \Pr\{\text{default in Scenario } i\} = (0.8 \times 0.01 + 0.2 \times 0.03) = 0.014 .

Similarly, for any two assets,

[4]   \Pr\{\text{joint default}\} = \sum_{i=1}^{2} \Pr\{\text{Scenario } i\} \Pr\{\text{joint default in Scenario } i\} = (0.8 \times 0.01^2 + 0.2 \times 0.03^2) = 0.00026 ,

where in the last equation we have assumed that default events are independent under each scenario.
Thus the two scenarios viewed together result in precisely the given probability of defaults and
probability of pairwise defaults.
Since the default events are independent under each scenario, the PGF (defined above) for a portfolio
of 5 bonds of $1 under Scenario 1 is

[5]   F_1(z) = (1 - 0.01 + 0.01 z)^5 = 9.51 \times 10^{-1} + 4.8 \times 10^{-2} z + 9.7 \times 10^{-4} z^2 + 9.8 \times 10^{-6} z^3 + 4.95 \times 10^{-8} z^4 + 10^{-10} z^5 .

The PGF for Scenario 1 describes the probability of each possible loss amount - for example, the
probability of losing $0 is 95.1% while the probability of losing $2 is 9.7 × 10⁻² %. Similarly, for the
same portfolio, the PGF under Scenario 2 is

[6]   F_2(z) = (1 - 0.03 + 0.03 z)^5 = 8.59 \times 10^{-1} + 1.33 \times 10^{-1} z + 8.21 \times 10^{-3} z^2 + 2.54 \times 10^{-4} z^3 + 3.93 \times 10^{-6} z^4 + 2.43 \times 10^{-8} z^5 .

Now noting that the probabilities of Scenarios 1 and 2 are 0.8 and 0.2 respectively, the PGF for the
portfolio is

[7]   F(z) = 0.8 F_1(z) + 0.2 F_2(z) = 9.325 \times 10^{-1} + 6.5 \times 10^{-2} z + 2.42 \times 10^{-3} z^2 + 5.86 \times 10^{-5} z^3 + 8.25 \times 10^{-7} z^4 + 4.94 \times 10^{-9} z^5 .

Thus, for the example portfolio, the probabilities of losing 0, 1, 2, 3, 4 and 5 are 93.25%, 6.5%, 0.242%,
5.86 × 10⁻³ %, 8.25 × 10⁻⁵ %, and 4.94 × 10⁻⁷ % respectively. If the default events were independent,
the probabilities of losing 0, 1, 2, 3, 4, and 5 would be 93.2%, 6.62%, 0.188%, 2.67 × 10⁻³ %,
1.89 × 10⁻⁵ %, and 5.38 × 10⁻⁸ %.
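
The mixture calculation is equally short. The sketch below is our own illustration (it reuses loss_distribution_independent from the earlier sketch) and reproduces the coefficients of Eq. [7] together with the moment checks of Eq. [3] and Eq. [4].

    weights   = [0.8, 0.2]        # Pr{Scenario 1}, Pr{Scenario 2}
    scen_pds  = [0.01, 0.03]      # conditional default probabilities
    exposures = [1] * 5           # five $1 bonds

    # Eq. [7]: probability-weighted mixture of the two scenario PGFs
    mixture = sum(w * loss_distribution_independent(exposures, [q] * 5)
                  for w, q in zip(weights, scen_pds))
    print(mixture)   # ~ [9.325e-1, 6.5e-2, 2.42e-3, 5.86e-5, 8.25e-7, 4.94e-9]

    # Eq. [3] and Eq. [4]: implied default and pairwise default probabilities
    p = sum(w * q for w, q in zip(weights, scen_pds))             # 0.014
    joint = sum(w * q * q for w, q in zip(weights, scen_pds))     # 0.00026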
REMARK 1: The number of scenarios to be considered does not depend on the number of exposures.
If there were 10 exposures instead of the assumed 5, the only difference would be that in the PGFs,
the exponent term would be changed from 5 to 10. Thus, even though the proposed approach takes
into account the correlation between every pair of exposures, there is no exponential growth with
respect to exposures (in terms of outcomes to be considered). From an implementation point of view,
the same tools used for the convolution in the independent case can be used for addressing the effect
of correlations.
REMARK 2: Specifying the default probabilities of each exposure and the correlation of defaults
between every pair does not uniquely specify the probabilities of all events. We observe that, if only
default probabilities (first moment information) and correlations between default events (second
moment information) are known, the probabilities of possible outcomes are not unique when the
number of exposures is greater than two.¹ Indeed, even in the proposed approach we can construct
any number of scenarios which, when viewed as a whole, result in the desired default rates and
default correlation. For example, let λ, p_1 and p_2 be positive real numbers between 0 and 1 that
satisfy the following:

[8]   \lambda p_1 + (1 - \lambda) p_2 = p = 0.014 \quad \text{(constraint imposed by the given default rate)},

[9]   \lambda p_1^2 + (1 - \lambda) p_2^2 = q = 0.00026 \quad \text{(constraint imposed by the given pairwise default rate)}.

Assuming that such a solution exists, consider the following mutually exclusive scenarios: (i)
Scenario 1 occurs with probability λ, and default events are independent with probability p_1; (ii)
Scenario 2 occurs with probability 1 − λ, and default events are independent with probability p_2.
The pair of scenarios match the given default probabilities and joint default probabilities (or
equivalently, default correlations) by virtue of the fact that λ, p_1 and p_2 satisfy Eq. [8] and Eq. [9].
Non-uniqueness is inferred from the fact that we have three unknowns for two equations. Apart
from the values chosen in the example, λ = 0.754, p_1 = 0.0186, and p_2 = 0 also satisfy the
equations above and lie between 0 and 1. λ, p_1, and p_2 are chosen to be between 0 and 1 because

¹ For a small finite set of extreme cases, defining the first two moments uniquely specifies the probability density function.


they represent probabilities of either different scenarios or of an occurrence of default within a given
scenario.
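
A two-line check (our own illustration) confirms that both parameter sets quoted above satisfy Eq. [8] and Eq. [9] to rounding accuracy:

    for lam, p1, p2 in [(0.80, 0.0100, 0.0300), (0.754, 0.0186, 0.0)]:
        p = lam * p1 + (1 - lam) * p2           # Eq. [8]
        q = lam * p1**2 + (1 - lam) * p2**2     # Eq. [9]
        print(p, q)                             # both approximately 0.014 and 0.00026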
REMARK 3: Creating mutually exclusive scenarios with independent defaults does not always work.
For example, if the default correlation is negative, i.e., the pairwise default probability q ∈ [0, p²),
then it can be seen that there are no λ, p_1, and p_2 in the interval [0, 1] that satisfy Eq. [8] and
Eq. [9]. This is because if λ, p_1 and p_2 satisfy both constraints then

[10]   q - p^2 = \lambda p_1^2 + (1 - \lambda) p_2^2 - \left[ \lambda p_1 + (1 - \lambda) p_2 \right]^2 = \lambda (1 - \lambda)(p_1 - p_2)^2 \ge 0 \quad \text{for all } \lambda \in [0, 1].

The General Approach - Correlations Generated by Independent Default Scenarios


In the last section we considered an example in which the default probabilities of every asset and the
default correlation between every pair of assets were the same. In general, both quantities depend
upon the type of asset. Here, we describe the problem in the most general form.
The following list provides the known portfolio data, and the notation to be used in the rest of the
paper:
N: the number of different types of exposures. The type of the exposure could be based on the rating of the
exposure (A or BB, for example), the sector of the exposure (banking or aerospace, for example), the
geographical region, or a combination of all three.
n_i: the number of exposures of the i th asset type.
e_ik: the dollar amount of the k th exposure of the i th asset type.
p_i: the probability of default for assets of the i th type.
c_ij: the default correlation between exposures of the i th type and the j th type.
q_ij: the joint default probability of an exposure of type i and an exposure of type j. Given the definitions
above, the following identity holds:

q_{ij} = p_i p_j + c_{ij} \sqrt{p_i (1 - p_i) p_j (1 - p_j)} .

Our overall objective is to obtain the loss distribution for the portfolio based upon the data described
above, that is, to obtain the probability associated with every possible loss amount. We propose
considering several different scenarios, each involving independent defaults under possibly different
default rates, which, when viewed together, match the given default probabilities and correlations.
This is summarized in the following Lemma, which describes the conditions that the scenarios must
satisfy so that they generate default rates and correlations consistent with the given data.
Lemma (The General Decomposition): Suppose there exist:
1. an integer M > 0,
2. real numbers p_ij ∈ [0, 1] for i = 1, ..., M and j = 1, ..., N, and
3. real numbers π_i ∈ [0, 1] for i = 1, ..., M
such that the following equations hold:

[11]   \sum_{i=1}^{M} \pi_i \, p_{ij} = p_j \qquad \text{for all } j = 1, \ldots, N .

[12]   \sum_{i=1}^{M} \pi_i \, p_{ij} \, p_{ik} = q_{jk} \qquad \text{for all } j, k = 1, \ldots, N .

[13]   \sum_{i=1}^{M} \pi_i = 1 .

Then for i = 1 to M define the M scenarios as follows:
1. In Scenario i the default rates of assets of type j are p_ij for j = 1, ..., N.
2. In any scenario default events are independent.
3. The probability of Scenario i is π_i for i = 1, ..., M.
Then the M scenarios, viewed as mutually exclusive events, are consistent with the given default
probabilities and correlations. Moreover, for any given portfolio made up of the given assets,

[14]   \Pr\{\text{loss} = x\} = \sum_{i=1}^{M} \pi_i \Pr\{\text{loss} = x \text{ in Scenario } i\} .

The proof of the Lemma is given in the Appendix.
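
As a sketch of how the Lemma can be used in practice (illustrative code of our own, reusing loss_distribution_independent from the earlier sketch), one can verify Eq. [11]-[13] for a candidate decomposition and then mix the per-scenario convolutions as in Eq. [14]:

    import numpy as np

    def satisfies_lemma(pi, P, p, Q, tol=1e-3):
        """Check Eq. [11]-[13]: pi has length M, P is M x N with P[i, j] = p_ij,
        p holds the target default probabilities and Q the joint defaults q_jk."""
        pi, P = np.asarray(pi), np.asarray(P)
        return (abs(pi.sum() - 1.0) < tol                               # Eq. [13]
                and np.allclose(pi @ P, p, atol=tol)                    # Eq. [11]
                and np.allclose(P.T @ (pi[:, None] * P), Q, atol=tol))  # Eq. [12]

    def loss_distribution_lemma(pi, P, counts):
        """Eq. [14] for counts[j] unit exposures of asset type j: mix the
        independent-default loss distributions of the M scenarios."""
        dists = []
        for row in P:
            pds = [row[j] for j, n in enumerate(counts) for _ in range(n)]
            dists.append(loss_distribution_independent([1] * len(pds), pds))
        return sum(w * d for w, d in zip(pi, dists))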


The above shows that if the unknowns M, p_ij and π_i can be found to meet the conditions described
in the Lemma, then the portfolio loss distribution can be obtained by combining the portfolio loss
distributions of the M scenarios. Since for each of the M scenarios the loss distribution is obtained under
the assumption of independence, the computational requirement of the proposed approach is
relatively light.
The remaining questions regarding the applicability of this approach are (i) whether M, p_ij and π_i
exist that satisfy the conditions described in the Lemma, (ii) if they do exist, how to obtain them, and
(iii) in case of non-uniqueness, what is the best way of choosing these variables? The problem of
existence is made difficult by the nonlinearity of the constraints (the constraints Eq. [11] and
Eq. [12] involve products of the unknown variables p_ij and π_i). We have not been able to obtain
general necessary and sufficient conditions on the default data (p_i and q_ij) that guarantee the
existence of M, p_ij, and π_i satisfying the conditions described above. Some sufficient
conditions and explicit algorithms to obtain M, p_ij and π_i are given later. From a practical
perspective, we can use standard optimization software to obtain the values of M, p_ij and π_i.


If a solution exists to the decomposition problem presented in the Lemma, then in almost all cases
the solution is not unique, and the family of solutions has at least MN + M − N − N(N + 1)/2 − 1
degrees of freedom. This is because there are MN + M unknowns (p_ij and π_i) and
N + N(N + 1)/2 + 1 equality constraints from Eq. [11], Eq. [12] and Eq. [13]. The degrees of
freedom may be greater if the equations describing the equality constraints are not independent (for
example, this would be the case if the default correlation matrix does not have full rank). Since all
solutions to the decomposition problem presented in the Lemma are consistent with the a priori data
on default probabilities and correlations, we need additional criteria to choose the best solution or a
preferable class of solutions.
One additional criterion for partitioning the family of solutions would be to consider as preferable
those solutions where, under all scenarios, the default probabilities for each asset type lie in their
historically observed range. More precisely, let p_jmax and p_jmin represent the maximum and
minimum historically observed default rates for the j th asset type. Then, we could classify as
preferable those solutions where the p_ij are such that

[15]   p_{ij} \in [\, p_{j\min}, \; p_{j\max} \,] \qquad \text{for all } i = 1, \ldots, M \text{ and all } j = 1, \ldots, N .

The rationale for imposing this constraint comes from the following interpretation of the
decomposition idea presented in the Lemma: the M scenarios in the decomposition can be thought
of as M possible states of the market. The probability of the market being in state i is π_i, and the
default probabilities conditioned on the market being in state i are the default probabilities under the
i th scenario. Thus, even though the M scenarios in the Lemma are mathematical constructs, the
decomposition idea of the Lemma can be viewed in terms of M possible states of the market. If these
scenarios are considered to be linked to market fluctuations, then they should also represent other
properties of the states of the market that are not captured by the first and second moments of the default
rates. For example, the default rates should be in their historically observed range, which is precisely
the constraint described in Eq. [15].
Later, we present results that give sufficient conditions for the existence of a solution to the
decomposition idea presented in the Lemma where all the scenarios also satisfy the constraint
described in Eq. [15].
Other criteria could require the default rates under different scenarios to be more representative of the
historical experience. We will not discuss the issue of non-uniqueness further, other than to mention
that it is an important issue which needs to be addressed in greater detail. We would need to
incorporate additional criteria beyond constraints such as Eq. [15] to capture additional properties of
the default events.

An Example
In this section, we use an example to show how the decomposition idea presented in the Lemma is
applied to obtain the loss distribution for a portfolio composed of several asset types. The default
probabilities under different scenarios are obtained using Theorems 1 and 2. The intermediate steps
of the example presented here are given in the Appendix.
Consider a portfolio of 100 assets, each worth $1. We will ignore the effect of discounting and
assume no recovery, so that in any default $1 is lost. The portfolio is made up of assets rated
BBB, BB and B.² The assets are distributed as follows: 70 of the assets have a BBB rating, 20 of the
assets are BB, while the remaining 10 are B. The maturities of all assets correspond to the horizon of
the default probabilities.³ The default probabilities are:

[16]   p = \begin{pmatrix} p_{BBB} \\ p_{BB} \\ p_{B} \end{pmatrix} = \begin{pmatrix} 0.01 \\ 0.05 \\ 0.10 \end{pmatrix} .

Let us assume that the default correlations are also known:

[17]   \begin{pmatrix} c_{BBB,BBB} & c_{BBB,BB} & c_{BBB,B} \\ c_{BB,BBB} & c_{BB,BB} & c_{BB,B} \\ c_{B,BBB} & c_{B,BB} & c_{B,B} \end{pmatrix}
       = \begin{pmatrix} 0.010 & 0.005 & 0.0050 \\ 0.005 & 0.025 & 0.0325 \\ 0.005 & 0.0325 & 0.0425 \end{pmatrix} .

In Eq. [17], for example, c_{B,BBB} denotes the default correlation between a B and a BBB asset. We can
verify that the matrix is of rank two. Later, we show that the number of independent scenarios needed
for the proposed approach is twice the rank of the matrix of default correlations. Thus in this case,
four scenarios with independent defaults are enough to generate the given default rates and
correlations.
Based on the default probabilities and correlations, we can obtain the pairwise default probabilities

[18]   Q = \begin{pmatrix} q_{BBB,BBB} & q_{BBB,BB} & q_{BBB,B} \\ q_{BB,BBB} & q_{BB,BB} & q_{BB,B} \\ q_{B,BBB} & q_{B,BB} & q_{B,B} \end{pmatrix}
       = 10^{-2} \begin{pmatrix} 0.02 & 0.06 & 0.11 \\ 0.06 & 0.37 & 0.71 \\ 0.11 & 0.71 & 1.38 \end{pmatrix} ,

where, for example, q_{B,BBB} denotes the probability of a pairwise default of a B and a BBB asset.
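
The matrix in Eq. [18] follows mechanically from Eq. [16] and Eq. [17] via the identity q_ij = p_i p_j + c_ij √(p_i(1 − p_i) p_j(1 − p_j)); the short check below is our own illustration.

    import numpy as np

    p = np.array([0.01, 0.05, 0.10])                       # Eq. [16]
    C = np.array([[0.010, 0.0050, 0.0050],
                  [0.005, 0.0250, 0.0325],                 # Eq. [17]
                  [0.005, 0.0325, 0.0425]])

    sigma = np.sqrt(p * (1 - p))                           # sqrt(p_j (1 - p_j))
    Q = np.outer(p, p) + C * np.outer(sigma, sigma)
    print(np.round(Q * 1e2, 2))                            # reproduces Eq. [18]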

Decompositions with Constraints on Scenario Default Probabilities


Previously, we saw that for any default probabilities and correlations, there may be many
decompositions that satisfy the constraints in the Lemma. In such cases, it may be desirable to
impose additional constraints on scenario properties so as to better reflect the observed default
experience. One such requirement we proposed was to impose that under each scenario, the default
probability for any asset type is in a given range. In this section, we show how the results presented
later in this paper can be used to obtain the loss distribution from decompositions which satisfy not
only the conditions described in the Lemma, but a constraint imposed on the scenario default
probabilities as well.

² In general, the asset types may depend not only on the rating but on industry and geographical region as well.

³ All the numbers chosen in this example are for illustrative purposes only and do not necessarily correspond to historically observed values.


Let us assume that we would like to find decompositions such that under all scenarios, the default
probabilities for BBB, BB and B are in the ranges described in Eq. [19]:

[19]   p_{\min} = \begin{pmatrix} 0.002 \\ 0.02 \\ 0.04 \end{pmatrix} , \qquad p_{\max} = \begin{pmatrix} 0.035 \\ 0.125 \\ 0.25 \end{pmatrix} .

Here the vectors p_min and p_max represent the minimum and maximum desired default probabilities.
Eq. [19], for example, implies that we would like the default probabilities for BB assets in all
scenarios to be between 0.02 and 0.125.
Applying the steps outlined in Theorem 1, the details of which are in the Appendix, we obtain
four scenarios which, when viewed as mutually exclusive outcomes, match the given default data. Let
us denote this decomposition as Decomposition 1. The probabilities and the default rates under the
four scenarios of Decomposition 1 are given in Table 1.
Table 1
Scenario Probabilities for Decomposition 1.

                             Default Probability
Scenario    Probability      BBB        BB         B
1           0.3765           0.0020     0.0410     0.0880
2           0.1235           0.0344     0.0769     0.1370
3           0.3513           0.0100     0.0200     0.0448
4           0.1487           0.0100     0.1210     0.2300

We can now verify that the default and scenario probabilities in Table 1 satisfy Eq. [11],
Eq. [12], and Eq. [13]. Moreover, in every scenario the default probabilities lie between each asset
type's prescribed minimum and maximum.
We can now obtain the loss distribution under each scenario, assuming independent defaults, and
then use Eq. [14] to obtain the desired portfolio loss distribution. Chart 1 provides the resulting loss
distribution as well as the distribution under independent defaults. For comparison, the chart also
includes the loss distribution obtained using the simulation approach proposed in CreditMetrics.
Chart 2 contains a comparison of cumulative probability functions. Note that the charts reflect the
commonly observed property of the loss distribution when defaults are positively correlated - the
positive correlations increase both the probability of no loss and the probability of large losses.
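
For readers who want to reproduce Chart 1, the sketch below (our own illustration, reusing loss_distribution_lemma from the earlier sketch) applies Eq. [14] with the Table 1 scenarios to the 70/20/10 portfolio of $1 assets.

    pi = [0.3765, 0.1235, 0.3513, 0.1487]        # Table 1: scenario probabilities
    P  = [[0.0020, 0.0410, 0.0880],              # conditional default probabilities
          [0.0344, 0.0769, 0.1370],              # for BBB, BB and B in each scenario
          [0.0100, 0.0200, 0.0448],
          [0.0100, 0.1210, 0.2300]]
    counts = [70, 20, 10]                        # number of BBB, BB and B assets

    dist = loss_distribution_lemma(pi, P, counts)   # Pr{loss = 0}, Pr{loss = 1}, ...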


Chart 1
Portfolio loss distribution.
[Chart: loss probability (0% to 25%) versus loss amount ($0 to $10) for Decomposition 1, CreditMetrics, and independent defaults.]

Chart 2
Portfolio loss cumulative probability function.
[Chart: cumulative probability (0% to 100%) versus loss amount ($0 to $10) for Decomposition 1, CreditMetrics, and independent defaults.]


Sensitivity of Loss Distribution to the Choice of Decomposition
Since there are an infinite number of solutions to the decomposition presented in the Lemma, it is
desirable to know the sensitivity of the loss distribution to the choice of scenarios. In this section we
compare the loss distributions obtained from two of the extreme decompositions of the proposed
approach. These distributions should provide a measure of the variability of the loss distribution
using the proposed approach. Since our objective is to obtain extreme cases of loss distributions
consistent with the given default data, no additional constraints are imposed on the permissible range
of default probabilities under different scenarios. In terms of the notation used in this paper, this
means that p_jmin and p_jmax are now set to 0 and 1 respectively for all asset types. Using the algorithm
outlined in Theorem 1, we can obtain the default probabilities for the four scenarios that match the
given default data. The intermediate steps in obtaining the scenario properties are described in the
Appendix. The results are presented in Table 2.
Table 2
Scenario Properties for Decomposition 2.

                             Default Probability
Scenario    Probability      BBB        BB         B
1           0.3311           0.0000     0.0390     0.0849
2           0.1689           0.0296     0.0716     0.1297
3           0.2304           0.0100     0.0000     0.0082
4           0.2696           0.0100     0.0930     0.1789

Note that the default probabilities of asset types BBB and BB in Scenarios 1 and 3 respectively, are 0
- an extreme value for the default probability. In this sense, this decomposition is an extreme case of
all possible decompositions.
Using the algorithm outlined in Theorem 2, we can obtain the default probabilities for another set of
four scenarios that match the given default data. The intermediate steps are again in the Appendix.
These results are presented in Table 3.
Table 3
Scenario Properties for Decomposition 3.

                             Default Probability
Scenario    Probability      BBB        BB         B
1           2.78 × 10⁻⁴      0.6040     0.7040     1.0000
2           0.4994           0.0097     0.0496     0.0995
3           0.0044           0.0100     0.5405     1.0000
4           0.4956           0.0100     0.0456     0.0920

Note that in odd scenarios (Scenarios 1 and 3), B assets have default probability of 1 - an extreme
value. Thus, as with Decomposition 2, Decomposition 3 represents an extreme case of acceptable decompositions.


Chart 3 provides a comparison of the loss distributions using Decompositions 2 and 3 and the results
obtained using CreditMetrics. All three loss distributions are consistent with the given default rate
and correlation information. Note that the loss distribution using CreditMetrics lies somewhere
between the loss distributions obtained using Decompositions 2 and 3. For a more detailed analysis
we can consider other decompositions. Towards this end, Theorem 3 provides a relatively simple
parameterization of admissible solutions to the given problem. The authors' experience has been that
most of the loss distributions using the proposed approach lie somewhere between the loss
distributions obtained from the scenarios described in Theorems 1 and 2 when there are no additional
constraints imposed on scenario default probabilities (i.e., when p_jmin and p_jmax are set to 0 and 1
respectively).
Chart 3
Portfolio loss cumulative probability function -- extreme decompositions.
[Chart: cumulative probability (0% to 100%) versus loss amount ($0 to $10) for Decomposition 2, CreditMetrics, and Decomposition 3.]

Sufficient Conditions and Decomposition Algorithms


We have viewed correlated discrete events through a decomposition into a few scenarios with
independent events. The questions of when such a decomposition exists and how to obtain it have not
yet been addressed. In this section, we provide closed form solutions to the decomposition approach
outlined in the Lemma when the correlations are, roughly speaking, positive.
Recall that p_j, q_jk, and c_jk are the default probability of assets of type j, the pairwise default
probability of assets of type j and k, and the default correlation between assets of type j and k,
respectively.
We now address three problems regarding the existence of decompositions. In all cases, we require
that the default probabilities for the j th asset type are in the interval [p_jmin, p_jmax] for all scenarios.


Problem 1: (Sufficient conditions for existence of decompositions) Given the a priori data on default
rates and correlations p_j, q_jk, c_jk, p_jmin and p_jmax, does there exist a solution to the decomposition
problem described in the Lemma?
Problem 2: (Parameterization of admissible decompositions) Assuming admissible solutions to the
decomposition problem described in the Lemma exist, can we obtain an easily parameterized family
of solutions?
Problem 3: (Sensitivity of loss distribution to the choice of decomposition) Can we obtain some
extreme cases of admissible decompositions which satisfy the requirements described in the
Lemma? (In this case, a decomposition is described as extreme if, in some scenarios of the
decomposition, the default probabilities of some asset types are at their extreme value of
p_jmin or p_jmax.)
Since the loss distribution is rarely unique for the given default rates and correlations, the results
obtained from the third problem can be used to obtain the range of possible solutions as well as to
provide some indication of the sensitivity of the loss distribution to the choice of decomposition.
If we substitute 0 and 1 for p_jmin and p_jmax respectively, we obtain the most general conditions for the
existence of decompositions which match the given first and second moments of the default
processes.
In most situations of interest in credit risk analysis, the default correlations are positive. This can be
attributed to common or similar economic conditions. Additionally, we need to be more concerned
with positive correlation, as there is a greater possibility of large losses than under independent
defaults.
We now describe the assumptions on the a priori default data (p_j, q_jk, c_jk, p_jmin and p_jmax) required for
the main results of this section. The first assumption, roughly speaking, states that the correlations
must be positive.
Assumption 1: The default correlations are such that the matrix of correlations is positive
semidefinite, that is,

[20]   \begin{pmatrix} c_{11} & \cdots & c_{1N} \\ \vdots & & \vdots \\ c_{1N} & \cdots & c_{NN} \end{pmatrix} \succeq 0 .

The existence of the following decomposition then follows:

[21]   \begin{pmatrix} q_{11} & \cdots & q_{1N} \\ \vdots & & \vdots \\ q_{1N} & \cdots & q_{NN} \end{pmatrix}
       = \begin{pmatrix} p_1 \\ \vdots \\ p_N \end{pmatrix} ( p_1 \; \cdots \; p_N )
       + \begin{pmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_N \end{pmatrix}
         \begin{pmatrix} c_{11} & \cdots & c_{1N} \\ \vdots & & \vdots \\ c_{1N} & \cdots & c_{NN} \end{pmatrix}
         \begin{pmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_N \end{pmatrix}
       = p p^{T} + \sum_{i=1}^{m} U_i U_i^{T} , \qquad m \le N ,

where

[22]   U_i = \begin{pmatrix} u_{i1} \\ \vdots \\ u_{iN} \end{pmatrix} , \quad p = \begin{pmatrix} p_1 \\ \vdots \\ p_N \end{pmatrix} , \quad \text{and} \quad \sigma_j = \sqrt{p_j (1 - p_j)} .

Note that in this decomposition, the vectors U_i are not unique for N ≥ 2. They can be obtained from
approaches such as the Cholesky decomposition or the singular value decomposition. Additionally,
the off-diagonal correlation terms need not be positive for the matrix of correlations to be positive
semidefinite.
The matrix of correlations is positive semidefinite if (i) the diagonal elements are positive, and (ii) the
off-diagonal elements are sufficiently small compared to the diagonal elements. Due to the effect of
common economic conditions, the default correlations for assets within the same type are usually
positive, implying that the diagonal elements are usually positive. Moreover, the correlation between
two different asset types is usually smaller than the correlation within those asset types - for example,
the performances of two banks are likely to be more correlated with each other than the performances of a bank
and an aerospace company. This implies that the off-diagonal elements of the matrix of correlations are
smaller than the corresponding diagonal elements. These observations suggest that the positive semidefiniteness
of the matrix of correlations may not be a restrictive assumption in most practical situations.
In every case, we are interested in finding only those decompositions in which the default
probabilities for assets of type j are in the interval [p_jmin, p_jmax] for all scenarios. Let us define
vectors p_min and p_max as follows:

[23]   p_{\min} = \begin{pmatrix} p_{1\min} \\ \vdots \\ p_{N\min} \end{pmatrix} , \quad p_{\max} = \begin{pmatrix} p_{1\max} \\ \vdots \\ p_{N\max} \end{pmatrix} , \quad \text{where } 0 \le p_{j\min} < p_j < p_{j\max} \le 1 \text{ for } j = 1, \ldots, N .

To obtain the sufficient conditions for the existence of decompositions as stated in the Lemma, all
elements of the vectors p_min and p_max should be set to 0 and 1, respectively.
Assumption 2 below depends on the particular choice of the U_i chosen above. The following definitions
of α_i and β_i for i = 1, ..., m are used in Assumption 2:⁴

[24]   \alpha_i = \sup\{\, \alpha : p_{\max} \ge p - \alpha U_i \ge p_{\min} \,\} ,

[25]   \beta_i = \sup\{\, \beta : p_{\max} \ge p + \beta U_i \ge p_{\min} \,\} .

If we assume that p_jmin < p_j < p_jmax for all j = 1, ..., N, then it is easily observed that
α_i, β_i > 0 for all i = 1, ..., m.

⁴ In Eq. [24] and Eq. [25], the relationship x ≥ y for vectors x and y means that all elements of the vector x − y are nonnegative.


Assumption 2: For α_i and β_i as defined in Eq. [24] and Eq. [25] the following holds:

[26]   \sum_{i=1}^{m} \frac{1}{\alpha_i \beta_i} \le 1 .

Assumption 2, loosely speaking, restricts the level of correlation present in the pairwise default
probabilities. The correlations are small if the vectors U_i are small. If the vectors U_i are small in
comparison to p, then from Eq. [24] and Eq. [25] we would expect α_i, β_i >> 1, and thus Assumption
2 to be satisfied. In the authors' experience, this assumption holds for most typical default data if we
take p_min and p_max as vectors of 0s and 1s, respectively.
An immediate consequence of Assumption 2 is that there exist λ_1, ..., λ_m ∈ (0, 1) such that

[27]   \frac{1}{\alpha_i \beta_i} \le \lambda_i \qquad \text{for } i = 1, \ldots, m , \text{ and}

[28]   \sum_{i=1}^{m} \lambda_i = 1 , \qquad \text{where } \lambda_i \in (0, 1) .

We now present the main results of this section. Theorem 1 shows that if the two assumptions
described above hold, there exist 2m scenarios of independent defaults which, when viewed
together, produce the desired default data, and the default probabilities in each scenario are in the
desired range. Moreover, as shall be evident later, the decomposition described in the following
Theorem is an extreme decomposition. This means that in some of the scenarios, the default
probabilities of some asset types are at their extreme permissible values.
Theorem 1 (Sufficient Conditions for Decomposition & Extreme Decomposition I): Let Assumptions 1 and 2 hold. Let λ_1, ..., λ_m be chosen as in Eq. [27] and Eq. [28] and let
κ_i ∈ (0, 1) be defined as below:

[29]   \kappa_i = \frac{1}{1 + \lambda_i \alpha_i^2} \qquad \text{for } i = 1, \ldots, m ,

where α_i is defined in Eq. [24].

Define Scenarios 1 through 2m as follows:
(a) Defaults in each scenario are independent.
(b) The probability of Scenario i is π_i, where

[30]   \pi_i = \begin{cases} \lambda_s \kappa_s & \text{if } i = 2s - 1, \text{ where } s = 1, \ldots, m, \\ \lambda_s (1 - \kappa_s) & \text{if } i = 2s, \text{ where } s = 1, \ldots, m. \end{cases}


(c) In Scenario i, the default rate of assets of type j is p_ij, where

[31]   p_{ij} = \begin{cases} p_j - \alpha_s u_{sj} & \text{if } i = 2s - 1, \text{ where } s = 1, \ldots, m, \\ p_j + \dfrac{u_{sj}}{\lambda_s \alpha_s} & \text{if } i = 2s, \text{ where } s = 1, \ldots, m. \end{cases}

For the 2m scenarios defined above, the following hold:

1. p_ij ∈ [p_jmin, p_jmax] for j = 1, ..., N and i = 1, ..., 2m.
2. When viewed together as mutually exclusive outcomes, the 2m scenarios with independent defaults produce the desired data, or equivalently the following hold:

[32]   \sum_{i=1}^{2m} \pi_i = 1 .

[33]   \sum_{i=1}^{2m} \pi_i \, p_{ij} = p_j \qquad \text{for } j = 1, \ldots, N .

[34]   \sum_{i=1}^{2m} \pi_i \, p_{ij} \, p_{ik} = q_{jk} \qquad \text{for } j, k = 1, \ldots, N .

3. The loss distribution for the given portfolio is obtained as follows:

[35]   \Pr\{\text{loss} = x\} = \sum_{i=1}^{2m} \pi_i \Pr\{\text{loss} = x \text{ in Scenario } i\} .

Proof is given in the Appendix.
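
The construction in Theorem 1 is easy to automate. The sketch below is our own illustration (all names are ours): it builds the U_i by a Cholesky factorization as in Eq. [21], computes α_i and β_i from Eq. [24] and Eq. [25], splits the λ_i equally as in the example of this article, and returns the scenario probabilities and conditional default probabilities of Eq. [30] and Eq. [31].

    import numpy as np

    def max_step(p, u, lo, hi):
        """sup{ t >= 0 : lo <= p + t*u <= hi }, taken componentwise."""
        bounds = []
        for pj, uj, lj, hj in zip(p, u, lo, hi):
            if uj > 1e-15:
                bounds.append((hj - pj) / uj)
            elif uj < -1e-15:
                bounds.append((pj - lj) / (-uj))
        return min(bounds) if bounds else np.inf

    def theorem1_scenarios(p, C, p_min, p_max):
        """Sketch of Theorem 1: returns (pi, P) with pi the 2m scenario
        probabilities (Eq. [30]) and P the 2m x N conditional default
        probabilities (Eq. [31])."""
        p, C = np.asarray(p, float), np.asarray(C, float)
        p_min, p_max = np.asarray(p_min, float), np.asarray(p_max, float)
        sigma = np.sqrt(p * (1 - p))
        cov = C * np.outer(sigma, sigma)                      # Q - p p^T in Eq. [21]
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(p)))  # jitter: cov may be singular
        U = [L[:, i] for i in range(len(p)) if np.abs(L[:, i]).max() > 1e-4]
        m = len(U)

        alpha = [max_step(p, -u, p_min, p_max) for u in U]    # Eq. [24]
        beta  = [max_step(p,  u, p_min, p_max) for u in U]    # Eq. [25]
        lam   = [1.0 / m] * m                                 # equal split, Eq. [27]-[28]
        assert sum(1 / (a * b) for a, b in zip(alpha, beta)) <= 1      # Assumption 2
        assert all(1 / (a * b) <= l for a, b, l in zip(alpha, beta, lam))

        pi, P = [], []
        for a, u, l in zip(alpha, U, lam):
            kappa = 1.0 / (1.0 + l * a * a)                   # Eq. [29]
            pi += [l * kappa, l * (1.0 - kappa)]              # Eq. [30]
            P  += [p - a * u, p + u / (l * a)]                # Eq. [31]
        return np.array(pi), np.array(P)

    # Reproduces Decomposition 1 (Table 1) up to rounding of the inputs
    pi, P = theorem1_scenarios([0.01, 0.05, 0.10],
                               [[0.010, 0.0050, 0.0050],
                                [0.005, 0.0250, 0.0325],
                                [0.005, 0.0325, 0.0425]],
                               p_min=[0.002, 0.02, 0.04],
                               p_max=[0.035, 0.125, 0.25])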


REMARK 1: We can always choose m ≤ N in the decomposition Eq. [21], since the matrices are of
dimension N × N, where N denotes the number of asset types. To obtain the loss distribution under
correlated defaults using the algorithm described above, we compute the loss distribution for the
given portfolio under no more than 2N scenarios involving independent defaults. Thus, relative to the
simpler problem involving independent defaults, the computational complexity of the proposed
approach depends only on the number of asset types (and that linearly), and not on the total number
of assets within each type.
REMARK 2: In the above, we have neither assumed nor exploited structural properties of the matrix
of default correlations. The proposed sufficient conditions can be relaxed if we were to exploit
additional structural properties of the given data. Let us consider the following: if we could partition


a portfolio's asset types into groups in which defaults are uncorrelated between groups, then we have
a matrix of correlations which is block diagonal,

[36]   \begin{pmatrix} c_{11} & \cdots & c_{1N} \\ \vdots & & \vdots \\ c_{1N} & \cdots & c_{NN} \end{pmatrix}
       = \begin{pmatrix} * & 0 & \cdots & 0 \\ 0 & * & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & * \end{pmatrix} ,

where * indicates a non-zero submatrix. In such a case, the loss distribution of the overall portfolio
can be obtained by convolving the loss distributions of several smaller portfolios that are independent
of each other. For such a problem, there may be no decomposition if we consider the entire portfolio
of various asset types, even though there may be admissible decompositions for each of the groups
considered separately. This is because Assumption 2 may not hold if all asset types are considered
together but would be satisfied if we consider each group separately. In these situations, it may be
computationally simpler to partition the portfolio into smaller independent groups. The loss distribution
of each of the smaller groups can be obtained using the approach outlined in this paper, while the
overall loss distribution can be obtained by convolving the loss distributions of the independent
smaller portfolios.
REMARK 3: The scenarios described in Theorem 1 are, in a sense, an extreme solution to the decomposition problem. This is because for all odd scenarios there is at least one asset type for which
the default probability p_ij is at its extreme value of p_jmin or p_jmax. This follows from the definitions
of α_i in Eq. [24] and the default probabilities in part (c) of Theorem 1, based upon which we note that
for all odd scenarios (when i = 2s − 1) the default probability of at least one asset type must be at its
extreme value. More specifically, given the definition of α_i in Eq. [24], there must be at least one
j = 1, ..., N for which p_j − α_i u_ij is at its extreme value of p_jmin or p_jmax.
REMARK 4: The scenario probabilities described in Theorem 1 involve α_i but not β_i.
Analogously, there is another extreme solution for scenarios where some of the default probabilities
are at an extreme value of their permissible range, but where the scenario probabilities are described
in terms of β_i (instead of α_i) and the extreme values of default rates arise from Eq. [25]
(instead of Eq. [24]). This extreme solution to the decomposition problem does not provide any new
results on the existence of the proposed decompositions - indeed, it can be thought of as another way of
showing that Assumptions 1 and 2 are sufficient for the existence of the desired decompositions. Notably,
the scenarios described in Theorems 1 and 2 together form two of the extreme cases of decompositions
with the desired properties. The loss distributions obtained from the scenarios described in
Theorems 1 and 2 may be useful for the following reasons: (i) they provide a measure of the range
of possible loss distributions for the given portfolio and default assumptions, and (ii) they show the
sensitivity of the loss distribution to the choice of parameters in obtaining the scenario probabilities.
Theorem 2 (Extreme Decomposition - II): Let Assumptions 1 and 2 hold. Let λ_1, ..., λ_m be chosen
as in Eq. [27] and Eq. [28], and let κ_i ∈ (0, 1) be defined as:

[37]   \kappa_i = \frac{1}{1 + \lambda_i \beta_i^2} \qquad \text{for } i = 1, \ldots, m ,


where β_i is defined in Eq. [25].
Define Scenarios 1 through 2m as:
(a) Defaults in each scenario are independent.
(b) The probability of Scenario i is π_i, where

[38]   \pi_i = \begin{cases} \lambda_s \kappa_s & \text{if } i = 2s - 1, \text{ where } s = 1, \ldots, m, \\ \lambda_s (1 - \kappa_s) & \text{if } i = 2s, \text{ where } s = 1, \ldots, m. \end{cases}

(c) In Scenario i, the default rate of assets of type j is p_ij, where

[39]   p_{ij} = \begin{cases} p_j + \beta_s u_{sj} & \text{if } i = 2s - 1, \text{ where } s = 1, \ldots, m, \\ p_j - \dfrac{u_{sj}}{\lambda_s \beta_s} & \text{if } i = 2s, \text{ where } s = 1, \ldots, m. \end{cases}

For the 2m scenarios defined above, the conclusions (1) through (3) of Theorem 1 hold.
Proof is given in the Appendix.
We note that the κ_i are defined differently in Theorems 1 and 2.
We can view Theorem 2 as a dual of Theorem 1, as it provides a solution using β_i instead of α_i.
Theorems 1 and 2 not only show that Assumptions 1 and 2 are sufficient for the existence of the required
decompositions but also provide two of the extreme decompositions. Previously, we showed that in
the odd scenarios described in Theorem 1, the default probabilities of at least one asset type are at an
extreme value. Similarly, from the definitions of β_i in Eq. [25] and the default probabilities in part (c) of
Theorem 2, we note that for all odd scenarios (when i = 2s − 1) the default probability of at least
one asset type must be at its extreme value. As illustrated in the example previously, the extreme
values of the default probabilities in scenarios generated using the decompositions in Theorems 1 and 2
are usually different. For example, in Decomposition 2 presented in the example (generated
using Theorem 1), the default probabilities are 0 in some scenarios for some asset types, while in
Decomposition 3 (generated using Theorem 2), the default probabilities are 1 in some scenarios for
some asset types.
The results above specify precisely the default probabilities associated with all the scenarios. As
discussed before, the given problem may have many solutions. Thus, in order to incorporate
additional properties of the default events, we should have a simple way of generating other
solutions. Theorem 3 gives a simple parameterization of solutions. The parameterization may not
cover all admissible solutions, but it is sufficiently broad to cover a wide range. The main advantage
of the parameterization is that it transforms the N + N(N + 1)/2 nonlinear constraints present in the
general formulation (Eq. [11] and Eq. [12]) into one linear constraint (Eq. [40] below) - but with
some additional inequality constraints. The Theorem shows that if the 2m positive real numbers
λ̃_1, ..., λ̃_m, α̃_1, ..., α̃_m are chosen to satisfy certain constraints, then the scenarios as chosen in


Theorem 1 with λ_i and α_i replaced by λ̃_i and α̃_i provide the 2m scenarios that are consistent with
the given default information, and the default probabilities in all scenarios are between p_jmin and
p_jmax.
Theorem 3 (Parameterization of Admissible Scenarios): Let Assumptions 1 and 2 hold and let
λ̃_1, ..., λ̃_m, α̃_1, ..., α̃_m be positive real numbers such that:

[40]   \sum_{i=1}^{m} \tilde\lambda_i = 1 ,

and that for all i = 1, ..., m:

[41]   p_{j\max} \ge p_j - \tilde\alpha_i u_{ij} \ge p_{j\min} \qquad \text{for } j = 1, \ldots, N , \text{ and}

[42]   p_{j\max} \ge p_j + \frac{u_{ij}}{\tilde\lambda_i \tilde\alpha_i} \ge p_{j\min} \qquad \text{for } j = 1, \ldots, N .

Define κ̃_1, ..., κ̃_m ∈ (0, 1) by

[43]   \tilde\kappa_i = \frac{1}{1 + \tilde\lambda_i \tilde\alpha_i^2} \qquad \text{for } i = 1, \ldots, m .

Let us now define 2m scenarios as in parts (a) to (c) of Theorem 1 where λ_s, α_s and κ_s are replaced
by λ̃_s, α̃_s and κ̃_s respectively. Then the new 2m scenarios satisfy items 1 to 3 of Theorem 1.
Equivalently, all statements of Theorem 1 hold if λ_s, α_s and κ_s are replaced by λ̃_s, α̃_s and κ̃_s
respectively, provided these new variables satisfy Eq. [40], Eq. [41], and Eq. [42].
Proof is given in the Appendix.
REMARK 1: It can be shown that Assumption 2 provides the necessary and sufficient condition for the
existence of positive real numbers λ̃_1, ..., λ̃_m, α̃_1, ..., α̃_m that satisfy the constraints Eq. [40], Eq. [41]
and Eq. [42]. Thus the verification of the existence of the desired real numbers λ̃_i, α̃_i can be carried
out using Eq. [26].
REMARK 2: Theorem 3 provides a parameterization of scenarios that match the given default information.
The parameterization involves 2m variables λ̃_1, ..., λ̃_m, α̃_1, ..., α̃_m, but since one of the constraints
(Eq. [40]) is an equality constraint, there are 2m − 1 degrees of freedom. These
2m − 1 extra degrees of freedom can be used to incorporate additional properties of default events
while remaining consistent with the given default information.
REMARK 3: The scenario default probabilities described in the above result have the same structural
form as the corresponding results in Theorem 1. Since Theorems 1 and 2 are in a sense dual of each
other, we can obtain another parameterization of admissible decompositions which bears structural
similarity to the decomposition described in Theorem 2.

Summary
In this paper, we have developed a fast, analytical approach to obtaining the loss distribution
of a portfolio based upon the default probability of each asset and the default correlation between
each pair of assets. The main result shows that if the portfolio is made up of N asset types, then


(under some mild conditions) the loss distribution under correlated defaults can be obtained by
combining the loss distributions of at most 2N scenarios. The loss distribution under each scenario is
obtained assuming independence, and thus the computational complexity for the correlated default
case is no greater than 2N times the complexity associated with the independent default case.
Unlike a Monte Carlo approach, which might require us to work with a calibrated distribution
function for the total number of assets, here the complexity depends only on the number of asset
types and not on the total number of assets.
The proposed methodology can also be generalized to situations where the exposure is time dependent, as
in the case of amortizing exposures. Here the proposed approach can be used to obtain
marginal default rates for scenarios which, when viewed together, match the given default rates and
correlations for all partitioned time intervals. The loss distribution is similarly obtained by combining
loss distributions from multiple scenarios, where under each scenario default events are independent
and marginal default rates are as determined.
The authors are grateful to Cristina Polizu and Christopher C. Finger for many helpful comments,
and to Roger Taillon for his constant encouragement and support.

References
Gupton, Greg M., Christopher C. Finger, and Mickey Bhatia. CreditMetrics -- Technical Document, New York:
Morgan Guaranty Trust Co., 1997.
CreditRisk+: A Credit Risk Management Framework, Technical Document, London: Credit Suisse
Financial Products, 1997.

Appendix
Proof of the Lemma
Here we show that if there exist π_i and p_ij with the attributes defined in the Lemma, then the M
scenarios, viewed as M mutually exclusive outcomes, produce the desired default rates. Since the
probability of Scenario i is π_i, Eq. [13] guarantees that the M scenarios cover all possible outcomes if they
are mutually exclusive. Eq. [11] ensures that the probability of default of any asset matches its a priori
given value, while Eq. [12] ensures that the pairwise default probabilities of any two assets match
their given values, provided default events in all scenarios are independent. Finally, Eq. [14] follows
from the observation that all scenarios are mutually exclusive and the probability of Scenario i is π_i.
Proof of Theorem 1
Here we have to show that for the 2m scenarios defined in steps (a), (b), and (c), Items 1 to 3 hold.
Item 3 is straightforward after Item 2 has been shown, since the 2m scenarios are mutually exclusive
and cover all possible outcomes. We now show that Items 1 and 2 hold.
Item 1: To show p_ij ∈ [p_jmin, p_jmax].
From the definition of α_s in Eq. [24] it follows that


[A.1]   p_{j\max} \ge p_j - \alpha_s u_{sj} \ge p_{j\min} .

From the default probabilities defined in part (c) of the Theorem and Eq. [A.1], we observe that
p_ij ∈ [p_jmin, p_jmax] for odd i.
The definition of β_s in Eq. [25] implies that p_j + θ u_sj ∈ [p_jmin, p_jmax] for all 0 ≤ θ ≤ β_s. This implies
that

[A.2]   p_j + \frac{1}{\lambda_s \alpha_s} u_{sj} \in [\, p_{j\min}, p_{j\max} \,] , \quad \text{since } 0 < \frac{1}{\lambda_s \alpha_s} \le \beta_s \text{ (from Eq. [27])}.

Thus for even values of i as well, p_ij ∈ [p_jmin, p_jmax].


Item 2: To show Eq. [32], Eq. [33] and Eq. [34].
Eq. [32] is verified using the definition of π_i in part (b) and Eq. [28]. Using the definition of κ_s we
note that

[A.3]   \kappa_s \alpha_s = \frac{1 - \kappa_s}{\lambda_s \alpha_s} , \quad \text{implying} \quad \lambda_s \kappa_s \alpha_s = \frac{1 - \kappa_s}{\alpha_s} .

From Eq. [A.3] and the definitions of p_ij and π_i it follows that

[A.4]   \sum_{i=1}^{2m} \pi_i p_{ij}
        = \sum_{s=1}^{m} \left[ \lambda_s \kappa_s (p_j - \alpha_s u_{sj}) + \lambda_s (1 - \kappa_s) \left( p_j + \frac{u_{sj}}{\lambda_s \alpha_s} \right) \right]
        = \sum_{s=1}^{m} \lambda_s p_j = p_j ,

or that Eq. [33] holds.

From the definition of κ_s in Eq. [29], we can also show the identity

[A.5]   \lambda_s \kappa_s \alpha_s^2 + \frac{1 - \kappa_s}{\lambda_s \alpha_s^2} = 1 .

Thus for any j and k,

[A.6]   \sum_{i=1}^{2m} \pi_i p_{ij} p_{ik}
        = \sum_{s=1}^{m} \left[ \lambda_s \kappa_s (p_j - \alpha_s u_{sj})(p_k - \alpha_s u_{sk}) + \lambda_s (1 - \kappa_s) \left( p_j + \frac{u_{sj}}{\lambda_s \alpha_s} \right) \left( p_k + \frac{u_{sk}}{\lambda_s \alpha_s} \right) \right]
        = \sum_{s=1}^{m} \left[ \lambda_s p_j p_k + u_{sj} u_{sk} \right] \quad \text{(using Eq. [A.3] and Eq. [A.5])}
        = p_j p_k + \sum_{s=1}^{m} u_{sj} u_{sk} = q_{jk} ,


where the last identity follows from Eq. [21].

Proof of Theorem 2
The proof is very similar to the proof of Theorem 1.
Item 1: To show p_ij ∈ [p_jmin, p_jmax].
From the definition of β_s in Eq. [25] it follows that

[A.7]   p_{j\max} \ge p_j + \beta_s u_{sj} \ge p_{j\min} .

From the default probabilities defined in part (c) of Theorem 2 and Eq. [A.7], we observe that
p_ij ∈ [p_jmin, p_jmax] for odd i.
The definition of α_s in Eq. [24] implies that p_j − θ u_sj ∈ [p_jmin, p_jmax] for all 0 ≤ θ ≤ α_s.
This implies that

[A.8]   p_j - \frac{1}{\lambda_s \beta_s} u_{sj} \in [\, p_{j\min}, p_{j\max} \,] , \quad \text{since } 0 < \frac{1}{\lambda_s \beta_s} \le \alpha_s \text{ (from Eq. [27])}.

Thus for even values of i as well, p_ij ∈ [p_jmin, p_jmax].


Item 2: To show Eq. [32], Eq. [33] and Eq. [34].
Eq. [32] is verified using the definition of π_i in part (b) and Eq. [28]. Using the definition of κ_s in
Eq. [37] one notes that

[A.9]   \kappa_s \beta_s = \frac{1 - \kappa_s}{\lambda_s \beta_s} , \quad \text{implying} \quad \lambda_s \kappa_s \beta_s = \frac{1 - \kappa_s}{\beta_s} .

From Eq. [A.9] and the definitions of p_ij and π_i it follows that

[A.10]   \sum_{i=1}^{2m} \pi_i p_{ij}
         = \sum_{s=1}^{m} \left[ \lambda_s \kappa_s (p_j + \beta_s u_{sj}) + \lambda_s (1 - \kappa_s) \left( p_j - \frac{u_{sj}}{\lambda_s \beta_s} \right) \right]
         = \sum_{s=1}^{m} \lambda_s p_j = p_j ,

or that Eq. [33] holds.

From the definition of κ_s in Eq. [37] one can also show the following identity:

[A.11]   \lambda_s \kappa_s \beta_s^2 + \frac{1 - \kappa_s}{\lambda_s \beta_s^2} = 1 .

Thus for any j and k,


[A.12]   \sum_{i=1}^{2m} \pi_i p_{ij} p_{ik}
         = \sum_{s=1}^{m} \left[ \lambda_s \kappa_s (p_j + \beta_s u_{sj})(p_k + \beta_s u_{sk}) + \lambda_s (1 - \kappa_s) \left( p_j - \frac{u_{sj}}{\lambda_s \beta_s} \right) \left( p_k - \frac{u_{sk}}{\lambda_s \beta_s} \right) \right]
         = \sum_{s=1}^{m} \left[ \lambda_s p_j p_k + u_{sj} u_{sk} \right] \quad \text{(using Eq. [A.9] and Eq. [A.11])}
         = p_j p_k + \sum_{s=1}^{m} u_{sj} u_{sk} = q_{jk} ,

where the last identity follows from Eq. [21].

Proof of Theorem 3
A comparison of the inequalities Eq. [41] and Eq. [42] with the definition of the default probabilities of assets
under each scenario (item (c) of Theorem 1) reveals that the default probabilities for the j th asset type
under each scenario are between p_jmin and p_jmax. Verification of Eq. [32], Eq. [33] and Eq. [34]
follows exactly along the same lines as in the proof of Theorem 1.

Intermediate Steps of the Example


It is easily verified that the matrix of correlations is positive semidefinite, and so Assumption 1 is
satisfied. Moreover, from the eigenvalues of this matrix we note that it is of rank 2.
As in the decomposition Eq. [21], from the Cholesky decomposition we can show that

[A.13]   Q = p p^{T} + U_1 U_1^{T} + U_2 U_2^{T} , \quad \text{where } U_1 = 10^{-2} \begin{pmatrix} 0.99 \\ 1.09 \\ 1.5 \end{pmatrix} , \quad U_2 = 10^{-2} \begin{pmatrix} 0 \\ 3.27 \\ 6 \end{pmatrix} .

In the notation of Eq. [21], m = 2 in the above decomposition. This is a consequence of the
fact that the matrix of correlations is of rank 2. As shown previously, the number of scenarios
required for the proposed approach is twice the rank of the matrix of correlations, or four in this
example.
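
Eq. [A.13] can be checked numerically; the snippet below is our own illustration (the tiny diagonal jitter is added because the matrix has rank 2 and is therefore only positive semidefinite).

    import numpy as np

    p = np.array([0.01, 0.05, 0.10])
    C = np.array([[0.010, 0.0050, 0.0050],
                  [0.005, 0.0250, 0.0325],
                  [0.005, 0.0325, 0.0425]])
    sigma = np.sqrt(p * (1 - p))
    cov = C * np.outer(sigma, sigma)                  # Q - p p^T

    L = np.linalg.cholesky(cov + 1e-12 * np.eye(3))
    print(np.round(L[:, :2] * 1e2, 2))   # columns ~ (0.99, 1.09, 1.50) and (0.00, 3.27, 6.00)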

Intermediate steps in obtaining Decomposition 1


Using the vectors p_min and p_max defined in Eq. [19] and the definitions described in Eq. [24] and
Eq. [25], we obtain α_1 = 0.81, β_1 = 2.53, α_2 = 0.92, β_2 = 2.5. Thus

[A.14]   \frac{1}{\alpha_1 \beta_1} + \frac{1}{\alpha_2 \beta_2} = 0.92 < 1 ,

and Assumption 2 is also satisfied.

We now obtain the scenario properties for the decomposition described in Theorem 1. It is easy to
check that λ_1 = 0.5 and λ_2 = 0.5 would satisfy Eq. [27] and Eq. [28]. Any other choice of λ_1 and


λ_2 would also have been acceptable as long as Eq. [27] and Eq. [28] were not violated. Using these
values λ_1 = 0.5 and λ_2 = 0.5 we obtain from Eq. [29]

[A.15]   \kappa_1 = \frac{1}{1 + \lambda_1 \alpha_1^2} = 0.753 , \qquad \kappa_2 = \frac{1}{1 + \lambda_2 \alpha_2^2} = 0.7026 .

As in Theorem 1, we can now obtain the default probabilities for the four scenarios, producing the
results in Table 1.
Notice that in Scenario 1 the BBB assets, and in Scenario 3 the BB assets, have default probabilities
that are at the minimum of their prescribed range (0.002 and 0.02 respectively). This is consistent
with Remark 3 described after Theorem 1.

Intermediate steps in obtaining Decompositions 2 and 3


With p_jmin and p_jmax chosen as 0 and 1 respectively, from Eq. [24] and Eq. [25] we obtain
α_1 = 1.01, β_1 = 60, α_2 = 1.53, and β_2 = 15. It is easily checked that Assumption 2 is satisfied.
We can also verify that λ_1 = 0.5 and λ_2 = 0.5 satisfy Eq. [27] and Eq. [28].
Substituting λ_1 = 0.5 and λ_2 = 0.5 in Eq. [29], we obtain

[A.16]   \kappa_1 = \frac{1}{1 + \lambda_1 \alpha_1^2} = 0.6622 , \qquad \kappa_2 = \frac{1}{1 + \lambda_2 \alpha_2^2} = 0.4607 .

As in Theorem 1, we can now obtain the scenario default probabilities for the four scenarios in
Table 2. Notice that in odd scenarios (Scenarios 1 and 3) one asset type has default probability at its
extreme value of 0.
Substituting λ_1 = 0.5 and λ_2 = 0.5 in Eq. [37], we obtain

[A.17]   \kappa_1 = \frac{1}{1 + \lambda_1 \beta_1^2} = 5.55 \times 10^{-4} , \qquad \kappa_2 = \frac{1}{1 + \lambda_2 \beta_2^2} = 0.0088 .

As in Theorem 2, we can now obtain the scenario default probabilities for the four scenarios in
Table 3. Notice that in odd scenarios (Scenarios 1 and 3) one asset type has default probability at its
extreme value of 1.


Worldwide CreditMetrics Contacts

RiskMetrics Group
Americas: Sarah Jun Xie (1-212) 981-7424, sarah.xie@riskmetrics.com
Europe/Asia: Rob Fraser (44-171) 842-0262, rob.fraser@riskmetrics.com

Co-sponsors
Arthur Andersen: Jitendra Sharma (1-212) 708-4536, jitendra.d.sharma@arthurandersen.com
Bank of America: David E. Gibbs (1-415) 953-1352, david.e.gibbs@bankamerica.com
Bank of Montreal: Stuart Brannan (1-416) 867-4092, stuart.brannan@bmo.com
Bank of Tokyo-Mitsubishi: George S. Lee (1-212) 782-6971, glee@btmny.com
Barclays Capital: Kalpana Telikepali (1-212) 412-1167, kalpana.telikepali@barcap.com
CATS Software, Inc.: Eduard Harabetian (1-310) 789-2000, eduard@cats.com
CIBC World Markets: John T. H. Cook (1-212) 856-6057, jay_cook@fp.cibc.com
Deloitte Touche Tohmatsu International: A. Scott Baret (1-212) 436-5456, sbaret@dttus.com
Deutsche Bank: James Glover (61-2) 9258-2411, james.glover@db.com
Ernst & Young: Hank Prybylski (1-212) 773-2823, lawrence.prybylski@ey.com; Mark Evens (44-171) 951-1388, mevens@ernsty.com
FITCH IBCA, Inc.: Robert S. Grossman (1-212) 908-0535, rgrossman@fitchibca.com
The Fuji Bank, Limited: John C. Veidis (1-212) 898-2589, fuji-credit-usa@worldnet.att.net
IBM: Duncan Wilson (44-171) 202-3826, duncan_wilson@uk.ibm.com
J.P. Morgan: Mickey Bhatia (1-212) 648-4299, bhatia_mickey@jpmorgan.com
KMV Corporation: Stephen Kealhofer (1-415) 296-9669, kealhofer@kmv.com
KPMG: Angus T. Shearer (1-801) 237-1460, ashearer@kpmg.com; Andrew L. H. Smith (44-171) 311-5270, andrew.dr.smith@kpmg.co.uk
MBIA Inc.: Jan Nicholson (1-212) 415-6278, jan.nicholson@mbia.com; Jack Praschnik (1-212) 415-6279, jack.praschnik@mbia.com
Moody's Investors Service: Thomas M. Hughes (1-212) 553-7116, hughest@moodys.com
The Nomura Securities Co., Ltd.: Som-Lok Leung (1-212) 667-1551, sleung@nomurany.com
Oliver, Wyman & Company, LLC: James Dylan Roberts (1-212) 541-8100, droberts@owc.com
PricewaterhouseCoopers LLP: Charles A. Andrews (1-212) 520-2306, charles_andrews@notes.pw.com; James Vinci (1-212) 259-1842, james.vinci@us.pwcglobal.com
Prudential Insurance Company of America: Dennis M. Bushe (1-973) 802-9116, dennis.bushe@prudential.com; Craig R. Gardner (1-973) 802-7576, craig.gardner@prudential.com
Reuters, Ltd.: John Neasham (44-171) 250-1122, john.neasham@reuters.com
Royal Bank of Canada: Francine Blackburn (1-416) 974-6654, francine.blackburn@royalbank.com
Standard & Poor's: James E. Satloff (1-212) 208-5240; Stuart D. Braman (1-212) 208-5542, sbraman@mcgraw-hill.com
UBS AG: Kenneth J. Phelan (1-203) 719-1686, ken.phelan@wdr.com; Linda Bammann (1-203) 719-1985, lindabammann@wdr.com

CreditMetrics Monitor
April 1999

CreditMetrics Products
Introduction to CreditMetrics: A broad overview of the CreditMetrics methodology and
practical applications.

CreditMetrics Technical Document: A comprehensive reference on the CreditMetrics methodology. The document begins with an overview and simple examples. Later
chapters include details on parameter estimation, the model's assumptions, and the simulation
framework. The final chapters include a more complete example and a discussion of the application of the CreditMetrics measures of portfolio credit risk.
CreditMetrics Monitor: A semiannual publication which discusses a variety of credit
risk management issues, ranging from practical implementations to modeling and statistical
questions.
CreditMetrics data sets: Current market data (foreign exchange rates, yield curves, and
spread curves by industry and rating category), as well as derived data (industry correlations
and transition matrices). Current market data and industry correlations are updated weekly.
All of the above can be downloaded from the Internet at www.creditmetrics.com.
CreditManager PC application: A desktop software tool which implements the CreditMetrics methodology. Outputs include portfolio value distributions, marginal analyses, sector breakdowns, and stress tests. CreditManager can be purchased from the RiskMetrics
Group.
Trouble accessing the Internet? If you encounter any difficulties in either accessing the
CreditMetrics home page at www.creditmetrics.com or downloading the CreditMetrics data
files, you can call 1-212-981-7475.
