
A Comparison Of Value-At-Risk Methods For Portfolios Consisting Of Interest Rate Swaps And FRAs

Robyn Engelbrecht
Supervisor: Dr. Graeme West

December 9, 2003

Acknowledgements

I would like to thank Graeme West and Hardy Hulley for all their help, and especially Dr. West for the many consultations, as well as for providing me with his bootstrapping code. Thanks also to Shaun Barnarde of SCMB for providing the historical data.

Contents

1 Introduction
  1.1 Introduction to Value at Risk
  1.2 Problem Description

2 Assumptions and Background
  2.1 Choice of Return
  2.2 The Holding Period
  2.3 Choice of Risk Factors when Dealing with the Yield Curve
  2.4 Modelling Risk Factors
  2.5 The Historical Data

3 The Portfolios
  3.1 Decomposing Instruments into their Building Blocks
  3.2 Mapping Cashflows onto the set of Standard Maturities

4 The Delta-Normal Method
  4.1 Background
  4.2 Description of Method

5 Historical Value at Risk
  5.1 Classical Historical Simulation
    5.1.1 Background
    5.1.2 Description of Method
  5.2 Historical Simulation with Volatility Updating (Hull-White Historical Simulation)
    5.2.1 Background
    5.2.2 Description of Method

6 Monte Carlo Simulation
  6.1 Background
  6.2 Monte Carlo Simulation using Cholesky decomposition
  6.3 Monte Carlo Simulation using Principal Components Analysis

7 Results

8 Conclusions

A Appendix
  A.1 The EWMA Model
  A.2 Cash Flow Mapping
  A.3 The Variance of the Return on a Portfolio

B Appendix
  B.1 VBA Code: Bootstrapping Module
  B.2 VBA Code: VaR Module
  B.3 VBA Code: Portfolio Valuation Module

Chapter 1

Introduction
1.1 Introduction to Value at Risk

Value at Risk, or VaR, is a widely used measure of financial risk, which provides a way of quantifying and managing the risk of a portfolio. It has, in the past few years, become a key component of the management of market risk for many financial institutions.¹ It is used as an internal risk management tool, and has also been chosen by the Basel Committee on Banking Supervision as the international standard for external regulatory purposes.²

Definition 1.1 The VaR of a portfolio is a function of two parameters, a time period and a confidence level. It equals the loss on the portfolio that will not be exceeded by the end of the time period with the specified confidence level [2].

Estimating the VaR of a portfolio thus involves determining a probability distribution for the change in the value of the portfolio over the time period (known as the holding period). Consider a portfolio of financial instruments $i = 1, \ldots, n$, whose values at time $t$ depend on the $k$ market variables (risk factors)
$$x_t := \left(x_t^{(1)}, \ldots, x_t^{(k)}\right)$$
¹ Market risk is the risk associated with uncertainty about future earnings relating to changes in asset prices and market rates.
² The capital that a bank is required to hold against its market risk is based on VaR with a 10-day holding period at a 99% confidence level. Specifically, the regulatory capital requirement for market risk is defined as $\max\left(\mathrm{VaR}_{t-1},\ k \cdot \mathrm{Avg}\{\mathrm{VaR}_{t-i} \mid i = 1, \ldots, 60\}\right)$. Here $k$ is the multiplication factor, which is set to between 3 and 4 depending on previous backtest results, and $\mathrm{VaR}_t$ refers to a VaR estimate for day $t$ based on a 10-day holding period [1].

These market variables could be exchange rates, stock prices, interest rates, etc. Denote the monetary positions in these instruments by $W := (W_1, \ldots, W_n)$. (A fundamental assumption of VaR is that the positions in each instrument remain static over the holding period, so the subscript $t$ above has been omitted.) Let the values of the instruments at time $t$ be given by
$$P_t := \left(P_t^{(1)}(x_t), \ldots, P_t^{(n)}(x_t)\right) \tag{1.1.1}$$
where $P_t^{(i)}(x_t)$, $i \in \{1, \ldots, n\}$, are pricing formulae, such as the Black-Scholes pricing formula. The value of the portfolio at time $t$ is given by
$$V_t(P_t, W) = \sum_{i=1}^{n} W_i P_t^{(i)}(x_t) \tag{1.1.2}$$

Let $\Delta t$ be the length of the holding period, and $\alpha \in (0, 1)$ be the level of significance.³ VaR can be defined in terms of either the distribution of the portfolio value $V_{t+\Delta t}$, or in terms of the distribution of the arithmetic return of the portfolio $R_{t+\Delta t} = (V_{t+\Delta t} - V_t)/V_t$. Consider the $(100\alpha)$th percentile of the distribution of values of the portfolio at $t + \Delta t$. This is given by the value $v_\alpha$ in the expression⁴
$$P[V_{t+\Delta t} \le v_\alpha] = \alpha$$
Let $R_\alpha$ be the arithmetic return corresponding to $v_\alpha$, in other words $v_\alpha = V_t(1 + R_\alpha)$. The relative VaR of the portfolio is defined by
$$\mathrm{VaR(relative)} := E_t[V_{t+\Delta t}] - v_\alpha = V_t\left(E_t[R_{t+\Delta t}] - R_\alpha\right) \tag{1.1.3}$$
(1.1.3) gives the monetary loss of the portfolio over the holding period relative to the mean of the distribution. Sometimes the value of the portfolio at time $t$, $V_t$, is used instead of $E_t[V_{t+\Delta t}]$. This is known as the absolute VaR:
$$\mathrm{VaR(absolute)} := V_t - v_\alpha = -V_t R_\alpha \tag{1.1.4}$$
(1.1.4) gives the monetary loss relative to zero, i.e. without reference to the expected value. The equations above follow from the fact that $E_t[V_{t+\Delta t}] - v_\alpha = V_t(1 + E_t[R_{t+\Delta t}]) - V_t(1 + R_\alpha)$ and $V_t - v_\alpha = V_t - V_t(1 + R_\alpha)$. When the time horizon is small, the expected return can be quite small, and so the results of (1.1.3) and (1.1.4) can be similar, but (1.1.3) is a better estimate in general, since it accounts for the time value of money and pull-to-par effects, which may become significant towards the end of the life of a portfolio [3].

³ Typically $\alpha = 0.05$ or $\alpha = 0.01$, which correspond to confidence levels of 95% and 99% respectively.
⁴ This percentile always corresponds to a negative return, but VaR is quoted as a positive amount, hence the sign conventions in (1.1.3) and (1.1.4).

Thus the problem of estimating VaR is reduced to the problem of estimating the distribution of the portfolio value, or that of the portfolio return, at the end of the holding period. Typically this is done via the estimation of the distribution of the underlying risk factors. However, there is no single industry standard used to do this. The general techniques commonly used include analytic techniques (the Delta-Normal method and the Delta-Gamma method), Historical simulation, and Monte Carlo simulation. As described in [4], the techniques differ along two lines:

Local/Full valuation. This refers to whether the distribution is estimated using a Taylor series approximation (local valuation), or whether the method generates a number of scenarios and estimates the distribution by revaluing the portfolio under these scenarios (full valuation).

Parametric/Nonparametric. Parametric techniques involve the selection of a distribution for the returns of the market variables, and the estimation of the statistical parameters of these returns. Nonparametric techniques do not rely on these estimations, since it is assumed that the sample distribution is an adequate proxy for the population distribution.

For large portfolios, the choice of method presents a tradeoff of speed against accuracy, with the fast analytic methods relying on rough approximations, and the most realistic approach, Monte Carlo simulation, often considered too slow to be practical.

1.2 Problem Description

The aim of this project is to implement various VaR methods, and to compare them on portfolios consisting of the linear interest rate derivatives: forward rate agreements (FRAs) and interest rate swaps. The performance of a VaR method depends very much on the type of portfolio being considered, and the aim of this project is therefore to determine which of the methods is best suited to this type of portfolio. These derivatives represent an important share of the interest rate component of a bank's trading portfolios, since the more complicated interest rate derivatives do not trade that much in this country. The methods to be considered are:

1. The Delta-Normal method (classic RiskMetrics approach)
2. Historical Simulation
3. Historical Simulation with volatility updating
4. Monte Carlo simulation using Cholesky decomposition
5. Monte Carlo simulation using Principal Components Analysis

Although the Delta-Normal method and Monte Carlo simulation are parametric, and Historical simulation is nonparametric, direct comparison is possible since the distributional parameters will be estimated using the same historical data as will be used to generate scenarios for the Historical simulation methods. The methods will be tested on various hypothetical portfolios, by estimating VaR for these portfolios over the historical period, and comparing the estimates to the actual losses that occurred. VaR will be estimated for these portfolios at both the 95% and 99% confidence levels.

Implementation of the methods is to be done in VBA. This was decided on mostly for convenience, due to the large amount of data that needs to be read from and written to the spreadsheet. Historical swap curve interest rate data for a period of two years will be used, from 2 July 2001 to 30 June 2003. For the historical simulation techniques, a 1-day moving window of 250 days of data will be used when estimating VaR. This is in accordance with the Basel Committee requirements for the length of the historical observation period. VaR can therefore be determined from day 251 onwards, that is, from 2 July 2002 to 30 June 2003. For the Delta-Normal method and the Monte Carlo simulation methods, VaR can be determined for the entire historical period, since all that is required is a set of volatility and correlation estimates, which can be obtained from day one. All volatility and correlation estimates are performed using the Exponentially Weighted Moving Average technique, with a decay factor of $\lambda = 0.94$, as described in Appendix A.1.

Chapter 2

Assumptions and Background


2.1 Choice of Return

The arithmetic return on an instrument at time $t + \Delta t$ is given by
$$R_a(P_{t+\Delta t}^{(i)}) = (P_{t+\Delta t}^{(i)} - P_t^{(i)})/P_t^{(i)}$$
The equivalent geometric return is given by
$$R_g(P_{t+\Delta t}^{(i)}) = \ln(P_{t+\Delta t}^{(i)}/P_t^{(i)})$$
We define the arithmetic and geometric returns of the risk factors, $R_a(x_{t+\Delta t}^{(i)})$ and $R_g(x_{t+\Delta t}^{(i)})$, in exactly the same way.

Both formulations have their own advantages and disadvantages. The arithmetic return of an instrument is needed when aggregating assets across portfolios, since the arithmetic return of a portfolio is a weighted linear sum of the arithmetic returns of the individual assets (the corresponding formula for the return of a portfolio in terms of geometric returns of the assets is not linear). However, the geometric return aggregates better across time (the $n$-day geometric return is equal to a linear sum of the 1-day returns, whereas the corresponding formula for the $n$-day arithmetic return is not so simple). See [4] for more detail on this. The geometric return is also more meaningful economically, since, for example, if the geometric returns of an instrument, or of a market variable, are normally distributed, then the corresponding instrument prices, or the market variables themselves, are lognormally distributed, and so will never be negative. However, if arithmetic returns are normally distributed, then, looking at the left tail of the distribution of these returns, $R_a(P_{t+\Delta t}^{(i)}) < -1$ implies that $P_{t+\Delta t}^{(i)} < 0$.

If returns are small, then the difference between $R_a(P_{t+\Delta t}^{(i)})$ and $R_g(P_{t+\Delta t}^{(i)})$ is small, since we can use a Taylor series expansion to get
$$R_g(P_{t+\Delta t}^{(i)}) = \ln\left(\frac{P_{t+\Delta t}^{(i)}}{P_t^{(i)}}\right) = \ln\left(\frac{P_{t+\Delta t}^{(i)} - P_t^{(i)}}{P_t^{(i)}} + 1\right) = \frac{P_{t+\Delta t}^{(i)} - P_t^{(i)}}{P_t^{(i)}} - \frac{1}{2}\left(\frac{P_{t+\Delta t}^{(i)} - P_t^{(i)}}{P_t^{(i)}}\right)^2 + \ldots \approx \frac{P_{t+\Delta t}^{(i)} - P_t^{(i)}}{P_t^{(i)}} \tag{2.1.1}$$
to first order. (Equivalently for $R_g(x_{t+\Delta t}^{(i)})$.) In practice, when modelling returns, for the above reasons, geometric returns are preferred, but the assumption is always made, for convenience, that the geometric return of a portfolio is the linear weighted sum of the geometric returns of the individual instruments. Also, the definitions in (1.1.3) and (1.1.4) pertain to the arithmetic return of a portfolio. In practice, regarding these definitions, the distribution is again modelled according to geometric returns, and the approximation in (2.1.1) is used.
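As a quick numerical check of the approximation in (2.1.1) (the figures below are illustrative, not taken from the data), a 1% move gives
$$R_a = 0.0100, \qquad R_g = \ln(1.0100) \approx 0.00995,$$
a difference of about half a basis point, whereas for a large move such as $R_a = 0.50$ the geometric return is $\ln(1.50) \approx 0.405$. The approximation is therefore very accurate for daily-sized returns, but should not be relied on over long horizons or in extreme scenarios.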

2.2 The Holding Period

Although the holding period is typically a day, 10 days (for regulatory purposes), or a month, VaR calculations are always initially done on a holding period of 1 day, since this provides the maximum amount of historical information with which to estimate parameters. Time aggregation techniques are then used to transform the distribution for daily VaR into a distribution for the longer holding period (this is described in [3]). In the case where portfolio returns can be assumed to be i.i.d., it is possible to simply scale the VaR number obtained by multiplying it by the square root of the required time horizon. So from now on we assume that $\Delta t = 1$ day.¹

¹ Note that the short term market risk measurement with which VaR concerns itself is in contrast with credit risk measurement, where it is necessary to model changes in market variables over much longer periods of time (as well as to model the changing structure of the portfolio over time, whilst VaR, as mentioned previously, assumes a static portfolio).


2.3 Choice of Risk Factors when Dealing with the Yield Curve

The distribution used to estimate the VaR of a portfolio is determined from the distribution of the risk factors to which the portfolio is exposed. The first step to measuring the risk of a portfolio is to determine what these risk factors are. When dealing with the yield curve, the question which arises, due to the nontraded nature of interest rates, is whether the underlying risk factors are in fact the rates themselves, or the corresponding zero coupon bond prices. In this project, they were taken to be the zero coupon bond prices.² However, both methods are used in practice.

² This is the method used in [4].

2.4 Modelling Risk Factors

The easiest and most common model for risk factors is the geometric Brownian motion model; in other words, the risk factors follow the process
$$dx_t^{(i)} = \mu_i x_t^{(i)}\, dt + \sigma_i x_t^{(i)}\, dW_t^{(i)}$$
for $1 \le i \le k$, where $\mu_i$ and $\sigma_i$ represent the instantaneous drift and volatility, respectively, of the process for the $i$th risk factor, and the $(W_t^{(i)})_{t \ge 0}$ are Brownian motions, which will typically be correlated. To determine an expression for $x_t^{(j)}$, let $g(x_j) = \ln x_j$. Then $\frac{\partial g}{\partial x_i} = \delta_{ij}\frac{1}{x_j}$ and $\frac{\partial^2 g}{\partial x_i^2} = -\delta_{ij}\frac{1}{x_j^2}$, where $\delta_{ij}$ is the indicator function. By the multidimensional Ito formula we get
$$dg(x_t^{(j)}) = \sum_{i=1}^{k} \frac{\partial g}{\partial x_i}\, dx_t^{(i)} + \frac{1}{2}\sum_{i=1}^{k}\sum_{l=1}^{k} \frac{\partial^2 g}{\partial x_i \partial x_l}\, dx_t^{(i)} dx_t^{(l)} = \frac{1}{x_t^{(j)}}\, dx_t^{(j)} - \frac{1}{2}\,\frac{1}{(x_t^{(j)})^2}\,(dx_t^{(j)})^2 = \left(\mu_j - \frac{1}{2}\sigma_j^2\right) dt + \sigma_j\, dW_t^{(j)} \tag{2.4.2}$$
so that $\ln x_t^{(j)} = \ln x_0^{(j)} + \left(\mu_j - \frac{1}{2}\sigma_j^2\right) t + \sigma_j W_t^{(j)}$. Thus, discretizing this over $\Delta t$,
$$x_{t+\Delta t}^{(j)} = x_t^{(j)} \exp\left(\left(\mu_j - \frac{1}{2}\sigma_j^2\right)\Delta t + \sigma_j \sqrt{\Delta t}\, Z_j\right) \approx x_t^{(j)} \exp\left(\sigma_j \sqrt{\Delta t}\, Z_j\right) \tag{2.4.3}$$

where $Z \sim N_k(0, \Sigma)$ and $\Sigma$ is the variance-covariance matrix which will be defined in (4.2.2). The approximation in (2.4.3) follows since the holding period $\Delta t$ is only one day.

The major problem with the normality assumption is that, as described in [5], the distribution of daily returns of any risk factor would in reality typically show significant amounts of positive kurtosis. This leads to fatter tails and extreme outcomes occurring much more frequently than would be predicted by the normal distribution assumption, which would lead to an underestimation of VaR (since VaR is concerned with the tails of the distribution). But the model is generally considered to be an adequate description of the process followed by risk factors such as equities and exchange rates.

However, when dealing with the yield curve, the assumption of normality of returns clearly does not give an adequate description of reality, since interest rates are known to be mean reverting, and zero coupon bonds are subject to pull-to-par effects towards the end of their life. Although a simple interest rate model, such as a short rate model, may price an instrument more accurately (since it would capture the mean reverting property of interest rates), short rate models assume that all rates along the curve are perfectly correlated with the short rate, and thus will all move in the same direction. Since this is of course not the case in reality, such a model does not give much of an indication of the risk of a portfolio to moves in the yield curve. A multi-factor model increases complexity dramatically, and still does not give us what we want. To illustrate this, suppose that rates have been increasing for a while. An interest rate model would model a decrease in rates, due to the mean reverting effect which would come into play. However, what a risk measurement technique is in fact interested in is a continued upward trend. So although the normality assumption would lead to unrealistic term structures over longer time horizons, since we are dealing with a holding period of a day the assumption seems to be appropriate for risk measurement: to quote [3], "for risk management purposes, what matters is to capture the richness in movements of the term structure, not necessarily to price today's instruments to the last decimal point".
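As a small illustration of how (2.4.3) is used in the simulation methods later on, the following sketch performs the one-day update for a single risk factor; the function name is illustrative, sigma is an annualized volatility, and z is assumed to be a standard normal draw (correlation across factors is handled in Chapter 6).

    ' Sketch: one-day update of a single risk factor under (2.4.3).
    ' sigma is annualized, dt is the holding period in years (here 1/250),
    ' and z is a standard normal draw; the drift term is dropped as in the text.
    Function GBMStep(x As Double, sigma As Double, dt As Double, z As Double) As Double
        GBMStep = x * Exp(sigma * Sqr(dt) * z)
    End Function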

2.5 The Historical Data

The historical data provided consists of the following array of swap curve market rates:

- overnight, 1 month, and 3 month rates (simple yields)
- 3v6, 6v9, 9v12, 12v15, 15v18, 18v21 and 21v24 FRA rates (simple yields)
- 3, 4, 5, ..., 10, 12, 15, 20, 23, 25, 30 year swap rates (NACQ)

for the period from 2 July 2001 to 30 June 2003. Each array of rates was bootstrapped to determine a daily NACC swap curve. This was done using the Monotone Piecewise Cubic (MPC) interpolation method.³

Given that a large portfolio could depend on any number of the rates on the curve, and that measuring the risk of a portfolio typically involves the estimation of a variance-covariance matrix of the underlying risk factors, it can become infeasible, if not impossible, to estimate the risk of a portfolio with respect to each of these rates individually. A standard set of maturities needs to be defined in order to implement a risk measurement technique with a fixed set of data requirements. The rates at these standard maturities are then assumed to describe the interest rate risk of the entire curve, and all calculations (of volatilities, correlations, etc.) are based on this standard set of maturities. The standard set selected consists of the following 24 maturities, in days (these correspond as closely as possible to the actual/365 daycount used in South Africa):

1, 30, 91, 182, 273, 365, 456, 547, 638, 730, 1095, 1460, 1825, 2190, 2555, 2920, 3285, 3650, 4380, 5475, 6205, 7300, 9125 and 10950 days

Figure 2.1 shows daily zero coupon yield data over the 2 year period for the standard set of maturities. From this graph it is evident that interest rates were quite volatile during this period. In particular, note the spike in almost all the rates which occurs in December 2001 (this event significantly affects the results seen). Also, note that in June 2002 the entire yield curve inverts, resulting in the term structure changing from being approximately increasing to approximately decreasing.

³ For an explanation of this interpolation method, see [6]. The bootstrapping code was provided by Graeme West.


Figure 2.1: Daily zero coupon yields (NACC)


Chapter 3

The Portfolios
3.1 Decomposing Instruments into their Building Blocks

Since it is impossible to measure the risk of each of the enormous number of risk factors on which a portfolio could depend, simplifications are required. The first step in measuring the risk of a portfolio is to decompose the instruments making up the portfolio into simpler components. Both FRAs and swaps can be valued by decomposing them into a series of zero-coupon instruments.¹ The next step is to map these positions onto positions for which we have prices of the components. Finally, we estimate the VaR of the mapped portfolio.

A forward rate agreement (FRA) is an agreement that a certain interest rate will apply to a certain notional principal during a specific future period of time [7]. If the contract, with a notional principal of $L$, is entered into at time $t$, and applies for the period $T_1$ to $T_2$ ($T_2 - T_1$ is always 3 months), then the party on the long side of the contract agrees to pay a fixed rate $r$ (the FRA rate) and receive a floating rate $r_f$ (3-month JIBAR, a simple yield rate) over $T_1$ to $T_2$. This is equivalent to the following cashflows:

- time $T_1$: $-L$
- time $T_2$: $+L(1 + r(T_2 - T_1))$

So at any time $s$, where $t \le s \le T_1$, a FRA can be valued by discounting these cashflows. In other words
$$V = L\,B(s, T_2)(1 + r(T_2 - T_1)) - L\,B(s, T_1)$$
where $B(s, T)$ is the discount factor at $s$ for the period $s$ to $T$. The value to the short side is just the negative of this amount.

A swap is an agreement between two counterparties to exchange a series of future cashflows. The party on the long side of the swap agrees to make a series of fixed payments and to receive a series of floating payments, typically every 3 months, based on the notional principal $L$. The floating payments are based on the floating rate (typically 3-month JIBAR) observed at the beginning of that 3-month period. Payments always occur in arrears, i.e. they follow the natural time lag which many interest rate derivatives follow, allowing us to value a swap in terms of bonds. The value of the swap to the long side can be determined as the exchange of a fixed rate bond for a floating rate bond:
$$V = B_{\mathrm{float}} - B_{\mathrm{fix}}$$
The value to the short side is just the negative of this. Let $L$ be the notional principal and $R$ the effective swap rate per period, let there be $n$ payments remaining in the life of the swap, and let the time until the $i$th payment be $t_i$, $1 \le i \le n$. Then the value of the fixed leg at time $t$ is
$$B_{\mathrm{fix}} = R L \sum_{i=1}^{n} B(t, t_i)$$
If $t$ is a cashflow date, then the value of the floating leg is just
$$B_{\mathrm{float}} = L(1 - B(t, t_n))$$
If $t$ is not a cashflow date, then the value of the fixed leg is unchanged, but the value of the floating leg is determined using the prespecified JIBAR rate for the next payment. Suppose $t_{i-1} < t < t_i$, and let the JIBAR rate for $(t_{i-1}, t_i]$ be $r_f$ (a simple yield rate). Then the value of the floating leg is
$$B_{\mathrm{float}} = L\left(1 + r_f\,\frac{t_i - t_{i-1}}{365}\right) B(t, t_i) - L\,B(t, t_n)$$
The formula above is derived in [8]. This method of analysing risk by means of discounted cashflows enables us to get an idea of the sensitivity of the market value of an instrument to changes in the term structure of interest rates.

¹ Although FRAs and swaps can both be represented in terms of implied forward rates, decomposing them into zero coupon bonds makes them linear in the underlying risk factors.
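To make the FRA valuation formula concrete, the following sketch values a long FRA position from two discount factors; the function name and signature are illustrative, and the discount factors are assumed to have been read off the bootstrapped curve.

    ' Sketch: value of a long FRA, V = L*B(s,T2)*(1 + r*tau) - L*B(s,T1),
    ' where r is the FRA rate (simple yield), tau = T2 - T1 as a year fraction
    ' (about 0.25 here), and B1, B2 are the discount factors to T1 and T2.
    Function FRAValue(L As Double, r As Double, tau As Double, _
                      B1 As Double, B2 As Double) As Double
        FRAValue = L * B2 * (1 + r * tau) - L * B1
    End Function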


3.2 Mapping Cashflows onto the set of Standard Maturities

In general, we have a cashflow occurring at time $t_2$, and two standard maturities $t_1$ and $t_3$ which bracket $t_2$. We need a way of determining the value of the discount factor at $t_2$, which will necessarily involve some type of interpolation. There are various procedures for mapping the cashflows of interest rate positions. The cash flow mapping technique used here, again consistent with [4, Ch 6], splits each cashflow into two cashflows occurring at its closest standard vertices, and assigns weights to these two vertices in such a way that:

- Market value is preserved, i.e. the total market value of the two standardized cashflows is equal to the market value of the original cashflow.
- Variance is preserved, i.e. the variance of the portfolio of the two standardized cashflows is equal to the variance of the original cashflow.
- Sign is preserved, i.e. the standardized cashflows have the same sign as the original cashflow.

The method is described in Appendix A.2. This mapping is done for all cashflows in a portfolio, except of course in the case where the cashflow corresponds exactly to one of the standard vertices, where mapping becomes redundant. Using the mapped cashflows, we are able to determine VaR for the portfolio, since we have the required volatilities and correlations of the bond prices at these maturities. In the (nonparametric) Historical Simulation techniques, rather than mapping the position, a cashflow occurring at a non-standard maturity $t_2$ is valued by a simple linear interpolation between the zero coupon yields to maturity at the standard maturities $t_1$ and $t_3$, and, in the case of Hull-White Historical Simulation, also between the volatilities at the standard maturities.


Chapter 4

The Delta-Normal Method


4.1 Background

The Delta-Normal method (classic RiskMetrics approach) is a parametric, analytic technique in which the distributional assumption made is that the daily geometric returns of the market variables are multivariate normally distributed with mean return zero. The major advantage of this is that we can use the normal probability density function to estimate VaR analytically, using just a local valuation. The normality assumption for the returns of the underlying risk factors was discussed in Section 2.4. The more problematic assumption in general is the linearity assumption, since a first order approximation won't be able to adequately describe the risk of portfolios containing optionality. However, since this project is only concerned with linear instruments, the assumption is not problematic here. The advantages of this method include its speed (since it is a local valuation technique) and simplicity, and the fact that the distribution of returns need not be assumed to be stationary through time, since volatility updating is incorporated into the parameter estimation.

4.2 Description of Method

VaR is determined as the absolute VaR based on the distribution of portfolio returns, defined in (1.1.4). This in effect means that Delta-Normal VaR is a direct application of Markowitz portfolio theory, which measures the risk of a portfolio as the standard deviation of returns, based on the variances and covariances of the returns of the underlying instruments. The only real difference here is that in Markowitz portfolio theory the underlying variables are taken to be the instruments making up a portfolio, whereas in Delta-Normal VaR, because the distributional assumptions are based on the returns of the risk factors and not on the instruments, the underlying variables are taken to be the risk factors themselves.

As a consequence of the linearity assumption, we can in fact consider a portfolio of holdings in $n$ instruments as a portfolio of holdings, $W = (W_1, \ldots, W_k)$, in the $k$ underlying risk factors, where $k$ is not necessarily equal to $n$. Scaling these to determine the relative holdings $w_i = W_i / V_t$, $i = 1, \ldots, k$, so that the latter sum to unity, we get the vector $w = (w_1, \ldots, w_k)$. Let $R_t^{(i)}$ denote the arithmetic return of the $i$th risk factor at time $t$. The (arithmetic) return of the portfolio at time $t$ is
$$R_t = \sum_{i=1}^{k} w_i R_t^{(i)}$$
The expected return of the portfolio at time $t$ is
$$E[R_t] = E\left[\sum_{i=1}^{k} w_i R_t^{(i)}\right] = \sum_{i=1}^{k} w_i E[R_t^{(i)}] = 0$$
using the approximation in (2.1.1) and the assumption that the geometric returns have mean zero. The variance of the portfolio return at time $t$ is given by
$$\mathrm{Var}[R_t] = w^T \Sigma w \tag{4.2.1}$$
where $\Sigma$ is the variance-covariance matrix given by
$$\Sigma = \begin{pmatrix} \sigma_1^2 & \sigma_1 \sigma_2 \rho_{1,2} & \ldots & \sigma_1 \sigma_k \rho_{1,k} \\ \sigma_2 \sigma_1 \rho_{1,2} & \sigma_2^2 & \ldots & \sigma_2 \sigma_k \rho_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_k \sigma_1 \rho_{1,k} & \sigma_k \sigma_2 \rho_{2,k} & \ldots & \sigma_k^2 \end{pmatrix} \tag{4.2.2}$$
Here $\sigma_i$ is the standard deviation (volatility) of the $i$th risk factor and $\rho_{i,j}$ is the correlation coefficient of $R_t^{(i)}$ and $R_t^{(j)}$, defined by $\rho_{i,j} := \mathrm{Cov}[R_t^{(i)}, R_t^{(j)}]/(\sigma_i \sigma_j)$. See Appendix A.3 for the derivation of this.

Now, under the assumption of normality of the distribution of $R_{t+\Delta t}$, given the value of the standard deviation of $R_{t+\Delta t}$, $\mathrm{SDev}[R_{t+\Delta t}] = \sqrt{\mathrm{Var}[R_{t+\Delta t}]}$, the VaR can be determined analytically via the $(100\alpha)$th percentile of $R_{t+\Delta t}$. It should first be noted that volatility is always an annual measure, so to measure daily VaR we need to include a scaling factor of $\frac{1}{\sqrt{250}}$ (where 250 is the approximate number of business days in a year). VaR is determined as
$$\mathrm{VaR(absolute)} = |V_t|\, z_\alpha\, \mathrm{SDev}[R_{t+\Delta t}]\,\frac{1}{\sqrt{250}} = |V_t|\, z_\alpha \sqrt{\frac{w^T \Sigma w}{250}} = z_\alpha \sqrt{\frac{W^T \Sigma W}{250}} \tag{4.2.3}$$
where $z_\alpha$ is the inverse of the cumulative normal distribution function. The variance-covariance matrix is estimated using the Exponentially Weighted Moving Average scheme.
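As an illustration of (4.2.3), the sketch below computes Delta-Normal VaR directly from a vector of monetary mapped positions and an annualized covariance matrix. The function name, the array layout and the use of the worksheet function NormSInv are illustrative assumptions; the project's own implementation is listed in Appendix B.2.

    ' Sketch: Delta-Normal VaR from monetary positions W(1..k) and an
    ' annualized covariance matrix Sigma(1..k, 1..k), as in (4.2.3).
    Function DeltaNormalVaR(W() As Double, Sigma() As Double, _
                            confidenceLevel As Double) As Double
        Dim i As Integer, j As Integer, k As Integer
        Dim variance As Double, zAlpha As Double
        k = UBound(W)
        ' W' * Sigma * W: the annualized monetary variance of the portfolio
        For i = 1 To k
            For j = 1 To k
                variance = variance + W(i) * Sigma(i, j) * W(j)
            Next j
        Next i
        zAlpha = Application.WorksheetFunction.NormSInv(confidenceLevel)
        ' scale the annual standard deviation down to a one-day figure
        DeltaNormalVaR = zAlpha * Sqr(variance / 250)
    End Function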


Chapter 5

Historical Value at Risk


5.1 Classical Historical Simulation

5.1.1 Background

Here the distribution of the returns of the risk factors is determined by drawing samples from the time series of historical returns. This is a nonparametric technique, since no distributional assumptions are made, other than assuming stationarity of the distribution of returns of the market variables (in particular their volatility) through time. If this assumption did not hold, the returns would not be i.i.d., and the drawings would originate from different underlying distributions. The assumption does not hold in reality, since, as described in [4], volatility changes through time, and periods of high volatility and low volatility tend to cluster together. Historical simulation is a full revaluation technique. A possible problem with the implementation of this method could be a lack of sufficient historical data. Also, comparing this method to Monte Carlo simulation: whereas Monte Carlo will typically simulate about 10000 sample paths for each risk factor, Historical simulation considers only one sample path for each risk factor (the one that actually happened). The advantage of Historical simulation is that since it is nonparametric, all aspects of the actual distribution are captured, including kurtosis and fat tails. Also, the full revaluation means that all nonlinearities are accounted for, although it does make this technique more computationally intensive than the Delta-Normal method.


5.1.2 Description of Method

The approach is straightforward. Let the window be of size $N \ge 250$, to be in accordance with the regulatory requirement for the minimum length of the observation window. Scenarios are generated by determining a time series of historical day-to-day returns of the market variables over the last $N$ business days, and then assuming that the return of each market variable from $t$ to $t+1$ is the same as the return was over each day in the time series. In other words, as described in [9], if $x_t^{(j)}$ is the value of a particular market variable at time $t$, then a set of hypothetical values $x_{t+1}^{(j,i)}$, for all $i \in \{t-N, \ldots, t-1\}$, is determined by the relationship
$$\ln\left(\frac{x_{t+1}^{(j,i)}}{x_t^{(j)}}\right) = \ln\left(\frac{x_{i+1}^{(j)}}{x_i^{(j)}}\right) \tag{5.1.1}$$
that is,
$$x_{t+1}^{(j,i)} = x_t^{(j)} \cdot \frac{x_{i+1}^{(j)}}{x_i^{(j)}}$$
This is done for all $j \in \{1, \ldots, k\}$, to determine the matrix $x_{t+1}^{(j,i)}$, $1 \le j \le k$, $1 \le i \le N-1$, and a full revaluation of each instrument in the portfolio is done for each column using (1.1.1), to determine $P_{t+1}^{(i)}$, $i \in \{t-N, \ldots, t-1\}$. From these we determine $V_{t+1}^{(i)}$, $i \in \{t-N, \ldots, t-1\}$, and sort the results to generate a histogram describing the pmf of the future portfolio value. VaR is then determined using definition (1.1.3), by determining the mean and the appropriate percentile of this distribution.

Since interest rates are not taken to be the risk factors, but zero coupon bond prices are, to relate these to the above, let $B(t, \tau)$ denote the price at time $t$ of a $\tau$-period zero coupon bond, which corresponds to the continuously compounded annual yield to maturity $r_t(\tau)$, so $B(t, \tau) = \exp(-r_t(\tau)\tau)$. Then (5.1.1) becomes
$$\ln\left(\frac{B^{(i)}(t+1, \tau)}{B(t, \tau)}\right) = \ln\left(\frac{B(i+1, \tau)}{B(i, \tau)}\right)$$
$$-\tau\left[r_{t+1}^{(i)}(\tau) - r_t(\tau)\right] = -\tau\left[r_{i+1}(\tau) - r_i(\tau)\right]$$
$$r_{t+1}^{(i)}(\tau) = r_t(\tau) + r_{i+1}(\tau) - r_i(\tau)$$
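A minimal sketch of the scenario rule just derived, applied to a single zero coupon yield: histRates(1..N+1) is assumed to hold the historical yields for one standard maturity, oldest first, and the function returns the hypothetical yields for day t+1. Names and array layout are illustrative; the project's own implementation is in Appendix B.2.

    ' Sketch: classical historical simulation scenarios for one yield,
    ' r_scenario(i) = r_today + (r(i+1) - r(i)), as in Section 5.1.2.
    Function HistScenarios(histRates() As Double, rToday As Double) As Double()
        Dim i As Integer, N As Integer
        Dim scen() As Double
        N = UBound(histRates) - 1          ' number of day-to-day changes
        ReDim scen(1 To N)
        For i = 1 To N
            scen(i) = rToday + (histRates(i + 1) - histRates(i))
        Next i
        HistScenarios = scen
    End Function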


5.2 Historical Simulation with Volatility Updating (Hull-White Historical Simulation)

5.2.1 Background

This method is proposed in [2]. It is an extension of traditional Historical Simulation, which does away with the drawback of assuming constant volatility. [2] mentions that the distribution of a market variable, when scaled by an estimate of its volatility, is often found to be approximately stationary. If the volatility of the return of a market variable at the current time t is high, then due to the tendency of volatility to cluster, one would expect a high return at time t + 1. If historical volatility was in reality low relative to the value at t, however, the historical returns would underestimate the returns one would expect to occur from t to t + 1. The reverse is true if the volatility is low at time t and the historical volatility is high relative to this. This approach incorporates the exponentially weighted moving average volatility updating scheme, used in the Delta-Normal method, into classical Historical Simulation.

5.2.2 Description of Method

Rather than assuming that the return of each market variable from $t$ to $t+1$ is the same as it was over each day in the time series, it is now the returns scaled by their volatility which we assume will reoccur. This is described in [9]. If $x_t^{(j)}$ is the value of a particular market variable at time $t$, and $\sigma_{t,j}$ is the volatility of this market variable at time $t$, then a set of hypothetical values $x_{t+1}^{(j,i)}$, for all $i \in \{t-N, \ldots, t-1\}$, is determined by the relationship
$$\frac{1}{\sigma_{t,j}} \ln\left(\frac{x_{t+1}^{(j,i)}}{x_t^{(j)}}\right) = \frac{1}{\sigma_{i,j}} \ln\left(\frac{x_{i+1}^{(j)}}{x_i^{(j)}}\right) \tag{5.2.2}$$
that is,
$$x_{t+1}^{(j,i)} = x_t^{(j)} \left(\frac{x_{i+1}^{(j)}}{x_i^{(j)}}\right)^{\sigma_{t,j}/\sigma_{i,j}}$$
and we proceed as in Section 5.1.2. Relating this to interest rates, using the same notation as above, (5.2.2) becomes
$$\frac{1}{\sigma_{t,j}} \ln\left(\frac{B^{(i)}(t+1, \tau)}{B(t, \tau)}\right) = \frac{1}{\sigma_{i,j}} \ln\left(\frac{B(i+1, \tau)}{B(i, \tau)}\right)$$
$$\frac{1}{\sigma_{t,j}}\left[r_{t+1}^{(i)}(\tau) - r_t(\tau)\right] = \frac{1}{\sigma_{i,j}}\left[r_{i+1}(\tau) - r_i(\tau)\right]$$
$$r_{t+1}^{(i)}(\tau) = r_t(\tau) + \frac{\sigma_{t,j}}{\sigma_{i,j}}\left[r_{i+1}(\tau) - r_i(\tau)\right]$$

Chapter 6

Monte Carlo Simulation


6.1 Background

Monte Carlo simulation techniques are by far the most flexible and powerful, since they are able to take into account all non-linearities of the portfolio value with respect to its underlying risk factors, and to incorporate all desirable distributional properties, such as fat tails and time varying volatilities. Also, Monte Carlo simulations can be extended to apply over longer holding periods, making it possible to use these techniques for measuring credit risk. However, these techniques are also by far the most expensive computationally. Typically the number of simulations of each random variable needed, in order to get a sample which reasonably approximates the actual distribution, is around 10000.

Like Historical simulation, this is a full revaluation approach. Here, however, rather than drawing samples from the distribution of historical returns, a stochastic process is selected for each of the risk factors, the parameters of the returns process are estimated (again using exponentially weighted moving averages here), and scenarios are determined by simulating price paths for each of these risk factors over the holding period. The portfolio is revalued under each of these scenarios to determine a pmf of portfolio values, given by a histogram. VaR is determined as in Historical simulation, using definition (1.1.3). Apart from the computational time needed, another potential drawback of this method is that, since specific stochastic processes need to be selected for each market variable, the method is very exposed to model risk. Also, sampling variation, due to only being able to perform a limited number of simulations, can be a problem.


6.2 Monte Carlo Simulation using Cholesky decomposition

Each bond price corresponding to the set of standard maturities is simulated according to its variance-covariance matrix $\Sigma$, using (2.4.3). Returns are generated by performing a Cholesky decomposition of the matrix $\Sigma$ to determine the lower triangular matrix $L$ such that $\Sigma = L L^T$. (A derivation of the Cholesky decomposition is contained in [10].) We can then simulate normal random numbers according to this distribution by simulating the independent normal random vector $Z \sim N_k(0, I)$ and calculating the matrix product $X = LZ$, since we have that
$$E[X] = L\,E[Z] = 0$$
and
$$\mathrm{Var}[X] = E[(LZ)(LZ)^T] = E[L Z Z^T L^T] = L\,E[Z Z^T]\,L^T = L L^T = \Sigma$$

An issue to note here is that the Cholesky decomposition can only be performed on matrices which are positive semi-definite.¹ Theoretically this will be the case, since the formula for the variance of a portfolio, given by (4.2.1), where $w$ is a (nonzero) vector of portfolio weights, can never be negative. However, in practice, when dealing with large covariance matrices with highly correlated components, this theoretical property can break down. Thus covariance matrices should always be checked for positive semi-definiteness before trying to apply the Cholesky decomposition (or principal components analysis, which is discussed below). A solution to the problem is described in [11].

¹ A symmetric matrix $A$ is positive semidefinite if and only if $b^T A b \ge 0$ for all nonzero $b$.
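A minimal sketch of the two steps just described: a Cholesky factorization of the covariance matrix, followed by the generation of one correlated normal vector X = LZ. The routines assume the matrix passed in is positive definite (no checks are made), and the use of NormSInv(Rnd) as a normal generator is a deliberate simplification; names are illustrative.

    ' Sketch: Cholesky factor L of a positive definite matrix Sigma (Sigma = L*L^T).
    Function CholeskyFactor(Sigma() As Double, k As Integer) As Double()
        Dim L() As Double, i As Integer, j As Integer, p As Integer, s As Double
        ReDim L(1 To k, 1 To k)
        For i = 1 To k
            For j = 1 To i
                s = Sigma(i, j)
                For p = 1 To j - 1
                    s = s - L(i, p) * L(j, p)
                Next p
                If i = j Then
                    L(i, j) = Sqr(s)            ' assumes s > 0 (positive definite input)
                Else
                    L(i, j) = s / L(j, j)
                End If
            Next j
        Next i
        CholeskyFactor = L
    End Function

    ' Sketch: one correlated normal draw X = L*Z with Z ~ N(0, I).
    Function CorrelatedNormals(L() As Double, k As Integer) As Double()
        Dim Z() As Double, X() As Double, i As Integer, j As Integer, u As Double
        ReDim Z(1 To k): ReDim X(1 To k)
        For i = 1 To k
            u = Rnd(): If u <= 0 Then u = 0.5   ' guard: NormSInv is undefined at 0
            Z(i) = Application.WorksheetFunction.NormSInv(u)
        Next i
        For i = 1 To k
            For j = 1 To i                      ' L is lower triangular
                X(i) = X(i) + L(i, j) * Z(j)
            Next j
        Next i
        CorrelatedNormals = X
    End Function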

6.3 Monte Carlo Simulation using Principal Components Analysis

Principal Components Analysis (PCA) provides a way of decreasing the dimensionality of a set of random variables which are highly correlated. It is in fact often possible to describe a very large proportion of the movements in these random variables using a relatively small number of principal components, which can very effectively decrease the computational time needed for simulation. As could be seen in Figure 2.1, the movements of rates along the yield curve are highly interdependent, thus PCA is often applied to the yield curve. The random variables are now taken to be the returns in zero coupon yields at all 24 standard maturities, as opposed to the bond price returns used in the previous methods considered.

Principal components are hypothetical random variables that are constructed to be uncorrelated with one another. Suppose we have a vector of returns in zero coupon yields (dropping the subscript $t$ for now) $x = (x_1, \ldots, x_k)^T$. The first step is to determine the normalized $k \times 1$ vector of returns $y = (y_1, \ldots, y_k)^T$ where $y_i = x_i / \sigma_i$.² $y$ is transformed into a vector of principal components, $d = (d_1, \ldots, d_k)$, where each principal component is a simple linear combination of the original random vector. At the same time, it is determined how much of the total variation of the original vector is described by each of the principal components.

Suppose the $k \times k$ correlation matrix of $y$ is given by $\Sigma_y = [\rho_{ij}]$, $1 \le i, j \le k$. It is determined from the covariance matrix (4.2.2) of $x$, found by exponentially weighted moving averages. To define the principal components of $y$, as described in [12], define the $k \times 1$ vector $\lambda := (\lambda_1, \ldots, \lambda_k)^T$ to be the vector of eigenvalues of $\Sigma_y$, ordered in decreasing order of magnitude. Also define the $k \times k$ matrix $\Gamma := [\gamma^{(1)}, \ldots, \gamma^{(k)}]$, where the columns $\gamma^{(i)}$, $1 \le i \le k$, are the orthonormal³ eigenvectors of $\Sigma_y$ corresponding to the $\lambda_i$. The principal components are defined by
$$d := \Gamma^T y \tag{6.3.1}$$
Since $\Gamma$ is orthogonal, premultiplying by $\Gamma$ gives
$$\Gamma \Gamma^T y = y = \Gamma d \tag{6.3.2}$$
so that
$$y_i = d_1 \gamma_i^{(1)} + d_2 \gamma_i^{(2)} + \ldots + d_k \gamma_i^{(k)} \tag{6.3.3}$$
In other words, $y$ can be determined as a linear combination of the components. Now, since the new random variables $d_i$ are ordered by the amount of variation they explain,⁴ considering the $i$th entry of $y$, we get
$$y_i = d_1 \gamma_i^{(1)} + d_2 \gamma_i^{(2)} + \ldots + d_m \gamma_i^{(m)} + \epsilon_i \tag{6.3.4}$$

² The normalized random variables are defined as $y_i = (x_i - \mu_i)/\sigma_i$, where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of $x_i$, but as before, we assume a mean return of 0. The analysis is done on these normalized variables, and once the components have been determined, we simply transform back to the original form of the data.
³ A $k \times k$ matrix $U$ is orthonormal if and only if $U^T U = I = U U^T$.
⁴ The total variation described by the $i$th eigenvector is measured as $\lambda_i / \left(\sum_{j=1}^{k} \lambda_j\right)$.


where $m < k$ and the error term $\epsilon_i$ is introduced since we are only using the first $m$ components. These $m$ components are then considered to be the key risk factors, with the rest of the variation in $y$ being considered as noise. As shown in [13], the principal components $d_i$ are also orthogonal, in other words they are uncorrelated, so their covariance matrix is just the diagonal matrix of their variances. Also, their variances are equal to their corresponding eigenvalues $\lambda_i$. This means they can easily be simulated independently, without the need for a Cholesky decomposition, which cuts down the computational requirements even further. Now, using the approximation (6.3.4), we can rewrite (6.3.2) as
$$y \approx \Gamma d \tag{6.3.5}$$
where $\Gamma$ here denotes the $k \times m$ matrix of the first $m$ eigenvectors, and $d = (d_1, \ldots, d_m)^T$ is an $m \times 1$ vector. Simulating $d$ enables us to obtain an approximation for $y$. The more components that are taken, the better the approximation. Here the assumption was made that the yields are lognormally distributed, so by simulating the components according to the normal distribution, we obtain an approximation for the vector of normalized returns $y$ as a linear combination of these (this is also normally distributed). This is transformed back into the form of the returns $x$, from which we can determine a scenario vector for the yields themselves. When considering a vector $x$ of yield curve returns, $m$ is typically taken to be 3 (the first three components are assumed to describe most of the movement in the yield curve).

The eigenvectors of $\Sigma_y$ describe the modes of fluctuation of $y$ [12]. Specifically, the first eigenvector describes a parallel shift of all rates on the curve, the second describes a twist or steepening of the yield curve, and the third describes a butterfly move, i.e. short rates and long rates moving in one direction and intermediate rates moving in the opposite direction. The remaining eigenvectors describe other modes of fluctuation. To illustrate this, Figure 6.1 shows the correlation matrix for 30 June 2003, and the decomposition of this matrix into its eigenvalues and eigenvectors. Note how the entries of the first eigenvector all have the same sign, corresponding to a parallel shift in the yield curve. Likewise, the second component corresponds to half the rates moving in one direction and the other half in the other direction (a twist in the yield curve), and the third corresponds to a butterfly move. Also, the total variation described by the first three eigenvectors is seen here to be approximately 90%.
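The simulation step in (6.3.5) is cheap once the eigenvalues and eigenvectors are available. The sketch below generates one scenario for the k normalized yield returns; since VBA has no built-in eigendecomposition, gamma and lambda are assumed to have been computed elsewhere (for example on a spreadsheet), and the normal generator is the same simplification used in the Cholesky sketch. All names are illustrative.

    ' Sketch: one PCA scenario for the k normalized yield returns, using the
    ' first m principal components. gamma(1..k, 1..m) holds the leading
    ' eigenvectors of the correlation matrix, lambda(1..m) the corresponding
    ' eigenvalues; each component is simulated as d_j ~ N(0, lambda(j)).
    Function PCAScenario(gamma() As Double, lambda() As Double, _
                         k As Integer, m As Integer) As Double()
        Dim i As Integer, j As Integer, u As Double
        Dim d() As Double, y() As Double
        ReDim d(1 To m): ReDim y(1 To k)
        For j = 1 To m
            u = Rnd(): If u <= 0 Then u = 0.5   ' guard: NormSInv is undefined at 0
            d(j) = Sqr(lambda(j)) * Application.WorksheetFunction.NormSInv(u)
        Next j
        For i = 1 To k
            For j = 1 To m
                y(i) = y(i) + gamma(i, j) * d(j)   ' y = Gamma * d, as in (6.3.5)
            Next j
        Next i
        PCAScenario = y
    End Function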


Figure 6.1: Correlation matrix of yield returns, and the corresponding eigenvalues and eigenvectors (ordered in decreasing magnitude of eigenvalue). It is easily seen that the first three eigenvectors correspond to a parallel shift, a twist, and a butterfly move of the yield curve.


Chapter 7

Results
The way in which a VaR model is assessed statistically is by performing a backtest, which determines whether the number of times VaR is exceeded is consistent with what is expected for the model. The Basel Committee regulatory requirement for the backtesting window for Historical simulation methods is 250 trading days. This is over and above the 250 trading day observation window needed to estimate VaR itself, which means that backtesting can only begin 500 days into the data. There is therefore insufficient historical data to perform a backtest, and so a qualitative assessment is done instead.

The methods were tested on various hypothetical portfolios, by estimating VaR for these portfolios over the historical period and comparing the estimates to the actual losses that occurred. Test portfolios consist of individual instruments (a swap or a FRA). A variety of maturities were considered, so that some portfolios were exposed only to short term interest rate risk, and others to more medium term interest rate risk. Both long and short positions were considered. Note that we are interested in the ability of a method to estimate VaR for a specific portfolio, so, for example, if we are estimating the VaR of a portfolio consisting of a 3v6 FRA at time t, we are estimating the value of that portfolio one business day into its life, at time t+1. When we are at time t+1 we are interested in estimating the VaR of a new 3v6 FRA, one business day into its life. In all graphs, the histogram shows the daily profit-and-loss (P&L), and the line graphs show the negative daily VaR of the various methods (negative in order to correspond with the losses of the P&L). From here on, the methods are abbreviated as follows:

DN: Delta-Normal method
HS: Classical Historical simulation
HW: Hull-White Historical simulation
MC: Monte Carlo simulation using Cholesky decomposition

My code for Monte Carlo simulation using PCA unfortunately has a bug in it which, due to time constraints, I was unable to fix, so the results for this method have not been graphed. Looking at the trends of the results, though, the method seems to correspond fairly well to MC, but the results are on a much smaller scale.

Figure 7.1 and Figure 7.2 give results for a long position in a 3v6 FRA with a notional principal of R10m, using all methods, at the 95% and 99% confidence levels respectively. Notice that the period of high volatility towards the end of 2001, which we saw in Figure 2.1, is very prevalent here, and in all the results. In both graphs, apart from very slight sampling errors in MC (which was done using 8000 simulations), DN and MC track one another exactly. This is because the FRA is completely linear in the underlying zero coupon bond prices; both methods assumed that the daily bond price returns were multivariate normally distributed; and both methods use the same volatility and correlation estimates. In this case, therefore, the use of MC is not justified at all, since the computational time required was enormous in comparison to that of DN. This was the case for all portfolios considered, since swaps are completely linear in the underlying bond prices as well.

Now, to consider the performance of the methods, notice how quickly MC and DN react to increases in volatility. This is due to the exponentially weighted moving average technique, which places a very high weighting on recent events. Often these methods can overreact to a sudden increase in volatility because of this. Considering Figure 7.1 and Figure 7.2, DN and MC do seem to be overreacting to large losses, relative to the other methods. DN and MC seem to be performing similarly at both the 95% and 99% confidence levels (although of course 99% VaR is necessarily higher than 95% VaR).

Consider the performance of HS. At the 95% confidence level, although VaR is not underestimated or overestimated very badly, the method just doesn't react to changes in P&L. At the 99% confidence level, the method reacts slightly more, but overestimates VaR badly. The reason it reacts more is that the 99% confidence level takes us further into the tails of the distribution. Essentially, what HS does is take an unweighted average of the 1st or the 5th percentile of the estimated value distribution, based on a 250 day window. Considering Figure 7.2, the volatile period towards the end of 2001 will therefore contribute significantly, up until the day it drops out of the observation window, at which point the VaR drops dramatically. The drop in VaR in December 2002 appears to correspond to the drop in VaR of the other methods. However, the real reason for the drop is that this is exactly the point at which the very volatile period from a year ago begins to drop out of the window. By the time we get to February 2003, HS is at its lowest, whilst the other methods are at their highest. At this point, however, it begins to increase again, since there are now enough days in the new window which had relatively high returns. HS then remains high, even though the last four months were very quiet.

Finally, considering HW at the 95% confidence level, we see that this method is a major improvement on HS. In this particular graph, it performs better than all the other methods. It reacts well to changes in volatility, yet does not overreact, which is the case with DN and MC. At the 99% confidence level, however, HW isn't performing well at all, and overreacts to changes in volatility even more than DN and HS. There is no obvious explanation for this, and it perhaps merits further investigation.

Figure 7.3 and Figure 7.4 show results for a short position in a 3 year interest rate swap, with a notional principal of R1m and quarterly payments, using the 4 methods, at the 95% and 99% confidence levels respectively. Again we see that December 2001 is prevalent. Considering the 95% confidence level, we see that DN and MC perform very well over most of the period, except for a few months in early 2002 where they overestimate VaR. The performance of HS in this case is almost identical to DN and MC, and, in this graph, even HS seems to be performing reasonably, although it overestimates VaR until the volatile period drops out of the window. The 99% confidence level shows up bigger differences in the methods, with DN and MC definitely performing better than HS and HW. HS is again completely overestimating VaR, for the reasons explained above. The deeper into the tails we go, the greater the effect of December 2001 will be on HS, since the 1st percentile corresponds to a worse loss than the 5th percentile, which means VaR is overestimated even more.

Figure 7.5 shows results for a long position in a 5 year interest rate swap, with a notional principal of R1m and quarterly payments, at the 95% confidence level. (MC is not included since it is again equivalent to DN.) This longer dated swap is exposed to bond prices at three-monthly intervals out to five years. We see a similar pattern to before, with HW reacting best to changes in the P&L at the 95% confidence level, but the performance of all methods seems to be satisfactory.

All in all, for the portfolios which were considered, it seems as though the methods commonly overestimate VaR. Although a VaR method is assessed by the number of times VaR was exceeded, too few exceedances is not a good thing either, since an overestimation of VaR would lead to a capital requirement for market risk which is higher than it should be (the bank should, in this case, rather be spending some of its capital on risk taking).
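Although a formal backtest was not possible with the available data, the statistic it would be based on is simple to compute: the number of days on which the realized loss exceeded the reported VaR. The sketch below is illustrative only, and the array names are assumptions.

    ' Sketch: count VaR exceedances for a backtest. pnl(i) is the realized
    ' daily P&L and varEst(i) the (positive) VaR reported for that day; an
    ' exceedance occurs when the loss is larger than the VaR estimate.
    Function CountExceedances(pnl() As Double, varEst() As Double) As Integer
        Dim i As Integer, numExceed As Integer
        For i = LBound(pnl) To UBound(pnl)
            If pnl(i) < -varEst(i) Then numExceed = numExceed + 1
        Next i
        CountExceedances = numExceed
    End Function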

Figure 7.1: Long 3v6 FRA at 95% confidence level

Figure 7.2: Long 3v6 FRA at 99% confidence level


Figure 7.3: Short 3 year swap at 95% confidence level

Figure 7.4: Short 3 year swap at 99% confidence level


Figure 7.5: Long 5 year swap at 95% confidence level


Chapter 8

Conclusions
The objective of this project was to implement various VaR methods, and then to compare the performance of the methods on linear interest rate derivatives. Once the methods had been implemented, it proved difficult to draw conclusive results as to which method is best, using only two years' worth of historical data (which contains one incredibly volatile period). However, a lot of the typical characteristics of these methods were evident in the results obtained.

One conclusion that can be drawn is that HS performs the worst. This is despite the fact that the vast majority of banks worldwide use this approach. HW seems to be, in general, an improvement on HS. It is also probably more feasible than MC (based on Cholesky decomposition) when dealing with large portfolios, since the computational time required for the latter method is extremely high. Monte Carlo simulation using PCA seems like a promising alternative to Cholesky decomposition, and it would have been very interesting to be able to compare the performance of these two methods, since the PCA code runs significantly faster.

To expand on the work done in this project, further research into the performance of these VaR methods for nonlinear interest rate derivatives could be done. However, a longer period of historical data should definitely be considered in order to be able to draw more conclusive results.


Bibliography
[1] Basel Committee on Banking Supervision. Amendment to the capital accord to incorporate market risks. www.bis.org/publ/bcbs24.htm, January 1996.

[2] John Hull and Alan White. Incorporating volatility updating into the historical simulation method for Value-at-Risk. Journal of Risk, Fall 1998.

[3] Philippe Jorion. Value at Risk: the new benchmark for controlling market risk. McGraw-Hill, second edition, 2001.

[4] J.P. Morgan and Reuters. RiskMetrics - Technical Document. J.P. Morgan and Reuters, New York, fourth edition, December 18, 1996. www.riskmetrics.com.

[5] John Hull and Alan White. Value at Risk when daily changes are not normal. Journal of Derivatives, Spring 1998.

[6] James M. Hyman. Accurate monotonicity preserving cubic interpolation. SIAM Journal on Scientific and Statistical Computing, 4(4):645-654, 1983.

[7] John Hull. Options, Futures, and Other Derivatives. Prentice Hall, fifth edition, 2002.

[8] Robert Jarrow and Stuart Turnbull. Derivative Securities. Second edition, 2000.

[9] Graeme West. Risk Measurement. Financial Modelling Agency, 2003. graeme@finmod.co.za.

[10] Gene Golub and Charles Van Loan. Matrix Computations. Third edition, 1996.

[11] Nicholas J. Higham. Computing the nearest correlation matrix - a problem from finance. IMA Journal of Numerical Analysis, 22:329-343, 2002.

[12] Glyn A. Holton. Value at Risk: Theory and Practice. Academic Press, 2003.

[13] Carol Alexander. Orthogonal methods for generating large positive semi-definite covariance matrices. Discussion Papers in Finance 2000-06, 2000.

Appendix A

Appendix
A.1 The EWMA Model

The exponentially weighted moving average (EWMA) model is the model used by [4] to determine historical volatility and correlation estimates. A moving average of historical observations is used, in which the latest observations carry the highest weight in the estimates. The description below is taken from [9]. If we have historical data for market variables $x_0, x_1, \ldots, x_n$, first determine the geometric returns of these variables,
$$u_i(x) = \ln\frac{x_i}{x_{i-1}}, \quad 1 \le i \le n$$
For time 0, define
$$\sigma_0(x)^2 = \frac{1}{10}\sum_{i=1}^{10} u_i(x)^2, \qquad \mathrm{Cov}_0(x, y) = \frac{1}{10}\sum_{i=1}^{10} u_i(x)\, u_i(y)$$
For $1 \le i \le n$, the volatilities and covariances are updated recursively according to the decay factor $\lambda$, which in this case is taken to be $\lambda = 0.94$. The updating equations are
$$\sigma_i(x)^2 = \lambda\, \sigma_{i-1}(x)^2 + (1 - \lambda)\, u_i(x)^2 \cdot 250$$
$$\mathrm{Cov}_i(x, y) = \lambda\, \mathrm{Cov}_{i-1}(x, y) + (1 - \lambda)\, u_i(x)\, u_i(y) \cdot 250$$
These equations give an annualized measure of the volatility and covariance. To determine correlations, for $0 \le i \le n$, set
$$\rho_i(x, y) = \frac{\mathrm{Cov}_i(x, y)}{\sigma_i(x)\, \sigma_i(y)}$$
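A minimal sketch of the recursive update above for a single variance (the covariance update is identical in form); the seed value sigma0Sq and the daily geometric returns are assumed to be given, and the names are illustrative.

    ' Sketch: EWMA update of an annualized variance, as in Appendix A.1.
    ' u() holds the daily geometric returns, sigma0Sq is the initial variance
    ' estimate, and lambda is the decay factor (0.94 in this project).
    Function EWMAVariance(u() As Double, sigma0Sq As Double, _
                          lambda As Double) As Double
        Dim i As Integer
        Dim varEst As Double
        varEst = sigma0Sq
        For i = LBound(u) To UBound(u)
            varEst = lambda * varEst + (1 - lambda) * u(i) * u(i) * 250
        Next i
        EWMAVariance = varEst   ' annualized variance after the last observation
    End Function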


A.2 Cash Flow Mapping

This is adapted from the description in [4]. Suppose a cashflow occurs at the non-standard maturity $t_2$, where $t_2$ is bracketed by the standard maturities $t_1$ and $t_3$. The cashflow occurring at time $t_2$ is mapped onto two cashflows occurring at $t_1$ and $t_3$ as follows. Firstly, since bond prices cannot be interpolated without further assumptions, the interpolation is done on yields. Let $y_1$, $y_2$ and $y_3$ be the continuously compounded yields corresponding to maturities $t_1$, $t_2$ and $t_3$ respectively, so that $B(t_1) = \exp(-y_1 t_1)$, $B(t_2) = \exp(-y_2 t_2)$ and $B(t_3) = \exp(-y_3 t_3)$. Let $\sigma_1$, $\sigma_2$ and $\sigma_3$ be the volatilities of these bond prices. The procedure is firstly to linearly interpolate between $y_1$ and $y_3$ to determine $y_2$, and to linearly interpolate between $\sigma_1$ and $\sigma_3$ to determine $\sigma_2$.

We want a portfolio consisting of the two assets $B(t_1)$ and $B(t_3)$, with relative weights $\alpha$ and $(1 - \alpha)$ invested in each, such that the volatility of this portfolio is equal to $\sigma_2$. The variance of the portfolio is given by (4.2.1), and so we have
$$\sigma_2^2 = \begin{pmatrix} \alpha & 1-\alpha \end{pmatrix} \begin{pmatrix} \sigma_1^2 & \rho_{13}\,\sigma_1\sigma_3 \\ \rho_{13}\,\sigma_1\sigma_3 & \sigma_3^2 \end{pmatrix} \begin{pmatrix} \alpha \\ 1-\alpha \end{pmatrix}$$
$$\sigma_2^2 = \alpha^2\sigma_1^2 + 2\alpha(1-\alpha)\rho_{13}\,\sigma_1\sigma_3 + (1-\alpha)^2\sigma_3^2$$
where $\rho_{13}$ is the correlation coefficient of $B(t_1)$ and $B(t_3)$. Rearranging the above equation gives a quadratic equation in $\alpha$:
$$\alpha^2\sigma_1^2 + 2\alpha(1-\alpha)\rho_{13}\,\sigma_1\sigma_3 + (1-\alpha)^2\sigma_3^2 - \sigma_2^2 = 0$$
$$\alpha^2(\sigma_1^2 - 2\rho_{13}\,\sigma_1\sigma_3 + \sigma_3^2) + \alpha(2\rho_{13}\,\sigma_1\sigma_3 - 2\sigma_3^2) + (\sigma_3^2 - \sigma_2^2) = 0$$
$$a\alpha^2 + b\alpha + c = 0$$
where
$$a = \sigma_1^2 - 2\rho_{13}\,\sigma_1\sigma_3 + \sigma_3^2, \qquad b = 2\rho_{13}\,\sigma_1\sigma_3 - 2\sigma_3^2, \qquad c = \sigma_3^2 - \sigma_2^2$$
which is then solved for $\alpha$. $\alpha$ is taken to be the smaller of the two roots of this equation, in order to satisfy the third condition of the map, i.e. that the standardized cashflows have the same sign as the original cashflow.
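A minimal sketch of the weight calculation above: sigma2 is the interpolated volatility of the original cashflow, and the function returns the weight alpha placed on the t1 vertex (the smaller root of the quadratic). It assumes the discriminant is non-negative and a is nonzero; names are illustrative.

    ' Sketch: solve a*alpha^2 + b*alpha + c = 0 for the cash flow mapping
    ' weight alpha, as in Appendix A.2, taking the smaller root.
    Function MappingWeight(sigma1 As Double, sigma2 As Double, _
                           sigma3 As Double, rho13 As Double) As Double
        Dim a As Double, b As Double, c As Double, disc As Double
        a = sigma1 ^ 2 - 2 * rho13 * sigma1 * sigma3 + sigma3 ^ 2
        b = 2 * rho13 * sigma1 * sigma3 - 2 * sigma3 ^ 2
        c = sigma3 ^ 2 - sigma2 ^ 2
        disc = b ^ 2 - 4 * a * c
        MappingWeight = (-b - Sqr(disc)) / (2 * a)   ' smaller root
    End Function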


A.3 The Variance of the Return on a Portfolio

The variance of the return on a portfolio of assets, based on Markowitz portfolio theory, is given by the following, which is derived in [9]:

Var[R_t] = E[(R_t − E[R_t])²]
         = E[( Σ_{i=1}^{k} w_i (R_{t,i} − E[R_{t,i}]) )²]
         = E[ Σ_{i=1}^{k} Σ_{j=1}^{k} w_i w_j (R_{t,i} − E[R_{t,i}]) (R_{t,j} − E[R_{t,j}]) ]
         = Σ_{i=1}^{k} Σ_{j=1}^{k} w_i w_j E[(R_{t,i} − E[R_{t,i}]) (R_{t,j} − E[R_{t,j}])]
         = Σ_{i=1}^{k} Σ_{j=1}^{k} w_i w_j Cov[R_{t,i}, R_{t,j}]
         = Σ_{i=1}^{k} Σ_{j=1}^{k} w_i w_j ρ_{i,j} σ_i σ_j                                (A.3.1)

where σ_i is the standard deviation (volatility) of the i-th risk factor and ρ_{i,j} is the correlation coefficient of R_{t,i} and R_{t,j}, defined by ρ_{i,j} := Cov[R_{t,i}, R_{t,j}] / (σ_i σ_j). (A.3.1) can be written in matrix form as

Var[R_t] = w^T Σ w

where w = (w_1, . . . , w_k)^T is the vector of weights and Σ is the variance-covariance matrix given by

Σ = ( σ_1²              ρ_{1,2} σ_1 σ_2   . . .   ρ_{1,k} σ_1 σ_k )
    ( ρ_{1,2} σ_2 σ_1   σ_2²              . . .   ρ_{2,k} σ_2 σ_k )
    ( . . .             . . .             . . .   . . .           )
    ( ρ_{1,k} σ_k σ_1   ρ_{2,k} σ_k σ_2   . . .   σ_k²            )
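
In particular, for two assets (k = 2) the formula reduces to

Var[R_t] = w_1² σ_1² + 2 w_1 w_2 ρ_{1,2} σ_1 σ_2 + w_2² σ_2²

which, with w_1 = α and w_2 = 1 − α, is exactly the expression used for the mapped two-cashflow portfolio in Section A.2.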


Appendix B

Appendix
B.1 VBA Code: Bootstrapping Module

Option Explicit

Public Sub generateCurves()
    Dim numRates As Integer    ' number of rates included in the array of market yields
    Dim i As Integer, j As Integer, col As Integer
    Dim writerow, writecol As Integer
    Dim d As Date
    Dim curveCount As Integer, readrow As Integer
    Dim SCB() As Object
    Dim curve() As Double

    ' declare an array of standard days (out to 30 years) for which NACC rates will be written
    ' to the spreadsheet (these correspond approximately to the 365 day count used in SA)
    Dim stdVerts As Variant
    stdVerts = Array(1, 30, 91, 182, 273, 365, 456, 547, 638, 730, 1095, 1460, 1825, _
        2190, 2555, 2920, 3285, 3650, 4380, 5475, 6205, 7300, 9125, 10950)
    Const stdVertCount As Integer = 24

    Application.ScreenUpdating = False
    numRates = 24

    ' determine the number of curves in the historical data
    With Sheets("Hist Data")
        readrow = 2
        curveCount = 0
        Do Until .Cells(readrow + 1, 1) = ""
            readrow = readrow + 1
            curveCount = curveCount + 1
        Loop

        ' create one bootstrap object (the external YieldCurve component) per historical curve
        ReDim SCB(1 To curveCount)
        For i = 1 To curveCount
            Set SCB(i) = CreateObject("YieldCurve.SwapCurveBootstrap")
        Next i

        ' read data into the SCB objects
        For i = 1 To curveCount
            Application.StatusBar = "Processing data for curve " & i
            col = 1
            SCB(i).Effective_Date = .Cells(2 + i, 1)
            SCB(i).IntMethod = "MPC"
            For j = 1 To numRates
                ' avoid leaving gaps in the arrays in the case where some rates are missing
                If .Cells(2 + i, 1 + j) <> "" Then
                    SCB(i).Values(col) = .Cells(2 + i, 1 + j)
                    SCB(i).Rate_Type(col) = .Cells(2, 1 + j)
                    SCB(i).Rate_Term_Description(col) = .Cells(1, 1 + j)
                    col = col + 1
                End If
            Next j
            SCB(i).Top_Index = col - 1
        Next i
    End With    ' Sheets("Hist Data")

    ' generate the NACC spot and forward curves and output
    With Sheets("NACC Swap Curves")
        .Range(.Cells(1, 1), .Cells(11000, 256)).ClearContents
        For writecol = 1 To stdVertCount
            .Cells(1, writecol + 1) = stdVerts(writecol - 1)
        Next writecol
        writerow = 2
        For i = 1 To curveCount
            Application.StatusBar = "Generating curve " & i
            ReDim curve(1 To (SCB(i).Curve_Termination_Date - SCB(i).Effective_Date + 1))
            For d = SCB(i).Effective_Date + 1 To SCB(i).Curve_Termination_Date
                curve(d - SCB(i).Effective_Date) = SCB(i).Output_Rate(d)
            Next d
            .Cells(writerow, 1) = SCB(i).Effective_Date
            .Cells(writerow, 1).NumberFormat = "d-mmm-yy"
            For writecol = 1 To stdVertCount
                .Cells(writerow, writecol + 1) = curve(stdVerts(writecol - 1))
            Next writecol
            writerow = writerow + 1
        Next i
    End With    ' Sheets("NACC Swap Curves")

    For i = 1 To curveCount
        Set SCB(i) = Nothing
    Next i
    Application.StatusBar = "Done!"
    Beep
End Sub

B.2 VBA Code: VaR Module

Option Explicit Private P() As Double Private vol() As Double, covol() As Double Private volR() As Double, covolR() As Double Private dates() As Date Private n As Integer # rows Private stdVerts() As Variant Const m As Integer = 24 # cols Const window = 250 Public Sub main() Const lambda As Double = 0.94, confidenceLevel As Double = 0.95 0.95 Dim i As Integer, j As Integer, k As Integer, readrow As Integer, readcol As Integer Dim data1 As Range, data2 As Range, data3 As Range data1 is a row of dates, data2 is the matrix of market variables through time. stdVerts = Array(1, 30, 91, 182, 273, 365, 456, 547, 638, 730, 1095, 1460, 1825, _ 2190, 2555, 2920, 3285, 3650, 4380, 5475, 6205, 7300, 9125, 10950) With Sheets("NACC swap curves") readrow = 2 n = 0 Do Until .Cells(readrow + 1, 1) = "" readrow = readrow + 1 n = n + 1 Loop n = n + 1 Set data1 = .Range(.Cells(2, 1), .Cells(n, 1)) Set data2 = .Range(.Cells(2, 2), .Cells(n, m)) ReDim dates(0 To n) For i = 0 To n - 1 dates(i) = data1.Cells(i + 1, 1) Next i dates(n) = dates(n - 1) + 1 End With Application.StatusBar = "Estimating Parameters..." Call emwa(data2, lambda) With Sheets("Disc factors") Set data3 = .Range(.Cells(3, 2), .Cells(n + 1, m)) End With Call profitAndLoss(data2) Application.StatusBar = "Calculating Historical VaR..." Call histVar(data2, confidenceLevel) Call deltaNormal(data2, confidenceLevel) Call monteCarlo(data2, 0.99) Call doPCA(data2, confidenceLevel) Application.StatusBar = "Done!" End Sub Private Sub emwa(data As Range, lambda As Double) Dim i As Integer, j As Integer, k As Integer Dim sum() As Double, sum2() As Double


P(1 To n - 1, 1 To m) vol(0 To n - 1, 1 To m) covol(0 To n - 1, 1 To m, 1 To m) = 1 To n - 1 j = 1 To m P(i, j) = (data.Cells(i, j) - _ data.Cells(i + 1, j)) * stdVerts(j - 1) / 365 bond price log returns Next j Next i get initial vols and covols ReDim sum(1 To m) ReDim sum2(1 To m, 1 To m) For k = 1 To 25 For j = 1 To m sum(j) = sum(j) + P(k, j) ^ 2 For i = 1 To m sum2(j, i) = sum2(j, i) + P(k, j) * P(k, i) Next i Next j Next k For j = 1 To m volR(0, j) = (10 * sum(j)) ^ 0.5 For k = 1 To m covol(0, j, k) = sum2(j, k) * 10 Next k Next j rolling calculator for vols and covols For i = 1 To n - 1 For j = 1 To m volR(i, j) = (lambda * (volR(i - 1, j) ^ 2 + (1 - lambda) * (P(i, j) ^ 2) * 250)) ^ 0.5 For k = 1 To m covol(i, j, k) = lambda * covol(i - 1, j, k) + _ (1 - lambda) * P(i, j) * P(i, k) * 250 Next k Next j Next i Call displayVols("vols", vol, dates, n, m) Call displayCovars("rate covar", covolR, n, m) End Sub Public Sub histVar(data As Range, confidenceLevel As Double) Dim i As Integer, j As Integer, today As Integer, vposn As Integer, k As Integer Dim hs_v(1 To window) As Double portfolio values for each scenario Dim hw_v(1 To window) As Double Dim hsPercentile As Double, hwPercentile As Double Dim hsRateScenario(0 To m - 1) As Double Historical simulation Dim hwRateScenario(0 To m - 1) As Double Hull White Historical Dim hsVar() As Double, hwVar() As Double Dim hsSum As Double, hwSum As Double, percentile As Double ReDim hsVar(1 To n ReDim hwVar(1 To n dont need to value For today = window + window) window) the new instrument each day, only 1 day later under each scenario 1 To n

ReDim ReDim ReDim For i For


Application.StatusBar = "Busy with historical VaR for " & dates(today - 1) vposn = 1 hsSum = 0 hwSum = 0 For i = today - window To today - 1 For j = 1 To m hsRateScenario(j - 1) = data.Cells(today, j) + _ data.Cells(i + 1, j) - data.Cells(i, j) hwRateScenario(j - 1) = data.Cells(today, j) + _ vol(today - 1, j) / vol(i - 1, j) * (data.Cells(i + 1, j) - data.Cells(i, j)) Next j contract entered into today, we are interested how much we could lose on it tomorrow: hs_v(vposn) = valueFRA(hsRateScenario, stdVerts, today, dates(today - 1), dates(today)) hs_v(vposn) = valueSwap1(hsRateScenario, stdVerts, today, dates(today - 1), dates(today)) hs_v(vposn) = valueSwap2(hsRateScenario, stdVerts, today, dates(today - 1), dates(today)) hw_v(vposn) = valueFRA(hwRateScenario, stdVerts, today, dates(today - 1), dates(today)) hw_v(vposn) = valueSwap1(hwRateScenario, stdVerts, today, dates(today - 1), dates(today)) hw_v(vposn) = valueSwap2(hwRateScenario, stdVerts, today, dates(today - 1), dates(today)) hsSum = hsSum + hs_v(vposn) hwSum = hwSum + hw_v(vposn) vposn = vposn + 1 Next i quickSort hs_v quickSort hw_v percentile = window * (1 - confidenceLevel) If Round(percentile) = percentile Then hsVar(today - window) = hsSum / (n - window) - hs_v(percentile) hwVar(today - window) = hwSum / (n - window) - hw_v(percentile) Else interpolate hsVar(today - window) = hsSum / (n - window) - 0.5 * (hs_v(Round(percentile)) _ + hs_v(Round(percentile) - 1)) hwVar(today - window) = hwSum / (n - window) - 0.5 * (hw_v(Round(percentile)) _ + hw_v(Round(percentile) - 1)) End If Next today With Sheets("long swap") For i = 1 To n - window .Cells(window + 3 + i, 2) = hsVar(i) .Cells(window + 3 + i, 4) = hwVar(i) Next i End With End Sub Public Sub doPCA(data As Range, confidenceLevel As Double) Dim i As Integer, readrow As Integer, corr() As Double Dim today As Integer, pcaCorr() As Double, pcaCov() As Double, k As Integer Dim v() As Double the 5 eigenvects corresp to the largest evals Dim sqrtLambda() As Double, lambdaTruncated(1 To 3, 1 To 3) As Double Dim rates() As Double, j As Integer, sum As Double, percentile As Double Dim rateScenario() As Double, vols() As Double Dim u(1 To 3) As Double, z(1 To 3, 1 To 1) As Double Dim vals() As Double, mcVar() As Double, result As Variant ReDim pcaCorr(1 To m, 1 To m), pcaCov(1 To m, 1 To m)


ReDim ReDim ReDim ReDim

sqrtLambda(1 To m, 1 To m) v(1 To m, 1 To 3), rates(0 To m - 1) rateScenario(0 To m - 1), vals(1 To 8000) mcVar(1 To n), vols(0 To m - 1)

Dim A As Double Dim summ As Double Dim l As Integer Randomize readrow = 2 For today = 1 To n sum = 0 Application.StatusBar = "Busy with PCA for " & dates(today - 1) With Sheets("rate covar") Call covarToCorr(corr, .Range(.Cells(readrow, 1), .Cells(readrow + m, m)), m) For j = 1 To m rates(j - 1) = data.Cells(today, j) vols(j - 1) = volR(today - 1, j) Next j A = 0 For j = 1 To 24 A = A + vols(j - 1) ^ 2 Next j A = Sqr(A / 24 / 250) Call findComponents(sqrtLambda, lambdaTruncated, pcaCorr, v, m, corr) need to convert the corr matrix into a cov matrix For i = 1 To m For j = 1 To m pcaCov(i, j) = pcaCorr(i, j) * vols(i - 1) * vols(j - 1) Next j Next i now we can simulate the components (which are independant) For k = 1 To 8000 For i = 1 To 3 u(i) = Rnd() If u(i) > 0.999999 Then u(i) = 0.999999 If u(i) < 0.000001 Then u(i) = 0.000001 z(i, 1) = Application.WorksheetFunction.NormSInv(u(i)) std normal random numbers Next i For i = 1 To 5 lambdaTruncated(i, i) = lambdaTruncated(i, i) ^ 0.5 Next i result = Application.WorksheetFunction.MMult(v, lambdaTruncated) result = Application.WorksheetFunction.MMult(result, z) summ = 0 For j = 1 To m rateScenario(j - 1) = rates(j - 1) + A * result(j,1) Next j vals(k) = valueFRA(rateScenario, stdVerts, today, dates(today - 1), dates(today)) vals(k) = valueSwap1(rateScenario, stdVerts, today, dates(today - 1), dates(today)) vals(k) = valueSwap2(rateScenario, stdVerts, today, dates(today - 1), dates(today)) sum = sum + vals(k) Next k


readrow = readrow + m End With quickSort vals percentile = 8000 * (1 - confidenceLevel) If Round(percentile) = percentile Then mcVar(today) = sum / 8000 - vals(percentile) Else interpolate mcVar(today) = sum / 1000 - 0.5 * (vals(Round(percentile)) _ + vals(Round(percentile) - 1)) sum / 8000 - 0.5 * (vals(Round(percentile)) _ End If Next today With Sheets("long swap") For i = 1 To n .Cells(3 + i, 15) = mcVar(i) Next i End With End Sub Public Sub findComponents(ByRef sqrtLambda() As Double, ByRef lambdaTruncated() _ As Double, ByRef pcaCorr() As Double, ByRef v() As Double, size As Integer, _ corr() As Double) Dim evals As Variant, evecs As Variant Dim i As Integer, j As Integer Dim result As Variant, inv As Variant Dim eigval() As Double, count As Double ReDim eigval(1 To size) result = Application.Run("NTpca", corr) get eigenvalues and eigenvectors For i = 1 To size eigval(i) = result(1, i) For j = 1 To size If i = j Then sqrtLambda(i, i) = result(1, i) ^ 0.5 End If Next j Next i For count = 1 To 3 For i = 2 To size + 1 v(i - 1, count) = result(i, count) Next i Next count Call getCovPC(lambdaTruncated, eigval) With Application.WorksheetFunction result = .MMult(v, lambdaTruncated) inv = .Transpose(v) result = .MMult(result, inv) End With return the pca correlation matrix For i = 1 To size For j = 1 To size pcaCorr(i, j) = result(i, j) Next j Next i End Sub


Private Sub getCovPC(newCov() As Double, eigval() As Double) create a new covariance matrix for the principal components Dim i As Integer, j As Integer For i = 1 To 3 For j = 1 To 3 If i = j Then newCov(i, j) = eigval(i) Else: newCov(i, j) = 0 End If Next j Next i End Sub Public Sub monteCarlo(data As Range, confidenceLevel As Double) Dim z(1 To 24, 1 To 8000) As Double, covar(1 To m, 1 To m) As Double Dim currRates(0 To m - 1) As Double, v(0 To m - 1) As Double Dim i As Integer, j As Integer, k As Integer Dim rateScenario(0 To m - 1) As Double, vals(1 To 8000) As Double Dim sum As Double, vposn As Integer, mcVar() As Double, today As Integer Dim seed1 As Long, seed2 As Long, percentile As Double Dim u1 As Double, u2 As Double, z1 As Double, z2 As Double ReDim mcVar(1 To n) Randomize For today = 1 To n sum = 0 Application.StatusBar = "Busy with Monte Carlo VaR for " & dates(today - 1) get the rates and bond price vols for today For j = 1 To m currRates(j - 1) = data.Cells(today, j) v(j - 1) = vol(today - 1, j) / Sqr(250) daily vols Next j determine the covariance matrix for today For i = 1 To m For j = 1 To m covar(i, j) = covol(today - 1, i, j) / 250 Next j Next i generate 10000 rate scenarios for today and revalue the portfolio under each Call generate(z, 24, 8000) Call getCorrNums(z, covar, v) converts N(0,I) numbers to N(0,covar) numbers For k = 1 To 8000 For j = 1 To m rateScenario(j - 1) = currRates(j - 1) - 365 / stdVerts(j - 1) * v(j - 1) * z(j, k) Next j vals(k) = valueFRA(rateScenario, stdVerts, today, dates(today - 1), dates(today)) vals(k) = valueSwap1(rateScenario, stdVerts, today, dates(today - 1), dates(today)) vals(k) = valueSwap2(rateScenario, stdVerts, today, dates(today - 1), dates(today)) sum = sum + vals(k) Next k quickSort vals percentile = 8000 * (1 - confidenceLevel) If Round(percentile) = percentile Then


mcVar(today) = sum / 8000 - vals(percentile) Else interpolate mcVar(today) = sum / 8000 - 0.5 * (vals(Round(percentile)) _ + vals(Round(percentile) - 1)) End If Sheets("long swap").Cells(3 + today, 9) = mcVar(today) Next today End Sub Public Sub generate(ByRef nrmlNums() As Double, rows As Integer, cols As Integer) nrmlNums has dimensions 24x8000 Dim i As Integer, j As Integer, u As Double For i = 1 To rows For j = 1 To cols u = Rnd() If u > 0.999999 Then u = 0.999999 Else: If u < 0.000001 Then u = 0.000001 End If nrmlNums(i, j) = Application.WorksheetFunction.NormSInv(u) Next j Next i End Sub Public Sub getCorrNums(ByRef nrmlNums() As Double, cov() As Double, vol() As Double) Dim i As Integer, j As Integer converts N(0,I) numbers to N(0,cov) numbers using Cholesky decomposition Dim chol As Variant Dim X() As Double Dim corr(1 To n, 1 To n) As Double ReDim X(1 To n, 1 To k) For i = 1 To n For j = 1 To n corr(i, j) = cov(i, j) / (vol(i - 1) * vol(j - 1)) Next j Next i chol = cholesky(corr) get corresponding covar matrix For i = 1 To n For j = 1 To n cov(i, j) = corr(i, j) * vol(i - 1) * vol(j - 1) Next j Next i Call matProd(X, chol, nrmlNums) For i = 1 To n For j = 1 To k nrmlNums(i, j) = X(i, j) these are distributed N(0,cov) Next j Next i For i = 1 To 24 For j = 1 To 24 With Sheets("test") .Cells(i, j) = corr(i, j)


.Cells(i, j + 26) = chol(i, j) End With Next j Next i End Sub Function cholesky(Mat) performs the Cholesky decomposition A=L*L^T Dim A, l() As Double, S As Double Dim rows As Integer, cols As Integer, i As Integer, j As Integer, k As Integer A = Mat rows = UBound(A, 1) cols = UBound(A, 2) begin Cholesky decomposition ReDim l(1 To rows, 1 To rows) For j = 1 To rows S = 0 For k = 1 To j - 1 S = S + l(j, k) ^ 2 Next k l(j, j) = A(j, j) - S If l(j, j) <= 0 Then Exit For the matrix can not be decomp l(j, j) = Sqr(l(j, j)) For i = j + 1 To rows S = 0 For k = 1 To j - 1 S = S + l(i, k) * l(j, k) Next k l(i, j) = (A(i, j) - S) / l(j, j) Next i Next j cholesky = l End Function Private Sub matProd(A() As Double, B, C) Multiplies 2 matrices A(n x r) <-- B(n x m) x C(m x r) Dim n As Integer, m As Integer, R As Integer Dim i As Integer, j As Integer, k As Integer n = UBound(B, 1) m = UBound(B, 2) R = UBound(C, 2) ReDim A(1 To n, 1 To R) For i = 1 To n For j = 1 To R For k = 1 To m A(i, j) = A(i, j) + B(i, k) * C(k, j) Next k Next j Next i End Sub Public Sub deltaNormal(data As Range, confidenceLevel As Double) Dim rates(0 To m - 1) As Double, v(0 To m - 1) As Double Dim i As Integer, j As Integer, today As Integer, vposn As Integer


Dim covar(1 To m, 1 To m) As Double, variance Dim dnVar() As Double, stddev As Double ReDim dnVar(1 To n) vposn = 1 For today = 1 To n window + 1 To n - 1 Application.StatusBar = "Busy with Delta-Normal VaR for " & dates(today - 1) For j = 1 To m rates(j - 1) = data.Cells(today, j) v(j - 1) = vol(today - 1, j) / Sqr(250) Next j determine the covariance matrix for today For i = 1 To m For j = 1 To m covar(i, j) = covol(today - 1, i, j) / 250 Next j Next i stddev = valueFRA(rates, stdVerts, today, dates(today - 1), dates(today), v, covar) stddev = valueSwap1(rates, stdVerts, today, dates(today - 1), dates(today), v, covar) stddev = valueSwap2(rates, stdVerts, today, dates(today - 1), dates(today), v, covar) dnVar(vposn) = 2.326 * stddev vposn = vposn + 1 Next today With Sheets("long swap") For i = 1 To n 1 To n - window .Cells(3 + i, 6) = dnVar(i) Next i End With End Sub Private Sub profitAndLoss(data As Range) Dim rates(0 To m - 1) As Double, vols(0 To m - 1) As Double Dim j As Integer, k As Integer, today As Integer, vposn As Integer Dim val1 As Double, val2 As Double Dim valNewInstr() As Double, valOldInstr() As Double valOldInstr values the instrument initiated 1 day previously ReDim valNewInstr(1 To n) ReDim valOldInstr(1 To n) vposn = 1 Application.StatusBar = "Determining P&L..." For today = 1 To n window + 1 To n - 1 For j = 1 To m rates(j - 1) = data.Cells(today, j) Next j If Not (today = n) Then valNewInstr(vposn) = valueFRA(rates, stdVerts, today, dates(today - 1), dates(today - 1)) valNewInstr(vposn) = valueSwap1(rates, stdVerts, today, dates(today - 1), dates(today - 1)) valNewInstr(vposn) = valueSwap2(rates, stdVerts, today, dates(today - 1), dates(today - 1)) End If If Not (today = 1) Then (today = window + 1) Then valOldInstr(vposn) = valueFRA(rates, stdVerts, today - 1, dates(today - 2), dates(today - 1)) valOldInstr(vposn) = valueSwap1(rates, stdVerts, today - 1, dates(today - 2), dates(today - 1)) valOldInstr(vposn) = valueSwap2(rates, stdVerts, today - 1, dates(today - 2), dates(today - 1))


End If vposn = vposn + 1 Next today With Sheets("long swap") For j = 1 To n - 1 1 To n - window - 1 .Cells(4 + j, 8) = valOldInstr(j + 1) - valNewInstr(j) Next j End With End Sub

Public Sub quickSort(ByRef v() As Double, _ Optional ByVal left As Long = -2, Optional ByVal right As Long = -2) quicksort is good for arrays consisting of several hundred elements Dim i, j, mid As Long Dim testVal As Double If left = -2 Then left = LBound(v) If right = -2 Then right = UBound(v) If left < right Then mid = (left + right) \ 2 testVal = v(mid) i = left j = right Do Do While v(i) < testVal i = i + 1 Loop Do While v(j) > testVal j = j - 1 Loop If i <= j Then Call SwapElements(v, i, j) i = i + 1 j = j - 1 End If Loop Until i > j sort smaller segment first If j <= mid Then Call quickSort(v, left, j) Call quickSort(v, i, right) Else Call quickSort(v, i, right) Call quickSort(v, left, j) End If End If End Sub used in QuickSort Private Sub SwapElements(ByRef v() As Double, ByVal item1 As Long, ByVal item2 As Long) Dim temp As Double temp = v(item2) v(item2) = v(item1)


v(item1) = temp End Sub Public Sub displayVols(sheetname As String, ByRef vols() As Double, ByRef dates() As Date, _ rows As Integer, cols As Integer) Dim i As Integer, j As Integer With Sheets(sheetname) For i = 1 To rows .Cells(i + 1, 1) = dates(i - 1) .Cells(i + 1, 1).NumberFormat = "d-mmm-yy" For j = 1 To cols .Cells(i + 1, j + 1) = vols(i - 1, j) Next j Next i End With End Sub Public Sub displayCovars(sheetname As String, ByRef covol() As Double, _ rows As Integer, m As Integer) writes the covariances to spreadsheet Dim i As Integer, j As Integer, k As Integer, writerow As Integer, writerow2 As Integer writerow = 1 writerow2 = 1 For i = 1 To rows Sheets("dates").Cells(writerow2, 1) = dates(i - 1) For j = 1 To m Sheets(volsheet).Cells(writerow2, j) = vols(i - 1, j) For k = 1 To m convert from annual to daily measure of covariance Sheets(sheetname).Cells(writerow + j, k) = covol(i - 1, j, k) / 250 Next k Next j writerow = writerow + m writerow2 = writerow2 + 1 Next i End Sub

B.3 VBA Code: Portfolio Valuation Module

Option Explicit Private Function valueSwap(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _ startDate As Date, valDate As Date, deltat As Integer, n As Integer, NP As Double, _ effSwapRate As Double, jibar As Double, longOrShort As String) As Double Dim T As Integer, daysTillNextFlow As Integer, i As Integer Dim rate As Double, B() As Double, vFix As Double, vFloat As Double, sum As Double Dim nextFlow As Date, Bfloat As Double, R As Double


nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate)) ReDim B(1 To n) For i = 1 To n daysTillNextFlow = nextFlow - valDate R = interpRates(rateScenario, stdVerts, daysTillNextFlow) B(i) = Exp(-R * daysTillNextFlow / 365) nextFlow = DateSerial(Year(nextFlow), Month(nextFlow) + deltat, day(nextFlow)) Next i If valDate = startDate Then vFloat = NP * (1 - B(n)) Else nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate)) daysTillNextFlow = nextFlow - valDate R = interpRates(rateScenario, stdVerts, daysTillNextFlow) Bfloat = Exp(-R * daysTillNextFlow / 365) vFloat = NP * (1 + jibar * daysTillNextFlow / 365) * Bfloat - NP * B(n) End If sum = 0 For i = 1 To n sum = sum + B(i) Next i vFix = effSwapRate * NP * sum valueSwap = vFloat - vFix If longOrShort = "short" Then valueSwap = -valueSwap End If End Function Private Function interpRates(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _ daysTillNextFlow As Integer) As Double given a set of rates at the set of stdVerts, determines the rate at the specified # days by interpolating between 2 closest nodes in stdVerts Dim R As Double, i As Integer, curr As Integer current position in stdVerts i = 0 curr = stdVerts(i) While daysTillNextFlow > curr find the closest standard vertex occurring after days i = i + 1 curr = stdVerts(i) Wend If daysTillNextFlow = curr Then R = rateScenario(i) no interpolation needed Else R = interp((stdVerts(i - 1)), (stdVerts(i)), rateScenario(i - 1), _ rateScenario(i), (daysTillNextFlow)) End If interpRates = R End Function Private Function splitOntoStdVerts(daysTillNextFlow As Integer, _ ByRef stdVerts() As Variant, ByRef splitVals() As Double, _ val As Double, Optional ByRef vols, Optional ByRef cov) As Integer given the number of days till a cashflow and its value, and vols for the std vertices, split the cashflow onto 2 nodes and determine the 2 values at each node (splitVals array) return the index in stdVerts of the first of the 2 nodes onto which it was split


Dim v1 As Double, v2 As Double, v3 As Double Dim i As Integer, curr As Integer current position in stdVerts Dim W As Double, sigma As Double Dim posn As Integer i = 0 curr = stdVerts(i) While daysTillNextFlow > curr find the closest standard vertex occurring after days i = i + 1 curr = stdVerts(i) Wend If daysTillNextFlow = curr Then v2 = vols(i) no interpolation needed W = 1 splitVals(0) = val splitVals(1) = 0 posn = i - 1 Else v1 = vols(i - 1) v3 = vols(i) v2 = interp((stdVerts(i - 1)), (stdVerts(i)), v1, v3, (daysTillNextFlow)) W = quadratic(v1, v2, v3, (cov(i, i + 1))) splitVals(0) = val * W splitVals(1) = val * (1 - W) posn = i - 1 End If End Function Public Function interp(C As Double, d As Double, f_c As Double, f_d As Double, _ X As Double) As Double interp = (X - C) / (d - C) * f_d + (d - X) / (d - C) * f_c End Function Public Function quadratic(sig_a As Double, sig_b As Double, sig_c As Double, sig_ac As Double) _ As Double Dim alpha As Double, beta As Double, gamma As Double Dim root1 As Double, root2 As Double, k As Double alpha = sig_a ^ 2 + sig_c ^ 2 - 2 * sig_ac beta = 2 * sig_ac - 2 * sig_c ^ 2 gamma = sig_c ^ 2 - sig_b ^ 2 k = Sqr(beta ^ 2 - 4 * alpha * gamma) root1 = (-beta + k) / (2 * alpha) root2 = (-beta - k) / (2 * alpha) If (root1 < root2) Then quadratic = root1 Else quadratic = root2 End If End Function Public Function valueSwap1(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _ startDatePosn As Integer, startDate As Date, valDate As Date, _ Optional ByRef vols, Optional ByRef cov) As Double


short position in a 3 yr vanilla interest rate swap, quarterly payments startDatePosn determines the rate to be used, and valDate is the valuation date (this will either correspond exactly to dateposn, or to 1 day (the holding period) later if vols are provided ie for delta-normal, returns the std dev, else returns the price Const deltat As Integer = 3 Const n As Integer = 12 Const NP = 10000000 notional principal Dim swapRate As Double, effSwapRate As Double, jibar As Double swapRate = Sheets("Hist Data").Cells(startDatePosn + 2, 12) NACQ swap rate effSwapRate = swapRate / 4 effective rate per quarter jibar = Sheets("Hist Data").Cells(startDatePosn + 2, 4) 3 month JIBAR for the first period can use this because we are only ever valuing 1 day into the contract in this case If Not (IsMissing(vols)) Then valueSwap1 = valueThreeYrSwapDN(rateScenario, stdVerts, startDate, valDate, deltat, n, NP, _ effSwapRate, jibar, "short", vols, cov) delta normal - returns the std dev Else valueSwap1 = valueSwap(rateScenario, stdVerts, startDate, valDate, deltat, n, NP, _ effSwapRate, jibar, "short") general swap valuation End If End Function Public Function valueSwap2(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _ startDatePosn As Integer, startDate As Date, valDate As Date, _ Optional ByRef vols, Optional ByRef cov) As Double long position in a 5 yr vanilla interest rate swap quarterly payments, there are 20 payments remaining. startDatePosn determines the floating rate to be used, and valDate is the valuation date (this will either correspond exactly to startDate, or to 1 day (the holding period) later. Const deltat As Integer = 3 Const n As Integer = 20 number of payments remaining Const NP = 10000000 notional principal Dim swapRate As Double, effSwapRate As Double, jibar As Double swapRate = Sheets("Hist Data").Cells(startDatePosn + 2, 14) 5 yr NACQ swap rate effSwapRate = swapRate / 4 effective rate per quarter jibar = Sheets("Hist Data").Cells(startDatePosn + 2, 4) 3 month JIBAR for this period If Not (IsMissing(vols)) Then valueSwap2 = valueFiveYrSwapDN(rateScenario, stdVerts, startDate, valDate, deltat, n, NP, _ effSwapRate, jibar, "short", vols, cov) delta normal - returns the std dev Else valueSwap2 = valueSwap(rateScenario, stdVerts, startDate, valDate, deltat, n, NP, _ effSwapRate, jibar, "long") End If End Function Private Function valueThreeYrSwapDN(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _ startDate As Date, valDate As Date, deltat As Integer, n As Integer, NP As Double, _ effSwapRate As Double, jibar As Double, longOrShort As String, Optional ByRef vols, _ Optional ByRef cov) As Double


simplifying assumptions: no mapping is done for the 1st 8 payments since we have the rate stored (to within 1 or 2 days). The 9th, 10th and 11th payments are mapped onto the 2 yr and 3 yr nodes. The 12th payment is allocated entirely to the 3 yr node. This means that we have a portfolio of 15 cashflows based on 9 of the std vertices. fixed leg and floating leg are considered separately and combined at the end Dim T As Integer, daysTillNextFlow As Integer, i As Integer Dim rate As Double, B() As Double, vFix As Double, vFloat As Double, sum As Double Dim nextFlow As Date, Bfloat As Double, R As Double Dim W(1 To 10, 1 To 1) As Double array of value weights Dim splitVals(1) As Double, index As Integer Dim weightToBeSplit As Double, newCov() As Double, variance nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate)) ReDim B(1 To n) For i = 1 To 8 daysTillNextFlow = nextFlow - valDate R = interpRates(rateScenario, stdVerts, daysTillNextFlow) B(i) = Exp(-R * daysTillNextFlow / 365) PV flow gives the value weight of that cashflow W(i + 1, 1) = effSwapRate * NP * B(i) positive since we are short! nextFlow = DateSerial(Year(nextFlow), Month(nextFlow) + deltat, day(nextFlow)) Next i W(9, 1) = 0 W(10, 1) = 0 For i = 9 To 11 daysTillNextFlow = nextFlow - valDate R = interpRates(rateScenario, stdVerts, daysTillNextFlow) B(i) = Exp(-R * daysTillNextFlow / 365) weightToBeSplit = effSwapRate * NP * B(i) index = splitOntoStdVerts(daysTillNextFlow, stdVerts, splitVals, weightToBeSplit, vols, cov) W(9, 1) = W(9, 1) + splitVals(0) W(10, 1) = W(10, 1) + splitVals(1) nextFlow = DateSerial(Year(nextFlow), Month(nextFlow) + deltat, day(nextFlow)) Next i daysTillNextFlow = nextFlow - valDate R = interpRates(rateScenario, stdVerts, daysTillNextFlow) B(12) = Exp(-R * daysTillNextFlow / 365) W(10, 1) = W(10, 1) + effSwapRate * NP * B(12) now the floating leg: W(10, 1) = W(10, 1) + NP * B(12) since we a are short! If valDate = startDate Then W(1, 1) = -NP Else nextFlow = DateSerial(Year(startDate), Month(startDate) + deltat, day(startDate)) daysTillNextFlow = nextFlow - valDate R = interpRates(rateScenario, stdVerts, daysTillNextFlow) Bfloat = Exp(-R * daysTillNextFlow / 365) vFloat = NP * (1 + jibar * daysTillNextFlow / 365) * Bfloat - NP * B(n) W(2, 1) = W(2, 1) - NP * (1 + jibar * daysTillNextFlow / 365) * Bfloat End If ReDim newCov(1 To 10, 1 To 10) Call getNewCov(newCov, 2, 11, cov) check this


With Application.WorksheetFunction variance = .MMult(.Transpose(W), .MMult(newCov, W)) End With valueThreeYrSwapDN = variance(1) ^ 0.5 End Function Private Sub getNewCov(ByRef newCov() As Double, ind1 As Integer, ind2 As Integer, Optional ByRef cov) returns a submatrix of the original covar matrix, from ind1 to ind2 Dim i As Integer, j As Integer, corresp_i As Integer, corresp_j As Integer corresp_i = ind1 corresp_j = ind1 For i = 1 To ind2 - ind1 + 1 corresp_j = ind1 For j = 1 To ind2 - ind1 + 1 newCov(i, j) = cov(corresp_i, corresp_j) corresp_j = corresp_j + 1 Next j corresp_i = corresp_i + 1 Next i End Sub

Public Function valueFRA(ByRef rateScenario() As Double, ByRef stdVerts() As Variant, _ startDatePosn As Integer, startDate As Date, valDate As Date, _ Optional ByRef vols, Optional ByRef cov) As Double long position in a 3v6 FRA if vols and correlations are provided ie for Delta-Normal method, then return the std deviation of the FRA, otherwise return the value of the FRA Const NP = 10000000 Dim fraRate As Double, date1 As Date, date2 As Date, r1 As Double, r2 As Double Dim t1 As Integer, t2 As Integer, B_t1 As Double, B_t2 As Double Dim W1 As Double, W2 As Double value weights, as given by the PV of the 2 payments Dim covFRA(1 To 2, 1 To 2) As Double Dim W(1 To 2, 1 To 1) As Double Dim variance fraRate = Sheets("Hist Data").Cells(startDatePosn + 2, 5) 3v6 FRA rate NACQ B_t1 = Exp(-rateScenario(2) * stdVerts(2) / 365) B_t2 = Exp(-rateScenario(3) * stdVerts(3) / 365) W1 = -NP * B_t1 W2 = NP * (1 + fraRate * 91 / 365) * B_t2 If Not (IsMissing(vols)) Then return std dev of the FRA (applies to Delta Normal) covFRA(1, 1) = cov(3, 3) covFRA(1, 2) = cov(3, 4) covFRA(2, 1) = cov(4, 3) covFRA(2, 2) = cov(4, 4) W(1, 1) = W1 W(2, 1) = W2 variance = W1 ^ 2 * vols(2) ^ 2 + 2 * W1 * W2 * cov(3, 4) + W2 ^ 2 * vols(3) ^ 2 valueFRA = variance ^ 0.5 Else valueFRA = W1 + W2 this applies to historical and HW historical


End If End Function
