



Authored by:
David Depew, Vice President, Sapient Global Markets
Ashish Shah, Manager, Business Consulting, Sapient Global Markets

February 2017


Contents

Effectively Managing Model Risk

The Basics of Model Risk

Managing Model Risk

Conclusion

Authors


Effectively Managing Model Risk

For decades, models used for valuation, market risk, alpha generation, pre-trade cost predictions, liquidity risk, and credit and counterparty analysis have been the exclusive purview of quantitative analysts. With increasing regulation, however, model risk and the management of that risk have been thrust into the spotlight, with regulators, senior management and Chief Risk Officers taking notice. For example, with the promulgation of the Fundamental Review of the Trading Book (FRTB), the Basel Committee on Banking Supervision (BCBS) for the first time provides detailed and comprehensive requirements for model management and validation.

Despite mandates to better manage model risk, many firms are wary of another regulation-related cost eroding the bottom
line. Yet while it can indeed be costly to manage model risk, efficiencies can be gained by using the right analyses and
procedures. This paper describes effective model development and validation processes while detailing techniques and tools
to improve model risk management in the most cost-effective way possible.


The Basics of Model Risk

Defined as the possibility that a financial institution suffers losses due to errors in the development and/or application of models, model risk comes from a variety of sources, including:

Application and Implementation

Application is rarely mentioned when discussing model risk, but it is a critical aspect. An obvious first question to ask is whether the model is being used at all. The organization should monitor whether the portfolio or process is actually applying the model: is there exposure to the model within the process? There have been many instances when practitioners market a model-driven approach but practice something more eclectic.

A model must also be used in a manner consistent with its assumptions, data and methodology. The correlations of return forecasts with actual returns, or information coefficients (ICs), of alpha models tend to be quite low, so a practitioner should not use the model to load up on a handful of bets hoping for success. One should diversify one's bets to increase the odds of winning. Sometimes, valuation models' usage is extended to products beyond those for which the models were designed.

It is important to understand a model's strengths and limitations. Factor risk models measure exposure and volatility, so they capture risks that are not obvious. However, they are typically based upon normality and historical correlations that are stochastic, or randomly determined.

How an organization implements its models is important. One major decision is whether to implement models in a decentralized fashion or on a centralized platform. Decentralization provides flexibility and the ability to enact faster changes, while centralizing models facilitates the governance process. Another aspect of implementation: is the code accessible and documented?


Managing Model Risk

In 2011, the Federal Reserve and the Office of the Comptroller of the Currency published a paper arguing for a holistic approach to managing model risk, which can come from a variety of sources (e.g., assumptions, methodology/equations/calibration, data, and application and implementation).

To this end, mitigating model risk involves three stages: model development, validation and governance.

1. Model Development
Model developers should work closely with investment and risk personnel to ensure the models reflect their investment philosophy; that is, how they intend to beat the market or determine what risks are critical. Models should rely on a minimum set of factors to provide a stable, predictive output.

Front-office personnel should also apply a common-sense filter. They are often the best quality assurance team since they use the models consistently. The front office should discuss the model purpose with the developers to ensure they create the right tools to fix the right problem. Finally, there must be organizational buy-in for the analytical process, ranging from the front office to senior management. The best way to facilitate buy-in is to provide education regarding the modeling process that includes a discussion of its strengths and weaknesses.

2. The Model Validation Process
The primary tool to mitigate model risk is a comprehensive, ongoing and targeted model validation process performed by analysts who are independent from the development and application functions. Validation processes will differ slightly depending on the type of model considered, but typically include:

- Understanding and describing the modeling process, which addresses the calculations and the underlying theory as well as assumptions
- Discussing the validity of these assumptions
- Listing data and parameters
- Detailing calibration activities
- Considering the model's strengths and limitations

Once the researcher has a solid understanding of the modeling process, he or she needs to determine if the model is fit-for-purpose. In other words, is it appropriate given the organization's policies, procedures, resources and needs? A model not only needs to be effective but also viable for the organization. For example, a firm may not be able to maintain a sophisticated model using existing resources.

The next step involves the actual computing and testing. Test data needs to be determined and validated. Data transformations should be tested, and the purpose of these transformations should be detailed. Parameter stability tests should be run, at which time the statistician should look for parameter breaks and other instabilities, as well as testing for model instability, where the functional form of the model changes over time. Model output should satisfy specified smoothness conditions. Part of this analysis should examine how the model performs at the domain boundaries.
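A parameter stability test of the kind described above can be sketched as follows (a minimal illustration on synthetic data; the regression setup, window size and break tolerance are assumed for the example, not prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic history with a deliberate break: the true slope (a hypothetical
# factor beta) shifts from 1.0 to 2.0 halfway through the sample.
n = 1000
x = rng.standard_normal(n)
beta_true = np.where(np.arange(n) < n // 2, 1.0, 2.0)
y = beta_true * x + 0.5 * rng.standard_normal(n)

def rolling_slopes(x, y, window=100, step=50):
    """Re-estimate the OLS slope over a rolling window."""
    return np.array([np.polyfit(x[s:s + window], y[s:s + window], 1)[0]
                     for s in range(0, len(x) - window + 1, step)])

slopes = rolling_slopes(x, y)
# Flag a potential parameter break when consecutive estimates jump by more
# than an (illustrative) tolerance well above the estimates' sampling noise.
breaks = np.where(np.abs(np.diff(slopes)) > 0.3)[0]
print(np.round(slopes, 2), breaks)
```

On this synthetic series, the rolling estimates sit near 1.0 early, near 2.0 late, and the large jumps in between flag the break; on real data the tolerance would be set from the estimates' standard errors.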


The next analysis step is to review any calibration procedures used in the modeling process. Calibration is the process of finding the coefficients that enable a model (the kind and structure of which is already determined) to most closely (according to a certain metric) reflect a particular aspect of a known dataset. Part of the review should analyze how the coefficients were shocked and what metric drives the calibration process.

An analyst may also perform sensitivity analysis (or stress testing) on a model by changing either model parameters or input data. Changing the model parameters is similar to calibration, except one is not trying to minimize some type of error; one is simply viewing model behavior based on changed or extreme inputs.

All models should be subjected to back-tests and/or statistical tests. Regulators are now mandating these tests in the context of FRTB for the internal models approach. Statistical tests include R-squared, correlations between estimates and actuals, and error analysis. In contrast, back-tests count hit rates (or the number of times results exceed a threshold). This is a common test for value-at-risk. Risk models should be analyzed using bias tests, where predicted tracking risk is compared to realized risk.

Pre-trade cost models by necessity are structural models (i.e., models based upon economic theory). The dependent variable (costs) is so volatile and subject to so many exogenous influences that standard statistical tests like R-squared are meaningless. The best the modeler can do is to select independent variables that unequivocally have an effect on costs (e.g., bid/offer spread or trading volume) and then apply specialized fit indices to model the results.

The validation process can also include comparing alternative models (differing in either kind and/or structure) to the target model using the aforementioned tests to determine viability.

An Example of Alpha Model Risk
Performance data from Lipper indicates that most types of quantitative funds underperformed in the latter half of 2007. People attempted to ascertain the reasons for this systematic underperformance. One of the most accepted reasons was the fact that most quant strategies use similar factors built with the same data sources. Furthermore, correlations among quant model outputs during that time period were elevated and rising. This indicates positions had become part of a crowded trade phenomenon: a situation where most quant alpha models led investors into the same trades and positions. When the trades fell, transaction costs rose, with everyone looking for the exit at the same time.

Crowded trades contradict the primary trading concept of Michael Steinhardt (a successful trader profiled in the book Market Wizards), who looks for variant perceptions that differ from market consensus. He feels that to become a successful investor, one should create investment theses based on outcomes that differ from those accepted by the market. Investments based upon accepted relationships and ideas are unlikely to be profitable, as many are already invested in them and the trades become fully valued. Quants managing alpha models should be concerned about the risk associated with using the same data and factors as everyone else.
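The hit-rate back-test described above for value-at-risk can be sketched in a few lines (illustrative only; the P&L series is synthetic, and the model's VaR is drawn from the same distribution, so the model is "correct" by construction and exceptions should land near the expected count):

```python
import numpy as np

rng = np.random.default_rng(2)

n_days = 250                                   # one trading year
pnl = rng.standard_normal(n_days)              # stand-in for realized daily P&L
# The model's 99% VaR forecast: the 1st percentile of its own P&L distribution.
var_99 = np.quantile(rng.standard_normal(100_000), 0.01)

hits = int(np.sum(pnl < var_99))               # days the loss exceeded the VaR
expected_hits = 0.01 * n_days                  # ~2.5 exceptions expected at 99%
print(hits, expected_hits)
```

Under the Basel traffic-light convention for a 250-day, 99% VaR back-test, up to four exceptions falls in the green zone; a far higher count would signal a biased model.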


Validating a Simulation Algorithm
Capital market participants, specifically quants, rely on simulation models when they want to use a customized distribution instead of a parametric distribution. Simulation is a tool used within all the analytical endeavors listed above (except for the alpha and pre-trade models).

The test for validating a simulation consists of analyzing the convergence and the stability of the simulation process. For example, consider a scenario where a simulation is used for generating a spot curve consisting of 10 spot rates with a Libor Market Model. The generation of an individual spot rate will be called a trial, and the collection for every spot rate trial will be called a simulation. Term swaption volatilities will serve as one of the inputs into the process.

A stability test would validate that if a simulation was run a certain number of times (for example, 20 times), then the simulation output would be reasonable and would not change dramatically. The test statistics would include the following:

- the mean for each spot rate for 10,000 trials
- the standard deviation for each spot rate for 10,000 trials
- a correlation matrix among all the spot rates for the 10,000 trials

The testing approach calculates different measures of dispersion (standard deviation, interquartile range, etc.) for the test statistics, and the analyst defines acceptable levels of dispersion.

The stability test also indicates whether the simulation is behaving properly by determining if the input parameter is included within the confidence interval of the test statistic over a specific time frame. In other words, the means for the trials would be compared to the initial spot rates included in the simulation. The analyst would create confidence levels for this difference based upon the standard error of a mean, and the standard deviations would be compared to the swaption volatilities that were used. The test to reject the null hypothesis that the difference is zero is a Chi-Square test.

The correlation matrix generated would be compared to a benchmark correlation matrix. This test will be a t-test. In addition, the analyst should perform tests to determine whether the correlations are statistically different from zero.

A convergence test attempts to determine how quickly the accuracy of the simulation improves as the number of trials increases. The convergence test begins by generating a simulation consisting of 10,000 trials (the current number of trials), and then calculating the test statistics specified in the stability test (i.e., the mean for all the spot rates, the standard deviation of the spot rates and pair-wise correlations of all the spot rates). This test would then be run for 1,000 trials, 5,000 trials, 50,000 trials and 100,000 trials.

The statistician would then determine if the test statistics converge to the input parameters (initial spot rates, swaption volatilities and the benchmark correlation matrix) as the trials increase. They would also determine the form of the error function (e.g., exponential decay) by fitting a line through an error-versus-number-of-trials plot.

Models are not usually stand-alone calculators. Rather, they are typically part of a system or a platform with an orchestrator controlling which models are run at a certain time and with a specific data set. A model validation can also test this control structure to ensure it is performing as intended.
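The convergence test can be sketched as follows (a minimal single-statistic illustration; the distribution parameters are assumed, and a full test would track all 10 spot rates, their standard deviations and the correlation matrix across the same trial counts):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical distribution standing in for one simulated spot rate; these
# are the "input parameters" the test statistics should converge to.
true_mean, true_sd = 0.02, 0.10

def sim_stats(n_trials):
    """One simulation run: draw n_trials and report the sample mean/std."""
    draws = rng.normal(true_mean, true_sd, n_trials)
    return draws.mean(), draws.std(ddof=1)

results = {}
for n in (1_000, 5_000, 10_000, 50_000, 100_000):
    mean, sd = sim_stats(n)
    results[n] = (mean, sd / np.sqrt(n))   # estimate and its standard error
    print(n, round(mean, 5), round(sd / np.sqrt(n), 6))
```

The standard error of the mean shrinks roughly as one over the square root of the trial count, which is the error-function shape the statistician would confirm by fitting a line through the error-versus-trials plot.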


Figure 1 shows a universal sample model validation scorecard that contains processes and tests for any model type. Studying a model scorecard should help determine if a model was created and used effectively.

Well-designed reports can also be helpful in model risk management. Cross-sectional or time-series output can highlight outliers or a change in trends.

Figure 1: Universal sample model validation scorecard.

- Describe industry best practices for the modeling exercise

Description of Current Model
- Model equation
- Theoretical basis
- Determine whether the model is appropriate for its purpose
- Computational methods/methodologies
- Model assumptions
- Calibration procedures
- Limitations assessment

Model Validation Tests
- Specify (and provide) testing data sets
- Sanity tests (boundary conditions, smoothness)
- Stability and convergence tests for Monte Carlo simulations
- Estimation tests
- Benchmarking model and inputs against alternatives
- Sensitivity analysis for parameters and inputs (same as stress testing)
- System control structure
- Assign a model risk rating

Model Validation Letter
- Summarize test results
- Provide conclusions
- State recommendations, including model enhancements


Tools for Model Validation
Model validation can be facilitated by creating and applying proper tools. To this end, firms should build a model workbench that integrates models, function libraries (including simulation engines), data and a front end that enables on-demand visualization and aggregation. Models can be created using the function library, and analysts can then apply a model to any position (or combination of positions), performing scenario analyses using ad hoc or historical scenarios. They can calibrate a model using different sets of market prices. The user interface should be straightforward so a wide range of people, from quantitative developers to risk analysts to portfolio managers, can use it. The workbench should also include a governance module to track model versions, back-test results and time-stamped data sets.

Figure 2: Model workbench components (reference data, market data, scenario analysis and a backtesting tool).

3. Model and Data Governance
Governance is a control function to ensure designated parties have access to all of the information necessary to understand every aspect of an organization's modeling process. A logical first step in model governance is to take inventory of the organization's models. While this seems intuitive, it is surprising how few organizations have an up-to-date and complete inventory that includes a model description, data inputs (and sources), usage and model creators. Model code should be versioned and dated.

Another aspect of model governance is an effective challenge. This occurs when a knowledgeable, independent party reviews and analyzes models and results. As an example of this capability, consider a group within an asset manager tasked with analyzing stock selection models. This group ran independent model tests over different time periods as well as different sectors and capitalizations to measure model effectiveness and find weaknesses.
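A model inventory entry of the kind the governance discussion calls for can be sketched as a small record type (the field names and registry layout below are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ModelRecord:
    """One inventory entry, mirroring the items the text lists:
    description, data inputs (and sources), usage, creators, and
    versioned, dated code."""
    name: str
    description: str
    data_inputs: tuple      # (input, source) pairs
    usage: str
    creators: tuple
    code_version: str       # e.g., a git tag or release label
    dated: date

inventory = {}

def register(record):
    # Key by (name, version) so superseded versions remain queryable.
    inventory[(record.name, record.code_version)] = record

register(ModelRecord(
    name="pre-trade-cost",
    description="Structural pre-trade cost model",
    data_inputs=(("bid/offer spread", "vendor feed"),
                 ("trading volume", "exchange feed")),
    usage="program trading desk",
    creators=("quant research",),
    code_version="v2.1.0",
    dated=date(2017, 2, 1),
))
print(sorted(inventory))
```

Keying the registry by name and version keeps superseded model versions in the inventory, which supports the versioning-and-dating requirement and later "as-was" reruns.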




All data and model changes should be documented so models can be run using as-was data. This is an important feature that will be necessary to comply with regulatory requests and model validation analyses. All tests should be able to be replicated. To do this, one needs the specific model instance and the data set that was used.

Conclusion
While regulators are prescribing rules, materiality should determine the methods and resources devoted to model risk management. Firms with complex models that are used extensively and have a greater variability of inputs will need a more comprehensive and resourced risk management program.

A holistic approach to managing model risk, with standards regarding development, validation and governance, requires dedicated resources and an appropriate culture. The most critical component is a complete and documented validation process. Validation is the foundation of good model risk management. In addition, properly implemented, maintained and updated models provide benefits exceeding just the model output. The discipline inherent in the process can improve any organization as parties critique and question different aspects of their processes.

References
1. Board of Governors of the Federal Reserve System and Office of the Comptroller of the Currency. Supervisory Guidance on Model Risk Management. April 4, 2011. http://www.occ.treas.gov/news-issuances/bulletins/2011/bulletin-2011-12a.pdf
2. Basel Committee on Banking Supervision. Minimum Capital Requirements for Market Risk. January 2016. http://www.bis.org/bcbs/publ/d352.pdf
3. Morini, Massimo. Understanding and Managing Model Risk. January 14, 2014. http://econ.au.dk/fileadmin/

Authors

David Depew
Vice President, Global Market Solutions
David Depew brings over 28 years of experience in fundamental and quantitative investing. David is a subject matter expert in market risk management, performance attribution, quantitative investing and derivatives. David came to Sapient from Putnam Investments, where he was head of fixed income risk management and quantitative equity trading strategist; he also managed Putnam's international equity fair value process and the equity portfolio analysis team. Prior to Putnam, David was head of investment risk management, derivatives and attribution at Wellington Management Company and the fixed income derivatives strategist for Morgan Stanley. David holds an MBA from the University of California, Berkeley and a BA in Business (Accounting concentration) from the University of Washington. He also completed the coursework for an MS in applied statistics at the University of California, Los Angeles. He is a Chartered Financial Analyst.

Ashish Shah
Manager, Business Consulting
Ashish Shah is a Senior Manager of Business Consulting within the Trading and Risk Management practice. Based in New York, Ashish has over 14 years of experience in financial services and risk management. He has worked with global and regional financial institutions across North America, Asia and Europe and is responsible for product management, business analysis and business architecture within regulatory compliance, market risk management, equities and fixed income, as well as front- and middle-office processing. Ashish holds an MBA in Finance and a BS in Electronics and Communications.

About Sapient Global Markets
Sapient Global Markets, a part of the Publicis.Sapient digital transformation platform, is a leading provider of services to today's
evolving financial and commodity markets. We offer services and unique methodologies across business consulting, user
experience, operations, program management, technology development and solutions. Fusing creativity, technology and industry
expertise, we enable our clients to grow and enhance their businesses, create robust and transparent infrastructure, manage costs,
and foster innovation to their customers and throughout their organizations. Sapient Global Markets operates in key financial and
commodity centers worldwide, as well as in large technology development and operations outsourcing centers globally. For more
information, visit sapientglobalmarkets.com.

© 2017 Sapient Corporation.

Trademark Information: Sapient and the Sapient logo are trademarks or registered trademarks of Sapient Corporation or its subsidiaries in the U.S. and
other countries. All other trade names are trademarks or registered trademarks of their respective holders.

Sapient is not regulated by any legal, compliance or financial regulatory authority or body. You remain solely responsible for obtaining independent legal, compliance and
financial advice in respect of the Services.