ESTIMATION OF
FINANCIAL RISKS
Finally, I owe the greatest debt to my Dad, Mom, Brother, and Sister for their
wholehearted support and encouragement all the way. They have been a great
source of inspiration for me in overcoming all the challenges during my course
as well as the dissertation process.
Madhan Kondle
Table of contents

Abstract
Acknowledgments
Table of contents
List of tables
Chapter 1: Introduction
1.1 Introduction
1.2 Structure of thesis
5.1 Matlab
Chapter 6: Conclusions
6.1 Conclusions
Chapter 7: References
Abstract:
Chapter 1:
1.1 Introduction:
There is little doubt among financial managers and researchers in the field
of finance that the level of uncertainty involved in decision making is its
greatest challenge. In fact, risk, which comes in the form of uncertainty, is
elementary to contemporary financial theory and is viewed as an independent
constraint, to whose analysis many intellectual resources have been devoted.
Besides complicating the decision-making process, this risk also generates
opportunities for those individuals who can handle risks in an efficient
manner.
Chapter 2:
Literature review:
Investors face a constant trade-off between the returns they would like and
the risks they must accept to pursue them. Financial markets can be affected
in no time by varying interest rates, exchange rates and commodity prices
[(Sto00a) and (Mit06)]. As a result, it is essential at all times to make sure
that financial risks are identified and managed appropriately in advance. A
poor risk management system can result in bankruptcies and threaten the
complete collapse of the financial sector [KH05].
Initially, risk was treated as a negative, downside impact, but in the recent
past it has also been accepted that risk can have a potentially positive
impact. Kaplan and Garrick (1981) stated that risk itself comes in three
parts: a) What can go wrong? b) What are the chances of it going wrong?
c) What are the consequences if it does go wrong?
Answering the first involves figuring out the set of likely scenarios. The
second requires estimating the likelihood of each scenario, and the third
involves evaluating the consequences of each scenario. This explanation
focuses on the development of the scenarios, thus making them part of the
definition.
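As a minimal sketch of these risk triplets, the three questions can be represented directly in code: each scenario pairs a likelihood with a consequence, and the probability-weighted sum gives an overall exposure figure. The scenario names and numbers below are illustrative assumptions, not figures from this thesis.

# Kaplan-Garrick style triplets: (what can go wrong, chance, consequence).
# All scenarios and values are hypothetical, for illustration only.
scenarios = [
    ("interest rate spike",  0.10,  50_000.0),
    ("counterparty default", 0.02, 200_000.0),
    ("adverse FX movement",  0.25,  10_000.0),
]

# Probability-weighted consequences give a single expected-loss figure.
expected_loss = sum(chance * consequence for _, chance, consequence in scenarios)
print(f"Expected loss: {expected_loss:,.0f}")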
A) Pure risk: in this kind of risk there is no possibility of gain, and the
best possible outcome is one in which no loss occurs.
B) Speculative risk: in this type of risk there are chances of both gain and
loss.
http://www.scribd.com/doc/26507765/Financial-Risk-Management
http://www.theshortrun.com/finance/finance.html
Any risk associated with any form of financing is known as financial
risk. It is the probability of the actual return being less than the expected
return. There is no business without uncertainty in it, and that uncertainty
is called risk. The greater the risk, the greater the potential benefit you
expect.
Thus, taking a risk should mean:
The capital asset pricing model (CAPM) is the most widely used financial
model for facilitating portfolio diversification.
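A minimal sketch of the CAPM relation E[Ri] = Rf + βi(E[Rm] − Rf), with β estimated from sample covariance; the return series and risk-free rate below are illustrative assumptions.

import numpy as np

asset = np.array([0.05, 0.01, -0.02, 0.04, 0.03])   # hypothetical asset returns
market = np.array([0.04, 0.02, -0.01, 0.03, 0.02])  # hypothetical market returns
risk_free = 0.01                                    # assumed risk-free rate

# beta_i = Cov(R_i, R_m) / Var(R_m)
beta = np.cov(asset, market, ddof=1)[0, 1] / np.var(market, ddof=1)
# CAPM: E[R_i] = R_f + beta_i * (E[R_m] - R_f)
expected_return = risk_free + beta * (market.mean() - risk_free)
print(f"beta = {beta:.2f}, CAPM expected return = {expected_return:.4f}")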
http://www.scribd.com/doc/26507765/Financial-Risk-Management
http://www.clinicalgovernance.scot.nhs.uk/documents/risk_control.pdf
http://www.businessdictionary.com/definition/risk-identification.html
Deregulation has been the main driving force in the financial markets and is
of foremost importance to risk management practices today. The increased
globalisation since the 1970s is largely due to the deregulation of
industries, and with the deregulation of financial operations new risks came
into existence, as banks started to offer insurance products and insurance
organisations began addressing market and credit derivatives.
Understanding and assessing financial risk:
If a plan is developed to address all the above sources, the business can
benefit from long-term success. Such a plan must be developed from the
financial statements and the measures of financial performance. The fact is
that without proper business planning and financial performance analysis,
financial managers can manage a business out of existence as easily as they
can manage it out of difficult situations.
All financial managers must ask themselves questions such as:
What are the company's short-term and long-term goals or targets?
These are the main questions that must be answered in order to assess the
financial risk of any operation.
There are certain minimum requirements for assessing risk factors and
measuring the performance of an organisation. These include:
a) Balance sheet
b) Income statement
c) Cash flow statement
d) Statement of owner equity.
Balance sheet:
Income statement:
http://agecon.uwyo.edu
a) How and where did the company spend the cash returns it earned?
b) What did the company do with the cash returned from financing or
from the sale of investments?
c) How did the company manage to invest in new investments and clear its
debts?
Owner equity and net worth are two terms often used interchangeably by
non-accountants, and they mean the same thing. The term owner equity is used
in statements prepared for the business only, while net worth is used in
statements prepared for both business and personal entities. The main purpose
of the owner equity statement is to reconcile the value of the business as
stated at the start of the accounting period with its value at the end of the
period. This reconciliation authenticates that both statements are in
agreement. Change in owner equity comes from only a few sources. The first is
retained earnings and contributed capital: retained earnings are the portion
of net income that is reinvested into the business, and contributed capital
is capital invested into the business from external sources. The next source
of change is valuation equity, which represents the difference between the
present market value and the book value.
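The reconciliation just described is simple arithmetic, and a minimal sketch makes the relationship explicit; all account names and figures below are illustrative assumptions.

# Owner equity reconciliation: beginning equity plus the few sources of
# change should equal the equity on the closing balance sheet.
beginning_equity    = 100_000.0
retained_earnings   = 12_000.0   # net income reinvested in the business
contributed_capital = 5_000.0    # capital invested from external sources
valuation_equity    = 3_000.0    # change in market value over book value

ending_equity = (beginning_equity + retained_earnings
                 + contributed_capital + valuation_equity)
print(f"Ending owner equity: {ending_equity:,.2f}")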
What does all this mean for the finance manager? In times of low returns from
operations, the first area affected will be the operational cash flow within
the business. If sufficient returns are not generated from business
activities, profitability is the first thing affected. We should remember
that the cash flow statement examines the operations that generate the
income.
One of the best ways to gain knowledge of a concept is to study its
development through history. Perceiving the issues faced by researchers
during that development helps one gain more knowledge of the concept. It all
[risk estimation] started with the pre-Markowitz risk measures. Markowitz, an
ambitious student in the 1950s, defined risk in a new way. Investors knew
that stocks were risky, and many businesses had been left in a great
depression after the 1929 financial crash.
Risk measures relate intimately to utility functions and asset pricing
theories. The Markowitz portfolio optimisation theory is stated in terms of
(µ, σ), where µ is the expected return and σ the standard deviation; (µ, ρ)
is the corresponding portfolio optimisation where ρ is a coherent risk
measure. (µ, ρ) can be recast as the maximisation problem U = µ − λρ, where λ
is a Lagrangian variable. U can be regarded as a utility function, but only
when ρ is convex. U can also be used as the utility function of a specific
decision maker in φ(X) = E[U(X)]; Dyer and Jia [43] proved that accurate risk
measures can be derived this way, with ρ(X) = −E[U(X − E(X))] for the
specific utility in question. Jaschke and Kuchler [42] proved that coherent
risk measures correspond to generalised arbitrage bounds, called "good deal
bounds" by Cerny and Hodges [21]. Thus coherent risk measures link well with
both utility maximisation and the economic theory of arbitrage.
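To make the Dyer and Jia construction concrete, here is a minimal sketch of ρ(X) = −E[U(X − E(X))] for a sample of returns; the quadratic utility and the return figures are illustrative assumptions, not choices made in this thesis.

import numpy as np

def utility(t, a=0.5):
    # An assumed concave (risk-averse) utility, U(t) = t - a*t^2.
    return t - a * t ** 2

def rho(sample, u=utility):
    # rho(X) = -E[U(X - E(X))]: expected disutility of the centred outcomes.
    centred = sample - sample.mean()
    return -u(centred).mean()

returns = np.array([0.04, -0.02, 0.01, 0.03, -0.05])
# With this quadratic U, rho reduces to a * Var(X), recovering a
# Markowitz-style variance penalty.
print(f"rho(X) = {rho(returns):.6f}")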
If all of the coherence conditions are satisfied, the measure is said to be a
coherent risk measure.
These coherent risk measure axioms have been very influential, and can be
used to regulate capital requirements and the risk predicted by market
participants and insurance underwriters; they are also used to allocate the
available capital. But we should be aware of the fact that these coherent
risk measures are realistic to use only under certain practical conditions.
Coherent risk measures were extended by Delbaen [26] and later to convex risk
measures, also called weak coherent risk measures, by dropping both the
subadditivity and positive homogeneity conditions and instead requiring a
weaker constraint:

Convexity (e): ρ(λX + (1 − λ)Y) ≤ λρ(X) + (1 − λ)ρ(Y), ∀λ ∈ [0, 1]

ρ(λX) ≥ λρ(X), ∀λ ≥ 1, ∀X ∈ G

The second inequality says that when λ grows large, the whole position λX
becomes less liquid, and hence riskier, than λ separate positions of X; when
λ is small, the convexity inequality itself already bounds the risk.
http://madis1.iss.ac.cn
The main question that has to be answered by VaR is: "how much can we expect
to lose, given a cumulative probability ς and a given time horizon T?" VaR
can therefore be defined as [Sze05]

P(Z(T) ≤ VaR) = ς

where ς represents the cumulative probability associated with the threshold
value VaR under the loss distribution of Z(T).
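A minimal historical-simulation sketch of this definition: the VaR at level ς is the empirical ς-quantile of a sample of losses. The simulated loss distribution and the 99% level below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
losses = rng.normal(loc=0.0, scale=1.0, size=10_000)  # Z(T): hypothetical losses

sigma = 0.99                          # cumulative probability ς
var_99 = np.quantile(losses, sigma)   # threshold with P(Z(T) <= VaR) = ς
print(f"99% VaR of the sample: {var_99:.3f}")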
Implementation of VaR:
ΔV(t)=
The final area of risk management is devising risk-specific measures such as
liquidity risk measures, credit risk measures, etc. It should be noted,
however, that such measures already exist in those areas, for example
Merton's structural credit default model [Mer74] and the KMV model [Kea03].
Simply, a decision tree can be defined as a diagram that sets out the various
options connected with a decision and the possible outcomes that may come
from those options.
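A minimal sketch of how such a tree is evaluated: each option fans out into chance outcomes, and the probability-weighted payoff (the expected monetary value, EMV) ranks the options. The options, probabilities and payoffs below are illustrative assumptions.

options = {
    # option name -> list of (probability, payoff) chance branches
    "launch product": [(0.6, 120_000.0), (0.4, -40_000.0)],
    "do not launch":  [(1.0, 0.0)],
}

def emv(branches):
    # EMV = sum of probability-weighted payoffs over the chance branches.
    return sum(p * payoff for p, payoff in branches)

for name, branches in options.items():
    print(f"{name}: EMV = {emv(branches):,.0f}")
print("Choose:", max(options, key=lambda name: emv(options[name])))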
On the whole, decision trees provide a flexible and powerful approach when
dealing with risks, mainly those involved in phases where the decisions in
each phase reflect the outcomes of the previous one. Besides providing us
with measures of risk exposure, they also make us think through, during the
course, how we will react to both adverse and positive outcomes after each
phase. On the other side, there are also some limitations to using decision
trees:
1. Decision trees call for more time and money to complete, so they
are not suitable for minor decisions where the cost involved may
exceed the benefit derived from the trees.
2. As the information is presented in quantitative form, there is a
risk that it will be taken at face value, so it is essential to make
sure that the information used in the trees is reliable.
3. Non-quantifiable factors such as market changes, government
policies and people's decisions may play a vital role, but these
cannot be used in decision trees.
There are mainly three different types of analysis that can be made using
Bayesian analysis: prior analysis, preposterior analysis and posterior
analysis.
Prior analysis:
Posterior analysis:
EMGII = EMVII − CI, where CI is the cost of commissioning the research.
1. The first step is to identify all the possible research outcomes and
analyse their marginal (unconditional) probabilities.
2. Suppose that each research result has been obtained. Then, for every
research outcome:
a) calculate the posterior probabilities; b) determine the expected
monetary value of each course of action under consideration; c)
select the course of action that yields the highest expected
monetary value; and d) multiply that highest expected value by the
marginal probability of the research outcome.
3. Add the products of step 2 to get the expected monetary value of the
plan that includes commissioning the research before coming to the final
decision.
4. Calculate EMVII.
5. Calculate EMGII.
6. Decide on the strategy that gives the highest EMGII, provided at least
one strategy produces a positive EMGII. If no strategy or plan has a
positive EMGII, select the strategy that yields the highest EMV. (A
minimal numerical sketch of these steps follows.)
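The following sketch runs steps 2-6 on made-up numbers; the research outcomes, probabilities, payoffs and the research cost CI are all illustrative assumptions.

# Marginal probability of each research outcome, and the EMV of each
# action computed from posterior probabilities given that outcome
# (steps 2a-2b are assumed already done and collapsed into these numbers).
research_outcomes = {
    "favourable":   {"prob": 0.55, "emv": {"invest": 90_000.0, "hold": 0.0}},
    "unfavourable": {"prob": 0.45, "emv": {"invest": -30_000.0, "hold": 0.0}},
}
cost_of_research = 8_000.0  # CI, the assumed cost of commissioning research

# Steps 2c-2d and 3: best action per outcome, weighted by its marginal
# probability, summed to give EMV_II (EMV of the plan with research).
emv_ii = sum(o["prob"] * max(o["emv"].values())
             for o in research_outcomes.values())
emg_ii = emv_ii - cost_of_research  # step 5: EMG_II = EMV_II - C_I

print(f"EMV_II = {emv_ii:,.0f}, EMG_II = {emg_ii:,.0f}")
# Step 6: commission the research only if EMG_II is positive (and beats
# the best strategy available without research).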
Expert system:
From the family of AI techniques, artificial neural networks (ANNs) are the
ones that deal best with levels of uncertainty. Dealing with financial
uncertainty is mainly concerned with identifying patterns in previous data
and using that information to predict future events. Accurate estimation of
financial quantities such as foreign exchange rates and currency movements is
a highly complicated practice in business, and it is also among the most
important factors for business survival. This issue is dealt with better by
ANNs than by any other technique, the reason being that they cope well with
large, noisy data sets. Unlike expert systems, however, ANNs are not
transparent, which makes them hard to interpret.
Medsker et al. [1993] mentioned the following financial analysis tasks for
which sample neural network models have been built:
Hsieh [1993] stated that the following business financial applications can be
drastically improved with ANN technology:
➢ Economic simulation
➢ Predicting investors' actions
➢ Estimation
➢ Credit endorsement
➢ Security and portfolio management
➢ Determining optimal investment structure
Trippi and Turban [1996] stated that financial organisations are second only
to the US defence establishment in sponsoring research on artificial neural
network applications.
These ANN models were inspired by the biological sciences, which study how
the neuroanatomy of living animals has developed to solve problems. Nelson
and Illingworth [1990] also refer to ANNs by the following names:
➢ Neurocomputing
➢ Neuromorphic systems
➢ Parallel distributed processing models
➢ Self organizing systems
ANNs consist of various interconnected processors, called neurons, which
execute a summing function. Information is stored in the connection weights.
An ANN resembles the neural network of the human brain. A living organism's
nervous system functions through its biological neural network, by which
complex tasks can be processed instinctively. Neurons are the central
processing units of the human nervous system. The human brain has around 10
to 100 billion neurons, each linked to many others by 'synapses'; there are
around 100 trillion synapses in the human brain. Simply put, ANNs imitate the
learning process of the human brain. The first ANN theories were formulated
by researchers seeking to examine the human brain and the thinking process.
➢ ANNs are used to model biological networks, which makes it possible to
gain knowledge of the human brain and its workings. This area is of
interest to researchers in neuroanatomy and to psychologists.
➢ ANNs can be used as a tool to solve complex problems that conventional
AI methodologies and computer algorithms cannot solve. Researchers here
include computer engineers and scientists who are mainly concerned with
constructing better algorithms by studying the problem-solving process
of ANNs.
➢ ANNs are also used to solve real-world problems in various commercial
and financial applications. Many researchers in this area come from
backgrounds other than those related to ANNs, and the main reason for
this is the simplicity of using the tool and its reported success in the
fields where it is already applied. There are many ANN software tools on
the market which enable users to become productive very quickly, without
hassle and without in-depth knowledge of the ANN algorithms. This is
unlike traditional computer algorithms, where the user must go through
and understand the whole algorithm before working with it. In the case
of an ANN, all the user needs to know is how to present the problem at
hand in a form that the ANN can understand.
➢ Improving ANN algorithms. Researchers in this area are interested in
designing better ANN algorithms that work more effectively, i.e. give
more accurate results.
Research efforts on ANNs are going on globally. Nelson and Illingworth [1991]
stated that Jasper Lupo, director of the Tactical Technology Office of the
Defense Advanced Research Projects Agency (DARPA), called neural technology
"more powerful than the atomic bomb" [Johnson and Schwartz 1998]. Japan's
primary ANN research is sponsored by its government under its post-fifth-
generation computer program called "The Human Frontiers". Meanwhile, Japanese
corporations are already in the process of developing ANN-based products.
The human expert's knowledge is also very important for the development of a
computerised or programmed expert system. In view of the fact that eliciting
this knowledge is a time-consuming process with considerable chances of
producing erroneous results, the decisions made by such expert systems are
treated as inferred, secondary inputs alongside the human expert's own
decisions.
a) Combination of input(s);
b) Combination of output(s);
c) Weighting procedures, i.e. initialization of weights and the
weighting algorithms;
d) Types of activation or transfer functions.
All these properties can also be applied to the entire network on a system
basis. Network architecture, which is also called network topology, defines
the network structure and includes the following characteristics:
4.1 Neurodynamics:
Oj = fj(Σi wij xi)        (Equation 4.1)

logistic: f(x) = 1 / (1 + e^(−x))

hyperbolic tangent: f(x) = (e^x − e^(−x)) / (e^x + e^(−x))

Due to its ease of computation, the logistic function remains the most
frequently used in ANNs; its derivative takes the simple form

f′(x) = f(x)(1 − f(x))
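A minimal sketch of Equation 4.1 with the logistic activation: the neuron sums its weighted inputs and squashes the result. The input and weight values below are illustrative assumptions.

import numpy as np

def logistic(x):
    # f(x) = 1 / (1 + e^(-x)); note f'(x) = f(x) * (1 - f(x)).
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, -1.2, 0.3])   # inputs x_i into neuron j
w = np.array([0.8, 0.4, -0.6])   # connection weights w_ij
net = w @ x                      # summing function: sum_i w_ij * x_i
o_j = logistic(net)              # output O_j per Equation 4.1
print(f"net input = {net:.3f}, O_j = {o_j:.3f}")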
j = (m · e) / (n + z)

where
j = number of neurons in the hidden layer
m = number of data points in the training set
e = error tolerance
n = number of inputs
z = number of outputs
The final two standards are very similar and may not be useful when the
error tolerance values are much smaller than the number of training facts.
For example, if the total number of training facts is 1000 and the error
tolerance is 0.0001, the number of hidden neurons under Lawrence's
suggestion would be around 0.001, which is meaningless; Baum and
Haussler's suggestion, on the other hand, would result in an even smaller
value for the hidden neurons. Most researchers now believe that the number
of hidden neurons cannot be determined from the input and output values
alone.
Σi wij xi > θ        (Equation 4-2)

where
wij = weight of the connection from neuron i to neuron j
xi = input from neuron i
θ = threshold on neuron j
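A minimal sketch of the firing condition in Equation 4-2: the neuron fires only when its weighted input sum exceeds the threshold θ. The weights, inputs and threshold below are illustrative assumptions.

import numpy as np

w = np.array([0.7, -0.2, 0.5])  # w_ij: weights into neuron j
x = np.array([1.0, 0.4, 0.9])   # x_i: inputs from neurons i
theta = 0.8                     # θ: threshold on neuron j

weighted_sum = w @ x            # sum_i w_ij * x_i
print(f"weighted sum = {weighted_sum:.2f}, fires: {weighted_sum > theta}")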
Ripley [1993] notes that, for a perceptron with N inputs, the number of
errors made on random patterns will be finite provided the patterns are
linearly separable, and this holds irrespective of the learning algorithm.
[Figure: a single neuron j with inputs x1 and x2, weights w1j and w2j, net
input hj, and output Oj = f(hj, θj).]