Risk measurement has become a whole new soul-search. In the late 1980s, financial markets began a transition to risk-factor sensitivities and potential-loss estimates, as alternatives to talking about risk in notional amounts of exposure to one instrument or another. Senior management asked: well, what does all that mean? How much could we lose?
And then came VaR!
And now we have a world with only two types of people – loud VaR-haters and very quiet, docile VaR-lovers! Let me make my position crystal clear: some of my friends love VaR, some of my friends hate VaR. I agree with my friends.
Strengths of VaR
- Simple, with an intuitive explanation as a rough measure of how much a firm could lose, e.g.:
- VaR (95% confidence, 1-day horizon): actual losses should exceed this only once in 20 days
- VaR (99% confidence, 1-day horizon): actual losses should exceed this only once in 100 days
- Broad application across instruments and asset classes; relatively easy to calculate and to understand in dollar terms; additive, with all dollars fungible. Life was good!
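To make the "once in 20 days" arithmetic concrete, here is a minimal sketch (using made-up P&L data, not any real book) of a one-day historical-simulation VaR – just one of several common approaches, alongside parametric and Monte Carlo methods:

```python
import numpy as np

rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1e6, size=500)  # 500 days of hypothetical daily P&L ($)

def hist_var(pnl, conf):
    """Historical-simulation VaR: the loss level exceeded on (1 - conf) of past days."""
    return -np.quantile(pnl, 1.0 - conf)

print(f"95% 1-day VaR: ${hist_var(pnl, 0.95):,.0f}")  # losses should exceed this ~1 day in 20
print(f"99% 1-day VaR: ${hist_var(pnl, 0.99):,.0f}")  # ~1 day in 100
```

Note how the 99% number is larger than the 95% number by construction: it sits further out in the loss tail of the same distribution.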
VaR Assumptions, weaknesses, and misunderstandings
- Tendency to assume Normal distributions, and thus a low probability of ‘extremes’. The reality is that financial returns are fat-tailed and often skewed: extremely high and low return days are far more common than normality would suggest;
- There is often an assumption that history repeats itself – that the past can predict the future;
- Now that we all know this, let me say it boldly: VaR does NOT describe the worst-case loss (the problem of the little-knowledge-but-very-dangerous manager). All it estimates is the worst loss at a specified probability. In fact, one interpretation of VaR is that LOSSES WILL EXCEED VaR with probability equal to (1 – VaR confidence level);
- VaR does not describe the losses in the extreme left “tail” of the distribution. (Conditional VaR can help to measure “the expected loss, given the loss exceeds VaR”)
- VaR does not distinguish portfolio liquidity; very different portfolios can have the same VaR, i.e. VaR is a static measure of risk and does not capture the dynamics of possible losses if a portfolio were to be unwound;
- Computations can be very complex; there is model risk; precision should not be assumed;
- VaR-constrained traders can game the system, i.e. maximize risk subject to keeping VaR steady. The game repeats itself at several levels, and can trigger an avalanche, because everyone misjudges risk in the same way.
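On the "tail" point above, here is a small illustration (hypothetical fat-tailed returns, not real data) of how Conditional VaR / Expected Shortfall looks beyond VaR:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical fat-tailed daily losses: Student t with 3 degrees of freedom
losses = -rng.standard_t(df=3, size=100_000)

conf = 0.99
var_99 = np.quantile(losses, conf)      # 99% VaR: threshold exceeded on ~1% of days
es_99 = losses[losses > var_99].mean()  # Expected Shortfall: average loss beyond VaR

# ES is always at least as large as VaR; with fat tails the gap is substantial
print(f"99% VaR = {var_99:.2f}, 99% ES = {es_99:.2f}")
```

VaR only marks the threshold of the tail; Expected Shortfall reports how bad things are, on average, once you are in it.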
Risk professionals will always cite the caveats listed above, warn against treating VaR as the worst case, and question the advisability of relying on a single, statistically generated number. With your continued involvement and comments, let us talk some more about VaR, models, stress tests…
21 comments:
There has been much recent debate around the value of risk models and the validity of VaR. Your summary provides a good overview but what about correlation? The model typically assumes fixed correlations, which, as we saw during the height of the credit crisis, is a major weakness when markets go haywire.
VaR is a useful method but not an exclusive one. Stress testing has to be used alongside VaR.
And any model needs assumptions; no model can fully reflect the real world. So knowing the disadvantages of the model is essential.
Anonymous
VaR is relatively simple to calculate and easy to interpret when analyzing risk in a single portfolio. It becomes complicated and less reliable when it is used to measure risk across an entire organization. Without incorporating proper stress testing and scenario analysis its output is even less telling. Unfortunately that is precisely how regulators have historically applied it.
There is more than a thing or two wrong with VaR. Depending on where you get your sigma to calculate VaR (i.e. historical analysis) you could be dead wrong about what you get as "Value at Risk". The history of a certain entity has nothing to do with its future. Then comes the deception of "security" and "safety" that comes with VaR which is talked about a lot. So for the reasons above, I vote NO to VaR.
We use VaR-like calculations to measure the value of property or casualty insurance alternatives for our clients. We do not use the term VaR but rather "cost of volatility." Most of our risks are highly skewed to the left, so we would not assume a Gaussian distribution. We usually do not consider alternative liquidities, but we could by altering the cost of capital.
VaR is one of the most intuitive risk measures. However, I personally prefer looking at its analogue ES (Expected Shortfall) also called ETL (Expected Tail Loss) for it provides useful information about the risk in the tail, or about all those "rare" events that occur beyond VaR.
When looking at VaR, ETL, or whatever risk measure, the underlying distributional model is extremely important. In this respect, the Normal distribution and the correlations that we all used to study at universities and in courses proved not to model the real world realistically. That is why more complex distributions and models are sought after, like the Student t, stable distributions, Extreme Value Theory, etc. Using such models is a challenging task because of their complexity, but at least they give more realistic results.
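A rough simulated illustration of the commenter's point (unit-variance comparison; the Student t is rescaled, since a t with 3 degrees of freedom has variance 3):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
normal = rng.standard_normal(n)
fat = rng.standard_t(df=3, size=n) / np.sqrt(3.0)  # rescale t(3) to unit variance

# Probability of a 4-sigma move under each model
p_normal = (np.abs(normal) > 4).mean()
p_fat = (np.abs(fat) > 4).mean()
print(p_normal, p_fat)  # the fat-tailed model makes 4-sigma days far more likely
```

Under the Normal, a 4-sigma day is a few-in-a-hundred-thousand event; under the fat-tailed model it is orders of magnitude more common, which is exactly the gap a Normal-based VaR understates.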
This is a very succinct and well-argued analysis of the strengths and weaknesses of VaR. We tend to focus on one or the other, and it is very refreshing to have a comprehensive view of VaR. As a tool, its usefulness should not be underestimated; shortcomings notwithstanding, it is a very simple and reasonably accurate tool.
Other comments here have discussed stress testing as a complementary tool. It would be very beneficial to hear the consensus on the relative contribution of stress testing to the usefulness of VaR – particularly how risk managers can (and should) communicate VaR results within the organization, and how stress-test results can better communicate the extent of possible losses.
The problem is not VaR; the problem is how we use it.
People cannot use it properly without knowing exactly what it is (its strengths and weaknesses).
The problem is not in VaR; it's in the people who use it without knowing it well enough.
It is pretty obvious that VaR cannot be the only risk measure used; when it comes to risk management, nothing replaces an analytical approach.
VaR has been and always will be a measurement that should be in place. There are many "gaps," but the key is to effectively communicate the results both internally and externally. The use of time series or Component/Marginal VaR breaks down the distribution of the results and communicates them more effectively. It's a tool that can be used to manage, if you are a good risk manager.
There is a general misconception that VaR is "easy to calculate," even for one portfolio.
It's easy to calculate only for a portfolio containing liquid instruments or products with linear payoffs; optionality or illiquidity raises the task of calculation to the level of ART – and we ARE the ARTISTS :) ...some of us are Pollock-like (throwing tons of irrelevant data into equally irrelevant statistical models within a risk engine) and others are Leonardo da Vinci :)
VaR can be useful when assumptions are clearly stated and computations done appropriately. Much risk measurement software uses Gaussian multivariate distributions (copulas) to estimate portfolio VaR. Of course, this can be grossly inappropriate given the liquidity and correlation dynamics of the portfolio's constituent asset classes. In this context, Archimedean copulas allow flexibility in VaR and ES calculations as it relates to correlation. They are relatively easy to implement in a Microsoft Excel framework. The validity of VaR estimates should be regularly checked using backtesting exercises. My vote is a conditional Yes (along with dynamically simulated stress tests).
All of the above comments are useful and helpful. There is no question that CVaR or Expected Shortfall, a.k.a. Expected Tail Loss, is superior to VaR because of its subadditivity.
The inclusion of higher moments and liquidity terms are all essential improvements as are moves away from the Gaussian paradigm.
Correlation in particular needs work, because it is only a linear measure of dependence and fails to capture the non-linear tail-dependence effects we see in a crisis.
For one approach to this problem see:
http://infiniti-analytics.com/kb/kb/article/four_moment_risk_decomposition
Your summary is very interesting. Today, the Gaussian law belongs to the past, and many fat-tailed probability distributions can be used under the VaR framework. I would also like to stress four other points as an academic risk analyst.
First, people shouldn't forget that the main results depend on a forecast about the future. When you try to forecast a specific loss over a more or less short coming horizon, you assume that the risk structure (i.e. the selected probability distribution) you used and calibrated on past data will prevail over this future horizon. The problem is to anticipate breaks in upcoming risk structures, which is a hard task.
Second, the methodology is very flexible, since you can consider various risk levels while employing different values. Moreover, if your concern is the fat-tail property, you can also employ extreme value distributions so as to cope with very far or extreme distribution tails. Everything is possible.
Third, correlation can be considered by focusing on the multivariate dependence structure of the assets in a portfolio. Such a scheme is achieved with copula theory, which copes with symmetric and asymmetric distributions, with or without tail dependencies. Extreme value copula functions are also powerful. Hence, common dependence as well as tail dependence can be considered.
Finally, it is possible to switch from a static setting to a dynamic one. This requires applying the conditional VaR principle while considering either conditional probability distributions or conditional copula functions… It is then easier to update the loss estimates and to detect more or less structural breaks in the risk profile of portfolios.
HAYETTE.
Peter J. de Marigny / DITMo Equity Hedge & Overlay Strategies:
VaR is just an X score of a distribution – the problem with that is the same problem as predicting outliers and using multiples of max drawdowns: reliance on a data series that under-represents outliers. We should look to non-parametrics. -pj de Marigny
Too much reliance has been placed on VaR as a single risk measurement. During normal times it works well, and that is when many users take it for granted and complacency sets in. We need to assess and measure risk along various dimensions and levels. Nevertheless, it is good to have VaR as one of those risk measurements.
As my background involves managing a VaR model and relaying it to firm executives, I am biased, but VaR is an integral part of risk management. VaR has been the scapegoat for so much of what has happened over the last couple of years, but one must not forget that most of these shortcomings are the fault of the end users of VaR, not the model itself. No matter how sophisticated your model, backtesting, stress testing, and less-quantitative risk management methods are critical. VaR is a nice, pretty little number for CEOs and CFOs to use, but without proper understanding of what that number represents, the firm is doomed. It's the risk manager's responsibility to stress the shortcomings of the model and represent the outputs as they relate to common sense.
In this new financial world we all now live in, it's time for risk managers to rise to the occasion and educate their firms on holistic approaches to risk management. Our practice extends far beyond outputting a single, tidy number, and educating those who are not risk specialists is the key.
I do not have much to say about VaR. I will compare VaR with Expected Shortfall, which is more effective because Expected Shortfall is sub-additive while VaR is not. Sub-additivity is the coherence property that differentiates the two: Rm(A+B) ≤ Rm(A) + Rm(B), where Rm is a coherent risk measure (say, VaR or Expected Shortfall) and A and B are financial instruments measurable by Rm. VaR tends to violate this natural and basic property of any good coherent risk measure.
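A standard textbook-style illustration of this violation, with toy numbers: two hypothetical independent bonds, each defaulting with 4% probability for a loss of 100. Each bond alone has a 95% VaR of zero, yet the combined portfolio does not:

```python
import numpy as np

def discrete_var(losses, probs, conf=0.95):
    """Smallest loss L with P(loss <= L) >= conf, for a discrete loss distribution."""
    order = np.argsort(losses)
    losses = np.asarray(losses, dtype=float)[order]
    cum = np.cumsum(np.asarray(probs, dtype=float)[order])
    return losses[np.searchsorted(cum, conf)]

# One bond: default (loss 100) with probability 4%
var_single = discrete_var([0, 100], [0.96, 0.04])  # 0 — the 4% default sits inside the 5% tail

# Two independent such bonds held together
losses = [0, 100, 100, 200]
probs = [0.96 * 0.96, 0.96 * 0.04, 0.04 * 0.96, 0.04 * 0.04]
var_joint = discrete_var(losses, probs)            # 100 — P(loss >= 100) ≈ 7.84% > 5%

print(var_single + var_single, var_joint)          # 0.0 vs 100.0: VaR(A+B) > VaR(A) + VaR(B)
```

Diversifying into the second bond makes measured risk jump from 0 to 100, which is exactly the perverse incentive sub-additivity rules out; Expected Shortfall would report a positive number for each bond on its own.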
The problem with contemporary financial (and economics) theory is that it continues to evolve on wrong footings ('primary laws') and never questions them, let alone abandons one of them when empirical results strongly contradict them.
This has more to do with the prevailing mentality and characteristics of the people involved in this area of human knowledge (i.e. an abundance of vanity), which in the end only hurts finance and economics as scientific disciplines, making them more like systems of belief than actual science (with consistent, empirics-driven scientific methods).
To paraphrase one of the greats: economists, bankers, and 'financial experts' are making their maps of the terrain much simpler than they should be in order for them to be useful in practice.
Regarding the VaR measure: let me just say that installing a single statistical measure (regardless of its type) on the pedestal of finance, to serve as a 'holy grail' of risk measurement and management, did not bring any good to the financial system. Risk is so complex and sneaky a beast that hunting it down with a single bullet (measure) may end in a hunting accident for the hunter himself.
Give the beast the respect it deserves: admit first that none of the hunters knows much about it, nor about the forest itself for that matter, since it changes and grows rapidly and unpredictably (an admission of ignorance); then spend some time learning about its habits (showing humility and respect), and during that time use all the hunters, rifles, ammunition, and decoys at your disposal (carefulness), and never abandon a good sense of reality and measure in doing things, accompanied by a degree of healthy skepticism (prudence).
Although it has several limitations, which market risk managers are (you would expect) aware of, when combined with other measures it should be a good complementary tool.
I'm convinced that having simple measures that senior management and the public can understand (while also disclosing the assumptions and limitations of the model) provides good governance. Obviously, market risk managers are able to develop more complex models, and they should continue to do so if these help explain the risks involved, but pretending that senior management will be able to understand those in all cases is failing to accept reality.
VaR is useful, but only as a guide. As the GFC showed, the implied correlations can break down, and VaR models are very dependent on the time horizon over which they are built. That is how GFC outcomes came out as 1-in-3000-year events according to some VaR models built on two years of data.
Where we are concluding is this: VaR is what it is – a tool, a useful measure, but one strategy among many. And as fallible as any model… Clearly, VaR is NOT ideal in isolation, nor as the sole determinant of risk and consequent decision-making.
Meanwhile, approximately half our voters want liquidity assumptions built in, and another third would suggest we leave VaR alone so long as we recognize its limitations and use it as just one tool in a good kit; a reasonable percentage, meanwhile, want to get out of the Gaussian state of mind and/or abandon VaR completely.
All excellent thoughts!
As a tail-piece, I would add a couple of considerations, certainly reflected in many of the comments received:
- There is a need for risk managers to take up more active communication and education within their firms about measuring risk, and about the benefits and limitations of VaR. Emphasize in particular, perhaps, that VaR at best speaks only to abnormal losses in normal markets; the ‘so what’ is important when markets turn ‘abnormal’
- Even within the constraints of current modeling mechanisms, there may be an argument for using LOWER confidence levels in computing VaR. Recall from the prior posting that a 95% VaR produces (by definition) a far greater number of fails or ‘exceptions’ than a 99% VaR. The advantage? Fuller data sets for back-testing, and a better acknowledgment that we do not know much about what lurks in the extremes!
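The back-testing trade-off above can be sketched as follows (simulated P&L; the in-sample exception counts are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
pnl = rng.standard_normal(250)  # roughly one trading year of hypothetical daily P&L

exceptions = {}
for conf in (0.95, 0.99):
    var = -np.quantile(pnl, 1.0 - conf)         # VaR at this confidence level
    exceptions[conf] = int((pnl < -var).sum())  # days on which the loss exceeded VaR

# 95% VaR yields roughly a dozen exceptions per year to test against; 99% VaR only a handful
print(exceptions)
```

With only two or three exceptions a year at 99%, a back-test has almost nothing to work with; the dozen-plus exceptions at 95% give a far richer sample for judging whether the model is honest.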
Coming up shortly, Scenario Analysis & Stress Testing as a way out of the VaR dilemma, first with some basics!
Jaidev Iyer, MD, GARP