
Contribution Details

Type Master's Thesis
Scope Discipline-based scholarship
Title Comparative Performance of Quantile-Based Risk Measures: Value at Risk vs. Expected Shortfall
Authors
  • Dominic Schaub
Supervisors
  • Cosimo Munari
Language
  • English
Institution University of Zurich
Faculty Faculty of Business, Economics and Informatics
Number of Pages 100
Date 2019
Abstract Text The banking and insurance sectors are naturally exposed to various sources of risk. Given their critical role in financial stability, both banks and insurance companies have been regulated. For banks, the focus has mainly been on protecting liability holders by reducing the likelihood of insolvency through regulatory (risk) capital requirements that act as a buffer against unexpected losses. Insurance companies have mainly been regulated because of the critical role they play for both households and firms by insuring them against the risks they face. The banking world is predominantly regulated by the Basel Committee on Banking Supervision and its Capital Accords, while the insurance sector (in the European Union) falls under the Solvency regulations of the European Parliament. Insurance companies in Switzerland have to comply with the Swiss Solvency Test as prescribed by the Swiss Financial Market Supervisory Authority FINMA. Over the years, many different approaches have been taken in an attempt to ensure financial stability. While the Glass-Steagall Act of 1933 separated investment and commercial banking activities, most concepts did not employ such drastic measures but instead focused on adequately capturing and managing the various sources of risk. The methodologies evolved from simple ratios (such as the Cooke Ratio employed under Basel I) to more sophisticated ones. The introduction of the concept of Value-at-Risk (VaR) in 1994 by J.P. Morgan marked the beginning of a new era in the assessment of capital adequacy; it was officially endorsed by the Basel Committee in its Basel II framework, released in 2004. VaR was used to assess minimal capital requirements for market risk; other sources of risk were handled by other means within the three-pillar framework common to the current Basel and Solvency frameworks.
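The two quantile-based measures named in the title can be sketched empirically. The following toy example, assuming simulated heavy-tailed Student-t losses rather than the S&P 500 data analysed in the thesis, computes VaR as a quantile of the loss distribution and Expected Shortfall as the average loss beyond it:

```python
import numpy as np

def var(losses, alpha=0.99):
    # Empirical Value-at-Risk: the alpha-quantile of the loss distribution.
    return np.quantile(losses, alpha)

def es(losses, alpha=0.99):
    # Empirical Expected Shortfall: the average loss beyond the VaR threshold.
    v = var(losses, alpha)
    return losses[losses >= v].mean()

# Simulated heavy-tailed daily losses, standing in for real return data.
rng = np.random.default_rng(0)
losses = rng.standard_t(df=3, size=100_000)
print(f"99% VaR: {var(losses):.2f}, 99% ES: {es(losses):.2f}")
```

Because ES averages the entire tail beyond the quantile, it always lies at or above the corresponding VaR, which is precisely why it is sensitive to tail risk where VaR is not.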
The concept of VaR did receive some (mainly academic) criticism even before its adoption, but it was not until the financial crisis of 2007 that those concerns were taken seriously. The main reason is that the crisis exposed a major weakness of the VaR concept: its structural blindness to tail risk. On the grounds of the theory of coherent risk measures (as put forth by Artzner et al. in 1999), Expected Shortfall (ES) emerged as a viable alternative to VaR because it takes the entire tail of the loss distribution into account. Such was its perceived improvement upon VaR that the Basel Committee announced a switch from VaR to ES in its regulatory framework Basel III (2010–2011). On the insurance side, Solvency II replaced Solvency I in 2009 but stuck with VaR (albeit at a higher confidence level of 99.5%). The Swiss Solvency Test, on the other hand, applies a 99% ES for the assessment of market risk. The theory of coherent risk measures still forms the backbone of risk measures used for regulatory purposes today. Under this theory, the risk of a portfolio of assets and liabilities is measured by determining the minimum amount of capital that needs to be raised and held in cash (or invested in the eligible asset) in order to make the portfolio's future value acceptable. This capital requirement constitutes a risk measure, which is called coherent if it satisfies the axioms of monotonicity, translation invariance, positive homogeneity and sub-additivity. The main difference between VaR and ES is that VaR fails to be sub-additive and thus is not coherent, whereas ES is. This implies that, aside from being insensitive to tail risk, VaR also fails to encourage diversification. Expected Shortfall is not without flaws either, as it trades surplus invariance for coherence, thereby mixing the interests of liability holders and owners of an institution and allowing for regulatory arbitrage when used as a global regulatory measure. It is for that reason that Bignozzi et al.
(2018) introduced a new class of risk measures based on a Benchmark Loss Distribution (BLD), called Loss Value-at-Risk (LVaR). This concept can be applied to any VaR estimate in order to make it sensitive to tail risk while avoiding the drawbacks of ES. The main principle is that losses should be acceptable only if they occur with a pre-specified low probability, and that the degree of acceptability should depend on the loss size: higher losses are tolerated only with lower probability. This is achieved by requiring a higher significance level whenever losses exceed the VaR estimates. Conceptually, LVaR can be seen as a strengthening of the VaR criterion, with the positive difference between the two representing the cost of aligning the empirical distribution with the BLD. The empirical performance of VaR, ES and LVaR was analysed using absolute daily returns of the S&P 500 for the years 2006–2018. Two different BLDs were specified and used to obtain estimates for the following methodologies: historical simulation, weighted historical simulation, RiskMetrics, Monte Carlo simulation and extreme value theory. The results indicate that LVaR may improve upon the corresponding VaR estimates by means of a lower scoring function and a reallocation of percentages in the Traffic Light System from yellow to green. The effect sizes, however, were modest at best, with an infinitesimal impact on the number of breaches. As the effects were not unidirectional either, more work needs to be done to determine the optimal specification of the BLD for the distribution in question. Due to the small effects observed for LVaR, a novel concept was put forth by the author in the form of Distance-VaR (DVaR). It, too, can be seen as an extension of VaR and relies on a BLD just like LVaR.
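Before turning to DVaR, the LVaR principle described above can be sketched empirically. This is a minimal sketch based on the verbal description only, not the formal definition of Bignozzi et al. (2018); the discrete (loss level, maximal exceedance probability) pairs below are invented for illustration and are not the BLDs specified in the thesis:

```python
import numpy as np

def lvar(losses, bld):
    # bld: list of (loss_level, max_exceedance_prob) pairs forming the
    # Benchmark Loss Distribution: losses above each level should occur
    # with at most the paired probability. Each pair yields a candidate
    # capital requirement (the implied quantile minus the tolerated loss
    # level); the strictest candidate is the LVaR estimate.
    return max(np.quantile(losses, 1.0 - p) - l for l, p in bld)

rng = np.random.default_rng(1)
losses = rng.standard_t(df=3, size=100_000)
# Illustrative BLD: base 1% exceedance level, tightened for larger losses.
bld = [(0.0, 0.01), (1.0, 0.005), (2.0, 0.001)]
print(f"99% VaR: {np.quantile(losses, 0.99):.2f}, LVaR: {lvar(losses, bld):.2f}")
```

Because the pair (0.0, 0.01) reproduces the plain 99% VaR, this construction can only strengthen the baseline estimate, consistent with LVaR being a strengthening of the VaR criterion.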
However, instead of comparing a method's VaR estimates with the realized losses, it compares, for every point in time, the estimates at all significance levels defined within the BLD with the corresponding critical loss levels and adds the maximal positive difference to the baseline VaR estimate. As a consequence, DVaR may result in higher capital requirements than VaR even in the absence of breaches. This leads to a significant shift of percentages within the Traffic Light System from red to yellow to green, as well as a significant reduction in the number of breaches. Unfortunately, this comes at the expense of a marked increase in the scoring function. As with LVaR, more work is needed to determine the optimal BLD for use with DVaR. Of all the methodologies used to obtain VaR estimates, weighted historical simulation performed best, followed by the extreme value theory approach in second place (by some margin). Interestingly, the LVaR estimates were positively influenced for the former and negatively for the latter. Regarding DVaR, the conclusion is less straightforward: from a regulatory point of view, it might be seen as an improvement, as it markedly reduces the number of breaches regardless of the methodology used. Its less efficient use of capital, however, makes it appear to be an inferior risk measure. If one's sole focus were on banks (or insurers), this would be a compelling argument against DVaR's use. Yet on a larger (macroeconomic) scale, bailing out financial institutions does not constitute an efficient use of capital either. This drawback may thus be more a political issue than a financial or statistical one.
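The DVaR construction can likewise be sketched from the description above. This is one literal reading of that description, not the author's formal definition; the critical loss levels and probabilities are invented for illustration:

```python
import numpy as np

def dvar(losses, base_alpha, bld):
    # bld: list of (critical_loss_level, exceedance_prob) pairs.
    # Compare the quantile estimate at each significance level of the BLD
    # with its critical loss level, and add the maximal positive difference
    # to the baseline VaR estimate. The add-on is floored at zero, so
    # DVaR >= VaR holds even in the absence of breaches.
    base = np.quantile(losses, base_alpha)
    add_on = max(max(np.quantile(losses, 1.0 - p) - c, 0.0) for c, p in bld)
    return base + add_on

rng = np.random.default_rng(2)
losses = rng.standard_t(df=3, size=100_000)
bld = [(5.0, 0.005), (7.0, 0.001)]  # invented critical loss levels
print(f"99% VaR: {np.quantile(losses, 0.99):.2f}, DVaR: {dvar(losses, 0.99, bld):.2f}")
```

The unconditional add-on is what distinguishes this sketch from the LVaR idea: capital rises whenever the fitted tail quantiles sit above the benchmark's critical levels, which matches the abstract's finding of fewer breaches at the cost of a higher scoring function.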