Cosimo Munari, Multi-utility representations of incomplete preferences induced by set-valued risk measures, Finance and Stochastics, Vol. 25 (1), 2021. (Journal Article)
We establish a variety of numerical representations of preference relations induced by set-valued risk measures. Because of the general incompleteness of such preferences, we have to deal with multi-utility representations. We look for representations that are both parsimonious (the family of representing functionals is indexed by a tractable set of parameters) and well behaved (the representing functionals satisfy nice regularity properties with respect to the structure of the underlying space of alternatives). The key to our results is a general dual representation of set-valued risk measures that unifies the existing dual representations in the literature and highlights their link with duality results for scalar risk measures.
Fabio Bellini, Pablo Koch Medina, Cosimo Munari, Gregor Svindland, Law-invariant functionals on general spaces of random variables, SIAM Journal on Financial Mathematics, Vol. 12 (1), 2021. (Journal Article)
We establish general versions of a variety of results for quasiconvex, lower-semicontinuous, and law-invariant functionals. Our results extend well-known results from the literature to a large class of spaces of random variables. We sometimes obtain sharper versions, even for the well-studied case of bounded random variables. Our approach builds on two fundamental structural results for law-invariant functionals: the equivalence of law invariance and Schur convexity, i.e., monotonicity with respect to the convex stochastic order, and the fact that a law-invariant functional is fully determined by its behavior on bounded random variables. We show how to apply these results to provide a unifying perspective on the literature on law-invariant functionals, with special emphasis on quantile-based representations, including Kusuoka representations, dilatation monotonicity, and infimal convolutions.
Yanpei Lin, An empirical study of Loss Value at Risk with applications to portfolio risk management and catastrophic risk, University of Zurich, Faculty of Business, Economics and Informatics, 2021. (Master's Thesis)
Maria Arduca, Pablo Koch Medina, Cosimo Munari, Dual representations for systemic risk measures based on acceptance sets, Mathematics and Financial Economics, Vol. 15 (1), 2021. (Journal Article)
We establish dual representations for systemic risk measures based on acceptance sets in a general setting. We deal with systemic risk measures of both "first allocate, then aggregate" and "first aggregate, then allocate" type. In both cases, we provide a detailed analysis of the corresponding systemic acceptance sets and their support functions. The same approach delivers a simple and self-contained proof of the dual representation of utility-based risk measures for univariate positions.
Zhe Peng, Technical Pricing for Motorcycle Insurance Portfolios, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Niushan Gao, Cosimo Munari, Surplus-invariant risk measures, Mathematics of Operations Research, Vol. 45 (4), 2020. (Journal Article)
This paper presents a systematic study of the notion of surplus invariance, which plays a natural and important role in the theory of risk measures and capital requirements. So far, this notion has been investigated in the setting of some special spaces of random variables. In this paper, we develop a theory of surplus invariance in its natural framework, namely, that of vector lattices. Besides providing a unifying perspective on the existing literature, we establish a variety of new results including dual representations and extensions of surplus-invariant risk measures and structural results for surplus-invariant acceptance sets. We illustrate the power of the lattice approach by specifying our results to model spaces with a dominating probability, including Orlicz spaces, as well as to robust model spaces without a dominating probability, where the standard topological techniques and exhaustion arguments cannot be applied.
Zita Marossy, Frequency analysis for detection of financial market cycles in risk factor models, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Xian Li, Backtesting Expected Shortfall with multinomial Value at Risk tests, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Jiaxuan Zhao, Estimation of Value at Risk in conditional models, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Valeria Bignozzi, Matteo Burzoni, Cosimo Munari, Risk measures based on benchmark loss distributions, Journal of Risk and Insurance, Vol. 87 (2), 2020. (Journal Article)
We introduce a class of quantile-based risk measures that generalize Value at Risk (VaR) and, like Expected Shortfall (ES), take into account both the frequency and the severity of losses. Under VaR a single confidence level is assigned regardless of the size of potential losses. We allow for a range of confidence levels that depend on the loss magnitude. The key ingredient is a benchmark loss distribution (BLD), that is, a function that associates to each potential loss a maximal acceptable probability of occurrence. The corresponding risk measure, called Loss VaR (LVaR), determines the minimal capital injection that is required to align the loss distribution of a risky position with the target BLD. By design, one has full flexibility in the choice of the BLD profile and, therefore, in the range of relevant quantiles. Special attention is given to piecewise constant functions and to tail distributions of benchmark random losses, in which case the acceptability condition imposed by the BLD boils down to first-order stochastic dominance. We investigate the main theoretical properties of LVaR with a focus on their comparison with VaR and ES and discuss applications to capital adequacy, portfolio risk management, and catastrophic risk.
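To make the construction concrete, here is a minimal numerical sketch (conventions and sample are my own, not the authors' code) for a piecewise-constant BLD: acceptability of the capital-adjusted loss L - m requires P(L - m > ell_i) <= alpha_i at every BLD node (ell_i, alpha_i), so the minimal capital reduces to a maximum of shifted empirical VaRs.

```python
import numpy as np

def lvar(losses, thresholds, alphas):
    """LVaR = max_i ( VaR_{alpha_i}(L) - ell_i ), with VaR_a taken as the
    (1 - a)-quantile of the loss sample (a = exceedance probability)."""
    losses = np.asarray(losses, dtype=float)
    return max(np.quantile(losses, 1.0 - a) - ell
               for ell, a in zip(thresholds, alphas))

rng = np.random.default_rng(0)
sample = rng.standard_t(df=4, size=100_000)            # heavy-tailed loss sample
# Plain 1% VaR is the special case of a single node at loss level 0:
print("VaR 1% :", lvar(sample, [0.0], [0.01]))
# A BLD that tolerates losses above 3 only with probability 0.1%:
print("LVaR   :", lvar(sample, [0.0, 3.0], [0.01, 0.001]))
```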
Roman Poole, Catastrophe Bonds: A Comprehensive Analysis, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Bachelor's Thesis)
The aim of the thesis is to provide a comprehensive analysis of catastrophe (CAT) bonds. Topics covered include the definition and classes of CAT bonds and related insurance-linked securities, the structure and workflow of a CAT bond transaction, and the types of triggers. Supply and demand, pricing and hedging, as well as risk analysis and regulation are also covered. As a complement to this descriptive work, the historical returns of CAT bonds from January 2006 until October 2019 are analyzed by applying standard financial metrics such as the CAPM, the Sharpe ratio, and beta. These results are compared with the performance of other investment products such as bonds and equities. In addition, empirical evidence is collected to show that, over the measurement period, CAT bonds are largely uncorrelated with the broader financial markets and exhibit a beta close to zero.
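As an illustration of the metrics mentioned in the abstract, the following sketch (synthetic data and conventions are assumptions, not the thesis code) computes a CAPM-style beta and an annualized Sharpe ratio from monthly return series.

```python
import numpy as np

def beta_and_sharpe(asset, market, rf=0.0, periods_per_year=12):
    ex_a = np.asarray(asset, float) - rf               # excess returns
    ex_m = np.asarray(market, float) - rf
    beta = np.cov(ex_a, ex_m)[0, 1] / np.var(ex_m, ddof=1)
    sharpe = np.sqrt(periods_per_year) * ex_a.mean() / ex_a.std(ddof=1)
    return beta, sharpe

rng = np.random.default_rng(1)
mkt = rng.normal(0.005, 0.040, 166)                    # synthetic monthly market returns
cat = rng.normal(0.004, 0.010, 166)                    # synthetic CAT bond index returns
print(beta_and_sharpe(cat, mkt))                       # beta near zero by construction
```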
Michel Baes, Cosimo Munari, A continuous selection for optimal portfolios under convex risk measures does not always exist, Mathematical Methods of Operations Research, Vol. 91 (1), 2020. (Journal Article)
Risk control is one of the crucial problems in finance. One of the most common ways to mitigate the risk of an investor's financial position is to set up a portfolio of hedging securities whose aim is to absorb unexpected losses and thus provide the investor with an acceptable level of security. In this respect, it is clear that investors will try to reach acceptability at the lowest possible cost. Mathematically, this problem leads naturally to considering set-valued maps that associate to each financial position the corresponding set of optimal hedging portfolios, i.e., of portfolios that ensure acceptability at the cheapest cost. Among other properties of such maps, the ability to ensure lower semicontinuity and continuous selections is key from an operational perspective. It is known that lower semicontinuity generally fails in an infinite-dimensional setting. In this note, we show that neither lower semicontinuity nor, more surprisingly, the existence of continuous selections can be a priori guaranteed even in a finite-dimensional setting. In particular, this failure is possible under arbitrage-free markets and convex risk measures.
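For intuition, a toy version of the underlying optimization can be set up over finitely many scenarios: minimize the cost of a hedging portfolio subject to acceptability of the augmented position under a convex risk measure, here Expected Shortfall at level 5%. The market, the position, and the use of scipy's SLSQP solver are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 2_000
risky = rng.normal(size=n)                     # payoff of the risky security per scenario
payoffs = np.column_stack([np.ones(n), risky]) # security 0 is a riskless bond
prices = np.array([0.95, 1.00])
position = -2.0 * risky                        # unhedged position to be made acceptable

def expected_shortfall(losses, alpha=0.05):
    k = max(1, int(np.ceil(alpha * len(losses))))
    return np.sort(losses)[-k:].mean()         # mean of the worst alpha-fraction

# Cheapest portfolio w with ES(loss of position + hedge) <= 0; feasible because
# w = (cash, 2) neutralizes the risky exposure for any cash >= 0.
res = minimize(lambda w: prices @ w, x0=np.zeros(2), method="SLSQP",
               constraints={"type": "ineq",
                            "fun": lambda w: -expected_shortfall(-(position + payoffs @ w))})
print("optimal hedge:", res.x, " cost:", prices @ res.x)
```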
Pablo Koch Medina, Cosimo Munari, Market-Consistent Prices: An Introduction to Arbitrage Theory, Springer, Cham, 2020. (Book/Research Monograph)
Niushan Gao, Cosimo Munari, Foivos Xanthos, Stability properties of Haezendonck–Goovaerts premium principles, Insurance: Mathematics and Economics, Vol. 94, 2020. (Journal Article)
We investigate a variety of stability properties of Haezendonck–Goovaerts premium principles on their natural domain, namely Orlicz spaces. We show that such principles always satisfy the Fatou property. This allows us to establish a tractable dual representation without imposing any condition on the reference Orlicz function. In addition, we show that Haezendonck–Goovaerts principles satisfy the stronger Lebesgue property if and only if the reference Orlicz function fulfills the so-called ∆2 condition. We also discuss (semi)continuity properties with respect to Φ-weak convergence of probability measures. In particular, we show that Haezendonck–Goovaerts principles, restricted to the corresponding Young class, are always lower semicontinuous with respect to Φ-weak convergence.
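A rough numerical sketch of the premium principle itself (my discretization, not the paper's code): for a sample of X and each candidate x, solve E[phi((X - x)^+ / h)] = 1 - alpha for h by bisection and then minimize x + h over x. With phi(t) = t this reproduces Expected Shortfall, which serves as a sanity check.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def hg_premium(sample, phi, alpha, h_hi=1e6):
    sample = np.asarray(sample, float)

    def h_of(x):
        excess = np.maximum(sample - x, 0.0)
        if excess.max() == 0.0:                 # x already covers the worst loss
            return 0.0
        f = lambda h: phi(excess / h).mean() - (1.0 - alpha)   # decreasing in h
        lo, hi = 1e-12, h_hi
        for _ in range(100):                    # plain bisection for the root
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        return hi

    res = minimize_scalar(lambda x: x + h_of(x),
                          bounds=(sample.min(), sample.max()), method="bounded")
    return res.fun

x = np.random.default_rng(3).normal(size=20_000)
print(hg_premium(x, phi=lambda t: t, alpha=0.95))      # ~2.06 = ES at 95% for N(0,1)
print(hg_premium(x, phi=lambda t: t**2, alpha=0.95))   # a genuinely Orlicz example
```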
Simon-Pierre Gadoury, Performance Analysis and Comparison of Portfolio Immunization Strategies, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Immunization strategies are portfolio construction techniques minimizing the interest rate risk of a company's surplus over time. The interest rate risk arises from the fact that the future values of both assets and liabilities are sensitive to future interest rate movements, which are uncertain. An immunized portfolio ensures that assets and liabilities react in a similar manner to shocks on interest rates, so that the impact on the associated surplus is as small as possible. Although the concept of immunization is very old, dating back to the 1950s, the idea is still widely used today. In this project, we analyze and implement recently proposed portfolio immunization techniques, as well as the older, standard ones. As the old approaches are extensively used across asset managers, it is of interest to explore what has been brought forward in recent years. Theoretical setups are explicitly derived to fully grasp the logic behind each approach and then to apply it in practical contexts. The goal is not to find the best approach, but to compare them and identify their advantages and disadvantages, both theoretically and in practice. We find that the main differences between the more sophisticated techniques and the standard ones come from the added handling of data. For example, parametric immunization requires a good yield curve model and an efficient way to estimate it, which may become problematic when the yield curve behaves strangely (multiple humps). However, it is also shown that this added effort helps reduce the volatility of the asset portfolio around the liability benchmark, especially for methods hedging multiple parts of the discount rate curve. This study was carried out during an internship in the Liability-Driven Investment team at Fiera Capital, in Canada.
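To fix ideas, here is a sketch of the classical duration-matching version of immunization under a flat yield curve (a textbook simplification, not one of the techniques implemented in the thesis): choose holdings of two zero-coupon bonds so that the asset portfolio matches the present value and the Macaulay duration of the liability stream.

```python
import numpy as np

y = 0.03                                       # flat yield, annual compounding

def pv_and_duration(times, cfs, y):
    disc = (1.0 + y) ** -np.asarray(times, float)
    pv = float((cfs * disc).sum())
    dur = float((times * cfs * disc).sum()) / pv          # Macaulay duration
    return pv, dur

liab_t = np.array([1.0, 2.0, 5.0, 10.0])
liab_cf = np.array([100.0, 100.0, 150.0, 200.0])
pvL, durL = pv_and_duration(liab_t, liab_cf, y)

pv1, d1 = pv_and_duration(np.array([2.0]), np.array([1.0]), y)    # 2y zero
pv2, d2 = pv_and_duration(np.array([10.0]), np.array([1.0]), y)   # 10y zero

# Match present value and PV-weighted duration of the liabilities:
A = np.array([[pv1, pv2],
              [pv1 * d1, pv2 * d2]])
n1, n2 = np.linalg.solve(A, np.array([pvL, pvL * durL]))
print(f"immunizing holdings: {n1:.1f} x 2y zero, {n2:.1f} x 10y zero")
```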
Fabio Brambilla, Expectiles as Market Risk Measures: Estimation and Sensitivity Analysis, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Master's Thesis)
Expectiles, originally introduced in the context of regression analysis as the minimisers of an asymmetric quadratic loss function, have recently been shown to be not only coherent measures of risk but also elicitable. These two fundamental properties make the expectile-based Value at Risk (EVaR) a more than valid alternative to the Value at Risk (VaR) and the Expected Shortfall (ES) in the context of market risk management. This paper first sets a clear framework for the estimation of the expectile-based risk measure under conditional and unconditional frameworks by relying on the bisection method for root-finding. Then, it offers extensive analyses and applications on simulated and real-world financial data that provide evidence in favour of the working hypothesis that the EVaR could be a viable future alternative to the VaR and the ES for what concerns capital and margin requirement calculations.
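The estimation idea lends itself to a short sketch (conventions mine): the tau-expectile of a sample solves tau E[(X - e)^+] = (1 - tau) E[(e - X)^+], and since the difference of the two sides is strictly decreasing in e, plain bisection between the sample minimum and maximum finds the root.

```python
import numpy as np

def expectile(sample, tau, tol=1e-10):
    x = np.asarray(sample, float)
    lo, hi = x.min(), x.max()                  # the expectile lies in this range
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        g = tau * np.maximum(x - e, 0).mean() - (1 - tau) * np.maximum(e - x, 0).mean()
        lo, hi = (e, hi) if g > 0 else (lo, e) # g is decreasing in e
    return 0.5 * (lo + hi)

x = np.random.default_rng(4).standard_normal(250)      # e.g. one year of daily returns
print("tau = 0.50 expectile vs mean:", expectile(x, 0.50), x.mean())
print("tail expectile, tau = 0.99  :", expectile(x, 0.99))
```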
Jiani Zhou, Risk and Return Replication of Trend Following Strategies, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Master's Thesis)
This thesis aims to explore and develop approaches to infer the asset positions of managed futures funds and to further replicate their risk profiles. The approaches studied can be categorized into two classes, namely a regression-based, top-down approach and a trend-signal-based, bottom-up approach. The replication problem for return time series is of general theoretical and practical interest. It requires, in particular, the specification of trading models and risk models to determine how the daily positions are adjusted based on market prices, risk, and diversification indicators. On the one hand, our top-down approach aims to replicate a given daily return series within a pre-specified investment universe by using regression methods to estimate position weights of individual instruments. The idea is to regress the time series of the strategy's returns against a collection of the returns of the pre-defined investment universe instruments by employing different rolling regressions. One can then assess the results of the various regression methods and choose the optimal model based on the robustness of the estimators. On the other hand, our bottom-up approach aims to construct a generic trend-following strategy that captures the return and risk characteristics of the same benchmark. The method consists of three components. The first step concerns trend signal generation using filter techniques. The second step is portfolio construction using a risk budgeting approach. Finally, the last step applies volatility targeting for each asset class and for the entire portfolio. Similar to the first approach, there are a few alternative models available, and the parameters that best capture the performance of the given trend-following strategy are chosen as the optimal ones.
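A compact sketch of the bottom-up pipeline (deliberately simplified; the filter, risk-budgeting, and targeting models in the thesis are more elaborate): (1) trend signals from a fast/slow moving-average filter, (2) naive risk budgeting via inverse-volatility weights, (3) scaling to an annualized volatility target.

```python
import numpy as np
import pandas as pd

def trend_strategy(prices, fast=20, slow=100, vol_window=60, target_vol=0.10):
    rets = prices.pct_change()
    # (1) trend signal: sign of a fast/slow exponential moving-average crossover
    signal = np.sign(prices.ewm(span=fast).mean() - prices.ewm(span=slow).mean())
    # (2) naive risk budgeting: inverse-volatility weights, normalized gross exposure
    vol = rets.rolling(vol_window).std() * np.sqrt(252)
    weights = (signal / vol).div((signal / vol).abs().sum(axis=1), axis=0)
    port = (weights.shift(1) * rets).sum(axis=1)        # shift avoids look-ahead
    # (3) volatility targeting at the portfolio level
    lev = target_vol / (port.rolling(vol_window).std() * np.sqrt(252))
    return (lev.shift(1) * port).dropna()

rng = np.random.default_rng(5)
px = pd.DataFrame(100 * np.exp(np.cumsum(rng.normal(0, 0.01, (1000, 3)), axis=0)),
                  columns=["equity_fut", "bond_fut", "fx_fut"])
print(trend_strategy(px).std() * np.sqrt(252))          # realized vol, roughly on target
```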
Fang Zhang, Estimating and Backtesting Risk Measures, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Master's Thesis)
In this article, I first introduce the basic idea of risk. In the second part, I introduce the idea of and formulas for Value at Risk (VaR) and Expected Shortfall (ES). In the third part, I use three methods to estimate VaR and ES: the analytical method, the historical simulation method, and the Monte Carlo method. I then backtest my estimates by two methods: the score function method and the violation test method. In the last part, I construct a loss operator and fulfil the plan above; based on real data, I draw some conclusions about the VaR and ES estimators.
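For reference, the historical simulation estimator mentioned above admits a very short implementation (quantile conventions are an assumption): the alpha-VaR is an empirical quantile of the loss sample and the alpha-ES is the average loss beyond it.

```python
import numpy as np

def hs_var_es(losses, alpha=0.99):
    losses = np.asarray(losses, float)
    var = np.quantile(losses, alpha)            # empirical alpha-quantile of losses
    es = losses[losses >= var].mean()           # average loss beyond the VaR
    return var, es

loss_sample = np.random.default_rng(6).standard_t(df=5, size=5_000) * 0.01
var99, es99 = hs_var_es(loss_sample, 0.99)
print(f"VaR 99%: {var99:.4f}   ES 99%: {es99:.4f}")
```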
Markus Sven Müller, Backtesting Forecasting Methods of Value at Risk and Expected Shortfall, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Master's Thesis)
Since Expected Shortfall (ES) has been assigned a crucial role for capital determination within banking regulation, a debate concerning its practicability has been going on. Primarily, doubts have been raised regarding the possibility to backtest corresponding risk measure forecasts. The starting point was confusion between model validation and model selection, which has motivated researchers to study and describe both areas thoroughly. On the other hand, regulation has not caught up and still proposes to backtest ES models by validating their Value at Risk (VaR) counterparts, the formerly prevailing risk measure. The thesis at hand presents validation and selection techniques for models estimating each of these two risk measures. An empirical study that focuses on the returns of a specific equity portfolio for a certain timeline surveys the reactions of different estimation approaches to changes in parameter settings. For this particular empirical setting, it is found that the backtesting outcomes of the historical approach and of a Monte Carlo variant of it appear to be highly sensitive to the amount of past data used to make predictions as well as to the chosen confidence level. Two dynamic historical approaches which proceed in the core the same way demonstrate different backtesting outcomes as well, which is due to distinct distributional assumptions on the innovation term. It is shown that the variant with Student-t innovations performs particularly well for strict confidence levels, such as those required by regulation. Furthermore, this approach is in the majority of cases ranked as the relatively most accurate one. On the other hand, critically inspecting the backtests by means of further empirical findings and based on relevant literature reveals that blindly trusting the numbers of backtesting methods falls short of an adequate model quality assessment. Estimation and model risk, usage of erroneous profit and loss data, and distributional uncertainties of certain test statistics are among those issues that might result in inadequate risk predictions as well as distorted backtests.
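One standard VaR validation tool of the kind discussed here is Kupiec's proportion-of-failures test, sketched below (illustrative only; ES backtests require additional machinery): it checks whether the observed violation frequency of a VaR forecast is consistent with the nominal level.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, p):
    """violations: boolean array, True where the loss exceeded the VaR forecast."""
    n, x = len(violations), int(np.sum(violations))
    ll = lambda q: (n - x) * np.log(1 - q) + x * np.log(q)   # binomial log-likelihood
    lr = -2.0 * (ll(p) - ll(x / n))             # likelihood ratio, ~ chi2(1) under H0
    return lr, chi2.sf(lr, df=1)

rng = np.random.default_rng(7)
hits = rng.random(250) < 0.04                   # ~4% violations against a 1% VaR model
lr, pval = kupiec_pof(hits, p=0.01)
print(f"LR = {lr:.2f}, p-value = {pval:.4f}")   # small p-value -> reject the model
```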
Dominic Schaub, Comparative Performance of Quantile-Based Risk Measures vs. Value at Risk and Expected Shortfall, University of Zurich, Faculty of Business, Economics and Informatics, 2019. (Master's Thesis)
The banking and insurance sectors are naturally faced with various sources of risk. Given their critical role regarding financial stability, both banks and insurance companies have been regulated. For banks, the focus has mainly been on protecting liability holders by reducing the likelihood of insolvency through regulatory (risk) capital requirements that act as a buffer against unexpected losses. Insurance companies have mainly been regulated due to the critical role that they play for both households and firms (by insuring them against the risks they are faced with). The banking world is predominantly regulated by the Basel Committee on Banking Supervision and its Capital Accords, while the insurance sector (in the European Union) falls under the Solvency regulations by the European Parliament. Insurance companies in Switzerland have to comply with the Swiss Solvency Test as prescribed by the Swiss Financial Market Supervisory Authority FINMA.
Over the years, many different approaches were taken in an attempt to ensure financial stability. While the Glass-Steagall Act of 1933 separated investment and commercial banking activities, most concepts didn't employ such drastic measures but instead focused on (adequately) capturing and managing the various sources of risk. The methodologies used evolved from simple ratios (such as the Cooke ratio employed under Basel I) to more sophisticated ones. The introduction of the concept of Value-at-Risk (VaR) in 1994 (by JP Morgan) marked the beginning of a new era regarding the assessment of capital adequacy and was officially endorsed by the Basel Committee in its Basel II framework, released in 2004. It was used to assess minimal capital requirements for market risk; other sources of risk were taken care of by other means within the three-pillar framework inherent to the current Basel and Solvency frameworks.
The concept of VaR did receive some (mainly academic) criticism even before its adoption, but it wasn't until the financial crisis of 2007 that those concerns were taken seriously. The main reason was that the crisis exposed a major weakness of the VaR concept: its structural blindness to tail risk. On the grounds of the theory of coherent risk measures (as put forth by Artzner et al. in 1999), Expected Shortfall (ES) emerged as a viable alternative to VaR because it takes the entire tail of the loss distribution into account. Such was its perceived improvement upon VaR that the Basel Committee announced a switch from VaR to ES in its regulatory framework Basel III (2010-2011). On the insurance side, Solvency II replaced Solvency I in 2009 but stuck with VaR (albeit at a higher significance level of 99.5%). The Swiss Solvency Test, on the other hand, applies a 99% ES for the assessment of market risk.
The theory of coherent risk measures still forms the backbone of the risk measures used for regulatory purposes today. It measures the risk of a portfolio of assets and liabilities by determining the minimum amount of capital that needs to be raised and held in cash (or invested in the eligible asset) in order to make the portfolio's future value acceptable. This capital constitutes a risk measure, which is called coherent if it satisfies the axioms of monotonicity, translation invariance, positive homogeneity, and sub-additivity. The main difference between VaR and ES is that VaR fails to be sub-additive and thus isn't coherent, whereas ES is. This implies that aside from not being sensitive to tail risk, VaR also fails to encourage diversification.
Expected Shortfall isn’t without its flaws either as it trades surplus invariance for coherence, thus
mixing the interests of liability holders and owners of an institution as well as allowing for regulatory
arbitrage when used as a global regulatory measure. It is for that reason that Bignozzi et al. (2018)
introduced a new class of risk measures based on a Benchmark Loss Distribution (BLD) called Loss
Value-at-Risk (LVaR). This concept can be applied to any VaR estimate in order to make it sensitive
to tail risk while avoiding the drawbacks of ES. The main principle is that losses should be acceptable
only if they occur with a pre-specified low probability and that the degree of acceptability should
depend on the loss size: higher losses will be tolerated with lower probability. This is achieved by
requiring the use of a higher significance level if losses exceed the VaR estimates. Conceptually, this
can be seen as a strengthening of the VaR criterion with the positive difference between the two
representing the cost of aligning the empirical distribution with the BLD.
The empirical performance of VaR, ES, and LVaR was analysed using absolute daily returns of the S&P 500 for the years 2006-2018. Two different BLDs were specified and used to obtain the estimates for the following methodologies: historical simulation, weighted historical simulation, RiskMetrics, Monte Carlo simulation, and extreme value theory. The results indicate that LVaR may improve upon the corresponding VaR estimates by means of a lower scoring function and a reallocation of percentages in the Traffic Light System from yellow to green. The effect sizes, however, were modest at best, with an infinitesimal impact on the number of breaches. As the effects were not unidirectional either, more work needs to be done in order to determine the optimal specification of the BLD based on the distribution in question.
Due to the small effects observed for LVaR, a novel concept was put forth by the author in the form of Distance-VaR (DVaR). The concept can also be seen as an extension of VaR and relies on a BLD just like LVaR. However, instead of comparing a method's VaR estimates with the realized losses, it compares, for every point in time, the estimates at all significance levels defined within the BLD with the corresponding critical loss levels and adds the maximal positive difference to the baseline VaR estimate. As a consequence, DVaR may result in higher capital requirements than VaR even in the absence of breaches. This results in a significant shift of percentages within the Traffic Light System from red to yellow to green (as well as a significant reduction in the number of breaches). Unfortunately, this comes at the expense of a marked increase in the scoring function. As with LVaR, more work is needed to determine the optimal BLD for use with DVaR.
Out of all the methodologies used to obtain VaR estimates, the weighted historical simulation performed best, followed by the extreme value theory approach in second place (by some margin). Interestingly, the LVaR estimates were positively influenced for the former method and negatively for the latter. Regarding DVaR, the conclusion is not as straightforward: from a regulatory point of view, it might be seen as an improvement, as it markedly reduces the number of breaches regardless of the methodology used. The more inefficient use of capital, however, makes it appear to be an inferior risk measure. If one's sole focus were on banks (or insurers), then this would be a compelling argument against DVaR's use. Yet on a larger (macroeconomic) scale, bailing out financial institutions doesn't constitute an efficient use of capital either. This drawback might thus be more of a political issue than a financial/statistical one.
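A heavily hedged sketch of the DVaR idea as described above (the construction is the thesis author's; the details below are one possible reading, with hypothetical BLD nodes): for each BLD node (critical loss ell_i, level alpha_i), compare the model's VaR estimate at level alpha_i with ell_i and add the largest positive shortfall to the baseline VaR estimate.

```python
import numpy as np

def dvar(loss_sample, bld, base_alpha=0.99):
    """bld: list of (critical_loss, alpha) pairs; VaR_a = empirical a-quantile."""
    q = lambda a: np.quantile(loss_sample, a)
    penalty = max(0.0, max(q(a) - ell for ell, a in bld))
    return q(base_alpha) + penalty

sample = np.random.default_rng(8).standard_t(df=3, size=10_000) * 0.01
bld = [(0.03, 0.99), (0.05, 0.995), (0.08, 0.999)]     # hypothetical BLD nodes
print(f"VaR 99%: {np.quantile(sample, 0.99):.4f}   DVaR: {dvar(sample, bld):.4f}")
```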