Michail Ntaoutis, Risk Sharing: between profitability and systemic risk, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Insurance companies use risk sharing to improve their profitability by mitigating their
losses and reducing their cost of capital. Optimizing risk sharing across a network of
insurance companies leads to a minimization problem of the risk-based capital of the
network. We obtain the optimal risk transfer scheme using a numerical simulation
of a risk sharing network based on an optimality criterion. This optimality criterion,
widely used in the field of cooperative game theory, describes a unique and fair way
to transfer risk in a coalition of insurance companies. We analyze and implement both
the proportional and the non-proportional risk transfer approaches. In a second part,
we study the effect of risk sharing on systemic risk. We introduce a risk measure called
SyRi that quantifies the systemic risk contributions of the insurers of the network. SyRi
is based on a stress test methodology and uses elements of the risk sharing simulation.
We also briefly discuss a regulatory regime built by interchanging proportional and
non-proportional risk sharing. |
|
Valentin Geoffroy, Why is American Option Pricing so Complicated?, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
American option pricing requires solving an optimal stopping problem with no known
exact closed-form solution. The option price computation essentially centers on
determining the time-dependent so-called early exercise boundary. Modelling the underlying process as a geometric Brownian motion, we propose a novel closed formula
to approximate the early exercise boundary within a framework with both continuous
and discrete dividends. Applying this result, we suggest an extended local volatility
formula to calibrate American option market prices and overcome the limitations of
our initial underlying model assumptions. |
|
Thomas Lagos, Machine Learning Applications for Reverse Stress Testing, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Reverse stress testing is a novel idea that intends to identify scenarios that can lead a financial institution to unviability without the cognitive biases that traditional stress testing imposes. In this thesis, we propose a reverse stress testing framework that is based on historical bootstrapping simulation and machine learning techniques. To achieve this, we create a mini-bank balance sheet and map its components to risk factors that are generated by a historical bootstrapping methodology. The mini-bank balance sheet enables us to create a framework that is applicable to real-world problems. The choice of modeling risk factors with a non-parametric bootstrapping method provides us with an additional degree of realism since we capture many stylized facts observed in financial time series. Furthermore, we are able to generate a vast amount of unexplored scenarios to train and test our machine learning methods.
To verify the validity of our approach and to present its structure, we start
with a simplified version of the problem. To demonstrate the capability of our framework, we
introduce non-linear risk factors by including CoCo bonds and a more complex credit risk modelling approach that includes jumps. For the complex problem, we use Support Vector Classifiers, Tree-based models, Ensemble-based models, Linear models, and Neural Networks (Multilayer Perceptron and Convolutional Neural Networks). The best performing model was the Multilayer Perceptron. The Support Vector Classifier, Random Forest, and Decision Tree were the next best performing models with small differences compared to the champion model.
After testing our models under different conditions, we conclude that machine learning techniques can be used for reverse stress testing purposes. As a final task, we investigate the performance of the best performing models on a data set generated by a balance sheet whose allocations were changed by up to 15%. We find that even though the model was trained and tested on datasets generated by different balance sheets, the performance remained surprisingly high. This indicates two things. The first is that some scenarios are so severe that, even with 15% less participation, they can still cause unviability. The second is that our model is robust, since it generalizes well. |
|
Erich Walter Farkas, Fulvia Fringuellotti, Radu Tunaru, A Cost-Benefit Analysis of Capital Requirements Adjusted for Model Risk, Journal of Corporate Finance, Vol. 65, 2020. (Journal Article)
Capital adequacy is the key microprudential and macroprudential tool of banking regulation. Financial models of capital adequacy are subject to errors, which may prevent banks from estimating a capital base sufficient to absorb losses during economic downturns. In this paper, we propose a general method to account for model risk in the calculation of capital requirements for market risk. We then evaluate and compare our capital requirements with those obtained under Basel 2.5 and the new Basel 4 regulation. Capital requirements adjusted for model risk perform well in containing losses generated in both normal and stressed times. In addition, they are as conservative as Basel 4 capital requirements, but they fluctuate less over time. |
|
Tobias Herrmann, Wertschöpfung durch Fusionen und Übernahmen in verschiedenen Branchen, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
M&A transactions, in particular acquisitions or mergers, are among the most important strategic decisions a company makes. From the shareholders' perspective, the goal of mergers or acquisitions is to generate an increase in value. The bidding company derives the added value from the transaction primarily from operating synergies, consisting of revenue synergies and cost synergies. In addition, there are financial synergies (under the assumption that capital markets are not perfect) and restructuring potential (in the case of distressed target companies). The shareholders of the target company earn an abnormal return from the takeover premium. From the bidder's perspective, the takeover premium must not exceed the expected benefits net of acquisition and integration costs. Only under these conditions is value created for the bidder's shareholders, and only then should the shareholders support management in carrying out the transaction.
Whether bidding companies actually realize short-term added value in practice is disputed; this thesis addresses the question through a global literature review and an empirical analysis of the Swiss market. The literature review clearly shows that, both before (1960 to 2008) and after (2008 to 2020) the global financial crisis, M&A transactions generated significant abnormal returns for shareholders of target companies. For shareholders of bidding companies, the results are less clear-cut. Until 1970, short-term returns were mostly positive. Between 1990 and 2008, they declined into negative territory in the USA owing to regulatory changes. In Europe and Asia, however, returns were slightly positive, presumably because of the less intense market for corporate control. Since the global financial crisis of 2007, the short-term abnormal return for bidding companies has been positive globally. Besides the generally positive development of equity markets, a possible reason is the development of corporate governance structures within companies that foster optimal investment decisions.
Many factors influence the short-term abnormal return. For example, the intensity of competition in the market for corporate control, the type of transaction, the form of financing, and the transaction volume all affect the abnormal return for shareholders of the bidding companies. Over a long-term event window from one day after the transaction announcement to five years thereafter, on average no added value is achieved for the shareholders of bidding companies.
The empirical part confirms that, in the period from 2009 to 2020, mergers and acquisitions created significant abnormal returns for shareholders of Swiss bidding companies over a short event window. Findings from the literature review can also be confirmed. Smaller transactions generate larger share price gains for the bidders' shareholders than large transactions. Domestic transactions and transactions with unlisted target companies also perform better than cross-border transactions and transactions with listed target companies.
In addition, an industry comparison is carried out. The results show clear differences in the positive abnormal returns across individual industries, although these differences are not significant. Only the commodities and construction industry exhibits a negative abnormal return that is significant compared with every other industry. |
|
Cyril Walker, Fitting interest derivatives' volatility smile in negative interest landscape, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
In this thesis, I concentrate on negative interest rates and the problems they cause in
pricing interest rate derivatives. In this regard, two forms of the market-standard SABR
interest rate model are presented and examined both theoretically and empirically. To
this end, it is shown how these models can be applied in the Swiss market in order to
price options such as caps and floors. This work includes the calibration process as well
as a possible answer to how the optimal β parameter can be chosen. Furthermore, an
out-of-sample analysis is conducted in which selected strikes are removed from the implied
volatility smile. I conclude that the Normal SABR model outperforms the shifted Black
SABR model from both quantitative and qualitative viewpoints. |
|
Michal Kobak, Financial Time Series Clustering for Portfolio Optimization, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Optimization of financial portfolios has been rigorously studied in the literature, with
Harry Markowitz being the first to consider the risk-return trade-off of a portfolio
as a whole in his Nobel Prize-winning paper [11]. He proposed to solve a quadratic
optimization problem that outputs a set of efficient portfolios using as inputs the vector
of expected returns and the matrix of covariances. Inversion of the covariance matrix
is, however, needed for the solution. If a covariance matrix is estimated on too few
data points compared to its dimension, inversion may amplify estimation errors and
lead to undiversified portfolios. The stability and usefulness of the expected return
estimates are also doubted in the literature.
This thesis tries to answer the question of whether one can construct diversified
portfolios using only the historical return time series of a universe of assets while
avoiding expected return estimation and covariance matrix inversion. A hierarchical
clustering approach is chosen and assets are clustered using newly-defined distance
functions based on semicorrelation and momentum. A comparison is made with the
correlation distance clustering, which is the default method used in the cited literature.
The results show that correlation and semicorrelation clustering is able to uncover
information relevant to portfolio construction. In combination with investing in low
variance or semivariance assets, one can construct clustering portfolios that perform
similarly to the Markowitz minimum-variance portfolio. It is also shown that pairing
clustering with the selection of high-momentum assets leads to a high-yielding portfolio
with an impressive Sharpe ratio. |
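As a hypothetical sketch of the covariance-inversion step the thesis seeks to avoid, the Markowitz minimum-variance weights are w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the asset universe and return data below are simulated for illustration only, not taken from the thesis:

```python
import numpy as np

def min_variance_weights(returns: np.ndarray) -> np.ndarray:
    """Markowitz minimum-variance weights w = S^-1 1 / (1' S^-1 1).

    Inverting the sample covariance S amplifies estimation error when
    the number of observations is small relative to the number of
    assets -- the instability the clustering approach works around."""
    cov = np.cov(returns, rowvar=False)   # sample covariance matrix
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)        # computes S^-1 1 without explicit inversion
    return w / w.sum()                    # normalize weights to sum to one

# Simulated example: 500 daily observations of 5 hypothetical assets
rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=(500, 5))
w = min_variance_weights(rets)
print(w.round(3))
```

With far fewer observations than 500 (relative to the asset count), the resulting weights become extreme and concentrated, which is the failure mode motivating the clustering alternative.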
|
Nicolas Ettlin, Erich Walter Farkas, Andreas Kull, Alexander Smirnow, Optimal Risk-Sharing Across a Network of Insurance Companies, Insurance: Mathematics and Economics, Vol. 95, 2020. (Journal Article)
Risk transfer is a key risk and capital management tool for insurance companies. Transferring risk between insurers is used to mitigate risk and manage capital requirements. We investigate risk transfer in the context of a network environment of insurers and consider capital costs and capital constraints at the level of individual insurance companies. We demonstrate that the optimisation of profitability across the network can be achieved through risk transfer. Considering only individual insurance companies, there is no unique optimal solution and, a priori, it is not clear which solutions are fair. However, from a network perspective, we derive a unique fair solution in the sense of cooperative game theory. Implications for systemic risk are briefly discussed. |
|
Andreas Egger, Empirical analysis of a non-affine stochastic volatility model, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
This thesis studies the pricing performance of the Inverse Gamma model for European
vanilla options on an equity underlying. Since the Inverse Gamma model is non-affine, it
admits no closed-form solution; hence an approximation formula is used. A theoretical foundation is
built by introducing the basic tools of option pricing and explaining the standard models
on which the various extensions are founded. It is demonstrated why the
standard Black-Scholes model fails when it comes to pricing options and how to escape
these problems. To address the constant-volatility assumption of the Black-Scholes
model, stochastic volatility models are introduced. Besides the Inverse Gamma
model, the Heston model is presented and serves as the benchmark model.
The empirical analysis is based on data sets from two different days. A least-squares
error fitness function is used to calibrate the parameters. The Heston model clearly outperforms
the Inverse Gamma model, because the approximation formula turns out not to be accurate
enough for this empirical analysis. By making use of simulation, it can be shown
that the Inverse Gamma model can be competitive with the Heston model. |
|
David Anderson, Pricing of American Options in a Market Making Environment Using Artificial Neural Networks, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Traditional Monte Carlo pricing methods for American options in a market making environment are too slow, impeding the ability to quote prices consistent with an ever-changing market environment. We propose a novel method for the valuation of American options by which market information, passed in the form of an implied volatility surface, is evaluated instantaneously using a feed-forward neural network. Utilising data generated using Monte Carlo methods, we propose a beginning-to-end framework for the creation of a neural network pricing system for a market making environment. |
|
Urban Ulrych, Pawel Polak, Dynamic Currency Hedging Using Non-Gaussian Returns Model, In: 11th CEQURA Conference on Advances in Financial and Insurance Risk Management. 2020. (Conference Presentation)
|
|
Urban Ulrych, Pawel Polak, Dynamic Currency Hedging Using Non-Gaussian Returns Model, In: International remote conference - Mathematical and Statistical Methods for Actuarial Sciences and Finance. 2020. (Conference Presentation)
|
|
Ludovic Mathys, On Extensions of the Barone-Adesi & Whaley Method to Price American-Type Options, Journal of Computational Finance, Vol. 24 (2), 2020. (Journal Article)
This paper provides an efficient and accurate hybrid method to price American standard options in certain jump-diffusion models and American barrier-type options under the Black-Scholes framework. Our method generalizes the quadratic approximation scheme of Barone-Adesi and Whaley and several of its extensions. Using perturbative arguments, we decompose the early exercise pricing problem into subproblems of different orders and solve these subproblems successively. The solutions obtained are combined to recover approximations to the original pricing problem of multiple orders, with the zeroth-order version matching the general Barone-Adesi-Whaley ansatz. We test the accuracy and efficiency of the approximations via numerical simulations. The results show a clear dominance of higher-order approximations over their respective zeroth-order versions and reveal that significantly more pricing accuracy can be obtained by relying on approximations of the first few orders. In addition, they suggest that increasing the order of any approximation by one generally refines the pricing precision; however, this happens at the expense of greater computational costs. |
|
Pedro Daniel Partida Güitrón, A Machine Learning Approach for a Blockchain-Crypto Portfolio Construction, ETH Zürich, Department of Mathematics, 2020. (Master's Thesis)
This master's thesis integrates concepts from the fields of machine learning, quantitative finance, and digital investments in cryptocurrencies and their underlying technology, blockchain. The research aims to construct and actively manage a blockchain-crypto portfolio based on machine learning prediction models.
In this research project, we developed several supervised machine learning models to predict the behavior of the three cryptocurrencies with the highest market capitalization (Bitcoin, Ethereum, and Ripple), one blockchain exchange-traded product, and gold. We use different input data types, such as technical indicators (momentum, volume, volatility and trend indicators), economic data, currency exchange rates, commodity prices, and Google Trends, to construct and calibrate predictive models that forecast future crypto-asset returns. We use daily data from 2015 until 2019 to build and train the machine learning models. To test the models' out-of-sample accuracy, we use the time frame from January until May 2020. We find that ensemble techniques such as random forests and gradient-boosted trees work particularly well to classify the direction of cryptocurrency prices and to regress their bi-weekly returns. The prediction results for Bitcoin and Ethereum, in particular, are satisfactory and promising for future use.
The machine learning models' output serves as the investor's views for the Black-Litterman model, which constructs and manages an active blockchain-crypto portfolio rebalanced every two weeks. We set portfolio constraints for the rebalancing, such as no short-selling, maximal asset positions, and maximal turnover. We ran a live portfolio from May 2020 until August 2020 with bi-weekly rebalancing to test the constructed active portfolio. As benchmark portfolios, we take equity exchange-traded funds, cryptocurrency exchange-traded products, an equally weighted portfolio, and a passive portfolio. The active portfolio outperformed the returns of traditional investments and substantially reduced the volatility of pure cryptocurrency investments, achieving the highest Sharpe ratio and the lowest drawdown among all portfolios. This research can be carried forward in academia by exploring in more depth the estimation of the investor's views in the Black-Litterman model based on machine learning predictions. Additionally, the growing interest in cryptocurrencies offers the industry the opportunity to build this active blockchain-crypto portfolio and launch it in the market.
|
|
Shahire Hylaj, Application of different forecasting methods for the volatility of industry indices, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Bachelor's Thesis)
The relevance of measuring and predicting risk in the financial world has increased in recent years. Driven by these demands on research, countless approaches to volatility forecasting have been developed and applied to a wide range of historical financial data. The aim is to uncover differences in the forecasting accuracy of various forecasting methods for the volatility of industry indices and to find explanations for the results. The knowledge gained from this study can generate added value in the application of such forecasting models in risk and asset management.
For this purpose, volatility forecasts for selected indices from different markets and industries are created and compared with each other. The models used are long-term average, simple moving average, weighted moving average, exponential weighted moving average, ARCH(1) and GARCH(1,1), whereby the analysis refers exclusively to in-sample forecasts. The industries examined for the selection of the indices are the automotive, biotechnology, pharmaceuticals, communications and energy and equivalents industries.
For the ultimate performance evaluation, the mean absolute error is used, and two other measures, MAPE and RMSE, are discussed to illustrate the impact of the definition of error on the overall performance assessment. For the same reason, two additional measures are calculated in addition to the original benchmark, and the differences between the results are discussed.
The results show a high performance of the exponential moving average, as well as the GARCH(1,1), over all industries considered. ARCH(1) has consistently delivered higher errors for the period 2014 to 2019, while moving averages deliver lower errors. The exponential moving average has outperformed all models across all industries and regions. Thus, a tendency toward overspecification in the GARCH(1,1) and ARCH(1) models has led to poorer results than expected. Additionally, calculations based on constant-volatility assumptions, such as the long-term average, have proven to underperform due to their inability to capture long-term fluctuations in the financial market. |
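As a hypothetical sketch of one of the better-performing methods above, an exponentially weighted moving average variance forecast follows the recursion σ²ₜ = λσ²ₜ₋₁ + (1−λ)r²ₜ₋₁; the decay λ = 0.94 is the common RiskMetrics daily choice, not a parameter reported in the thesis, and the return series is simulated:

```python
import numpy as np

def ewma_variance(returns: np.ndarray, lam: float = 0.94) -> np.ndarray:
    """One-step-ahead EWMA variance forecasts:
    sigma2[t] = lam * sigma2[t-1] + (1 - lam) * r[t-1]**2
    (lam = 0.94 is a conventional daily decay, assumed here)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()          # initialize with the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return sigma2

# Simulated daily returns with 1% true volatility
rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, size=1000)
vol_forecast = np.sqrt(ewma_variance(r))
print(vol_forecast[-1])                # hovers near the true 1% level
```

Unlike the constant long-term-average benchmark, the recursion lets recent squared returns dominate, which is why it can track volatility clustering.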
|
Artem Dyachenko, Erich Walter Farkas, Marc Oliver Rieger, Volatility Dependent Structured Products, In: Swiss Finance Institute Research Paper, No. 19-64, 2020. (Working Paper)
We construct a derivative that depends on the SPY and VIX and, in this way, incorporates both the market risk premium and the variance risk premium. We show that the product's Sharpe ratio is higher than the SPY Sharpe ratio. If we invest $10,000 into the product, the product's payoff is around $60,000 at the end of 2018. In comparison, if we invest $10,000 into the SPY, the payoff is around $30,000. |
|
Simon Albisser, Analysis of Corda and Hyperledger Fabric regarding their applicability in specific areas, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Since the emergence of Bitcoin in 2008 and with it the blockchain, a new technological field
has opened up, the field of Distributed Ledger Technology (DLT). DLT enables immutable and cryptographically secure databases that are distributed and synchronized across multiple parties where there is no need for a central authority managing them. Since 2008 there has been a lot of development in this area, especially regarding applications in the economy. Two
technologies in particular have attracted attention. One is Fabric from the Hyperledger project
of the Linux Foundation and the other is Corda from the DLT company R3. These two platforms enable the setup of a private network based on DLT. The development of applications for Fabric and Corda is still in its early stages. Therefore, it is important to find out which advantages and disadvantages each platform has in order to determine which platform is better suited for which area. This thesis shows whether Fabric or Corda is better suited for the fields of digital currency, supply chain, trade finance and trading infrastructure. The two use cases we.trade (trade finance) and SDX (trading infrastructure) were used to find out how Fabric and Corda are designed and how they work in practice. For this purpose, the public documentation of Fabric and Corda and further literature were used. In addition, interviews with the managers responsible for we.trade and SDX were conducted. Based on this, a SWOT analysis of the two platforms was carried out regarding their applicability. Then, three specific characteristics of Fabric and Corda were identified to determine which of them is better suited for which application area and why. This work shows that neither Fabric nor Corda can provide decentralized consensus in an untrusted environment, as suitable consensus algorithms are not yet ready. It is also shown that the setup of Fabric is more versatile and simpler than that of Corda. Furthermore, the different network structures are shown. While Fabric works with subnetworks called channels, Corda allows all participants of a network to interact freely with each other. Since Fabric and Corda enable private networks and do not allow for decentralized consensus in untrusted environments, neither is an appropriate technology for a digital currency. In the field of the supply chain, Fabric is preferred because of its versatility and simpler setup. In contrast, only Corda can be considered as a technology for a trading infrastructure.
This is due to the fact that only with Corda, in a large network with many different parties, everyone can freely interact with each other. In the field of trade finance both are similarly well applicable. In general, it can be said that Fabric primarily offers a solution for establishing a common platform for data exchange, storage and tracking. Corda, on the other hand, also offers this, but can also be used for highly complex networks where the exchange of assets (or tokens on the DLT) between untrusted parties plays an important role. |
|
Martynas Mazrimas, Approximation schemes for stochastic differential equations with applications to derivatives pricing and Greeks estimations, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Master's Thesis)
Pricing exotic derivatives under the local volatility model requires numerical methods that have an inherent trade-off between accuracy and efficiency. By increasing the number of simulations and choosing a dense discretization grid, smaller errors can be obtained, although at the cost of significantly higher computational complexity. In order to decrease the errors without increasing the computational complexity, alternative stochastic differential equation (SDE) approximation schemes and variance reduction methods can be considered. In this thesis, we investigate the benefit of higher order SDE approximation schemes and variance reduction methods in derivatives pricing under the local volatility model. Several strong and weak higher order stochastic Runge-Kutta approximation schemes are derived and applied to estimate errors present in fair prices and price sensitivities for vanilla and path-dependent financial products under differently shaped parabolic and real local volatility surfaces. Antithetic sampling, Quasi-Monte Carlo and Brownian bridge numerical schemes are also discussed and applied in search for a better Monte Carlo convergence. |
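As a hypothetical illustration of the accuracy-efficiency trade-off described above, the simplest scheme the higher-order methods improve upon is Euler-Maruyama for dS = rS dt + σ(t,S)S dW; the flat volatility surface and parameters below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def euler_maruyama_paths(s0, r, local_vol, T, n_steps, n_paths, seed=0):
    """Simulate dS = r*S*dt + sigma(t, S)*S*dW with the Euler-Maruyama
    scheme. Finer grids (larger n_steps) shrink the discretization
    error but raise the computational cost proportionally."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for i in range(n_steps):
        t = i * dt
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Brownian increments
        s += r * s * dt + local_vol(t, s) * s * dw
    return s

# Illustrative flat local-volatility surface of 20% (an assumption)
flat_vol = lambda t, s: 0.2
s_T = euler_maruyama_paths(100.0, 0.01, flat_vol, T=1.0, n_steps=252, n_paths=50_000)
call = np.exp(-0.01) * np.maximum(s_T - 100.0, 0.0).mean()  # discounted ATM call payoff
print(round(call, 2))
```

With a flat surface the estimate can be checked against the Black-Scholes price, which is how the scheme-comparison errors in such studies are typically benchmarked.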
|
Michael Schwab, Risk measures: the interplay of eligible assets and acceptance sets, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Bachelor's Thesis)
We provide an overview of relevant concepts in the context of eligible assets, risk measures and acceptance sets. Since the publication of the landmark paper by Artzner, Delbaen, Eber and Heath in 1999, the bulk of the literature has focused on risk measures corresponding to a risk-free eligible asset, namely cash-additive risk measures. Recently, the focus has shifted towards risk measures corresponding to general eligible assets that need not necessarily be essentially bounded away from zero. We prove the standard correspondence between risk measures and acceptance sets in the case of such an eligible asset. It is well-known that risk measures corresponding to general eligible assets are not necessarily finitely valued and continuous. Therefore, we study how the choice of the eligible asset, i.e. whether it follows a discrete or a continuous probability distribution, influences the corresponding risk measures. We focus on the finiteness and continuity properties of these risk measures. Our investigation is complemented with frequently used acceptability criteria, namely the Value-at-Risk- and the Expected Shortfall-acceptability. The theory of general risk measures allows for a wider range of eligible assets. Hence, we investigate the influence of concrete choices of general eligible assets, i.e. defaultable bonds and call options. Our results show that the finiteness and continuity properties of general risk measures can depend on the probability distribution of the corresponding eligible assets. Therefore, it is important to distinguish between discrete and continuous eligible assets. |
|
Ludovic Mathys, American-type exotic options and risk management in Lévy-driven markets, University of Zurich, Faculty of Business, Economics and Informatics, 2020. (Dissertation)
|
|