Laurent Florin, Interdependencies of Cryptocurrencies, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Bachelor's Thesis)
In this thesis I consider the implementation of a VAR-DCC-GARCH based strategy on a cryptocurrency portfolio consisting of Bitcoin and six widely used altcoins, to take advantage of possible spillover effects between the currencies in the portfolio. Furthermore, the use of a moving average crossover rule, based on the network value to transaction volume ratio, to improve the strategy is examined. A simple buy-and-hold portfolio strategy is used as a benchmark. The strategies are compared via the Sharpe ratio, and a studentized bootstrap inference is implemented to test the hypothesis that there is no difference between the Sharpe ratios of the strategies. Strong evidence is presented that the VAR-DCC-GARCH strategy renders superior risk-adjusted returns compared to the buy-and-hold benchmark. Additionally, evidence is presented that the VAR-DCC-GARCH strategy is improved upon by combining it with the moving average crossover rule.
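A minimal sketch of the bootstrap comparison described above, assuming two daily return series held as NumPy arrays; the thesis uses a studentized variant, whereas this illustration uses a plain percentile bootstrap with jointly resampled dates so the dependence between the strategies is preserved.

```python
import numpy as np

def sharpe(r):
    # Annualized Sharpe ratio of a daily return series (risk-free rate omitted).
    return np.sqrt(252) * r.mean() / r.std(ddof=1)

def bootstrap_sharpe_test(ret_a, ret_b, n_boot=10_000, seed=0):
    # H0: the two strategies have equal Sharpe ratios.
    rng = np.random.default_rng(seed)
    n = len(ret_a)
    observed = sharpe(ret_a) - sharpe(ret_b)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample dates jointly
        diffs[b] = sharpe(ret_a[idx]) - sharpe(ret_b[idx])
    # Two-sided p-value from the bootstrap distribution centred at the observed difference.
    p_value = np.mean(np.abs(diffs - observed) >= np.abs(observed))
    return observed, p_value
```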
Patrick Aschermayr, Inference Algorithms for Hidden (Semi) Markov Models, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Master's Thesis)
The goal of this thesis is to explore and test new methods to learn, describe and predict economic cycles. In particular, a comparison of inference algorithms for hidden Markov models (HMM) and hidden semi-Markov models (HSMM) is conducted. Both proposed approaches, the popular expectation-maximization (EM) algorithm and a Markov chain Monte Carlo (MCMC) sampler, have advantages depending on the size and noise of the underlying data as well as whether interval estimation or the addition of data-specific knowledge is desired. Furthermore, HMMs and HSMMs are used to describe financial markets. While the hidden Markov model performs well as a trading tool, it is less suitable to model economic cycles, since its implicit geometric state duration distribution flips states unrealistically often. In contrast, the hidden semi-Markov regime switching model appears to be very promising, demonstrating high potential as a strategic asset allocation (SAA) overlay from a finance perspective and as a model for economic cycle predictions from an economics perspective.
Keywords: Hidden Markov Models, Hidden Semi-Markov Models, Statistics, Machine Learning, Algorithms, EM-Algorithm, MCMC, Economic Cycles, Financial Markets, ETH Zurich, University of Zurich
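A minimal sketch of the EM-fitted hidden Markov model used as a regime model, on simulated returns; hmmlearn implements the EM (Baum-Welch) route, while the HSMM and the MCMC sampler compared in the thesis are not part of that library.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Toy return series: a calm regime followed by a volatile one.
rng = np.random.default_rng(0)
returns = np.concatenate([rng.normal(0.0005, 0.005, 500),
                          rng.normal(-0.001, 0.02, 200)]).reshape(-1, 1)

# Fit a two-state Gaussian HMM by EM (Baum-Welch).
model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
model.fit(returns)

states = model.predict(returns)   # most likely state sequence (Viterbi)
print(model.means_.ravel())       # per-state mean returns
print(model.transmat_.round(3))   # estimated transition matrix
```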
Jan Krepl, Supervised Learning for Financial Market Predictions, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Master's Thesis)
This thesis proposes a general framework combining portfolio optimization techniques with supervised learning. The ultimate goal is to outline an active portfolio management scheme that is to a large extent automated. The predictive models use time series of historical asset returns as the features and future returns as the targets. The predictions are in turn used as inputs to a portfolio optimization problem that leads to a portfolio allocation strategy. Together with a detailed model description, we also test the methods on a recent financial data set and analyze the feasibility of the entire framework. The main emphasis is put on comparison to benchmarks and on performance assessment, both in terms of prediction quality and the risk and return profile of the strategy.
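A minimal sketch of the framework's two stages under illustrative assumptions: lagged historical returns as features, an off-the-shelf regressor for the return forecasts, and a simple long-only weighting of the positive forecasts standing in for a full portfolio optimizer.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def lagged_features(returns, n_lags=5):
    n = len(returns)
    # One row per window of n_lags consecutive returns.
    X = np.column_stack([returns[i:n - n_lags + 1 + i] for i in range(n_lags)])
    y = returns[n_lags:]            # target: the return following each window
    return X[:-1], y, X[-1]         # training rows, targets, latest (unlabelled) row

rng = np.random.default_rng(1)
rets = rng.normal(0, 0.01, (600, 4))   # toy panel of 4 assets

weights = np.zeros(4)
for j in range(4):
    X, y, x_last = lagged_features(rets[:, j])
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    weights[j] = max(model.predict(x_last.reshape(1, -1))[0], 0.0)

# Long-only allocation proportional to the positive forecasts (uniform fallback).
weights = weights / weights.sum() if weights.sum() > 0 else np.full(4, 0.25)
print(weights.round(3))
```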
Marc Paolella, Pawel Polak, COBra: Copula-Based Portfolio Optimization, In: Predictive Econometrics and Big Data, Springer International Publishing, Cham, p. 36 - 77, 2018. (Book Chapter)
The meta-elliptical t copula with noncentral t GARCH univariate margins is studied as a model for asset allocation. A method of parameter estimation is deployed that is nearly instantaneous for large dimensions. The expected shortfall of the portfolio distribution is obtained by combining simulation with a parametric approximation for speed enhancement. A simulation-based method for mean-expected shortfall portfolio optimization is developed. An extensive out-of-sample backtest exercise is conducted and comparisons made with common asset allocation techniques.
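A minimal sketch of simulation-based mean-expected-shortfall optimization in the spirit of the chapter, under illustrative assumptions: scenarios drawn from a plain multivariate t (standing in for the copula-GARCH predictive distribution) and a penalized mean-ES objective.

```python
import numpy as np
from scipy.stats import multivariate_t
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d = 5
# Simulated return scenarios from the (stand-in) predictive distribution.
scen = multivariate_t.rvs(loc=np.full(d, 0.0004), shape=1e-4 * np.eye(d),
                          df=5, size=20_000, random_state=rng)

def expected_shortfall(port, alpha=0.05):
    # Average of the worst alpha share of simulated portfolio returns, as a loss.
    cutoff = np.quantile(port, alpha)
    return -port[port <= cutoff].mean()

def objective(w, lam=4.0):
    port = scen @ w
    return -(port.mean() - lam * expected_shortfall(port))

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(objective, np.full(d, 1 / d), bounds=[(0, 1)] * d,
               constraints=cons, method="SLSQP")
print(res.x.round(3))   # long-only mean-ES weights
```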
Patrick Walker, Multivariate non-Gaussian models for quantitative risk and portfolio management, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Dissertation)
Marc Paolella, Linear Models and Time-Series Analysis: Regression, ANOVA, ARMA and GARCH, John Wiley & Sons, New York, 2018. (Book/Research Monograph)
Marc Paolella, Fundamental Statistical Inference: A Computational Approach, John Wiley & Sons, New York, 2018. (Book/Research Monograph)
Martin Waser, A hybrid least-squares support vector machines based local neuro-fuzzy model using a feed-forward artificial neural network for class membership weight generation, University of Zurich, Faculty of Business, Economics and Informatics, 2018. (Master's Thesis)
Heterogeneous data spaces with multiple underlying regimes may present difficulties for a global supervised inference engine. A way of increasing prediction accuracy is to divide the input space among multiple models and aggregate their predictions. This paper proposes a novel hybrid neuro-fuzzy model combining a feed-forward artificial neural network (ANN) for data space partitioning with weighted least-squares support vector machines (LSSVM) as local inference engines. Model parameters are estimated via error backpropagation, using the derivatives of the LSSVM core equations with respect to the ANN final layer outputs. Empirical tests on benchmark time series show increased forecasting performance for some data sets compared to less flexible architectures, but this comes at a significant increase in computational cost.
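A minimal sketch of the divide-and-aggregate idea under simplifying assumptions: kernel ridge regression (a close relative of LSSVM regression) as the local engine and soft memberships from a k-means partition standing in for the jointly trained ANN gate.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (400, 1))
# Two regimes: a sine wave on the left, a linear trend on the right.
y = np.where(X[:, 0] < 0, np.sin(3 * X[:, 0]), 0.5 * X[:, 0]) + rng.normal(0, 0.1, 400)

# Partition the input space, then fit one local model per region.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
local_models = [KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0)
                .fit(X[km.labels_ == k], y[km.labels_ == k]) for k in range(2)]

def predict(X_new, tau=1.0):
    # Soft membership weights from distances to the cluster centres (softmax-style).
    d = np.linalg.norm(X_new[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    w = np.exp(-d / tau)
    w /= w.sum(axis=1, keepdims=True)
    preds = np.column_stack([m.predict(X_new) for m in local_models])
    return (w * preds).sum(axis=1)      # aggregate the local predictions

print(predict(np.array([[-1.0], [2.0]])).round(3))
```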
Pasquale Riviezzo, Calibration of the Implied Volatility Surface using High-Frequency Data, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Master's Thesis)
The accurate estimation of volatility is an extremely important task in quantitative finance, as it is an unknown but fundamental parameter in many financial models of derivatives. Implied volatility (IV) derived from option contracts has long been considered representative of the market's expectations of future realized volatility over the remaining life of the option. This thesis makes use of high-frequency option data in order to study this important quantity at the intraday level. Numerical and closed-form approximation methods for the solution of the implied volatility inverse problem are discussed, compared and implemented. Retrieving the implied volatility surface at higher frequencies is, however, a complex task, due to large data sample sizes, filtering issues, asynchronicity of the observations and frequent arbitrage opportunities in the data. Moreover, advanced derivatives models may require an arbitrage-free implied volatility surface as an input for correct calibration and price valuations. In order to derive the arbitrage-free implied volatility surface, we implement a set of filtering rules, interpolation and extrapolation procedures and spline smoothing of the price surface under no-arbitrage constraints. Using the value-second derivative representation of natural cubic splines, we implement and solve the problem numerically as a constrained quadratic optimization problem. Using the calibrated implied volatility surfaces to extract time series of high-frequency implied volatility, we show that day-of-the-week effects exist in implied volatility. In particular, close-to-close implied volatility increases on Mondays, rises slightly from Tuesday to Thursday and decreases drastically on Fridays. Further investigation shows that W-shaped patterns exist in intraday implied volatility, which can be linked to particular trading activity and liquidity patterns in option markets. By defining an intraday measure of implied volatility (IDIV), it is found that intraday implied volatility contains additional information about future realized volatility compared to the normally used daily closing level of IV. In particular, computing intraday implied volatility from high-frequency option data allows better estimates of future volatility to be achieved.
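A minimal sketch of the numerical route to the implied volatility inverse problem discussed above: invert the Black-Scholes call price by Brent's root-finding method.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call.
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    # Solve bs_call(sigma) = price for sigma on a wide bracket.
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

print(implied_vol(price=10.45, S=100.0, K=100.0, T=1.0, r=0.01))
```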
Fabian Smits, Stock market volatility: Identification of risk drivers and forecasting using random forest, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Master's Thesis)
Random forest is a machine learning method which can be used for regression and classification. Due to its ability to handle large data sets with highly nonlinear dependencies, random forest is a powerful prediction tool. Furthermore, despite being a non-parametric approach, random forest allows for inference regarding the importance of input variables.
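A minimal sketch of both uses named above, on simulated data: a random forest regression of next-day volatility on trailing volatility features, with `feature_importances_` as the inference tool for identifying risk drivers.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Toy returns with slowly alternating volatility regimes.
sig = np.where(np.sin(np.arange(2000) / 60) > 0, 0.02, 0.005)
vol = np.abs(rng.normal(0, sig))          # crude daily realized-volatility proxy

# Features: trailing mean volatility over 1, 5 and 22 days; target: next day's vol.
w = 22
X = np.column_stack([[vol[t - l:t].mean() for t in range(w, len(vol))]
                     for l in (1, 5, 22)])
y = vol[w:]

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:-250], y[:-250])
print("out-of-sample R^2:", round(rf.score(X[-250:], y[-250:]), 3))
print("importances (1d, 5d, 22d):", rf.feature_importances_.round(3))
```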
Ronald W. Butler, Marc Paolella, Autoregressive Lag-Order Selection Using Conditional Saddlepoint Approximations, Econometrics, Vol. 5 (3), 2017. (Journal Article)
A new method for determining the lag order of the autoregressive polynomial in regression models with autocorrelated normal disturbances is proposed. It is based on a sequential testing procedure using conditional saddlepoint approximations and permits the desire for parsimony to be explicitly incorporated, unlike penalty-based model selection methods. Extensive simulation results indicate that the new method is usually competitive with, and often better than, common model selection methods.
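A minimal sketch of the general-to-specific sequential testing logic the paper builds on, under simplifying assumptions: ordinary t-statistic p-values from statsmodels' AutoReg stand in for the conditional saddlepoint approximations developed in the article.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(2, 500):                        # simulate a true AR(2) process
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def select_lag(y, p_max=8, alpha=0.05):
    # Test the highest-order coefficient; shrink the model while it is insignificant.
    for p in range(p_max, 0, -1):
        res = AutoReg(y, lags=p).fit()
        if res.pvalues[-1] < alpha:            # last entry: coefficient on lag p
            return p
    return 0

print(select_lag(y))                           # typically 2 for this sample
```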
Pascal Schenk, On market timing ability of Switzerland-based mutual funds, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Bachelor's Thesis)
This thesis examines the market timing performance of Switzerland-based mutual funds between 2009 and 2017. Two well-established models, derived by Treynor and Mazuy (1966) and Henriksson and Merton (1981), are introduced and then applied to our data set of historic returns of 472 mutual funds to measure their market timing performance. Further, this thesis aims to put the obtained results into a bigger picture by comparing them with the minimum degree of forecast accuracy needed to beat a passive buy-and-hold strategy, computed with a model developed by Sharpe (1975). We find that (1) there is no evidence of significant timing ability, even though the funds tend to have positive timing coefficients on average, irrespective of the applied model, and (2) superior market timing does not imply that one fares better than a passive buy-and-hold investor.
Keywords: Market timing, mutual fund performance, efficient market hypothesis, capital asset pricing model, Switzerland, empirical analysis
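A minimal sketch of the Treynor-Mazuy test named above, on simulated data: timing ability shows up as a significantly positive coefficient on the squared excess market return.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
mkt = rng.normal(0.005, 0.04, 96)                   # monthly excess market returns
# A hypothetical fund with genuine timing ability (gamma = 0.5).
fund = 0.001 + 1.0 * mkt + 0.5 * mkt**2 + rng.normal(0, 0.01, 96)

# Treynor-Mazuy: r_fund = alpha + beta * r_mkt + gamma * r_mkt^2 + eps
X = sm.add_constant(np.column_stack([mkt, mkt**2]))
res = sm.OLS(fund, X).fit()
print(res.params.round(4))      # [alpha, beta, gamma]
print(res.pvalues[2])           # p-value of gamma, the timing coefficient
```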
Manuel Kannenberg, Machine Learning Based Views in a Generalized Black-Litterman Framework, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Master's Thesis)
The proposed hybrid model supports non-Gaussianity of the returns in the reference model and the incorporation of prior views from exogenous information, which is quantified into a joint alpha signal using machine learning methods. The predictive power of the hybrid model is investigated using the Quantopian backtest engine. The introduction of subjective views derived from alternative datasets, combined with COMFORT as the reference model, leads to a faster adapting model, which has extraordinary and steady risk monitoring properties, captures trends and leads to a higher Sharpe ratio. Two of our best models and their combination deliver net portfolio returns which are systematically higher than the returns of the S&P 500 index.
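A minimal sketch of the classical Black-Litterman posterior mean into which such machine-learning views can be fed, assuming a given equilibrium mean `pi`, covariance `Sigma`, view matrix `P`, view targets `q` and view uncertainty `Omega` (all toy numbers).

```python
import numpy as np

def black_litterman_mean(pi, Sigma, P, q, Omega, tau=0.05):
    # mu = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 q]
    A = np.linalg.inv(tau * Sigma)
    Oi = np.linalg.inv(Omega)
    return np.linalg.solve(A + P.T @ Oi @ P, A @ pi + P.T @ Oi @ q)

# Three assets, one relative view: asset 0 outperforms asset 1 by 2%.
pi = np.array([0.04, 0.03, 0.05])
Sigma = 0.04 * np.eye(3)
P = np.array([[1.0, -1.0, 0.0]])
q = np.array([0.02])
Omega = np.array([[0.001]])
print(black_litterman_mean(pi, Sigma, P, q, Omega).round(4))
```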
Marco Gambacciani, Marc Paolella, Robust normal mixtures for financial portfolio allocation, Econometrics and Statistics, Vol. 3, 2017. (Journal Article)
A new approach for multivariate modelling and prediction of asset returns is proposed. It is based on a two-component normal mixture, estimated using a fast new variation of the minimum covariance determinant (MCD) method made suitable for time series. It outperforms the (shrinkage-augmented) MLE in terms of out-of-sample density forecasts and portfolio performance. In addition to the usual stylized facts of skewness and leptokurtosis, the model also accommodates leverage and contagion effects, but is i.i.d., and thus does not embody, for example, a GARCH-type structure. Owing to analytic tractability of the moments and the expected shortfall, portfolio optimization is straightforward, and, for daily equity returns data, is shown to substantially outperform the equally weighted and classical long-only Markowitz framework, as well as DCC-GARCH (despite not using any kind of GARCH-type filter).
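A minimal sketch of the MCD ingredient of the model, using scikit-learn's MinCovDet (the classical estimator; the paper develops a faster variation made suitable for time series) to separate a main component from a contaminating one.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
# Toy returns: a calm main component contaminated by a volatile one.
calm = rng.multivariate_normal([0.0005, 0.0004],
                               1e-4 * np.array([[1.0, 0.3], [0.3, 1.0]]), 900)
wild = rng.multivariate_normal([-0.002, -0.002], 9e-4 * np.eye(2), 100)
X = np.vstack([calm, wild])

mcd = MinCovDet(random_state=0).fit(X)
inlier = mcd.support_                       # boolean mask of the robust subset
print("robust mean:", mcd.location_.round(5))
print("component shares:", round(inlier.mean(), 2), round((~inlier).mean(), 2))
```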
Damai David Stuber, Conditions for the application of a geometric-mean portfolio optimization framework subjected to a risk restriction in contrast to the conventional arithmetic-mean-variance portfolio optimization framework, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Bachelor's Thesis)
This thesis inquires into conditions under which the application of a geometric-mean portfolio optimization framework results in different optimal portfolio choices in contrast to the arithmetic-mean-variance portfolio optimization framework, and therefore may offer additional benefit to the investor over the mean-variance approach. It is shown that the difference in results arises out of choosing or forgoing to subject the geometric-mean optimization to a particular measure of risk. An analysis of equivalence between the mean-variance and the geometric-mean optimization is done for subjection of the latter to variance, value-at-risk and expected shortfall. The analysis results in conditions of equivalence for each of these three cases. Violations of these conditions are the cause of the difference in optimal portfolio choices. Furthermore, it is shown that a trade-off formulation of the respective optimization exists in some cases, a formulation that may allow for better accounting of an investor's risk profile.
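A minimal sketch contrasting the two objectives on toy numbers: maximizing the approximate expected log growth rate (the geometric-mean criterion) subject to a variance restriction, versus the classical arithmetic mean-variance trade-off.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.05, 0.03])                  # arithmetic means
Sigma = np.array([[0.09, 0.02, 0.01],
                  [0.02, 0.04, 0.01],
                  [0.01, 0.01, 0.02]])

def geo_objective(w):
    # Lognormal-style approximation: g(w) ~ w'mu - 0.5 * w'Sigma w
    return -(w @ mu - 0.5 * w @ Sigma @ w)

def mv_objective(w, lam=2.0):
    return -(w @ mu - lam * w @ Sigma @ w)

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: 0.05 - w @ Sigma @ w}]   # variance cap
w0 = np.full(3, 1 / 3)
geo = minimize(geo_objective, w0, bounds=[(0, 1)] * 3, constraints=cons, method="SLSQP")
mv = minimize(mv_objective, w0, bounds=[(0, 1)] * 3, constraints=cons[:1], method="SLSQP")
print("geometric-mean weights:", geo.x.round(3))
print("mean-variance weights: ", mv.x.round(3))
```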
Marc Paolella, The Univariate Collapsing Method for Portfolio Optimization, Econometrics, Vol. 5 (2), 2017. (Journal Article)
The univariate collapsing method (UCM) for portfolio optimization is based on obtaining the predictive mean and a risk measure such as variance or expected shortfall of the univariate pseudo-return series generated from a given set of portfolio weights and multivariate set of assets under interest and, via simulation or optimization, repeating this process until the desired portfolio weight vector is obtained. The UCM is well-known conceptually, straightforward to implement, and possesses several advantages over use of multivariate models, but, among other things, has been criticized for being too slow. As such, it does not play prominently in asset allocation and receives little attention in the academic literature. This paper proposes use of fast model estimation methods combined with new heuristics for sampling, based on easily-determined characteristics of the data, to accelerate and optimize the simulation search. An extensive empirical analysis confirms the viability of the method.
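A minimal sketch of one UCM evaluation-and-search step under illustrative assumptions: collapse the asset panel to a pseudo-return series for candidate weights, fit a Student's t as the predictive density, estimate its expected shortfall by simulation, and keep the best candidate from a crude random search.

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(0)
R = rng.standard_t(5, (1000, 4)) * 0.01 + 0.0004   # toy asset-return panel

def ucm_score(w, alpha=0.05, lam=3.0):
    pseudo = R @ w                                 # univariate pseudo-returns
    df, loc, scale = student_t.fit(pseudo)         # fitted predictive density
    sims = student_t.rvs(df, loc, scale, size=50_000, random_state=rng)
    var = np.quantile(sims, alpha)
    es = -sims[sims <= var].mean()                 # expected shortfall via simulation
    return loc - lam * es                          # predictive mean vs. risk trade-off

best_w, best_s = np.full(4, 0.25), -np.inf
for _ in range(200):                               # random search over the simplex
    w = rng.dirichlet(np.ones(4))
    s = ucm_score(w)
    if s > best_s:
        best_w, best_s = w, s
print(best_w.round(3))
```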
Tomas Kvasnicka, Filtering of Jumps using Wavelet Decomposition: Application to Portfolio Selection, University of Zurich, Faculty of Business, Economics and Informatics, 2017. (Master's Thesis)
|
|
Marc Paolella, Asymmetric stable Paretian distribution testing, Econometrics and Statistics, Vol. 1, 2017. (Journal Article)
Two new tests for the symmetric stable Paretian distribution with tail index 1 < α < 2 are proposed. The test statistics and their associated approximate p-values are instantly computed and do not require use of the stable density or distribution or maximum likelihood estimation. They exhibit high power against a variety of alternatives, and much higher power than the existing test based on the empirical characteristic function. The two tests are combined to yield a test that has substantially higher power. A fourth test based on likelihood ratio is also studied. Extensions are proposed to address the asymmetric case and are shown to have reasonable actual size properties and high power against several viable alternatives.
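A minimal sketch of the likelihood-ratio variant mentioned in the abstract, using SciPy's levy_stable (note that the paper's main tests deliberately avoid this MLE route because it is slow): compare the fitted stable and normal log-likelihoods of a sample.

```python
import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(0)
x = levy_stable.rvs(alpha=1.7, beta=0.0, size=500, random_state=rng)

# Log-likelihood ratio of fitted stable vs. fitted normal (numerical MLE, slow).
a, b, loc, scale = levy_stable.fit(x)
llr = (levy_stable.logpdf(x, a, b, loc, scale).sum()
       - norm.logpdf(x, *norm.fit(x)).sum())
print(f"alpha_hat = {a:.2f}, log-likelihood ratio = {llr:.1f}")  # large -> reject normality
```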
Anna Ricci, Risk Parity in a Risky, Non-Elliptic World: A Coherent, Non-Gaussian Approach, University of Zurich, Faculty of Business, Economics and Informatics, 2016. (Master's Thesis)
Boris Wälchli, A proximity based macro stress testing framework, Dependence Modeling, Vol. 4 (1), 2016. (Journal Article)
In this paper a non-linear macro stress testing methodology with a focus on early warning is developed. The methodology builds on a variant of Random Forests and its proximity measures. It is embedded in a framework in which naturally defined contagion and feedback effects transfer the impact of stressing a relatively small part of the observations onto the whole dataset, allowing a stressed future state to be estimated. It will be shown that contagion can be derived directly from the proximities, while iterating the proximity-based contagion leads to naturally defined feedback effects. Since the methodology is based on Random Forests, the framework can be estimated on large numbers of risk indicators up to big-data dimensions, fostering the stability of the results while reducing inaccuracies in estimated stress scenarios by stressing only a small part of the observations. This procedure allows accurate forecasting of events under stress and of the emergence of a potential macro crisis. The framework also estimates a set of the most influential economic indicators leading to the potential crisis, which can then be used as indications for remediation or prevention.
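A minimal sketch of the Random Forest proximity measure on which the framework builds, using scikit-learn: two observations are proximate in proportion to the fraction of trees that route them to the same leaf.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Leaf index of every sample in every tree: shape (n_samples, n_trees).
leaves = rf.apply(X)

def proximity(leaves):
    # prox[i, j] = share of trees in which samples i and j land in the same leaf.
    return (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

prox = proximity(leaves)
print(prox.shape, prox[0, :5].round(2))   # symmetric, diagonal equal to 1
```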